Feb 16 20:55:03.884479 master-0 systemd[1]: Starting Kubernetes Kubelet...
Feb 16 20:55:04.558223 master-0 kubenswrapper[4119]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 20:55:04.558223 master-0 kubenswrapper[4119]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 16 20:55:04.558223 master-0 kubenswrapper[4119]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 20:55:04.558223 master-0 kubenswrapper[4119]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 20:55:04.558223 master-0 kubenswrapper[4119]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 16 20:55:04.558223 master-0 kubenswrapper[4119]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
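The deprecation warnings above all point to the same remediation: move the flags into the file passed via --config (this node passes --config="/etc/kubernetes/kubelet.conf", as the flag dump later in the log shows). Below is a minimal, illustrative sketch of the equivalent KubeletConfiguration fields; the values are copied from this log's own flag dump, but the fragment is an assumption about how the migration would look, not the node's actual rendered config. --minimum-container-ttl-duration has no direct field and should be replaced by eviction settings, per the warning.

```yaml
# Sketch only: KubeletConfiguration equivalents for the deprecated flags
# (values taken from this log's FLAG dump; illustrative, not the real node config)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: /var/run/crio/crio.sock            # replaces --container-runtime-endpoint
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec # replaces --volume-plugin-dir
registerWithTaints:                                          # replaces --register-with-taints
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
systemReserved:                                              # replaces --system-reserved
  cpu: 500m
  ephemeral-storage: 1Gi
  memory: 1Gi
```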
Feb 16 20:55:04.559896 master-0 kubenswrapper[4119]: I0216 20:55:04.559197 4119 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 16 20:55:04.568069 master-0 kubenswrapper[4119]: W0216 20:55:04.568015 4119 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 20:55:04.568069 master-0 kubenswrapper[4119]: W0216 20:55:04.568049 4119 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 20:55:04.568069 master-0 kubenswrapper[4119]: W0216 20:55:04.568063 4119 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 20:55:04.568069 master-0 kubenswrapper[4119]: W0216 20:55:04.568076 4119 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 20:55:04.568069 master-0 kubenswrapper[4119]: W0216 20:55:04.568085 4119 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 20:55:04.568426 master-0 kubenswrapper[4119]: W0216 20:55:04.568097 4119 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 20:55:04.568426 master-0 kubenswrapper[4119]: W0216 20:55:04.568107 4119 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 20:55:04.568426 master-0 kubenswrapper[4119]: W0216 20:55:04.568117 4119 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 20:55:04.568426 master-0 kubenswrapper[4119]: W0216 20:55:04.568126 4119 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 20:55:04.568426 master-0 kubenswrapper[4119]: W0216 20:55:04.568135 4119 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 20:55:04.568426 master-0 kubenswrapper[4119]: W0216 20:55:04.568145 4119 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 20:55:04.568426 master-0 kubenswrapper[4119]: W0216 20:55:04.568156 4119 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 20:55:04.568426 master-0 kubenswrapper[4119]: W0216 20:55:04.568167 4119 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 20:55:04.568426 master-0 kubenswrapper[4119]: W0216 20:55:04.568176 4119 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 20:55:04.568426 master-0 kubenswrapper[4119]: W0216 20:55:04.568186 4119 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 20:55:04.568426 master-0 kubenswrapper[4119]: W0216 20:55:04.568195 4119 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 20:55:04.568426 master-0 kubenswrapper[4119]: W0216 20:55:04.568205 4119 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 20:55:04.568426 master-0 kubenswrapper[4119]: W0216 20:55:04.568215 4119 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 20:55:04.568426 master-0 kubenswrapper[4119]: W0216 20:55:04.568224 4119 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 20:55:04.568426 master-0 kubenswrapper[4119]: W0216 20:55:04.568237 4119 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 20:55:04.568426 master-0 kubenswrapper[4119]: W0216 20:55:04.568248 4119 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 20:55:04.568426 master-0 kubenswrapper[4119]: W0216 20:55:04.568267 4119 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 20:55:04.568426 master-0 kubenswrapper[4119]: W0216 20:55:04.568276 4119 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 20:55:04.568426 master-0 kubenswrapper[4119]: W0216 20:55:04.568285 4119 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 20:55:04.569352 master-0 kubenswrapper[4119]: W0216 20:55:04.568293 4119 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 20:55:04.569352 master-0 kubenswrapper[4119]: W0216 20:55:04.568301 4119 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 20:55:04.569352 master-0 kubenswrapper[4119]: W0216 20:55:04.568309 4119 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 20:55:04.569352 master-0 kubenswrapper[4119]: W0216 20:55:04.568317 4119 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 20:55:04.569352 master-0 kubenswrapper[4119]: W0216 20:55:04.568326 4119 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 20:55:04.569352 master-0 kubenswrapper[4119]: W0216 20:55:04.568335 4119 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 20:55:04.569352 master-0 kubenswrapper[4119]: W0216 20:55:04.568343 4119 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 20:55:04.569352 master-0 kubenswrapper[4119]: W0216 20:55:04.568351 4119 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 20:55:04.569352 master-0 kubenswrapper[4119]: W0216 20:55:04.568359 4119 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 20:55:04.569352 master-0 kubenswrapper[4119]: W0216 20:55:04.568368 4119 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 20:55:04.569352 master-0 kubenswrapper[4119]: W0216 20:55:04.568378 4119 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 20:55:04.569352 master-0 kubenswrapper[4119]: W0216 20:55:04.568389 4119 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 20:55:04.569352 master-0 kubenswrapper[4119]: W0216 20:55:04.568405 4119 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 20:55:04.569352 master-0 kubenswrapper[4119]: W0216 20:55:04.568415 4119 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 20:55:04.569352 master-0 kubenswrapper[4119]: W0216 20:55:04.568426 4119 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 20:55:04.569352 master-0 kubenswrapper[4119]: W0216 20:55:04.568438 4119 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 20:55:04.569352 master-0 kubenswrapper[4119]: W0216 20:55:04.568450 4119 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 20:55:04.569352 master-0 kubenswrapper[4119]: W0216 20:55:04.568461 4119 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 20:55:04.569352 master-0 kubenswrapper[4119]: W0216 20:55:04.568473 4119 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 20:55:04.569352 master-0 kubenswrapper[4119]: W0216 20:55:04.568488 4119 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 20:55:04.570339 master-0 kubenswrapper[4119]: W0216 20:55:04.568500 4119 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 20:55:04.570339 master-0 kubenswrapper[4119]: W0216 20:55:04.568510 4119 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 20:55:04.570339 master-0 kubenswrapper[4119]: W0216 20:55:04.568520 4119 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 20:55:04.570339 master-0 kubenswrapper[4119]: W0216 20:55:04.568530 4119 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 20:55:04.570339 master-0 kubenswrapper[4119]: W0216 20:55:04.568541 4119 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 20:55:04.570339 master-0 kubenswrapper[4119]: W0216 20:55:04.568551 4119 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 20:55:04.570339 master-0 kubenswrapper[4119]: W0216 20:55:04.568560 4119 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 20:55:04.570339 master-0 kubenswrapper[4119]: W0216 20:55:04.568569 4119 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 20:55:04.570339 master-0 kubenswrapper[4119]: W0216 20:55:04.568579 4119 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 20:55:04.570339 master-0 kubenswrapper[4119]: W0216 20:55:04.568587 4119 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 20:55:04.570339 master-0 kubenswrapper[4119]: W0216 20:55:04.568599 4119 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
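The long runs of "unrecognized feature gate" warnings are expected on OpenShift nodes: the cluster's feature set passes both Kubernetes and OpenShift-only gate names to the kubelet, and the kubelet warns on every name it does not know, plus on gates that are deprecated (feature_gate.go:351) or already GA (feature_gate.go:353). These warnings are startup noise, not failures. As an illustrative sketch only (gate names taken from the warnings in this log; the exact rendered file on the node may differ), such gates reach the kubelet through the featureGates map in the --config file:

```yaml
# Sketch only: featureGates as they might appear in the kubelet's --config file
# (names taken from this log; illustrative, not the node's actual config)
featureGates:
  KMSv1: true                                  # deprecated gate -> feature_gate.go:351 warning
  CloudDualStackNodeIPs: true                  # GA gate -> feature_gate.go:353 warning
  ValidatingAdmissionPolicy: true              # GA gate -> feature_gate.go:353 warning
  DisableKubeletCloudCredentialProviders: true # GA gate -> feature_gate.go:353 warning
  MachineConfigNodes: true                     # OpenShift-only -> "unrecognized feature gate"
```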
Feb 16 20:55:04.570339 master-0 kubenswrapper[4119]: W0216 20:55:04.568609 4119 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 20:55:04.570339 master-0 kubenswrapper[4119]: W0216 20:55:04.568618 4119 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 20:55:04.570339 master-0 kubenswrapper[4119]: W0216 20:55:04.568628 4119 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 20:55:04.570339 master-0 kubenswrapper[4119]: W0216 20:55:04.568637 4119 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 20:55:04.570339 master-0 kubenswrapper[4119]: W0216 20:55:04.568685 4119 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 20:55:04.570339 master-0 kubenswrapper[4119]: W0216 20:55:04.568695 4119 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 20:55:04.570339 master-0 kubenswrapper[4119]: W0216 20:55:04.568704 4119 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 20:55:04.570339 master-0 kubenswrapper[4119]: W0216 20:55:04.568712 4119 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 20:55:04.570339 master-0 kubenswrapper[4119]: W0216 20:55:04.568720 4119 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 20:55:04.571280 master-0 kubenswrapper[4119]: W0216 20:55:04.568728 4119 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 20:55:04.571280 master-0 kubenswrapper[4119]: W0216 20:55:04.568737 4119 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 20:55:04.571280 master-0 kubenswrapper[4119]: W0216 20:55:04.568745 4119 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 20:55:04.571280 master-0 kubenswrapper[4119]: W0216 20:55:04.568754 4119 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 20:55:04.571280 master-0 kubenswrapper[4119]: W0216 20:55:04.568763 4119 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 20:55:04.571280 master-0 kubenswrapper[4119]: W0216 20:55:04.568772 4119 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 20:55:04.571280 master-0 kubenswrapper[4119]: W0216 20:55:04.568781 4119 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 20:55:04.571280 master-0 kubenswrapper[4119]: W0216 20:55:04.568789 4119 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 20:55:04.571280 master-0 kubenswrapper[4119]: I0216 20:55:04.569807 4119 flags.go:64] FLAG: --address="0.0.0.0"
Feb 16 20:55:04.571280 master-0 kubenswrapper[4119]: I0216 20:55:04.569830 4119 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 16 20:55:04.571280 master-0 kubenswrapper[4119]: I0216 20:55:04.569851 4119 flags.go:64] FLAG: --anonymous-auth="true"
Feb 16 20:55:04.571280 master-0 kubenswrapper[4119]: I0216 20:55:04.569863 4119 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 16 20:55:04.571280 master-0 kubenswrapper[4119]: I0216 20:55:04.569875 4119 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 16 20:55:04.571280 master-0 kubenswrapper[4119]: I0216 20:55:04.569884 4119 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 16 20:55:04.571280 master-0 kubenswrapper[4119]: I0216 20:55:04.569896 4119 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 16 20:55:04.571280 master-0 kubenswrapper[4119]: I0216 20:55:04.569910 4119 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 16 20:55:04.571280 master-0 kubenswrapper[4119]: I0216 20:55:04.569922 4119 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 16 20:55:04.571280 master-0 kubenswrapper[4119]: I0216 20:55:04.569934 4119 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 16 20:55:04.571280 master-0 kubenswrapper[4119]: I0216 20:55:04.569947 4119 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 16 20:55:04.571280 master-0 kubenswrapper[4119]: I0216 20:55:04.569959 4119 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 16 20:55:04.571280 master-0 kubenswrapper[4119]: I0216 20:55:04.569972 4119 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 16 20:55:04.571280 master-0 kubenswrapper[4119]: I0216 20:55:04.569983 4119 flags.go:64] FLAG: --cgroup-root=""
Feb 16 20:55:04.572253 master-0 kubenswrapper[4119]: I0216 20:55:04.569996 4119 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 16 20:55:04.572253 master-0 kubenswrapper[4119]: I0216 20:55:04.570007 4119 flags.go:64] FLAG: --client-ca-file=""
Feb 16 20:55:04.572253 master-0 kubenswrapper[4119]: I0216 20:55:04.570019 4119 flags.go:64] FLAG: --cloud-config=""
Feb 16 20:55:04.572253 master-0 kubenswrapper[4119]: I0216 20:55:04.570030 4119 flags.go:64] FLAG: --cloud-provider=""
Feb 16 20:55:04.572253 master-0 kubenswrapper[4119]: I0216 20:55:04.570042 4119 flags.go:64] FLAG: --cluster-dns="[]"
Feb 16 20:55:04.572253 master-0 kubenswrapper[4119]: I0216 20:55:04.570064 4119 flags.go:64] FLAG: --cluster-domain=""
Feb 16 20:55:04.572253 master-0 kubenswrapper[4119]: I0216 20:55:04.570075 4119 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 16 20:55:04.572253 master-0 kubenswrapper[4119]: I0216 20:55:04.570087 4119 flags.go:64] FLAG: --config-dir=""
Feb 16 20:55:04.572253 master-0 kubenswrapper[4119]: I0216 20:55:04.570098 4119 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 16 20:55:04.572253 master-0 kubenswrapper[4119]: I0216 20:55:04.570111 4119 flags.go:64] FLAG: --container-log-max-files="5"
Feb 16 20:55:04.572253 master-0 kubenswrapper[4119]: I0216 20:55:04.570126 4119 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 16 20:55:04.572253 master-0 kubenswrapper[4119]: I0216 20:55:04.570139 4119 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 16 20:55:04.572253 master-0 kubenswrapper[4119]: I0216 20:55:04.570150 4119 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 16 20:55:04.572253 master-0 kubenswrapper[4119]: I0216 20:55:04.570162 4119 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 16 20:55:04.572253 master-0 kubenswrapper[4119]: I0216 20:55:04.570175 4119 flags.go:64] FLAG: --contention-profiling="false"
Feb 16 20:55:04.572253 master-0 kubenswrapper[4119]: I0216 20:55:04.570187 4119 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 16 20:55:04.572253 master-0 kubenswrapper[4119]: I0216 20:55:04.570199 4119 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 16 20:55:04.572253 master-0 kubenswrapper[4119]: I0216 20:55:04.570211 4119 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 16 20:55:04.572253 master-0 kubenswrapper[4119]: I0216 20:55:04.570222 4119 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 16 20:55:04.572253 master-0 kubenswrapper[4119]: I0216 20:55:04.570235 4119 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 16 20:55:04.572253 master-0 kubenswrapper[4119]: I0216 20:55:04.570247 4119 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 16 20:55:04.572253 master-0 kubenswrapper[4119]: I0216 20:55:04.570259 4119 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 16 20:55:04.572253 master-0 kubenswrapper[4119]: I0216 20:55:04.570292 4119 flags.go:64] FLAG: --enable-load-reader="false"
Feb 16 20:55:04.572253 master-0 kubenswrapper[4119]: I0216 20:55:04.570303 4119 flags.go:64] FLAG: --enable-server="true"
Feb 16 20:55:04.572253 master-0 kubenswrapper[4119]: I0216 20:55:04.570315 4119 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570337 4119 flags.go:64] FLAG: --event-burst="100"
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570350 4119 flags.go:64] FLAG: --event-qps="50"
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570362 4119 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570375 4119 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570387 4119 flags.go:64] FLAG: --eviction-hard=""
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570400 4119 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570413 4119 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570424 4119 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570435 4119 flags.go:64] FLAG: --eviction-soft=""
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570446 4119 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570458 4119 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570468 4119 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570480 4119 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570489 4119 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570498 4119 flags.go:64] FLAG: --fail-swap-on="true"
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570507 4119 flags.go:64] FLAG: --feature-gates=""
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570519 4119 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570529 4119 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570538 4119 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570547 4119 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570557 4119 flags.go:64] FLAG: --healthz-port="10248"
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570566 4119 flags.go:64] FLAG: --help="false"
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570576 4119 flags.go:64] FLAG: --hostname-override=""
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570585 4119 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570594 4119 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 16 20:55:04.573404 master-0 kubenswrapper[4119]: I0216 20:55:04.570603 4119 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 16 20:55:04.574646 master-0 kubenswrapper[4119]: I0216 20:55:04.570612 4119 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 16 20:55:04.574646 master-0 kubenswrapper[4119]: I0216 20:55:04.570621 4119 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 16 20:55:04.574646 master-0 kubenswrapper[4119]: I0216 20:55:04.570630 4119 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 16 20:55:04.574646 master-0 kubenswrapper[4119]: I0216 20:55:04.570639 4119 flags.go:64] FLAG: --image-service-endpoint=""
Feb 16 20:55:04.574646 master-0 kubenswrapper[4119]: I0216 20:55:04.570680 4119 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 16 20:55:04.574646 master-0 kubenswrapper[4119]: I0216 20:55:04.570689 4119 flags.go:64] FLAG: --kube-api-burst="100"
Feb 16 20:55:04.574646 master-0 kubenswrapper[4119]: I0216 20:55:04.570699 4119 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 16 20:55:04.574646 master-0 kubenswrapper[4119]: I0216 20:55:04.570724 4119 flags.go:64] FLAG: --kube-api-qps="50"
Feb 16 20:55:04.574646 master-0 kubenswrapper[4119]: I0216 20:55:04.570735 4119 flags.go:64] FLAG: --kube-reserved=""
Feb 16 20:55:04.574646 master-0 kubenswrapper[4119]: I0216 20:55:04.570744 4119 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 16 20:55:04.574646 master-0 kubenswrapper[4119]: I0216 20:55:04.570753 4119 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 16 20:55:04.574646 master-0 kubenswrapper[4119]: I0216 20:55:04.570762 4119 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 16 20:55:04.574646 master-0 kubenswrapper[4119]: I0216 20:55:04.570771 4119 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 16 20:55:04.574646 master-0 kubenswrapper[4119]: I0216 20:55:04.570780 4119 flags.go:64] FLAG: --lock-file=""
Feb 16 20:55:04.574646 master-0 kubenswrapper[4119]: I0216 20:55:04.570789 4119 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 16 20:55:04.574646 master-0 kubenswrapper[4119]: I0216 20:55:04.570800 4119 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 16 20:55:04.574646 master-0 kubenswrapper[4119]: I0216 20:55:04.570813 4119 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 16 20:55:04.574646 master-0 kubenswrapper[4119]: I0216 20:55:04.570828 4119 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 16 20:55:04.574646 master-0 kubenswrapper[4119]: I0216 20:55:04.570837 4119 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 16 20:55:04.574646 master-0 kubenswrapper[4119]: I0216 20:55:04.570846 4119 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 16 20:55:04.574646 master-0 kubenswrapper[4119]: I0216 20:55:04.570855 4119 flags.go:64] FLAG: --logging-format="text"
Feb 16 20:55:04.574646 master-0 kubenswrapper[4119]: I0216 20:55:04.570864 4119 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 16 20:55:04.574646 master-0 kubenswrapper[4119]: I0216 20:55:04.570874 4119 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 16 20:55:04.574646 master-0 kubenswrapper[4119]: I0216 20:55:04.570883 4119 flags.go:64] FLAG: --manifest-url=""
Feb 16 20:55:04.574646 master-0 kubenswrapper[4119]: I0216 20:55:04.570892 4119 flags.go:64] FLAG: --manifest-url-header=""
Feb 16 20:55:04.575832 master-0 kubenswrapper[4119]: I0216 20:55:04.570904 4119 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 16 20:55:04.575832 master-0 kubenswrapper[4119]: I0216 20:55:04.570913 4119 flags.go:64] FLAG: --max-open-files="1000000"
Feb 16 20:55:04.575832 master-0 kubenswrapper[4119]: I0216 20:55:04.570924 4119 flags.go:64] FLAG: --max-pods="110"
Feb 16 20:55:04.575832 master-0 kubenswrapper[4119]: I0216 20:55:04.570933 4119 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 16 20:55:04.575832 master-0 kubenswrapper[4119]: I0216 20:55:04.570942 4119 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 16 20:55:04.575832 master-0 kubenswrapper[4119]: I0216 20:55:04.570951 4119 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 16 20:55:04.575832 master-0 kubenswrapper[4119]: I0216 20:55:04.570960 4119 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 16 20:55:04.575832 master-0 kubenswrapper[4119]: I0216 20:55:04.570969 4119 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 16 20:55:04.575832 master-0 kubenswrapper[4119]: I0216 20:55:04.570979 4119 flags.go:64] FLAG: --node-ip="192.168.32.10"
Feb 16 20:55:04.575832 master-0 kubenswrapper[4119]: I0216 20:55:04.570988 4119 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 16 20:55:04.575832 master-0 kubenswrapper[4119]: I0216 20:55:04.571008 4119 flags.go:64] FLAG: --node-status-max-images="50"
Feb 16 20:55:04.575832 master-0 kubenswrapper[4119]: I0216 20:55:04.571017 4119 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 16 20:55:04.575832 master-0 kubenswrapper[4119]: I0216 20:55:04.571026 4119 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 16 20:55:04.575832 master-0 kubenswrapper[4119]: I0216 20:55:04.571036 4119 flags.go:64] FLAG: --pod-cidr=""
Feb 16 20:55:04.575832 master-0 kubenswrapper[4119]: I0216 20:55:04.571044 4119 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1593b6aac7bb18c1bbb5d41693e8b8c7f0c0410fcc09e15de52d8bd53e356541"
Feb 16 20:55:04.575832 master-0 kubenswrapper[4119]: I0216 20:55:04.571065 4119 flags.go:64] FLAG: --pod-manifest-path=""
Feb 16 20:55:04.575832 master-0 kubenswrapper[4119]: I0216 20:55:04.571074 4119 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 16 20:55:04.575832 master-0 kubenswrapper[4119]: I0216 20:55:04.571083 4119 flags.go:64] FLAG: --pods-per-core="0"
Feb 16 20:55:04.575832 master-0 kubenswrapper[4119]: I0216 20:55:04.571098 4119 flags.go:64] FLAG: --port="10250"
Feb 16 20:55:04.575832 master-0 kubenswrapper[4119]: I0216 20:55:04.571107 4119 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 16 20:55:04.575832 master-0 kubenswrapper[4119]: I0216 20:55:04.571116 4119 flags.go:64] FLAG: --provider-id=""
Feb 16 20:55:04.575832 master-0 kubenswrapper[4119]: I0216 20:55:04.571125 4119 flags.go:64] FLAG: --qos-reserved=""
Feb 16 20:55:04.575832 master-0 kubenswrapper[4119]: I0216 20:55:04.571134 4119 flags.go:64] FLAG: --read-only-port="10255"
Feb 16 20:55:04.575832 master-0 kubenswrapper[4119]: I0216 20:55:04.571144 4119 flags.go:64] FLAG: --register-node="true"
Feb 16 20:55:04.576903 master-0 kubenswrapper[4119]: I0216 20:55:04.571156 4119 flags.go:64] FLAG: --register-schedulable="true"
Feb 16 20:55:04.576903 master-0 kubenswrapper[4119]: I0216 20:55:04.571165 4119 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 16 20:55:04.576903 master-0 kubenswrapper[4119]: I0216 20:55:04.571179 4119 flags.go:64] FLAG: --registry-burst="10"
Feb 16 20:55:04.576903 master-0 kubenswrapper[4119]: I0216 20:55:04.571188 4119 flags.go:64] FLAG: --registry-qps="5"
Feb 16 20:55:04.576903 master-0 kubenswrapper[4119]: I0216 20:55:04.571197 4119 flags.go:64] FLAG: --reserved-cpus=""
Feb 16 20:55:04.576903 master-0 kubenswrapper[4119]: I0216 20:55:04.571206 4119 flags.go:64] FLAG: --reserved-memory=""
Feb 16 20:55:04.576903 master-0 kubenswrapper[4119]: I0216 20:55:04.571217 4119 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 16 20:55:04.576903 master-0 kubenswrapper[4119]: I0216 20:55:04.571225 4119 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 16 20:55:04.576903 master-0 kubenswrapper[4119]: I0216 20:55:04.571235 4119 flags.go:64] FLAG: --rotate-certificates="false"
Feb 16 20:55:04.576903 master-0 kubenswrapper[4119]: I0216 20:55:04.571244 4119 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 16 20:55:04.576903 master-0 kubenswrapper[4119]: I0216 20:55:04.571253 4119 flags.go:64] FLAG: --runonce="false"
Feb 16 20:55:04.576903 master-0 kubenswrapper[4119]: I0216 20:55:04.571262 4119 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 16 20:55:04.576903 master-0 kubenswrapper[4119]: I0216 20:55:04.571271 4119 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 16 20:55:04.576903 master-0 kubenswrapper[4119]: I0216 20:55:04.571282 4119 flags.go:64] FLAG: --seccomp-default="false"
Feb 16 20:55:04.576903 master-0 kubenswrapper[4119]: I0216 20:55:04.571292 4119 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 16 20:55:04.576903 master-0 kubenswrapper[4119]: I0216 20:55:04.571302 4119 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 16 20:55:04.576903 master-0 kubenswrapper[4119]: I0216 20:55:04.571312 4119 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 16 20:55:04.576903 master-0 kubenswrapper[4119]: I0216 20:55:04.571322 4119 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 16 20:55:04.576903 master-0 kubenswrapper[4119]: I0216 20:55:04.571332 4119 flags.go:64] FLAG: --storage-driver-password="root"
Feb 16 20:55:04.576903 master-0 kubenswrapper[4119]: I0216 20:55:04.571341 4119 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 16 20:55:04.576903 master-0 kubenswrapper[4119]: I0216 20:55:04.571351 4119 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 16 20:55:04.576903 master-0 kubenswrapper[4119]: I0216 20:55:04.571359 4119 flags.go:64] FLAG: --storage-driver-user="root"
Feb 16 20:55:04.576903 master-0 kubenswrapper[4119]: I0216 20:55:04.571368 4119 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 16 20:55:04.576903 master-0 kubenswrapper[4119]: I0216 20:55:04.571377 4119 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 16 20:55:04.576903 master-0 kubenswrapper[4119]: I0216 20:55:04.571387 4119 flags.go:64] FLAG: --system-cgroups=""
Feb 16 20:55:04.578184 master-0 kubenswrapper[4119]: I0216 20:55:04.571395 4119 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Feb 16 20:55:04.578184 master-0 kubenswrapper[4119]: I0216 20:55:04.571410 4119 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 16 20:55:04.578184 master-0 kubenswrapper[4119]: I0216 20:55:04.571419 4119 flags.go:64] FLAG: --tls-cert-file=""
Feb 16 20:55:04.578184 master-0 kubenswrapper[4119]: I0216 20:55:04.571428 4119 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 16 20:55:04.578184 master-0 kubenswrapper[4119]: I0216 20:55:04.571444 4119 flags.go:64] FLAG: --tls-min-version=""
Feb 16 20:55:04.578184 master-0 kubenswrapper[4119]: I0216 20:55:04.571455 4119 flags.go:64] FLAG: --tls-private-key-file=""
Feb 16 20:55:04.578184 master-0 kubenswrapper[4119]: I0216 20:55:04.571464 4119 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 16 20:55:04.578184 master-0 kubenswrapper[4119]: I0216 20:55:04.571479 4119 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 16 20:55:04.578184 master-0 kubenswrapper[4119]: I0216 20:55:04.571488 4119 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 16 20:55:04.578184 master-0 kubenswrapper[4119]: I0216 20:55:04.571497 4119 flags.go:64] FLAG: --v="2"
Feb 16 20:55:04.578184 master-0 kubenswrapper[4119]: I0216 20:55:04.571509 4119 flags.go:64] FLAG: --version="false"
Feb 16 20:55:04.578184 master-0 kubenswrapper[4119]: I0216 20:55:04.571520 4119 flags.go:64] FLAG: --vmodule=""
Feb 16 20:55:04.578184 master-0 kubenswrapper[4119]: I0216 20:55:04.571530 4119 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 16 20:55:04.578184 master-0 kubenswrapper[4119]: I0216 20:55:04.571539 4119 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 16 20:55:04.578184 master-0 kubenswrapper[4119]: W0216 20:55:04.571782 4119 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 20:55:04.578184 master-0 kubenswrapper[4119]: W0216 20:55:04.571796 4119 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 20:55:04.578184 master-0 kubenswrapper[4119]: W0216 20:55:04.571806 4119 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 20:55:04.578184 master-0 kubenswrapper[4119]: W0216 20:55:04.571816 4119 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 20:55:04.578184 master-0 kubenswrapper[4119]: W0216 20:55:04.571825 4119 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 20:55:04.578184 master-0 kubenswrapper[4119]: W0216 20:55:04.571834 4119 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 20:55:04.578184 master-0 kubenswrapper[4119]: W0216 20:55:04.571845 4119 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 20:55:04.578184 master-0 kubenswrapper[4119]: W0216 20:55:04.571854 4119 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 20:55:04.578184 master-0 kubenswrapper[4119]: W0216 20:55:04.571863 4119 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 20:55:04.579311 master-0 kubenswrapper[4119]: W0216 20:55:04.571871 4119 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 20:55:04.579311 master-0 kubenswrapper[4119]: W0216 20:55:04.571880 4119 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 20:55:04.579311 master-0 kubenswrapper[4119]: W0216 20:55:04.571888 4119 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 20:55:04.579311 master-0 kubenswrapper[4119]: W0216 20:55:04.571896 4119 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 20:55:04.579311 master-0 kubenswrapper[4119]: W0216 20:55:04.571904 4119 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 20:55:04.579311 master-0 kubenswrapper[4119]: W0216 20:55:04.571912 4119 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 20:55:04.579311 master-0 kubenswrapper[4119]: W0216 20:55:04.571920 4119 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 20:55:04.579311 master-0 kubenswrapper[4119]: W0216 20:55:04.571927 4119 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 20:55:04.579311 master-0 kubenswrapper[4119]: W0216 20:55:04.571936 4119 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 20:55:04.579311 master-0 kubenswrapper[4119]: W0216 20:55:04.571944 4119 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 20:55:04.579311 master-0 kubenswrapper[4119]: W0216 20:55:04.571952 4119 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 20:55:04.579311 master-0 kubenswrapper[4119]: W0216 20:55:04.571960 4119 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 20:55:04.579311 master-0 kubenswrapper[4119]: W0216 20:55:04.571968 4119 feature_gate.go:330] unrecognized feature gate:
ExternalOIDC Feb 16 20:55:04.579311 master-0 kubenswrapper[4119]: W0216 20:55:04.571976 4119 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 20:55:04.579311 master-0 kubenswrapper[4119]: W0216 20:55:04.571984 4119 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 20:55:04.579311 master-0 kubenswrapper[4119]: W0216 20:55:04.571992 4119 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 20:55:04.579311 master-0 kubenswrapper[4119]: W0216 20:55:04.572003 4119 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 20:55:04.579311 master-0 kubenswrapper[4119]: W0216 20:55:04.572011 4119 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 20:55:04.579311 master-0 kubenswrapper[4119]: W0216 20:55:04.572032 4119 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 20:55:04.579311 master-0 kubenswrapper[4119]: W0216 20:55:04.572040 4119 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 20:55:04.579311 master-0 kubenswrapper[4119]: W0216 20:55:04.572048 4119 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 20:55:04.580256 master-0 kubenswrapper[4119]: W0216 20:55:04.572056 4119 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 20:55:04.580256 master-0 kubenswrapper[4119]: W0216 20:55:04.572064 4119 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 20:55:04.580256 master-0 kubenswrapper[4119]: W0216 20:55:04.572072 4119 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 20:55:04.580256 master-0 kubenswrapper[4119]: W0216 20:55:04.572083 4119 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 20:55:04.580256 master-0 kubenswrapper[4119]: W0216 20:55:04.572091 4119 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 20:55:04.580256 master-0 kubenswrapper[4119]: W0216 20:55:04.572099 4119 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 20:55:04.580256 master-0 kubenswrapper[4119]: W0216 20:55:04.572107 4119 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 20:55:04.580256 master-0 kubenswrapper[4119]: W0216 20:55:04.572118 4119 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 20:55:04.580256 master-0 kubenswrapper[4119]: W0216 20:55:04.572127 4119 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 20:55:04.580256 master-0 kubenswrapper[4119]: W0216 20:55:04.572137 4119 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 20:55:04.580256 master-0 kubenswrapper[4119]: W0216 20:55:04.572148 4119 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 20:55:04.580256 master-0 kubenswrapper[4119]: W0216 20:55:04.572157 4119 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 20:55:04.580256 master-0 kubenswrapper[4119]: W0216 20:55:04.572165 4119 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 20:55:04.580256 master-0 kubenswrapper[4119]: W0216 20:55:04.572174 4119 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 20:55:04.580256 master-0 kubenswrapper[4119]: W0216 20:55:04.572184 4119 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 20:55:04.580256 master-0 kubenswrapper[4119]: W0216 20:55:04.572192 4119 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 20:55:04.580256 master-0 kubenswrapper[4119]: W0216 20:55:04.572201 4119 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 20:55:04.580256 master-0 kubenswrapper[4119]: W0216 20:55:04.572209 4119 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 20:55:04.581125 master-0 kubenswrapper[4119]: W0216 20:55:04.572218 4119 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 20:55:04.581125 master-0 kubenswrapper[4119]: W0216 20:55:04.572226 4119 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 20:55:04.581125 master-0 kubenswrapper[4119]: W0216 20:55:04.572234 4119 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 20:55:04.581125 master-0 kubenswrapper[4119]: W0216 20:55:04.572242 4119 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 20:55:04.581125 master-0 kubenswrapper[4119]: W0216 20:55:04.572250 4119 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 20:55:04.581125 master-0 kubenswrapper[4119]: W0216 20:55:04.572258 4119 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 20:55:04.581125 master-0 kubenswrapper[4119]: W0216 20:55:04.572269 4119 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 20:55:04.581125 master-0 kubenswrapper[4119]: W0216 20:55:04.572279 4119 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 20:55:04.581125 master-0 kubenswrapper[4119]: W0216 20:55:04.572289 4119 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 20:55:04.581125 master-0 kubenswrapper[4119]: W0216 20:55:04.572303 4119 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 20:55:04.581125 master-0 kubenswrapper[4119]: W0216 20:55:04.572314 4119 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 20:55:04.581125 master-0 kubenswrapper[4119]: W0216 20:55:04.572324 4119 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 20:55:04.581125 master-0 kubenswrapper[4119]: W0216 20:55:04.572334 4119 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 20:55:04.581125 master-0 kubenswrapper[4119]: W0216 20:55:04.572344 4119 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 20:55:04.581125 master-0 kubenswrapper[4119]: W0216 20:55:04.572354 4119 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 20:55:04.581125 master-0 kubenswrapper[4119]: W0216 20:55:04.572367 4119 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 20:55:04.581125 master-0 kubenswrapper[4119]: W0216 20:55:04.572377 4119 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 20:55:04.581125 master-0 kubenswrapper[4119]: W0216 20:55:04.572388 4119 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 20:55:04.581125 master-0 kubenswrapper[4119]: W0216 20:55:04.572401 4119 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 20:55:04.581125 master-0 kubenswrapper[4119]: W0216 20:55:04.572412 4119 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 20:55:04.582442 master-0 kubenswrapper[4119]: W0216 20:55:04.572422 4119 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 20:55:04.582442 master-0 kubenswrapper[4119]: W0216 20:55:04.572433 4119 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 20:55:04.582442 master-0 kubenswrapper[4119]: W0216 20:55:04.572444 4119 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 20:55:04.582442 master-0 kubenswrapper[4119]: W0216 20:55:04.572453 4119 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 20:55:04.582442 master-0 kubenswrapper[4119]: I0216 20:55:04.572477 4119 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 20:55:04.587157 master-0 kubenswrapper[4119]: I0216 20:55:04.587042 4119 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Feb 16 20:55:04.587157 master-0 kubenswrapper[4119]: I0216 20:55:04.587109 4119 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 16 20:55:04.587562 master-0 kubenswrapper[4119]: W0216 20:55:04.587245 4119 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 20:55:04.587562 master-0 kubenswrapper[4119]: W0216 20:55:04.587261 4119 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 20:55:04.587562 master-0 kubenswrapper[4119]: W0216 20:55:04.587271 4119 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 20:55:04.587562 master-0 kubenswrapper[4119]: W0216 20:55:04.587279 4119 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 20:55:04.587562 master-0 kubenswrapper[4119]: W0216 20:55:04.587287 4119 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 20:55:04.587562 master-0 kubenswrapper[4119]: W0216 20:55:04.587296 4119 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 20:55:04.587562 master-0 kubenswrapper[4119]: W0216 20:55:04.587303 4119 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 20:55:04.587562 master-0 kubenswrapper[4119]: W0216 20:55:04.587314 4119 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 20:55:04.587562 master-0 kubenswrapper[4119]: W0216 20:55:04.587322 4119 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 20:55:04.587562 master-0 kubenswrapper[4119]: W0216 20:55:04.587334 4119 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 20:55:04.587562 master-0 kubenswrapper[4119]: W0216 20:55:04.587349 4119 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 20:55:04.587562 master-0 kubenswrapper[4119]: W0216 20:55:04.587358 4119 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 20:55:04.587562 master-0 kubenswrapper[4119]: W0216 20:55:04.587367 4119 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 20:55:04.587562 master-0 kubenswrapper[4119]: W0216 20:55:04.587375 4119 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 20:55:04.587562 master-0 kubenswrapper[4119]: W0216 20:55:04.587384 4119 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 20:55:04.587562 master-0 kubenswrapper[4119]: W0216 20:55:04.587392 4119 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 20:55:04.587562 master-0 kubenswrapper[4119]: W0216 20:55:04.587400 4119 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 20:55:04.587562 master-0 kubenswrapper[4119]: W0216 20:55:04.587408 4119 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 20:55:04.587562 master-0 kubenswrapper[4119]: W0216 20:55:04.587418 4119 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 20:55:04.587562 master-0 kubenswrapper[4119]: W0216 20:55:04.587426 4119 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 20:55:04.588223 master-0 kubenswrapper[4119]: W0216 20:55:04.587434 4119 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 20:55:04.588223 master-0 kubenswrapper[4119]: W0216 20:55:04.587442 4119 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 20:55:04.588223 master-0 kubenswrapper[4119]: W0216 20:55:04.587450 4119 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 20:55:04.588223 master-0 kubenswrapper[4119]: W0216 20:55:04.587458 4119 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 20:55:04.588223 master-0 kubenswrapper[4119]: W0216 20:55:04.587466 4119 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 20:55:04.588223 master-0 kubenswrapper[4119]: W0216 20:55:04.587474 4119 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 20:55:04.588223 master-0 kubenswrapper[4119]: W0216 20:55:04.587482 4119 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 20:55:04.588223 master-0 kubenswrapper[4119]: W0216 20:55:04.587490 4119 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 20:55:04.588223 master-0 kubenswrapper[4119]: W0216 20:55:04.587497 4119 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 20:55:04.588223 master-0 kubenswrapper[4119]: W0216 20:55:04.587505 4119 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 20:55:04.588223 master-0 kubenswrapper[4119]: W0216 20:55:04.587512 4119 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 20:55:04.588223 master-0 kubenswrapper[4119]: W0216 20:55:04.587520 4119 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 20:55:04.588223 master-0 kubenswrapper[4119]: W0216 20:55:04.587528 4119 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 20:55:04.588223 master-0 kubenswrapper[4119]: W0216 20:55:04.587541 4119 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 20:55:04.588223 master-0 kubenswrapper[4119]: W0216 20:55:04.587552 4119 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 20:55:04.588223 master-0 kubenswrapper[4119]: W0216 20:55:04.587560 4119 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 20:55:04.588223 master-0 kubenswrapper[4119]: W0216 20:55:04.587569 4119 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 20:55:04.588223 master-0 kubenswrapper[4119]: W0216 20:55:04.587578 4119 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 20:55:04.588223 master-0 kubenswrapper[4119]: W0216 20:55:04.587586 4119 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 20:55:04.588223 master-0 kubenswrapper[4119]: W0216 20:55:04.587595 4119 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 20:55:04.588844 master-0 kubenswrapper[4119]: W0216 20:55:04.587606 4119 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 20:55:04.588844 master-0 kubenswrapper[4119]: W0216 20:55:04.587616 4119 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 20:55:04.588844 master-0 kubenswrapper[4119]: W0216 20:55:04.587625 4119 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 20:55:04.588844 master-0 kubenswrapper[4119]: W0216 20:55:04.587633 4119 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 20:55:04.588844 master-0 kubenswrapper[4119]: W0216 20:55:04.587642 4119 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 20:55:04.588844 master-0 kubenswrapper[4119]: W0216 20:55:04.587681 4119 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 20:55:04.588844 master-0 kubenswrapper[4119]: W0216 20:55:04.587692 4119 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 20:55:04.588844 master-0 kubenswrapper[4119]: W0216 20:55:04.587701 4119 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 20:55:04.588844 master-0 kubenswrapper[4119]: W0216 20:55:04.587709 4119 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 20:55:04.588844 master-0 kubenswrapper[4119]: W0216 20:55:04.587720 4119 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 20:55:04.588844 master-0 kubenswrapper[4119]: W0216 20:55:04.587729 4119 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 20:55:04.588844 master-0 kubenswrapper[4119]: W0216 20:55:04.587738 4119 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 20:55:04.588844 master-0 kubenswrapper[4119]: W0216 20:55:04.587745 4119 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 20:55:04.588844 master-0 kubenswrapper[4119]: W0216 20:55:04.587753 4119 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 20:55:04.588844 master-0 kubenswrapper[4119]: W0216 20:55:04.587761 4119 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 20:55:04.588844 master-0 kubenswrapper[4119]: W0216 20:55:04.587769 4119 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 20:55:04.588844 master-0 kubenswrapper[4119]: W0216 20:55:04.587777 4119 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 20:55:04.588844 master-0 kubenswrapper[4119]: W0216 20:55:04.587785 4119 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 20:55:04.588844 master-0 kubenswrapper[4119]: W0216 20:55:04.587792 4119 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 20:55:04.589596 master-0 kubenswrapper[4119]: W0216 20:55:04.587800 4119 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 20:55:04.589596 master-0 kubenswrapper[4119]: W0216 20:55:04.587808 4119 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 20:55:04.589596 master-0 kubenswrapper[4119]: W0216 20:55:04.587816 4119 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 20:55:04.589596 master-0 kubenswrapper[4119]: W0216 20:55:04.587824 4119 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 20:55:04.589596 master-0 kubenswrapper[4119]: W0216 20:55:04.587831 4119 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 20:55:04.589596 master-0 kubenswrapper[4119]: W0216 20:55:04.587839 4119 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 20:55:04.589596 master-0 kubenswrapper[4119]: W0216 20:55:04.587848 4119 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 20:55:04.589596 master-0 kubenswrapper[4119]: W0216 20:55:04.587857 4119 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 20:55:04.589596 master-0 kubenswrapper[4119]: W0216 20:55:04.587864 4119 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 20:55:04.589596 master-0 kubenswrapper[4119]: W0216 20:55:04.587874 4119 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 20:55:04.589596 master-0 kubenswrapper[4119]: W0216 20:55:04.587885 4119 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 20:55:04.589596 master-0 kubenswrapper[4119]: W0216 20:55:04.587928 4119 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 20:55:04.589596 master-0 kubenswrapper[4119]: W0216 20:55:04.587945 4119 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 20:55:04.589596 master-0 kubenswrapper[4119]: I0216 20:55:04.587965 4119 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 20:55:04.589596 master-0 kubenswrapper[4119]: W0216 20:55:04.588231 4119 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 20:55:04.590210 master-0 kubenswrapper[4119]: W0216 20:55:04.588249 4119 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 20:55:04.590210 master-0 kubenswrapper[4119]: W0216 20:55:04.588261 4119 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 20:55:04.590210 master-0 kubenswrapper[4119]: W0216 20:55:04.588270 4119 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 20:55:04.590210 master-0 kubenswrapper[4119]: W0216 20:55:04.588278 4119 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 20:55:04.590210 master-0 kubenswrapper[4119]: W0216 20:55:04.588286 4119 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 20:55:04.590210 master-0 kubenswrapper[4119]: W0216 20:55:04.588297 4119 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 20:55:04.590210 master-0 kubenswrapper[4119]: W0216 20:55:04.588310 4119 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 20:55:04.590210 master-0 kubenswrapper[4119]: W0216 20:55:04.588320 4119 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 20:55:04.590210 master-0 kubenswrapper[4119]: W0216 20:55:04.588330 4119 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 20:55:04.590210 master-0 kubenswrapper[4119]: W0216 20:55:04.588339 4119 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 20:55:04.590210 master-0 kubenswrapper[4119]: W0216 20:55:04.588349 4119 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 20:55:04.590210 master-0 kubenswrapper[4119]: W0216 20:55:04.588359 4119 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 20:55:04.590210 master-0 kubenswrapper[4119]: W0216 20:55:04.588368 4119 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 20:55:04.590210 master-0 kubenswrapper[4119]: W0216 20:55:04.588379 4119 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 20:55:04.590210 master-0 kubenswrapper[4119]: W0216 20:55:04.588387 4119 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 20:55:04.590210 master-0 kubenswrapper[4119]: W0216 20:55:04.588397 4119 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 20:55:04.590210 master-0 kubenswrapper[4119]: W0216 20:55:04.588407 4119 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 20:55:04.590210 master-0 kubenswrapper[4119]: W0216 20:55:04.588416 4119 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 20:55:04.590210 master-0 kubenswrapper[4119]: W0216 20:55:04.588427 4119 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 20:55:04.591412 master-0 kubenswrapper[4119]: W0216 20:55:04.588435 4119 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 20:55:04.591412 master-0 kubenswrapper[4119]: W0216 20:55:04.588443 4119 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 20:55:04.591412 master-0 kubenswrapper[4119]: W0216 20:55:04.588451 4119 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 20:55:04.591412 master-0 kubenswrapper[4119]: W0216 20:55:04.588459 4119 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 20:55:04.591412 master-0 kubenswrapper[4119]: W0216 20:55:04.588466 4119 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 20:55:04.591412 master-0 kubenswrapper[4119]: W0216 20:55:04.588475 4119 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 20:55:04.591412 master-0 kubenswrapper[4119]: W0216 20:55:04.588483 4119 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 20:55:04.591412 master-0 kubenswrapper[4119]: W0216 20:55:04.588490 4119 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 20:55:04.591412 master-0 kubenswrapper[4119]: W0216 20:55:04.588498 4119 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 20:55:04.591412 master-0 kubenswrapper[4119]: W0216 20:55:04.588506 4119 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 20:55:04.591412 master-0 kubenswrapper[4119]: W0216 20:55:04.588513 4119 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 20:55:04.591412 master-0 kubenswrapper[4119]: W0216 20:55:04.588521 4119 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 20:55:04.591412 master-0 kubenswrapper[4119]: W0216 20:55:04.588528 4119 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 20:55:04.591412 master-0 kubenswrapper[4119]: W0216 20:55:04.588537 4119 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 20:55:04.591412 master-0 kubenswrapper[4119]: W0216 20:55:04.588547 4119 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 20:55:04.591412 master-0 kubenswrapper[4119]: W0216 20:55:04.588554 4119 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 20:55:04.591412 master-0 kubenswrapper[4119]: W0216 20:55:04.588562 4119 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 20:55:04.591412 master-0 kubenswrapper[4119]: W0216 20:55:04.588570 4119 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 20:55:04.591412 master-0 kubenswrapper[4119]: W0216 20:55:04.588578 4119 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 20:55:04.591412 master-0 kubenswrapper[4119]: W0216 20:55:04.588586 4119 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 20:55:04.592029 master-0 kubenswrapper[4119]: W0216 20:55:04.588594 4119 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 20:55:04.592029 master-0 kubenswrapper[4119]: W0216 20:55:04.588601 4119 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 20:55:04.592029 master-0 kubenswrapper[4119]: W0216 20:55:04.588609 4119 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 20:55:04.592029 master-0 kubenswrapper[4119]: W0216 20:55:04.588616 4119 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 20:55:04.592029 master-0 kubenswrapper[4119]: W0216 20:55:04.588625 4119 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 20:55:04.592029 master-0 kubenswrapper[4119]: W0216 20:55:04.588634 4119 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 20:55:04.592029 master-0 kubenswrapper[4119]: W0216 20:55:04.588642 4119 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 20:55:04.592029 master-0 kubenswrapper[4119]: W0216 20:55:04.588681 4119 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 20:55:04.592029 master-0 kubenswrapper[4119]: W0216 20:55:04.588690 4119 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 20:55:04.592029 master-0 kubenswrapper[4119]: W0216 20:55:04.588697 4119 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 20:55:04.592029 master-0 kubenswrapper[4119]: W0216 20:55:04.588705 4119 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 20:55:04.592029 master-0 kubenswrapper[4119]: W0216 20:55:04.588713 4119 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 20:55:04.592029 master-0 kubenswrapper[4119]: W0216 20:55:04.588721 4119 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 20:55:04.592029 master-0 kubenswrapper[4119]: W0216 20:55:04.588728 4119 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 20:55:04.592029 master-0 kubenswrapper[4119]: W0216 20:55:04.588736 4119 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 20:55:04.592029 master-0 kubenswrapper[4119]: W0216 20:55:04.588746 4119 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 20:55:04.592029 master-0 kubenswrapper[4119]: W0216 20:55:04.588757 4119 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 20:55:04.592029 master-0 kubenswrapper[4119]: W0216 20:55:04.588766 4119 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 20:55:04.592029 master-0 kubenswrapper[4119]: W0216 20:55:04.588774 4119 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 20:55:04.592029 master-0 kubenswrapper[4119]: W0216 20:55:04.588785 4119 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 20:55:04.592596 master-0 kubenswrapper[4119]: W0216 20:55:04.588793 4119 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 20:55:04.592596 master-0 kubenswrapper[4119]: W0216 20:55:04.588801 4119 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 20:55:04.592596 master-0 kubenswrapper[4119]: W0216 20:55:04.588809 4119 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 20:55:04.592596 master-0 kubenswrapper[4119]: W0216 20:55:04.588817 4119 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 20:55:04.592596 master-0 kubenswrapper[4119]: W0216 20:55:04.588825 4119 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 20:55:04.592596 master-0 kubenswrapper[4119]: W0216 20:55:04.588833 4119 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 20:55:04.592596 master-0 kubenswrapper[4119]: W0216 20:55:04.588840 4119 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 20:55:04.592596 master-0 kubenswrapper[4119]: W0216 20:55:04.588849 4119 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 20:55:04.592596 master-0 kubenswrapper[4119]: W0216 20:55:04.588857 4119 feature_gate.go:330] unrecognized feature gate: 
ManagedBootImagesAWS Feb 16 20:55:04.592596 master-0 kubenswrapper[4119]: W0216 20:55:04.588869 4119 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 20:55:04.592596 master-0 kubenswrapper[4119]: W0216 20:55:04.588878 4119 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 20:55:04.592596 master-0 kubenswrapper[4119]: W0216 20:55:04.588886 4119 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 20:55:04.592596 master-0 kubenswrapper[4119]: I0216 20:55:04.588899 4119 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 20:55:04.592596 master-0 kubenswrapper[4119]: I0216 20:55:04.590451 4119 server.go:940] "Client rotation is on, will bootstrap in background" Feb 16 20:55:04.594122 master-0 kubenswrapper[4119]: I0216 20:55:04.594076 4119 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Feb 16 20:55:04.596272 master-0 kubenswrapper[4119]: I0216 20:55:04.596217 4119 server.go:997] "Starting client certificate rotation" Feb 16 20:55:04.596272 master-0 kubenswrapper[4119]: I0216 20:55:04.596269 4119 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 16 20:55:04.596580 master-0 kubenswrapper[4119]: I0216 20:55:04.596521 4119 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 20:55:04.627804 master-0 kubenswrapper[4119]: 
I0216 20:55:04.627621 4119 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 20:55:04.637434 master-0 kubenswrapper[4119]: I0216 20:55:04.637367 4119 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 20:55:04.638732 master-0 kubenswrapper[4119]: E0216 20:55:04.638600 4119 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:55:04.661599 master-0 kubenswrapper[4119]: I0216 20:55:04.661509 4119 log.go:25] "Validated CRI v1 runtime API" Feb 16 20:55:04.668527 master-0 kubenswrapper[4119]: I0216 20:55:04.668477 4119 log.go:25] "Validated CRI v1 image API" Feb 16 20:55:04.672228 master-0 kubenswrapper[4119]: I0216 20:55:04.672172 4119 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 16 20:55:04.679377 master-0 kubenswrapper[4119]: I0216 20:55:04.679309 4119 fs.go:135] Filesystem UUIDs: map[3d9a04b0-92fb-4350-a5ea-d38e1e45e06e:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Feb 16 20:55:04.679451 master-0 kubenswrapper[4119]: I0216 20:55:04.679365 4119 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}] Feb 16 20:55:04.701790 master-0 kubenswrapper[4119]: I0216 
20:55:04.701329 4119 manager.go:217] Machine: {Timestamp:2026-02-16 20:55:04.699261714 +0000 UTC m=+0.629187752 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2799998 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:5734463887a64126b7ce9bf415a88e99 SystemUUID:57344638-87a6-4126-b7ce-9bf415a88e99 BootID:547c3926-fc12-480e-89e3-8f59492f672a Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:ff:a7:37 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:31:16:05 Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:d2:11:f7:5e:7f:07 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 
Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: 
DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 16 20:55:04.701790 master-0 kubenswrapper[4119]: I0216 20:55:04.701587 4119 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 16 20:55:04.702270 master-0 kubenswrapper[4119]: I0216 20:55:04.701994 4119 manager.go:233] Version: {KernelVersion:5.14.0-427.107.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202601202224-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 16 20:55:04.703926 master-0 kubenswrapper[4119]: I0216 20:55:04.703857 4119 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 16 20:55:04.704149 master-0 kubenswrapper[4119]: I0216 20:55:04.704076 4119 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 16 20:55:04.704399 master-0 kubenswrapper[4119]: I0216 20:55:04.704130 4119 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentag
e":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 16 20:55:04.704496 master-0 kubenswrapper[4119]: I0216 20:55:04.704402 4119 topology_manager.go:138] "Creating topology manager with none policy" Feb 16 20:55:04.704496 master-0 kubenswrapper[4119]: I0216 20:55:04.704417 4119 container_manager_linux.go:303] "Creating device plugin manager" Feb 16 20:55:04.705072 master-0 kubenswrapper[4119]: I0216 20:55:04.705022 4119 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 20:55:04.705072 master-0 kubenswrapper[4119]: I0216 20:55:04.705065 4119 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 20:55:04.705474 master-0 kubenswrapper[4119]: I0216 20:55:04.705422 4119 state_mem.go:36] "Initialized new in-memory state store" Feb 16 20:55:04.706166 master-0 kubenswrapper[4119]: I0216 20:55:04.705887 4119 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 16 20:55:04.713409 master-0 kubenswrapper[4119]: I0216 20:55:04.713313 4119 kubelet.go:418] "Attempting to sync node with API server" Feb 16 20:55:04.713409 master-0 kubenswrapper[4119]: I0216 20:55:04.713429 4119 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 16 20:55:04.713843 master-0 kubenswrapper[4119]: I0216 20:55:04.713480 4119 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 16 20:55:04.713843 master-0 kubenswrapper[4119]: I0216 20:55:04.713502 4119 kubelet.go:324] "Adding apiserver pod source" Feb 16 20:55:04.713843 master-0 
kubenswrapper[4119]: I0216 20:55:04.713520 4119 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 16 20:55:04.718563 master-0 kubenswrapper[4119]: I0216 20:55:04.718507 4119 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-3.rhaos4.18.gite0b87e5.el9" apiVersion="v1" Feb 16 20:55:04.720138 master-0 kubenswrapper[4119]: W0216 20:55:04.720035 4119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 20:55:04.720190 master-0 kubenswrapper[4119]: W0216 20:55:04.720087 4119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 20:55:04.720190 master-0 kubenswrapper[4119]: E0216 20:55:04.720169 4119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:55:04.720264 master-0 kubenswrapper[4119]: E0216 20:55:04.720222 4119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:55:04.721425 master-0 kubenswrapper[4119]: I0216 20:55:04.721376 4119 
kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 16 20:55:04.721840 master-0 kubenswrapper[4119]: I0216 20:55:04.721804 4119 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 16 20:55:04.721887 master-0 kubenswrapper[4119]: I0216 20:55:04.721845 4119 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 16 20:55:04.721887 master-0 kubenswrapper[4119]: I0216 20:55:04.721862 4119 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 16 20:55:04.721887 master-0 kubenswrapper[4119]: I0216 20:55:04.721878 4119 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 16 20:55:04.721999 master-0 kubenswrapper[4119]: I0216 20:55:04.721897 4119 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 16 20:55:04.721999 master-0 kubenswrapper[4119]: I0216 20:55:04.721912 4119 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 16 20:55:04.721999 master-0 kubenswrapper[4119]: I0216 20:55:04.721926 4119 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 16 20:55:04.721999 master-0 kubenswrapper[4119]: I0216 20:55:04.721939 4119 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 16 20:55:04.721999 master-0 kubenswrapper[4119]: I0216 20:55:04.721954 4119 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 16 20:55:04.721999 master-0 kubenswrapper[4119]: I0216 20:55:04.721967 4119 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 16 20:55:04.721999 master-0 kubenswrapper[4119]: I0216 20:55:04.721984 4119 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 16 20:55:04.722214 master-0 kubenswrapper[4119]: I0216 20:55:04.722119 4119 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 16 20:55:04.723313 master-0 
kubenswrapper[4119]: I0216 20:55:04.723267 4119 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 16 20:55:04.724139 master-0 kubenswrapper[4119]: I0216 20:55:04.724098 4119 server.go:1280] "Started kubelet" Feb 16 20:55:04.725601 master-0 kubenswrapper[4119]: I0216 20:55:04.725453 4119 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 16 20:55:04.725601 master-0 kubenswrapper[4119]: I0216 20:55:04.725564 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 20:55:04.725702 master-0 kubenswrapper[4119]: I0216 20:55:04.725491 4119 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 16 20:55:04.725754 master-0 kubenswrapper[4119]: I0216 20:55:04.725732 4119 server_v1.go:47] "podresources" method="list" useActivePods=true Feb 16 20:55:04.726169 master-0 systemd[1]: Started Kubernetes Kubelet. 
Feb 16 20:55:04.727032 master-0 kubenswrapper[4119]: I0216 20:55:04.726402 4119 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 16 20:55:04.727032 master-0 kubenswrapper[4119]: I0216 20:55:04.726834 4119 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 16 20:55:04.727032 master-0 kubenswrapper[4119]: I0216 20:55:04.726859 4119 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 16 20:55:04.727198 master-0 kubenswrapper[4119]: E0216 20:55:04.727140 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:04.727262 master-0 kubenswrapper[4119]: I0216 20:55:04.727229 4119 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 16 20:55:04.727311 master-0 kubenswrapper[4119]: I0216 20:55:04.727262 4119 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 16 20:55:04.727392 master-0 kubenswrapper[4119]: I0216 20:55:04.727344 4119 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Feb 16 20:55:04.727536 master-0 kubenswrapper[4119]: I0216 20:55:04.727472 4119 reconstruct.go:97] "Volume reconstruction finished" Feb 16 20:55:04.727536 master-0 kubenswrapper[4119]: I0216 20:55:04.727513 4119 reconciler.go:26] "Reconciler: start to sync state" Feb 16 20:55:04.728409 master-0 kubenswrapper[4119]: E0216 20:55:04.728265 4119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Feb 16 20:55:04.728409 master-0 kubenswrapper[4119]: W0216 20:55:04.728329 4119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 20:55:04.728502 master-0 kubenswrapper[4119]: E0216 20:55:04.728442 4119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:55:04.730390 master-0 kubenswrapper[4119]: I0216 20:55:04.730350 4119 factory.go:55] Registering systemd factory Feb 16 20:55:04.730390 master-0 kubenswrapper[4119]: I0216 20:55:04.730381 4119 factory.go:221] Registration of the systemd container factory successfully Feb 16 20:55:04.730862 master-0 kubenswrapper[4119]: I0216 20:55:04.730814 4119 factory.go:153] Registering CRI-O factory Feb 16 20:55:04.730862 master-0 kubenswrapper[4119]: I0216 20:55:04.730861 4119 factory.go:221] Registration of the crio container factory successfully Feb 16 20:55:04.730948 master-0 kubenswrapper[4119]: I0216 20:55:04.730828 4119 server.go:449] "Adding debug handlers to kubelet server" Feb 16 20:55:04.731003 master-0 kubenswrapper[4119]: I0216 20:55:04.730971 4119 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 16 20:55:04.731045 master-0 kubenswrapper[4119]: I0216 20:55:04.731023 4119 factory.go:103] Registering Raw factory Feb 16 20:55:04.731091 master-0 kubenswrapper[4119]: I0216 20:55:04.731055 4119 manager.go:1196] Started watching for new ooms in manager Feb 16 20:55:04.736461 master-0 kubenswrapper[4119]: E0216 20:55:04.730134 4119 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.1894d581497fb31c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.724050716 +0000 UTC m=+0.653976774,LastTimestamp:2026-02-16 20:55:04.724050716 +0000 UTC m=+0.653976774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:04.737420 master-0 kubenswrapper[4119]: I0216 20:55:04.736749 4119 manager.go:319] Starting recovery of all containers Feb 16 20:55:04.742314 master-0 kubenswrapper[4119]: E0216 20:55:04.742254 4119 kubelet.go:1495] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Feb 16 20:55:04.767811 master-0 kubenswrapper[4119]: I0216 20:55:04.767760 4119 manager.go:324] Recovery completed Feb 16 20:55:04.779406 master-0 kubenswrapper[4119]: I0216 20:55:04.779340 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:04.781641 master-0 kubenswrapper[4119]: I0216 20:55:04.781592 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:04.781754 master-0 kubenswrapper[4119]: I0216 20:55:04.781744 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:04.781850 master-0 kubenswrapper[4119]: I0216 20:55:04.781840 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:04.783049 master-0 kubenswrapper[4119]: I0216 20:55:04.783028 4119 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 16 20:55:04.783105 master-0 kubenswrapper[4119]: I0216 20:55:04.783049 4119 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 16 20:55:04.783105 master-0 kubenswrapper[4119]: I0216 20:55:04.783075 4119 state_mem.go:36] "Initialized new in-memory state store" Feb 16 20:55:04.788783 master-0 kubenswrapper[4119]: I0216 20:55:04.788756 4119 policy_none.go:49] "None policy: Start" Feb 16 20:55:04.789541 master-0 kubenswrapper[4119]: I0216 20:55:04.789525 4119 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 16 20:55:04.789599 master-0 kubenswrapper[4119]: I0216 20:55:04.789551 4119 state_mem.go:35] "Initializing new in-memory state store" Feb 16 20:55:04.827896 master-0 kubenswrapper[4119]: E0216 20:55:04.827742 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:04.877606 
master-0 kubenswrapper[4119]: I0216 20:55:04.863931 4119 manager.go:334] "Starting Device Plugin manager" Feb 16 20:55:04.877606 master-0 kubenswrapper[4119]: I0216 20:55:04.864405 4119 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 16 20:55:04.877606 master-0 kubenswrapper[4119]: I0216 20:55:04.864424 4119 server.go:79] "Starting device plugin registration server" Feb 16 20:55:04.877606 master-0 kubenswrapper[4119]: I0216 20:55:04.865129 4119 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 16 20:55:04.877606 master-0 kubenswrapper[4119]: I0216 20:55:04.865143 4119 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 16 20:55:04.877606 master-0 kubenswrapper[4119]: I0216 20:55:04.865390 4119 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 16 20:55:04.877606 master-0 kubenswrapper[4119]: I0216 20:55:04.865595 4119 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 16 20:55:04.877606 master-0 kubenswrapper[4119]: I0216 20:55:04.865621 4119 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 16 20:55:04.877606 master-0 kubenswrapper[4119]: E0216 20:55:04.867587 4119 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 16 20:55:04.877606 master-0 kubenswrapper[4119]: I0216 20:55:04.871603 4119 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 16 20:55:04.877606 master-0 kubenswrapper[4119]: I0216 20:55:04.873879 4119 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 16 20:55:04.877606 master-0 kubenswrapper[4119]: I0216 20:55:04.873961 4119 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 16 20:55:04.877606 master-0 kubenswrapper[4119]: I0216 20:55:04.873997 4119 kubelet.go:2335] "Starting kubelet main sync loop" Feb 16 20:55:04.877606 master-0 kubenswrapper[4119]: E0216 20:55:04.874102 4119 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 16 20:55:04.877606 master-0 kubenswrapper[4119]: W0216 20:55:04.875211 4119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 20:55:04.877606 master-0 kubenswrapper[4119]: E0216 20:55:04.875277 4119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:55:04.930042 master-0 kubenswrapper[4119]: E0216 20:55:04.929945 4119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Feb 16 20:55:04.966311 master-0 kubenswrapper[4119]: I0216 20:55:04.966085 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:04.967988 master-0 kubenswrapper[4119]: I0216 20:55:04.967927 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:04.967988 master-0 
kubenswrapper[4119]: I0216 20:55:04.967980 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:04.967988 master-0 kubenswrapper[4119]: I0216 20:55:04.967992 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:04.968202 master-0 kubenswrapper[4119]: I0216 20:55:04.968064 4119 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 20:55:04.969355 master-0 kubenswrapper[4119]: E0216 20:55:04.969296 4119 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 16 20:55:04.974370 master-0 kubenswrapper[4119]: I0216 20:55:04.974316 4119 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"] Feb 16 20:55:04.974488 master-0 kubenswrapper[4119]: I0216 20:55:04.974396 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:04.975964 master-0 kubenswrapper[4119]: I0216 20:55:04.975894 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:04.976053 master-0 kubenswrapper[4119]: I0216 20:55:04.975983 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:04.976053 master-0 kubenswrapper[4119]: I0216 20:55:04.976019 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:04.976359 master-0 kubenswrapper[4119]: I0216 
20:55:04.976330 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:04.976588 master-0 kubenswrapper[4119]: I0216 20:55:04.976558 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 20:55:04.976641 master-0 kubenswrapper[4119]: I0216 20:55:04.976600 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:04.977602 master-0 kubenswrapper[4119]: I0216 20:55:04.977550 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:04.977602 master-0 kubenswrapper[4119]: I0216 20:55:04.977591 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:04.977602 master-0 kubenswrapper[4119]: I0216 20:55:04.977603 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:04.978162 master-0 kubenswrapper[4119]: I0216 20:55:04.978045 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:04.978265 master-0 kubenswrapper[4119]: I0216 20:55:04.978228 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:04.978335 master-0 kubenswrapper[4119]: I0216 20:55:04.978265 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:04.978478 master-0 kubenswrapper[4119]: I0216 20:55:04.978432 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:04.978685 master-0 kubenswrapper[4119]: I0216 20:55:04.978623 4119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 20:55:04.979783 master-0 kubenswrapper[4119]: I0216 20:55:04.979679 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:04.981461 master-0 kubenswrapper[4119]: I0216 20:55:04.981171 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:04.981461 master-0 kubenswrapper[4119]: I0216 20:55:04.981202 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:04.981461 master-0 kubenswrapper[4119]: I0216 20:55:04.981231 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:04.981461 master-0 kubenswrapper[4119]: I0216 20:55:04.981308 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:04.981461 master-0 kubenswrapper[4119]: I0216 20:55:04.981357 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:04.981461 master-0 kubenswrapper[4119]: I0216 20:55:04.981375 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:04.982894 master-0 kubenswrapper[4119]: I0216 20:55:04.982621 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:04.982894 master-0 kubenswrapper[4119]: I0216 20:55:04.982678 4119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:55:04.982894 master-0 kubenswrapper[4119]: I0216 20:55:04.982736 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:04.985032 master-0 kubenswrapper[4119]: I0216 20:55:04.984884 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:04.985032 master-0 kubenswrapper[4119]: I0216 20:55:04.984953 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:04.985032 master-0 kubenswrapper[4119]: I0216 20:55:04.984968 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:04.985032 master-0 kubenswrapper[4119]: I0216 20:55:04.985005 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:04.985596 master-0 kubenswrapper[4119]: I0216 20:55:04.985067 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:04.985596 master-0 kubenswrapper[4119]: I0216 20:55:04.985087 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:04.985596 master-0 kubenswrapper[4119]: I0216 20:55:04.985142 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:04.985596 master-0 kubenswrapper[4119]: I0216 20:55:04.985249 4119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:55:04.985596 master-0 kubenswrapper[4119]: I0216 20:55:04.985290 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:04.986139 master-0 kubenswrapper[4119]: I0216 20:55:04.986082 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:04.986139 master-0 kubenswrapper[4119]: I0216 20:55:04.986125 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:04.986139 master-0 kubenswrapper[4119]: I0216 20:55:04.986141 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:04.986372 master-0 kubenswrapper[4119]: I0216 20:55:04.986299 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 20:55:04.986372 master-0 kubenswrapper[4119]: I0216 20:55:04.986312 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:04.986372 master-0 kubenswrapper[4119]: I0216 20:55:04.986358 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:04.986372 master-0 kubenswrapper[4119]: I0216 20:55:04.986378 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:04.986733 master-0 kubenswrapper[4119]: I0216 20:55:04.986329 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:04.987331 master-0 kubenswrapper[4119]: I0216 20:55:04.987270 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:04.987331 master-0 
kubenswrapper[4119]: I0216 20:55:04.987307 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:04.987331 master-0 kubenswrapper[4119]: I0216 20:55:04.987318 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:05.027964 master-0 kubenswrapper[4119]: I0216 20:55:05.027850 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 20:55:05.128128 master-0 kubenswrapper[4119]: I0216 20:55:05.128043 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 20:55:05.128285 master-0 kubenswrapper[4119]: I0216 20:55:05.128196 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:55:05.128285 master-0 kubenswrapper[4119]: I0216 20:55:05.128273 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:55:05.128378 master-0 kubenswrapper[4119]: I0216 20:55:05.128357 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 20:55:05.128422 master-0 kubenswrapper[4119]: I0216 20:55:05.128393 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 20:55:05.128469 master-0 kubenswrapper[4119]: I0216 20:55:05.128428 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:55:05.128469 master-0 kubenswrapper[4119]: I0216 20:55:05.128456 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:55:05.128715 master-0 kubenswrapper[4119]: I0216 20:55:05.128614 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 20:55:05.128785 master-0 kubenswrapper[4119]: I0216 20:55:05.128623 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:55:05.128840 master-0 kubenswrapper[4119]: I0216 20:55:05.128791 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:55:05.128840 master-0 kubenswrapper[4119]: I0216 20:55:05.128822 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:55:05.128919 master-0 kubenswrapper[4119]: I0216 20:55:05.128854 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 20:55:05.128919 master-0 kubenswrapper[4119]: I0216 
20:55:05.128881 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:55:05.128992 master-0 kubenswrapper[4119]: I0216 20:55:05.128919 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:55:05.128992 master-0 kubenswrapper[4119]: I0216 20:55:05.128963 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:55:05.129070 master-0 kubenswrapper[4119]: I0216 20:55:05.128995 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:55:05.129070 master-0 kubenswrapper[4119]: I0216 20:55:05.129024 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: 
\"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 20:55:05.129070 master-0 kubenswrapper[4119]: I0216 20:55:05.129048 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 20:55:05.170506 master-0 kubenswrapper[4119]: I0216 20:55:05.170384 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:05.171770 master-0 kubenswrapper[4119]: I0216 20:55:05.171719 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:05.171770 master-0 kubenswrapper[4119]: I0216 20:55:05.171779 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:05.171991 master-0 kubenswrapper[4119]: I0216 20:55:05.171791 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:05.171991 master-0 kubenswrapper[4119]: I0216 20:55:05.171843 4119 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 20:55:05.173155 master-0 kubenswrapper[4119]: E0216 20:55:05.173063 4119 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 16 20:55:05.229955 master-0 kubenswrapper[4119]: I0216 20:55:05.229858 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: 
\"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 20:55:05.230105 master-0 kubenswrapper[4119]: I0216 20:55:05.229984 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 20:55:05.230178 master-0 kubenswrapper[4119]: I0216 20:55:05.230113 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 20:55:05.230178 master-0 kubenswrapper[4119]: I0216 20:55:05.230177 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:55:05.230331 master-0 kubenswrapper[4119]: I0216 20:55:05.230218 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:55:05.230331 master-0 kubenswrapper[4119]: I0216 20:55:05.230241 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: 
\"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:55:05.230331 master-0 kubenswrapper[4119]: I0216 20:55:05.230256 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 20:55:05.230331 master-0 kubenswrapper[4119]: I0216 20:55:05.230292 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:55:05.230331 master-0 kubenswrapper[4119]: I0216 20:55:05.230269 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:55:05.230617 master-0 kubenswrapper[4119]: I0216 20:55:05.230366 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:55:05.230617 master-0 kubenswrapper[4119]: I0216 20:55:05.230394 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod 
\"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:55:05.230617 master-0 kubenswrapper[4119]: I0216 20:55:05.230367 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:55:05.230617 master-0 kubenswrapper[4119]: I0216 20:55:05.230444 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:55:05.230617 master-0 kubenswrapper[4119]: I0216 20:55:05.230473 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:55:05.230617 master-0 kubenswrapper[4119]: I0216 20:55:05.230492 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 20:55:05.230617 master-0 kubenswrapper[4119]: I0216 20:55:05.230537 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:55:05.230617 master-0 kubenswrapper[4119]: I0216 20:55:05.230566 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:55:05.230617 master-0 kubenswrapper[4119]: I0216 20:55:05.230604 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:55:05.231212 master-0 kubenswrapper[4119]: I0216 20:55:05.230637 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:55:05.231212 master-0 kubenswrapper[4119]: I0216 20:55:05.230687 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:55:05.231212 master-0 kubenswrapper[4119]: I0216 20:55:05.230718 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:55:05.231212 master-0 kubenswrapper[4119]: I0216 20:55:05.230731 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:55:05.231212 master-0 kubenswrapper[4119]: I0216 20:55:05.230574 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 20:55:05.231212 master-0 kubenswrapper[4119]: I0216 20:55:05.230767 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 20:55:05.231212 master-0 kubenswrapper[4119]: I0216 20:55:05.230804 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 20:55:05.231212 master-0 kubenswrapper[4119]: I0216 20:55:05.230826 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" 
(UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 20:55:05.231212 master-0 kubenswrapper[4119]: I0216 20:55:05.230829 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:55:05.231212 master-0 kubenswrapper[4119]: I0216 20:55:05.230886 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:55:05.231212 master-0 kubenswrapper[4119]: I0216 20:55:05.230924 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:55:05.231212 master-0 kubenswrapper[4119]: I0216 20:55:05.230927 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 20:55:05.231212 master-0 kubenswrapper[4119]: I0216 20:55:05.230954 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: 
\"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:55:05.231212 master-0 kubenswrapper[4119]: I0216 20:55:05.230994 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:55:05.307936 master-0 kubenswrapper[4119]: I0216 20:55:05.307838 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 20:55:05.318507 master-0 kubenswrapper[4119]: I0216 20:55:05.318452 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:55:05.331758 master-0 kubenswrapper[4119]: E0216 20:55:05.331638 4119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Feb 16 20:55:05.347897 master-0 kubenswrapper[4119]: I0216 20:55:05.347744 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 20:55:05.378905 master-0 kubenswrapper[4119]: I0216 20:55:05.378797 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:55:05.387880 master-0 kubenswrapper[4119]: I0216 20:55:05.387831 4119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 20:55:05.574292 master-0 kubenswrapper[4119]: I0216 20:55:05.574179 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:05.576057 master-0 kubenswrapper[4119]: I0216 20:55:05.575997 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:05.576195 master-0 kubenswrapper[4119]: I0216 20:55:05.576061 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:05.576195 master-0 kubenswrapper[4119]: I0216 20:55:05.576084 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:05.576362 master-0 kubenswrapper[4119]: I0216 20:55:05.576204 4119 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 20:55:05.577464 master-0 kubenswrapper[4119]: E0216 20:55:05.577389 4119 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 16 20:55:05.605317 master-0 kubenswrapper[4119]: W0216 20:55:05.605099 4119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 20:55:05.605317 master-0 kubenswrapper[4119]: E0216 20:55:05.605257 4119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" 
logger="UnhandledError" Feb 16 20:55:05.728312 master-0 kubenswrapper[4119]: I0216 20:55:05.728234 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 20:55:05.799977 master-0 kubenswrapper[4119]: W0216 20:55:05.799858 4119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 20:55:05.800198 master-0 kubenswrapper[4119]: E0216 20:55:05.799976 4119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:55:05.941939 master-0 kubenswrapper[4119]: W0216 20:55:05.941849 4119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod400a178a4d5e9a88ba5bbbd1da2ad15e.slice/crio-ee74f85cd24cd54b2a4b43b0584cf795c92f05590ca9093c69737b765e2c01d8 WatchSource:0}: Error finding container ee74f85cd24cd54b2a4b43b0584cf795c92f05590ca9093c69737b765e2c01d8: Status 404 returned error can't find the container with id ee74f85cd24cd54b2a4b43b0584cf795c92f05590ca9093c69737b765e2c01d8 Feb 16 20:55:05.945985 master-0 kubenswrapper[4119]: W0216 20:55:05.945891 4119 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9460ca0802075a8a6a10d7b3e6052c4d.slice/crio-392d856d7fe28dd19573efbe9000d6ecfa05d7a1577bf8dec97ef5ca7366c7d8 WatchSource:0}: Error finding container 392d856d7fe28dd19573efbe9000d6ecfa05d7a1577bf8dec97ef5ca7366c7d8: Status 404 returned error can't find the container with id 392d856d7fe28dd19573efbe9000d6ecfa05d7a1577bf8dec97ef5ca7366c7d8 Feb 16 20:55:05.948747 master-0 kubenswrapper[4119]: I0216 20:55:05.948579 4119 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 20:55:05.981918 master-0 kubenswrapper[4119]: W0216 20:55:05.981852 4119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80420f2e7c3cdda71f7d0d6ccbe6f9f3.slice/crio-76dbaddee4470107b39590128f61476392182af8f7359d5ef8d2efc6c99ae59e WatchSource:0}: Error finding container 76dbaddee4470107b39590128f61476392182af8f7359d5ef8d2efc6c99ae59e: Status 404 returned error can't find the container with id 76dbaddee4470107b39590128f61476392182af8f7359d5ef8d2efc6c99ae59e Feb 16 20:55:06.041941 master-0 kubenswrapper[4119]: W0216 20:55:06.041848 4119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d1e91e5a1fed5cf7076a92d2830d36f.slice/crio-91a4c15bb67084035c73bb065892be1c9d73ba9204c94c99f7433a6c3008aaff WatchSource:0}: Error finding container 91a4c15bb67084035c73bb065892be1c9d73ba9204c94c99f7433a6c3008aaff: Status 404 returned error can't find the container with id 91a4c15bb67084035c73bb065892be1c9d73ba9204c94c99f7433a6c3008aaff Feb 16 20:55:06.093633 master-0 kubenswrapper[4119]: W0216 20:55:06.093512 4119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: 
connection refused Feb 16 20:55:06.093847 master-0 kubenswrapper[4119]: E0216 20:55:06.093697 4119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:55:06.096532 master-0 kubenswrapper[4119]: W0216 20:55:06.096450 4119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3322fd3717f4aec0d8f54ec7862c07e.slice/crio-f04bc2a9a7b0a2ad7783338e4d002aabfd3d03dc3ab93d584acf59a1f159b65a WatchSource:0}: Error finding container f04bc2a9a7b0a2ad7783338e4d002aabfd3d03dc3ab93d584acf59a1f159b65a: Status 404 returned error can't find the container with id f04bc2a9a7b0a2ad7783338e4d002aabfd3d03dc3ab93d584acf59a1f159b65a Feb 16 20:55:06.134168 master-0 kubenswrapper[4119]: E0216 20:55:06.134046 4119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Feb 16 20:55:06.250119 master-0 kubenswrapper[4119]: W0216 20:55:06.249985 4119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 20:55:06.250119 master-0 kubenswrapper[4119]: E0216 20:55:06.250102 4119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:55:06.378323 master-0 kubenswrapper[4119]: I0216 20:55:06.378209 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:06.381628 master-0 kubenswrapper[4119]: I0216 20:55:06.381525 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:06.381628 master-0 kubenswrapper[4119]: I0216 20:55:06.381593 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:06.381628 master-0 kubenswrapper[4119]: I0216 20:55:06.381609 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:06.382109 master-0 kubenswrapper[4119]: I0216 20:55:06.381714 4119 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 20:55:06.382537 master-0 kubenswrapper[4119]: E0216 20:55:06.382461 4119 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 16 20:55:06.727623 master-0 kubenswrapper[4119]: I0216 20:55:06.727394 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 20:55:06.802916 master-0 kubenswrapper[4119]: I0216 20:55:06.802838 4119 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 20:55:06.804161 master-0 kubenswrapper[4119]: E0216 20:55:06.804128 4119 
certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:55:06.881884 master-0 kubenswrapper[4119]: I0216 20:55:06.881785 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"f04bc2a9a7b0a2ad7783338e4d002aabfd3d03dc3ab93d584acf59a1f159b65a"} Feb 16 20:55:06.883244 master-0 kubenswrapper[4119]: I0216 20:55:06.883029 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerStarted","Data":"91a4c15bb67084035c73bb065892be1c9d73ba9204c94c99f7433a6c3008aaff"} Feb 16 20:55:06.884030 master-0 kubenswrapper[4119]: I0216 20:55:06.883971 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"76dbaddee4470107b39590128f61476392182af8f7359d5ef8d2efc6c99ae59e"} Feb 16 20:55:06.885089 master-0 kubenswrapper[4119]: I0216 20:55:06.885049 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerStarted","Data":"392d856d7fe28dd19573efbe9000d6ecfa05d7a1577bf8dec97ef5ca7366c7d8"} Feb 16 20:55:06.886376 master-0 kubenswrapper[4119]: I0216 20:55:06.886333 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" 
event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"ee74f85cd24cd54b2a4b43b0584cf795c92f05590ca9093c69737b765e2c01d8"} Feb 16 20:55:07.538181 master-0 kubenswrapper[4119]: E0216 20:55:07.537931 4119 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.1894d581497fb31c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.724050716 +0000 UTC m=+0.653976774,LastTimestamp:2026-02-16 20:55:04.724050716 +0000 UTC m=+0.653976774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:07.727560 master-0 kubenswrapper[4119]: I0216 20:55:07.727484 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 20:55:07.735382 master-0 kubenswrapper[4119]: E0216 20:55:07.735317 4119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Feb 16 20:55:07.754236 master-0 kubenswrapper[4119]: W0216 20:55:07.754187 4119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 20:55:07.754321 master-0 kubenswrapper[4119]: E0216 20:55:07.754244 4119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:55:07.846137 master-0 kubenswrapper[4119]: W0216 20:55:07.846024 4119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 20:55:07.846137 master-0 kubenswrapper[4119]: E0216 20:55:07.846100 4119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:55:07.983732 master-0 kubenswrapper[4119]: I0216 20:55:07.983633 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:07.986091 master-0 kubenswrapper[4119]: I0216 20:55:07.986033 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:07.986091 master-0 kubenswrapper[4119]: I0216 20:55:07.986076 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:07.986091 master-0 kubenswrapper[4119]: I0216 20:55:07.986086 
4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:07.986369 master-0 kubenswrapper[4119]: I0216 20:55:07.986137 4119 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 20:55:07.987419 master-0 kubenswrapper[4119]: E0216 20:55:07.987350 4119 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 16 20:55:08.027464 master-0 kubenswrapper[4119]: W0216 20:55:08.027386 4119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 20:55:08.027464 master-0 kubenswrapper[4119]: E0216 20:55:08.027438 4119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:55:08.727390 master-0 kubenswrapper[4119]: I0216 20:55:08.727000 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 20:55:08.895032 master-0 kubenswrapper[4119]: I0216 20:55:08.894961 4119 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="1c9bfe3aaee57fe250198f3484327052043637146bacc2e7c8dfb22afd3d4c6c" exitCode=0 Feb 16 20:55:08.895032 master-0 kubenswrapper[4119]: I0216 20:55:08.895011 4119 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"1c9bfe3aaee57fe250198f3484327052043637146bacc2e7c8dfb22afd3d4c6c"} Feb 16 20:55:08.895535 master-0 kubenswrapper[4119]: I0216 20:55:08.895067 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:08.895956 master-0 kubenswrapper[4119]: I0216 20:55:08.895913 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:08.895996 master-0 kubenswrapper[4119]: I0216 20:55:08.895970 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:08.895996 master-0 kubenswrapper[4119]: I0216 20:55:08.895988 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:08.931668 master-0 kubenswrapper[4119]: W0216 20:55:08.931568 4119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 20:55:08.931830 master-0 kubenswrapper[4119]: E0216 20:55:08.931686 4119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:55:09.727861 master-0 kubenswrapper[4119]: I0216 20:55:09.727752 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 20:55:09.900260 master-0 kubenswrapper[4119]: I0216 20:55:09.900085 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"a3ef8c2f17e0843dbc7265db7f67c564c2c97d41bf1c253c3466338241e2b204"} Feb 16 20:55:09.900260 master-0 kubenswrapper[4119]: I0216 20:55:09.900153 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"fea56a548bb1b40870646931b3ee24bfa53d974b5b14be8ecc57115395d0831e"} Feb 16 20:55:09.900260 master-0 kubenswrapper[4119]: I0216 20:55:09.900259 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:09.901300 master-0 kubenswrapper[4119]: I0216 20:55:09.901257 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:09.901368 master-0 kubenswrapper[4119]: I0216 20:55:09.901311 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:09.901368 master-0 kubenswrapper[4119]: I0216 20:55:09.901327 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:09.902803 master-0 kubenswrapper[4119]: I0216 20:55:09.902753 4119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/0.log" Feb 16 20:55:09.903370 master-0 kubenswrapper[4119]: I0216 20:55:09.903329 4119 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" 
containerID="006d151945683b86d2bb13f208ab0ac1bcbe5e721d11debe1660ca5375163137" exitCode=1 Feb 16 20:55:09.903427 master-0 kubenswrapper[4119]: I0216 20:55:09.903382 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"006d151945683b86d2bb13f208ab0ac1bcbe5e721d11debe1660ca5375163137"} Feb 16 20:55:09.903506 master-0 kubenswrapper[4119]: I0216 20:55:09.903477 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:09.904679 master-0 kubenswrapper[4119]: I0216 20:55:09.904632 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:09.904739 master-0 kubenswrapper[4119]: I0216 20:55:09.904690 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:09.904739 master-0 kubenswrapper[4119]: I0216 20:55:09.904702 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:09.905069 master-0 kubenswrapper[4119]: I0216 20:55:09.905038 4119 scope.go:117] "RemoveContainer" containerID="006d151945683b86d2bb13f208ab0ac1bcbe5e721d11debe1660ca5375163137" Feb 16 20:55:10.728169 master-0 kubenswrapper[4119]: I0216 20:55:10.728053 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 20:55:10.907080 master-0 kubenswrapper[4119]: I0216 20:55:10.906992 4119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/1.log" Feb 16 20:55:10.908084 master-0 
kubenswrapper[4119]: I0216 20:55:10.907722 4119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/0.log" Feb 16 20:55:10.908480 master-0 kubenswrapper[4119]: I0216 20:55:10.908393 4119 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="fa6acd923eb6f7f36904bc4aad9d4f575490f5bfc07409635501cc3f249e2be8" exitCode=1 Feb 16 20:55:10.908480 master-0 kubenswrapper[4119]: I0216 20:55:10.908471 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:10.908712 master-0 kubenswrapper[4119]: I0216 20:55:10.908484 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"fa6acd923eb6f7f36904bc4aad9d4f575490f5bfc07409635501cc3f249e2be8"} Feb 16 20:55:10.908712 master-0 kubenswrapper[4119]: I0216 20:55:10.908570 4119 scope.go:117] "RemoveContainer" containerID="006d151945683b86d2bb13f208ab0ac1bcbe5e721d11debe1660ca5375163137" Feb 16 20:55:10.908712 master-0 kubenswrapper[4119]: I0216 20:55:10.908631 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:10.909861 master-0 kubenswrapper[4119]: I0216 20:55:10.909811 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:10.909861 master-0 kubenswrapper[4119]: I0216 20:55:10.909845 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:10.909861 master-0 kubenswrapper[4119]: I0216 20:55:10.909863 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:10.909861 master-0 
kubenswrapper[4119]: I0216 20:55:10.909878 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:10.911202 master-0 kubenswrapper[4119]: I0216 20:55:10.909866 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:10.911202 master-0 kubenswrapper[4119]: I0216 20:55:10.909910 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:10.911202 master-0 kubenswrapper[4119]: I0216 20:55:10.910496 4119 scope.go:117] "RemoveContainer" containerID="fa6acd923eb6f7f36904bc4aad9d4f575490f5bfc07409635501cc3f249e2be8" Feb 16 20:55:10.911202 master-0 kubenswrapper[4119]: E0216 20:55:10.910784 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="b3322fd3717f4aec0d8f54ec7862c07e" Feb 16 20:55:10.936912 master-0 kubenswrapper[4119]: E0216 20:55:10.936809 4119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Feb 16 20:55:10.970105 master-0 kubenswrapper[4119]: I0216 20:55:10.970012 4119 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 20:55:10.971579 master-0 kubenswrapper[4119]: E0216 20:55:10.971507 4119 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: 
cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 20:55:11.187922 master-0 kubenswrapper[4119]: I0216 20:55:11.187818 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:11.189225 master-0 kubenswrapper[4119]: I0216 20:55:11.189181 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:11.189225 master-0 kubenswrapper[4119]: I0216 20:55:11.189221 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:11.189412 master-0 kubenswrapper[4119]: I0216 20:55:11.189237 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:11.189412 master-0 kubenswrapper[4119]: I0216 20:55:11.189326 4119 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 20:55:11.190605 master-0 kubenswrapper[4119]: E0216 20:55:11.190522 4119 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 16 20:55:11.726963 master-0 kubenswrapper[4119]: I0216 20:55:11.726910 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 20:55:11.910466 master-0 kubenswrapper[4119]: I0216 20:55:11.910413 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:11.911285 master-0 kubenswrapper[4119]: I0216 20:55:11.911255 4119 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:11.911329 master-0 kubenswrapper[4119]: I0216 20:55:11.911289 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:11.911329 master-0 kubenswrapper[4119]: I0216 20:55:11.911298 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:11.911602 master-0 kubenswrapper[4119]: I0216 20:55:11.911583 4119 scope.go:117] "RemoveContainer" containerID="fa6acd923eb6f7f36904bc4aad9d4f575490f5bfc07409635501cc3f249e2be8" Feb 16 20:55:11.911900 master-0 kubenswrapper[4119]: E0216 20:55:11.911874 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="b3322fd3717f4aec0d8f54ec7862c07e" Feb 16 20:55:12.726760 master-0 kubenswrapper[4119]: I0216 20:55:12.726642 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 20:55:12.919278 master-0 kubenswrapper[4119]: I0216 20:55:12.919234 4119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/1.log" Feb 16 20:55:13.038844 master-0 kubenswrapper[4119]: W0216 20:55:13.038787 4119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 20:55:13.039025 master-0 kubenswrapper[4119]: E0216 20:55:13.038848 4119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 20:55:13.138910 master-0 kubenswrapper[4119]: W0216 20:55:13.138788 4119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 20:55:13.139111 master-0 kubenswrapper[4119]: E0216 20:55:13.138919 4119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 20:55:13.321070 master-0 kubenswrapper[4119]: W0216 20:55:13.320997 4119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 20:55:13.321070 master-0 kubenswrapper[4119]: E0216 20:55:13.321068 4119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 20:55:13.418741 master-0 kubenswrapper[4119]: W0216 20:55:13.418616 4119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 20:55:13.418741 master-0 kubenswrapper[4119]: E0216 20:55:13.418710 4119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 20:55:13.726996 master-0 kubenswrapper[4119]: I0216 20:55:13.726906 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 20:55:13.938148 master-0 kubenswrapper[4119]: I0216 20:55:13.937923 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 20:55:13.938148 master-0 kubenswrapper[4119]: I0216 20:55:13.938002 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerDied","Data":"2dca4633ccf4f45bb4ab9181df018e7f5607187bc3ce7c60613bb7c75dbb3049"}
Feb 16 20:55:13.939112 master-0 kubenswrapper[4119]: I0216 20:55:13.937750 4119 generic.go:334] "Generic (PLEG): container finished" podID="5d1e91e5a1fed5cf7076a92d2830d36f" containerID="2dca4633ccf4f45bb4ab9181df018e7f5607187bc3ce7c60613bb7c75dbb3049" exitCode=0
Feb 16 20:55:13.939806 master-0 kubenswrapper[4119]: I0216 20:55:13.939749 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 20:55:13.939947 master-0 kubenswrapper[4119]: I0216 20:55:13.939812 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 20:55:13.939947 master-0 kubenswrapper[4119]: I0216 20:55:13.939831 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 20:55:13.943603 master-0 kubenswrapper[4119]: I0216 20:55:13.943556 4119 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="3bfeaa29dd18a9c052679918402bc8ad83eaec394fa47c6b58ac63f5cfd4bce4" exitCode=1
Feb 16 20:55:13.943796 master-0 kubenswrapper[4119]: I0216 20:55:13.943686 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"3bfeaa29dd18a9c052679918402bc8ad83eaec394fa47c6b58ac63f5cfd4bce4"}
Feb 16 20:55:13.949630 master-0 kubenswrapper[4119]: I0216 20:55:13.949545 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerStarted","Data":"f06b93dc1f7853f1547eea454f40e687d56a498fbbe7a281e785547401b0538b"}
Feb 16 20:55:13.949804 master-0 kubenswrapper[4119]: I0216 20:55:13.949714 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 20:55:13.950854 master-0 kubenswrapper[4119]: I0216 20:55:13.950802 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 20:55:13.950854 master-0 kubenswrapper[4119]: I0216 20:55:13.950849 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 20:55:13.951087 master-0 kubenswrapper[4119]: I0216 20:55:13.950866 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 20:55:13.952961 master-0 kubenswrapper[4119]: I0216 20:55:13.952921 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 20:55:13.954279 master-0 kubenswrapper[4119]: I0216 20:55:13.954233 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 20:55:13.954417 master-0 kubenswrapper[4119]: I0216 20:55:13.954286 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 20:55:13.954519 master-0 kubenswrapper[4119]: I0216 20:55:13.954428 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 20:55:14.867775 master-0 kubenswrapper[4119]: E0216 20:55:14.867724 4119 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 16 20:55:14.961872 master-0 kubenswrapper[4119]: I0216 20:55:14.960780 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerStarted","Data":"917b8b89b52fc1ea526b8dd828bd51e4ae2f231263633fb2c2bfa2d5e4419132"}
Feb 16 20:55:14.961872 master-0 kubenswrapper[4119]: I0216 20:55:14.960834 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 20:55:14.964204 master-0 kubenswrapper[4119]: I0216 20:55:14.964165 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 20:55:14.964261 master-0 kubenswrapper[4119]: I0216 20:55:14.964221 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 20:55:14.964261 master-0 kubenswrapper[4119]: I0216 20:55:14.964236 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 20:55:15.665016 master-0 kubenswrapper[4119]: I0216 20:55:15.664838 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 20:55:15.731019 master-0 kubenswrapper[4119]: I0216 20:55:15.730961 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 20:55:15.969723 master-0 kubenswrapper[4119]: I0216 20:55:15.969585 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"7e0471aa80085ed85cb40c9b3c8ab6f80ea1655f1734a052a840a434c72c54f4"}
Feb 16 20:55:15.970168 master-0 kubenswrapper[4119]: I0216 20:55:15.969801 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 20:55:15.970792 master-0 kubenswrapper[4119]: I0216 20:55:15.970750 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 20:55:15.971139 master-0 kubenswrapper[4119]: I0216 20:55:15.970794 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 20:55:15.971139 master-0 kubenswrapper[4119]: I0216 20:55:15.970804 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 20:55:15.971139 master-0 kubenswrapper[4119]: I0216 20:55:15.971107 4119 scope.go:117] "RemoveContainer" containerID="3bfeaa29dd18a9c052679918402bc8ad83eaec394fa47c6b58ac63f5cfd4bce4"
Feb 16 20:55:16.749665 master-0 kubenswrapper[4119]: I0216 20:55:16.749600 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 20:55:16.974751 master-0 kubenswrapper[4119]: I0216 20:55:16.973732 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"fc88dd28d8567cb614f787ef77e43ceb61a79e3dffda24d95403e277882bb247"}
Feb 16 20:55:16.974751 master-0 kubenswrapper[4119]: I0216 20:55:16.973795 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 20:55:16.974751 master-0 kubenswrapper[4119]: I0216 20:55:16.974408 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 20:55:16.974751 master-0 kubenswrapper[4119]: I0216 20:55:16.974451 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 20:55:16.974751 master-0 kubenswrapper[4119]: I0216 20:55:16.974461 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 20:55:17.344023 master-0 kubenswrapper[4119]: E0216 20:55:17.343945 4119 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 16 20:55:17.545481 master-0 kubenswrapper[4119]: E0216 20:55:17.545286 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894d581497fb31c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.724050716 +0000 UTC m=+0.653976774,LastTimestamp:2026-02-16 20:55:04.724050716 +0000 UTC m=+0.653976774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.551032 master-0 kubenswrapper[4119]: E0216 20:55:17.550878 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894d5814cefc9d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.78172821 +0000 UTC m=+0.711654238,LastTimestamp:2026-02-16 20:55:04.78172821 +0000 UTC m=+0.711654238,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.559610 master-0 kubenswrapper[4119]: E0216 20:55:17.559451 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894d5814cf16505 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.781833477 +0000 UTC m=+0.711759495,LastTimestamp:2026-02-16 20:55:04.781833477 +0000 UTC m=+0.711759495,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.567528 master-0 kubenswrapper[4119]: E0216 20:55:17.567204 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894d5814cf26a3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.781900351 +0000 UTC m=+0.711826369,LastTimestamp:2026-02-16 20:55:04.781900351 +0000 UTC m=+0.711826369,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.575065 master-0 kubenswrapper[4119]: E0216 20:55:17.574810 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894d581521d4af6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.86859647 +0000 UTC m=+0.798522488,LastTimestamp:2026-02-16 20:55:04.86859647 +0000 UTC m=+0.798522488,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.589971 master-0 kubenswrapper[4119]: E0216 20:55:17.589819 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894d5814cefc9d2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894d5814cefc9d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.78172821 +0000 UTC m=+0.711654238,LastTimestamp:2026-02-16 20:55:04.967964887 +0000 UTC m=+0.897890915,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.590804 master-0 kubenswrapper[4119]: I0216 20:55:17.590704 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 20:55:17.592038 master-0 kubenswrapper[4119]: I0216 20:55:17.592007 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 20:55:17.592105 master-0 kubenswrapper[4119]: I0216 20:55:17.592063 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 20:55:17.592105 master-0 kubenswrapper[4119]: I0216 20:55:17.592084 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 20:55:17.592177 master-0 kubenswrapper[4119]: I0216 20:55:17.592150 4119 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 16 20:55:17.596802 master-0 kubenswrapper[4119]: E0216 20:55:17.596720 4119 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Feb 16 20:55:17.597683 master-0 kubenswrapper[4119]: E0216 20:55:17.597098 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894d5814cf16505\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894d5814cf16505 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.781833477 +0000 UTC m=+0.711759495,LastTimestamp:2026-02-16 20:55:04.967988419 +0000 UTC m=+0.897914447,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.601825 master-0 kubenswrapper[4119]: E0216 20:55:17.601678 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894d5814cf26a3f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894d5814cf26a3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.781900351 +0000 UTC m=+0.711826369,LastTimestamp:2026-02-16 20:55:04.967998909 +0000 UTC m=+0.897924937,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.607604 master-0 kubenswrapper[4119]: E0216 20:55:17.607435 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894d5814cefc9d2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894d5814cefc9d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.78172821 +0000 UTC m=+0.711654238,LastTimestamp:2026-02-16 20:55:04.975952842 +0000 UTC m=+0.905878900,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.612871 master-0 kubenswrapper[4119]: E0216 20:55:17.612716 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894d5814cf16505\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894d5814cf16505 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.781833477 +0000 UTC m=+0.711759495,LastTimestamp:2026-02-16 20:55:04.975998306 +0000 UTC m=+0.905924364,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.619755 master-0 kubenswrapper[4119]: E0216 20:55:17.619582 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894d5814cf26a3f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894d5814cf26a3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.781900351 +0000 UTC m=+0.711826369,LastTimestamp:2026-02-16 20:55:04.976030678 +0000 UTC m=+0.905956736,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.627012 master-0 kubenswrapper[4119]: E0216 20:55:17.626637 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894d5814cefc9d2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894d5814cefc9d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.78172821 +0000 UTC m=+0.711654238,LastTimestamp:2026-02-16 20:55:04.977578238 +0000 UTC m=+0.907504276,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.633631 master-0 kubenswrapper[4119]: E0216 20:55:17.633497 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894d5814cf16505\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894d5814cf16505 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.781833477 +0000 UTC m=+0.711759495,LastTimestamp:2026-02-16 20:55:04.977598899 +0000 UTC m=+0.907524927,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.639001 master-0 kubenswrapper[4119]: E0216 20:55:17.638825 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894d5814cf26a3f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894d5814cf26a3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.781900351 +0000 UTC m=+0.711826369,LastTimestamp:2026-02-16 20:55:04.9776115 +0000 UTC m=+0.907537538,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.643932 master-0 kubenswrapper[4119]: E0216 20:55:17.643805 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894d5814cefc9d2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894d5814cefc9d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.78172821 +0000 UTC m=+0.711654238,LastTimestamp:2026-02-16 20:55:04.978145854 +0000 UTC m=+0.908071922,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.650598 master-0 kubenswrapper[4119]: E0216 20:55:17.650430 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894d5814cf16505\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894d5814cf16505 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.781833477 +0000 UTC m=+0.711759495,LastTimestamp:2026-02-16 20:55:04.978257111 +0000 UTC m=+0.908183169,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.658015 master-0 kubenswrapper[4119]: E0216 20:55:17.657855 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894d5814cf26a3f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894d5814cf26a3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.781900351 +0000 UTC m=+0.711826369,LastTimestamp:2026-02-16 20:55:04.978278433 +0000 UTC m=+0.908204491,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.662829 master-0 kubenswrapper[4119]: E0216 20:55:17.662594 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894d5814cefc9d2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894d5814cefc9d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.78172821 +0000 UTC m=+0.711654238,LastTimestamp:2026-02-16 20:55:04.981189111 +0000 UTC m=+0.911115139,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.667804 master-0 kubenswrapper[4119]: E0216 20:55:17.667643 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894d5814cf16505\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894d5814cf16505 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.781833477 +0000 UTC m=+0.711759495,LastTimestamp:2026-02-16 20:55:04.981222513 +0000 UTC m=+0.911148541,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.674052 master-0 kubenswrapper[4119]: E0216 20:55:17.673839 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894d5814cf26a3f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894d5814cf26a3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.781900351 +0000 UTC m=+0.711826369,LastTimestamp:2026-02-16 20:55:04.981240384 +0000 UTC m=+0.911166412,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.680129 master-0 kubenswrapper[4119]: E0216 20:55:17.679985 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894d5814cefc9d2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894d5814cefc9d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.78172821 +0000 UTC m=+0.711654238,LastTimestamp:2026-02-16 20:55:04.98133729 +0000 UTC m=+0.911263318,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.685214 master-0 kubenswrapper[4119]: E0216 20:55:17.685001 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894d5814cf16505\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894d5814cf16505 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.781833477 +0000 UTC m=+0.711759495,LastTimestamp:2026-02-16 20:55:04.981367542 +0000 UTC m=+0.911293570,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.689994 master-0 kubenswrapper[4119]: E0216 20:55:17.689832 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894d5814cf26a3f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894d5814cf26a3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.781900351 +0000 UTC m=+0.711826369,LastTimestamp:2026-02-16 20:55:04.981397474 +0000 UTC m=+0.911323512,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.695557 master-0 kubenswrapper[4119]: E0216 20:55:17.695411 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894d5814cefc9d2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894d5814cefc9d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.78172821 +0000 UTC m=+0.711654238,LastTimestamp:2026-02-16 20:55:04.984908301 +0000 UTC m=+0.914834329,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.699924 master-0 kubenswrapper[4119]: E0216 20:55:17.699728 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894d5814cf16505\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894d5814cf16505 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:04.781833477 +0000 UTC m=+0.711759495,LastTimestamp:2026-02-16 20:55:04.984962945 +0000 UTC m=+0.914888973,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.707279 master-0 kubenswrapper[4119]: E0216 20:55:17.707122 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1894d581927b0ddb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:05.948483035 +0000 UTC m=+1.878409083,LastTimestamp:2026-02-16 20:55:05.948483035 +0000 UTC m=+1.878409083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.713935 master-0 kubenswrapper[4119]: E0216 20:55:17.713836 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1894d581927f0ea2 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:9460ca0802075a8a6a10d7b3e6052c4d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:05.948745378 +0000 UTC m=+1.878671436,LastTimestamp:2026-02-16 20:55:05.948745378 +0000 UTC m=+1.878671436,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.719251 master-0 kubenswrapper[4119]: E0216 20:55:17.719133 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1894d58194a7ed17 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:05.984978199 +0000 UTC m=+1.914904227,LastTimestamp:2026-02-16 20:55:05.984978199 +0000 UTC m=+1.914904227,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:55:17.725178 master-0 kubenswrapper[4119]: E0216 20:55:17.725059 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1894d581983fca45 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:06.045262405 +0000 UTC m=+1.975188433,LastTimestamp:2026-02-16 20:55:06.045262405 +0000 UTC
m=+1.975188433,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.731603 master-0 kubenswrapper[4119]: I0216 20:55:17.731553 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 20:55:17.731972 master-0 kubenswrapper[4119]: E0216 20:55:17.731884 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894d5819b7ae05c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:06.099466332 +0000 UTC m=+2.029392360,LastTimestamp:2026-02-16 20:55:06.099466332 +0000 UTC m=+2.029392360,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.740597 master-0 kubenswrapper[4119]: E0216 20:55:17.740264 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" 
event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894d58205d88dd0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\" in 1.784s (1.784s including waiting). Image size: 459915626 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:07.88399048 +0000 UTC m=+3.813916498,LastTimestamp:2026-02-16 20:55:07.88399048 +0000 UTC m=+3.813916498,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.748031 master-0 kubenswrapper[4119]: E0216 20:55:17.747900 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894d58213131a79 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:08.105931385 +0000 UTC m=+4.035857403,LastTimestamp:2026-02-16 20:55:08.105931385 +0000 UTC m=+4.035857403,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.754633 master-0 kubenswrapper[4119]: E0216 20:55:17.754457 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894d58232d571fe openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:08.63876147 +0000 UTC m=+4.568687498,LastTimestamp:2026-02-16 20:55:08.63876147 +0000 UTC m=+4.568687498,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.762044 master-0 kubenswrapper[4119]: E0216 20:55:17.760437 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1894d5823afed03f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\" in 2.827s (2.827s including waiting). 
Image size: 524042902 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:08.775690303 +0000 UTC m=+4.705616331,LastTimestamp:2026-02-16 20:55:08.775690303 +0000 UTC m=+4.705616331,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.769421 master-0 kubenswrapper[4119]: E0216 20:55:17.769235 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894d582428b655a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:08.902344026 +0000 UTC m=+4.832270054,LastTimestamp:2026-02-16 20:55:08.902344026 +0000 UTC m=+4.832270054,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.780093 master-0 kubenswrapper[4119]: E0216 20:55:17.776737 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1894d58247b92adb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] 
[] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:08.989229787 +0000 UTC m=+4.919155805,LastTimestamp:2026-02-16 20:55:08.989229787 +0000 UTC m=+4.919155805,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.783363 master-0 kubenswrapper[4119]: E0216 20:55:17.782934 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1894d5824892123c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:09.003444796 +0000 UTC m=+4.933370814,LastTimestamp:2026-02-16 20:55:09.003444796 +0000 UTC m=+4.933370814,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.793937 master-0 kubenswrapper[4119]: E0216 20:55:17.793181 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1894d58248b68b1d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:09.005835037 +0000 UTC m=+4.935761055,LastTimestamp:2026-02-16 20:55:09.005835037 +0000 UTC m=+4.935761055,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.804040 master-0 kubenswrapper[4119]: E0216 20:55:17.803906 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894d58250086428 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:09.128639528 +0000 UTC m=+5.058565546,LastTimestamp:2026-02-16 20:55:09.128639528 +0000 UTC m=+5.058565546,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.809617 master-0 kubenswrapper[4119]: E0216 20:55:17.809513 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894d58250af7b39 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:09.139589945 +0000 UTC m=+5.069515963,LastTimestamp:2026-02-16 20:55:09.139589945 +0000 UTC m=+5.069515963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.825102 master-0 kubenswrapper[4119]: I0216 20:55:17.825035 4119 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:55:17.829877 master-0 kubenswrapper[4119]: E0216 20:55:17.829637 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1894d5825378678c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:09.186312076 +0000 UTC m=+5.116238094,LastTimestamp:2026-02-16 20:55:09.186312076 +0000 UTC 
m=+5.116238094,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.843353 master-0 kubenswrapper[4119]: I0216 20:55:17.842702 4119 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:55:17.845926 master-0 kubenswrapper[4119]: E0216 20:55:17.845796 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1894d5825492f1f7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:09.204828663 +0000 UTC m=+5.134754681,LastTimestamp:2026-02-16 20:55:09.204828663 +0000 UTC m=+5.134754681,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.857043 master-0 kubenswrapper[4119]: E0216 20:55:17.856549 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1894d582428b655a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894d582428b655a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:08.902344026 +0000 UTC m=+4.832270054,LastTimestamp:2026-02-16 20:55:09.907264343 +0000 UTC m=+5.837190361,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.862347 master-0 kubenswrapper[4119]: E0216 20:55:17.862247 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1894d58250086428\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894d58250086428 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:09.128639528 +0000 UTC m=+5.058565546,LastTimestamp:2026-02-16 20:55:10.136121015 +0000 UTC m=+6.066047033,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.869074 master-0 kubenswrapper[4119]: E0216 20:55:17.868448 4119 
event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1894d58250af7b39\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894d58250af7b39 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:09.139589945 +0000 UTC m=+5.069515963,LastTimestamp:2026-02-16 20:55:10.147627261 +0000 UTC m=+6.077553279,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.876762 master-0 kubenswrapper[4119]: E0216 20:55:17.876570 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894d582ba40b18e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod 
kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:10.910714254 +0000 UTC m=+6.840640282,LastTimestamp:2026-02-16 20:55:10.910714254 +0000 UTC m=+6.840640282,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.882918 master-0 kubenswrapper[4119]: E0216 20:55:17.882799 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1894d582ba40b18e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894d582ba40b18e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:10.910714254 +0000 UTC m=+6.840640282,LastTimestamp:2026-02-16 20:55:11.911841745 +0000 UTC m=+7.841767763,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.889513 master-0 kubenswrapper[4119]: E0216 20:55:17.889373 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1894d5832750e8ff openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" in 6.695s (6.695s including waiting). Image size: 938665460 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:12.740493567 +0000 UTC m=+8.670419585,LastTimestamp:2026-02-16 20:55:12.740493567 +0000 UTC m=+8.670419585,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.895164 master-0 kubenswrapper[4119]: E0216 20:55:17.894765 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1894d58329b7b9b9 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" in 6.795s (6.795s including waiting). 
Image size: 938665460 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:12.780786105 +0000 UTC m=+8.710712123,LastTimestamp:2026-02-16 20:55:12.780786105 +0000 UTC m=+8.710712123,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.900889 master-0 kubenswrapper[4119]: E0216 20:55:17.900771 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1894d5832a82f759 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:9460ca0802075a8a6a10d7b3e6052c4d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" in 6.845s (6.845s including waiting). 
Image size: 938665460 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:12.794105689 +0000 UTC m=+8.724031707,LastTimestamp:2026-02-16 20:55:12.794105689 +0000 UTC m=+8.724031707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.905092 master-0 kubenswrapper[4119]: E0216 20:55:17.904926 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1894d5833703f7a6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:13.003886502 +0000 UTC m=+8.933812540,LastTimestamp:2026-02-16 20:55:13.003886502 +0000 UTC m=+8.933812540,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.912109 master-0 kubenswrapper[4119]: E0216 20:55:17.911991 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1894d583372317e6 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:9460ca0802075a8a6a10d7b3e6052c4d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:13.005926374 +0000 UTC m=+8.935852412,LastTimestamp:2026-02-16 20:55:13.005926374 +0000 UTC m=+8.935852412,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.919634 master-0 kubenswrapper[4119]: E0216 20:55:17.919496 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1894d58337e9e5ab kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:13.018955179 +0000 UTC m=+8.948881197,LastTimestamp:2026-02-16 20:55:13.018955179 +0000 UTC m=+8.948881197,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.924939 master-0 kubenswrapper[4119]: E0216 20:55:17.924847 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1894d5833fb5987f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:13.149745279 +0000 UTC m=+9.079671297,LastTimestamp:2026-02-16 20:55:13.149745279 +0000 UTC m=+9.079671297,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.931679 master-0 kubenswrapper[4119]: E0216 20:55:17.931396 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1894d5833fb75455 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:9460ca0802075a8a6a10d7b3e6052c4d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:13.149858901 +0000 UTC m=+9.079784949,LastTimestamp:2026-02-16 20:55:13.149858901 +0000 UTC m=+9.079784949,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.937047 master-0 kubenswrapper[4119]: E0216 20:55:17.936901 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1894d5833fbc89e2 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:13.15020029 +0000 UTC m=+9.080126308,LastTimestamp:2026-02-16 20:55:13.15020029 +0000 UTC m=+9.080126308,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.941121 master-0 kubenswrapper[4119]: E0216 20:55:17.941047 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1894d5833fcb552b kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:13.151169835 +0000 UTC m=+9.081095853,LastTimestamp:2026-02-16 20:55:13.151169835 +0000 UTC m=+9.081095853,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.945286 master-0 kubenswrapper[4119]: E0216 20:55:17.945181 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1894d5836f93b8b9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:13.952831673 +0000 UTC m=+9.882757721,LastTimestamp:2026-02-16 20:55:13.952831673 +0000 UTC m=+9.882757721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.949155 master-0 kubenswrapper[4119]: E0216 20:55:17.949079 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1894d5838af0ad73 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created 
container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:14.411908467 +0000 UTC m=+10.341834485,LastTimestamp:2026-02-16 20:55:14.411908467 +0000 UTC m=+10.341834485,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.953020 master-0 kubenswrapper[4119]: E0216 20:55:17.952895 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1894d5838b6fa55f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:14.420229471 +0000 UTC m=+10.350155489,LastTimestamp:2026-02-16 20:55:14.420229471 +0000 UTC m=+10.350155489,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.956801 master-0 kubenswrapper[4119]: E0216 20:55:17.956679 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1894d5838b7be226 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:14.421031462 +0000 UTC m=+10.350957480,LastTimestamp:2026-02-16 20:55:14.421031462 +0000 UTC m=+10.350957480,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.966625 master-0 kubenswrapper[4119]: E0216 20:55:17.966512 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1894d583bf046e54 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\" in 2.134s (2.134s including waiting). 
Image size: 500068323 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:15.28561826 +0000 UTC m=+11.215544278,LastTimestamp:2026-02-16 20:55:15.28561826 +0000 UTC m=+11.215544278,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.970720 master-0 kubenswrapper[4119]: E0216 20:55:17.970556 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1894d583cff9c27a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:15.570131578 +0000 UTC m=+11.500057596,LastTimestamp:2026-02-16 20:55:15.570131578 +0000 UTC m=+11.500057596,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.975149 master-0 kubenswrapper[4119]: E0216 20:55:17.975015 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1894d583d12c0d40 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:15.590204736 +0000 UTC m=+11.520130754,LastTimestamp:2026-02-16 20:55:15.590204736 +0000 UTC m=+11.520130754,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.980026 master-0 kubenswrapper[4119]: I0216 20:55:17.979971 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerStarted","Data":"5e7c38ffeebe9ecd58ceaa66f0e5d878c7328cfe4f821ef677aab62956457cf2"} Feb 16 20:55:17.980095 master-0 kubenswrapper[4119]: I0216 20:55:17.980030 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:17.980129 master-0 kubenswrapper[4119]: I0216 20:55:17.980065 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:17.980417 master-0 kubenswrapper[4119]: I0216 20:55:17.980376 4119 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:55:17.981269 master-0 kubenswrapper[4119]: I0216 20:55:17.981218 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:17.981269 master-0 kubenswrapper[4119]: I0216 20:55:17.981244 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:17.981368 master-0 kubenswrapper[4119]: I0216 
20:55:17.981274 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:17.981368 master-0 kubenswrapper[4119]: I0216 20:55:17.981287 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:17.981927 master-0 kubenswrapper[4119]: I0216 20:55:17.981251 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:17.981927 master-0 kubenswrapper[4119]: I0216 20:55:17.981444 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:17.983373 master-0 kubenswrapper[4119]: E0216 20:55:17.983203 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1894d583e7ff3671 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:15.973142129 +0000 UTC m=+11.903068137,LastTimestamp:2026-02-16 20:55:15.973142129 +0000 UTC m=+11.903068137,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.990695 master-0 kubenswrapper[4119]: E0216 20:55:17.990406 4119 event.go:359] "Server rejected event 
(will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.1894d58337e9e5ab\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1894d58337e9e5ab kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:13.018955179 +0000 UTC m=+8.948881197,LastTimestamp:2026-02-16 20:55:16.176442805 +0000 UTC m=+12.106368833,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.994788 master-0 kubenswrapper[4119]: E0216 20:55:17.994673 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.1894d5833fbc89e2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1894d5833fbc89e2 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:13.15020029 +0000 UTC m=+9.080126308,LastTimestamp:2026-02-16 20:55:16.184629736 +0000 UTC 
m=+12.114555754,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:17.999184 master-0 kubenswrapper[4119]: E0216 20:55:17.999060 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1894d58451f117d0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\" in 3.329s (3.329s including waiting). 
Image size: 509806416 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:17.75060168 +0000 UTC m=+13.680527738,LastTimestamp:2026-02-16 20:55:17.75060168 +0000 UTC m=+13.680527738,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:18.003715 master-0 kubenswrapper[4119]: E0216 20:55:18.003582 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1894d5845ea93404 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:17.963994116 +0000 UTC m=+13.893920124,LastTimestamp:2026-02-16 20:55:17.963994116 +0000 UTC m=+13.893920124,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:18.007553 master-0 kubenswrapper[4119]: E0216 20:55:18.007432 4119 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1894d5845f2cb470 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:17.972612208 +0000 UTC m=+13.902538226,LastTimestamp:2026-02-16 20:55:17.972612208 +0000 UTC m=+13.902538226,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:18.009578 master-0 kubenswrapper[4119]: I0216 20:55:18.009522 4119 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:55:18.689039 master-0 kubenswrapper[4119]: I0216 20:55:18.688961 4119 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:55:18.694392 master-0 kubenswrapper[4119]: I0216 20:55:18.694348 4119 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:55:18.735710 master-0 kubenswrapper[4119]: I0216 20:55:18.735608 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 20:55:18.984065 master-0 kubenswrapper[4119]: I0216 20:55:18.983896 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:18.984923 master-0 kubenswrapper[4119]: I0216 20:55:18.983966 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:18.984995 master-0 
kubenswrapper[4119]: I0216 20:55:18.984229 4119 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:55:18.986273 master-0 kubenswrapper[4119]: I0216 20:55:18.986210 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:18.986273 master-0 kubenswrapper[4119]: I0216 20:55:18.986266 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:18.986437 master-0 kubenswrapper[4119]: I0216 20:55:18.986312 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:18.986877 master-0 kubenswrapper[4119]: I0216 20:55:18.986821 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:18.986961 master-0 kubenswrapper[4119]: I0216 20:55:18.986887 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:18.986961 master-0 kubenswrapper[4119]: I0216 20:55:18.986912 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:18.991348 master-0 kubenswrapper[4119]: I0216 20:55:18.991281 4119 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:55:19.230956 master-0 kubenswrapper[4119]: I0216 20:55:19.230790 4119 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 20:55:19.250929 master-0 kubenswrapper[4119]: I0216 20:55:19.250465 4119 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 20:55:19.734049 master-0 kubenswrapper[4119]: I0216 20:55:19.733795 4119 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 20:55:19.994625 master-0 kubenswrapper[4119]: I0216 20:55:19.994428 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:19.994625 master-0 kubenswrapper[4119]: I0216 20:55:19.994449 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:19.995798 master-0 kubenswrapper[4119]: I0216 20:55:19.995709 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:19.995798 master-0 kubenswrapper[4119]: I0216 20:55:19.995802 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:19.995951 master-0 kubenswrapper[4119]: I0216 20:55:19.995819 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:19.996231 master-0 kubenswrapper[4119]: I0216 20:55:19.996182 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:19.996308 master-0 kubenswrapper[4119]: I0216 20:55:19.996241 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:19.996308 master-0 kubenswrapper[4119]: I0216 20:55:19.996261 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:20.304888 master-0 kubenswrapper[4119]: W0216 20:55:20.304784 4119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource 
"runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 16 20:55:20.304888 master-0 kubenswrapper[4119]: E0216 20:55:20.304860 4119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 16 20:55:20.734832 master-0 kubenswrapper[4119]: I0216 20:55:20.734564 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 20:55:20.997131 master-0 kubenswrapper[4119]: I0216 20:55:20.997037 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:20.998129 master-0 kubenswrapper[4119]: I0216 20:55:20.998083 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:20.998214 master-0 kubenswrapper[4119]: I0216 20:55:20.998141 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:20.998214 master-0 kubenswrapper[4119]: I0216 20:55:20.998153 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:21.035457 master-0 kubenswrapper[4119]: W0216 20:55:21.035349 4119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 16 20:55:21.035457 master-0 kubenswrapper[4119]: E0216 20:55:21.035408 4119 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 16 20:55:21.591389 master-0 kubenswrapper[4119]: W0216 20:55:21.591301 4119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 16 20:55:21.591782 master-0 kubenswrapper[4119]: E0216 20:55:21.591402 4119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 16 20:55:21.732627 master-0 kubenswrapper[4119]: I0216 20:55:21.732559 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 20:55:22.733006 master-0 kubenswrapper[4119]: I0216 20:55:22.732926 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 20:55:23.734775 master-0 kubenswrapper[4119]: I0216 20:55:23.734536 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 20:55:23.803629 master-0 kubenswrapper[4119]: I0216 20:55:23.803511 4119 csr.go:261] 
certificate signing request csr-fj74s is approved, waiting to be issued Feb 16 20:55:24.093717 master-0 kubenswrapper[4119]: I0216 20:55:24.093543 4119 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:55:24.094113 master-0 kubenswrapper[4119]: I0216 20:55:24.093820 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:24.095423 master-0 kubenswrapper[4119]: I0216 20:55:24.095353 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:24.095423 master-0 kubenswrapper[4119]: I0216 20:55:24.095428 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:24.095609 master-0 kubenswrapper[4119]: I0216 20:55:24.095447 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:24.100969 master-0 kubenswrapper[4119]: I0216 20:55:24.100916 4119 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:55:24.352335 master-0 kubenswrapper[4119]: E0216 20:55:24.352135 4119 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 16 20:55:24.597864 master-0 kubenswrapper[4119]: I0216 20:55:24.597739 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:24.599725 master-0 kubenswrapper[4119]: I0216 20:55:24.599611 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:24.599838 master-0 
kubenswrapper[4119]: I0216 20:55:24.599740 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:24.599838 master-0 kubenswrapper[4119]: I0216 20:55:24.599769 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:24.599964 master-0 kubenswrapper[4119]: I0216 20:55:24.599844 4119 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 20:55:24.608105 master-0 kubenswrapper[4119]: E0216 20:55:24.607954 4119 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Feb 16 20:55:24.733321 master-0 kubenswrapper[4119]: I0216 20:55:24.733234 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 20:55:24.869109 master-0 kubenswrapper[4119]: E0216 20:55:24.868723 4119 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 16 20:55:25.003075 master-0 kubenswrapper[4119]: W0216 20:55:25.003006 4119 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 16 20:55:25.003292 master-0 kubenswrapper[4119]: E0216 20:55:25.003087 4119 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 16 20:55:25.024215 
master-0 kubenswrapper[4119]: I0216 20:55:25.024122 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:25.025043 master-0 kubenswrapper[4119]: I0216 20:55:25.024984 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:25.025043 master-0 kubenswrapper[4119]: I0216 20:55:25.025030 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:25.025274 master-0 kubenswrapper[4119]: I0216 20:55:25.025083 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:25.030406 master-0 kubenswrapper[4119]: I0216 20:55:25.030328 4119 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:55:25.734230 master-0 kubenswrapper[4119]: I0216 20:55:25.734142 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 20:55:26.026465 master-0 kubenswrapper[4119]: I0216 20:55:26.026371 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:26.027836 master-0 kubenswrapper[4119]: I0216 20:55:26.027763 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:26.027990 master-0 kubenswrapper[4119]: I0216 20:55:26.027848 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:26.027990 master-0 kubenswrapper[4119]: I0216 20:55:26.027875 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 
16 20:55:26.032866 master-0 kubenswrapper[4119]: I0216 20:55:26.032831 4119 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:55:26.734273 master-0 kubenswrapper[4119]: I0216 20:55:26.734016 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 20:55:26.875176 master-0 kubenswrapper[4119]: I0216 20:55:26.875075 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:26.876460 master-0 kubenswrapper[4119]: I0216 20:55:26.876422 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:26.876545 master-0 kubenswrapper[4119]: I0216 20:55:26.876473 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:26.876545 master-0 kubenswrapper[4119]: I0216 20:55:26.876489 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:26.877054 master-0 kubenswrapper[4119]: I0216 20:55:26.877025 4119 scope.go:117] "RemoveContainer" containerID="fa6acd923eb6f7f36904bc4aad9d4f575490f5bfc07409635501cc3f249e2be8" Feb 16 20:55:26.885170 master-0 kubenswrapper[4119]: E0216 20:55:26.885055 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1894d582428b655a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894d582428b655a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:08.902344026 +0000 UTC m=+4.832270054,LastTimestamp:2026-02-16 20:55:26.87925595 +0000 UTC m=+22.809181988,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:27.029435 master-0 kubenswrapper[4119]: I0216 20:55:27.029345 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:27.030630 master-0 kubenswrapper[4119]: I0216 20:55:27.030570 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:27.030630 master-0 kubenswrapper[4119]: I0216 20:55:27.030616 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:27.030630 master-0 kubenswrapper[4119]: I0216 20:55:27.030628 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:27.150122 master-0 kubenswrapper[4119]: E0216 20:55:27.149883 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1894d58250086428\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894d58250086428 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:09.128639528 +0000 UTC m=+5.058565546,LastTimestamp:2026-02-16 20:55:27.140984422 +0000 UTC m=+23.070910480,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:27.169485 master-0 kubenswrapper[4119]: E0216 20:55:27.169230 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1894d58250af7b39\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894d58250af7b39 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:09.139589945 +0000 UTC m=+5.069515963,LastTimestamp:2026-02-16 20:55:27.158332499 +0000 UTC m=+23.088258547,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:27.734610 master-0 kubenswrapper[4119]: I0216 20:55:27.734463 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 20:55:28.033973 master-0 kubenswrapper[4119]: I0216 20:55:28.033825 4119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/2.log" Feb 16 20:55:28.035031 master-0 kubenswrapper[4119]: I0216 20:55:28.034438 4119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/1.log" Feb 16 20:55:28.035031 master-0 kubenswrapper[4119]: I0216 20:55:28.035017 4119 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="34cedb032f29de87a57c244cfdac89c6368a83bd489ea19dfd7e57624682d8a7" exitCode=1 Feb 16 20:55:28.035186 master-0 kubenswrapper[4119]: I0216 20:55:28.035050 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"34cedb032f29de87a57c244cfdac89c6368a83bd489ea19dfd7e57624682d8a7"} Feb 16 20:55:28.035186 master-0 kubenswrapper[4119]: I0216 20:55:28.035091 4119 scope.go:117] "RemoveContainer" containerID="fa6acd923eb6f7f36904bc4aad9d4f575490f5bfc07409635501cc3f249e2be8" Feb 16 20:55:28.035186 master-0 kubenswrapper[4119]: I0216 20:55:28.035179 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:28.036472 master-0 kubenswrapper[4119]: I0216 20:55:28.036408 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:28.036472 master-0 kubenswrapper[4119]: I0216 20:55:28.036435 4119 kubelet_node_status.go:724] "Recording event message for node" 
node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:28.036472 master-0 kubenswrapper[4119]: I0216 20:55:28.036445 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:28.036857 master-0 kubenswrapper[4119]: I0216 20:55:28.036780 4119 scope.go:117] "RemoveContainer" containerID="34cedb032f29de87a57c244cfdac89c6368a83bd489ea19dfd7e57624682d8a7" Feb 16 20:55:28.036957 master-0 kubenswrapper[4119]: E0216 20:55:28.036930 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="b3322fd3717f4aec0d8f54ec7862c07e" Feb 16 20:55:28.045754 master-0 kubenswrapper[4119]: E0216 20:55:28.045493 4119 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1894d582ba40b18e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894d582ba40b18e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:55:10.910714254 +0000 UTC 
m=+6.840640282,LastTimestamp:2026-02-16 20:55:28.036897567 +0000 UTC m=+23.966823585,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:55:28.732262 master-0 kubenswrapper[4119]: I0216 20:55:28.732182 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 20:55:29.039990 master-0 kubenswrapper[4119]: I0216 20:55:29.039864 4119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/2.log" Feb 16 20:55:29.731903 master-0 kubenswrapper[4119]: I0216 20:55:29.731805 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 20:55:30.731367 master-0 kubenswrapper[4119]: I0216 20:55:30.731298 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 20:55:31.358809 master-0 kubenswrapper[4119]: E0216 20:55:31.358690 4119 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 16 20:55:31.608531 master-0 kubenswrapper[4119]: I0216 20:55:31.608446 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 
20:55:31.609783 master-0 kubenswrapper[4119]: I0216 20:55:31.609640 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:31.609783 master-0 kubenswrapper[4119]: I0216 20:55:31.609767 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:31.609882 master-0 kubenswrapper[4119]: I0216 20:55:31.609798 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:31.609882 master-0 kubenswrapper[4119]: I0216 20:55:31.609872 4119 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 20:55:31.617121 master-0 kubenswrapper[4119]: E0216 20:55:31.617072 4119 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Feb 16 20:55:31.730366 master-0 kubenswrapper[4119]: I0216 20:55:31.730300 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 20:55:32.730127 master-0 kubenswrapper[4119]: I0216 20:55:32.730056 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 20:55:33.733147 master-0 kubenswrapper[4119]: I0216 20:55:33.733041 4119 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 20:55:34.148975 master-0 
kubenswrapper[4119]: I0216 20:55:34.148922 4119 csr.go:257] certificate signing request csr-fj74s is issued Feb 16 20:55:34.597300 master-0 kubenswrapper[4119]: I0216 20:55:34.597244 4119 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 16 20:55:34.745817 master-0 kubenswrapper[4119]: I0216 20:55:34.745757 4119 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 20:55:34.768719 master-0 kubenswrapper[4119]: I0216 20:55:34.768637 4119 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 20:55:34.825776 master-0 kubenswrapper[4119]: I0216 20:55:34.825710 4119 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 20:55:34.869226 master-0 kubenswrapper[4119]: E0216 20:55:34.869045 4119 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 16 20:55:35.087100 master-0 kubenswrapper[4119]: I0216 20:55:35.087041 4119 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 20:55:35.087100 master-0 kubenswrapper[4119]: E0216 20:55:35.087105 4119 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Feb 16 20:55:35.108546 master-0 kubenswrapper[4119]: I0216 20:55:35.108478 4119 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 20:55:35.125121 master-0 kubenswrapper[4119]: I0216 20:55:35.125027 4119 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 20:55:35.150518 master-0 kubenswrapper[4119]: I0216 20:55:35.150436 4119 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-17 20:47:22 +0000 UTC, rotation deadline is 2026-02-17 16:02:22.439755145 +0000 UTC Feb 16 20:55:35.150518 
master-0 kubenswrapper[4119]: I0216 20:55:35.150509 4119 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h6m47.289265248s for next certificate rotation Feb 16 20:55:35.184349 master-0 kubenswrapper[4119]: I0216 20:55:35.184276 4119 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 20:55:35.459426 master-0 kubenswrapper[4119]: I0216 20:55:35.459286 4119 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 20:55:35.459426 master-0 kubenswrapper[4119]: E0216 20:55:35.459324 4119 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Feb 16 20:55:35.561873 master-0 kubenswrapper[4119]: I0216 20:55:35.561821 4119 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 20:55:35.578515 master-0 kubenswrapper[4119]: I0216 20:55:35.578473 4119 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 20:55:35.635317 master-0 kubenswrapper[4119]: I0216 20:55:35.635279 4119 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 20:55:35.903911 master-0 kubenswrapper[4119]: I0216 20:55:35.903842 4119 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 20:55:35.903911 master-0 kubenswrapper[4119]: E0216 20:55:35.903878 4119 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Feb 16 20:55:36.475519 master-0 kubenswrapper[4119]: I0216 20:55:36.475446 4119 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 20:55:36.493325 master-0 kubenswrapper[4119]: I0216 20:55:36.493241 4119 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 20:55:36.550100 master-0 kubenswrapper[4119]: I0216 
20:55:36.550037 4119 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 20:55:36.827805 master-0 kubenswrapper[4119]: I0216 20:55:36.827741 4119 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 20:55:36.827805 master-0 kubenswrapper[4119]: E0216 20:55:36.827793 4119 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Feb 16 20:55:37.354789 master-0 kubenswrapper[4119]: I0216 20:55:37.354696 4119 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 20:55:38.366101 master-0 kubenswrapper[4119]: E0216 20:55:38.366033 4119 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0" Feb 16 20:55:38.618378 master-0 kubenswrapper[4119]: I0216 20:55:38.618107 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:38.619596 master-0 kubenswrapper[4119]: I0216 20:55:38.619529 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:38.619596 master-0 kubenswrapper[4119]: I0216 20:55:38.619579 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:38.619596 master-0 kubenswrapper[4119]: I0216 20:55:38.619599 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:38.619909 master-0 kubenswrapper[4119]: I0216 20:55:38.619683 4119 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 20:55:38.630383 master-0 kubenswrapper[4119]: I0216 20:55:38.630299 4119 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Feb 16 20:55:38.630383 master-0 kubenswrapper[4119]: 
E0216 20:55:38.630357 4119 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found" Feb 16 20:55:38.640362 master-0 kubenswrapper[4119]: E0216 20:55:38.640294 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:38.740946 master-0 kubenswrapper[4119]: E0216 20:55:38.740868 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:38.842003 master-0 kubenswrapper[4119]: E0216 20:55:38.841933 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:38.942998 master-0 kubenswrapper[4119]: E0216 20:55:38.942811 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:39.043882 master-0 kubenswrapper[4119]: E0216 20:55:39.043811 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:39.144683 master-0 kubenswrapper[4119]: E0216 20:55:39.144561 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:39.245882 master-0 kubenswrapper[4119]: E0216 20:55:39.245697 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:39.346753 master-0 kubenswrapper[4119]: E0216 20:55:39.346465 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:39.389263 master-0 kubenswrapper[4119]: I0216 20:55:39.389156 4119 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 16 20:55:39.407712 master-0 kubenswrapper[4119]: I0216 20:55:39.407624 4119 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from 
k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 20:55:39.446819 master-0 kubenswrapper[4119]: E0216 20:55:39.446734 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:39.547591 master-0 kubenswrapper[4119]: E0216 20:55:39.547522 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:39.648810 master-0 kubenswrapper[4119]: E0216 20:55:39.648740 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:39.749594 master-0 kubenswrapper[4119]: E0216 20:55:39.749523 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:39.850669 master-0 kubenswrapper[4119]: E0216 20:55:39.850463 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:39.951822 master-0 kubenswrapper[4119]: E0216 20:55:39.951721 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:39.961479 master-0 kubenswrapper[4119]: I0216 20:55:39.961425 4119 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 20:55:40.052960 master-0 kubenswrapper[4119]: E0216 20:55:40.052833 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:40.154134 master-0 kubenswrapper[4119]: E0216 20:55:40.153904 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:40.254361 master-0 kubenswrapper[4119]: E0216 20:55:40.254282 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:40.355289 master-0 kubenswrapper[4119]: E0216 20:55:40.355217 4119 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:40.455819 master-0 kubenswrapper[4119]: E0216 20:55:40.455583 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:40.556706 master-0 kubenswrapper[4119]: E0216 20:55:40.556570 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:40.657589 master-0 kubenswrapper[4119]: E0216 20:55:40.657501 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:40.758130 master-0 kubenswrapper[4119]: E0216 20:55:40.758018 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:40.858963 master-0 kubenswrapper[4119]: E0216 20:55:40.858838 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:40.874548 master-0 kubenswrapper[4119]: I0216 20:55:40.874483 4119 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 20:55:40.876099 master-0 kubenswrapper[4119]: I0216 20:55:40.876031 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 20:55:40.876188 master-0 kubenswrapper[4119]: I0216 20:55:40.876166 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 20:55:40.876298 master-0 kubenswrapper[4119]: I0216 20:55:40.876197 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 20:55:40.877034 master-0 kubenswrapper[4119]: I0216 20:55:40.876977 4119 scope.go:117] "RemoveContainer" containerID="34cedb032f29de87a57c244cfdac89c6368a83bd489ea19dfd7e57624682d8a7" Feb 16 20:55:40.877314 
master-0 kubenswrapper[4119]: E0216 20:55:40.877262 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="b3322fd3717f4aec0d8f54ec7862c07e" Feb 16 20:55:40.959409 master-0 kubenswrapper[4119]: E0216 20:55:40.959293 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:41.060376 master-0 kubenswrapper[4119]: E0216 20:55:41.060135 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:41.161241 master-0 kubenswrapper[4119]: E0216 20:55:41.161097 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:41.262076 master-0 kubenswrapper[4119]: E0216 20:55:41.261952 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:41.363069 master-0 kubenswrapper[4119]: E0216 20:55:41.362839 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:41.463696 master-0 kubenswrapper[4119]: E0216 20:55:41.463584 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:41.564746 master-0 kubenswrapper[4119]: E0216 20:55:41.564581 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:41.665847 master-0 kubenswrapper[4119]: E0216 20:55:41.665609 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:41.766918 
master-0 kubenswrapper[4119]: E0216 20:55:41.766765 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:41.867458 master-0 kubenswrapper[4119]: E0216 20:55:41.867315 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:41.968714 master-0 kubenswrapper[4119]: E0216 20:55:41.968541 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:42.069080 master-0 kubenswrapper[4119]: E0216 20:55:42.068944 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:42.169584 master-0 kubenswrapper[4119]: E0216 20:55:42.169454 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:42.270404 master-0 kubenswrapper[4119]: E0216 20:55:42.270296 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:42.371248 master-0 kubenswrapper[4119]: E0216 20:55:42.371131 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:42.471849 master-0 kubenswrapper[4119]: E0216 20:55:42.471755 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:42.572932 master-0 kubenswrapper[4119]: E0216 20:55:42.572716 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:42.673702 master-0 kubenswrapper[4119]: E0216 20:55:42.673512 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:42.774108 master-0 kubenswrapper[4119]: E0216 20:55:42.773999 4119 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"master-0\" not found" Feb 16 20:55:42.874811 master-0 kubenswrapper[4119]: E0216 20:55:42.874613 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:42.975699 master-0 kubenswrapper[4119]: E0216 20:55:42.975540 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:43.076439 master-0 kubenswrapper[4119]: E0216 20:55:43.076323 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:43.177221 master-0 kubenswrapper[4119]: E0216 20:55:43.177050 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:43.277853 master-0 kubenswrapper[4119]: E0216 20:55:43.277747 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:43.378831 master-0 kubenswrapper[4119]: E0216 20:55:43.378691 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:43.480023 master-0 kubenswrapper[4119]: E0216 20:55:43.479782 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:43.580632 master-0 kubenswrapper[4119]: E0216 20:55:43.580534 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:43.681494 master-0 kubenswrapper[4119]: E0216 20:55:43.681365 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:43.782542 master-0 kubenswrapper[4119]: E0216 20:55:43.782427 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:43.883065 master-0 kubenswrapper[4119]: E0216 20:55:43.882924 4119 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:43.984232 master-0 kubenswrapper[4119]: E0216 20:55:43.984114 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:44.085501 master-0 kubenswrapper[4119]: E0216 20:55:44.085301 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:44.185924 master-0 kubenswrapper[4119]: E0216 20:55:44.185788 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:44.286199 master-0 kubenswrapper[4119]: E0216 20:55:44.286124 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:44.387133 master-0 kubenswrapper[4119]: E0216 20:55:44.386887 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:44.487438 master-0 kubenswrapper[4119]: E0216 20:55:44.487350 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:44.520772 master-0 kubenswrapper[4119]: I0216 20:55:44.520638 4119 csr.go:261] certificate signing request csr-jdnsl is approved, waiting to be issued Feb 16 20:55:44.531611 master-0 kubenswrapper[4119]: I0216 20:55:44.531451 4119 csr.go:257] certificate signing request csr-jdnsl is issued Feb 16 20:55:44.588001 master-0 kubenswrapper[4119]: E0216 20:55:44.587898 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:44.689216 master-0 kubenswrapper[4119]: E0216 20:55:44.688957 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:44.789566 master-0 kubenswrapper[4119]: E0216 20:55:44.789453 4119 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:44.869389 master-0 kubenswrapper[4119]: E0216 20:55:44.869277 4119 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 16 20:55:44.890374 master-0 kubenswrapper[4119]: E0216 20:55:44.890295 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:44.991382 master-0 kubenswrapper[4119]: E0216 20:55:44.991235 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:45.092169 master-0 kubenswrapper[4119]: E0216 20:55:45.091983 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:45.192962 master-0 kubenswrapper[4119]: E0216 20:55:45.192878 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:45.293340 master-0 kubenswrapper[4119]: E0216 20:55:45.293248 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:45.394080 master-0 kubenswrapper[4119]: E0216 20:55:45.394007 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:45.495199 master-0 kubenswrapper[4119]: E0216 20:55:45.495034 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:45.532902 master-0 kubenswrapper[4119]: I0216 20:55:45.532822 4119 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-17 20:47:22 +0000 UTC, rotation deadline is 2026-02-17 17:23:23.284632836 +0000 UTC Feb 16 20:55:45.532902 master-0 kubenswrapper[4119]: I0216 20:55:45.532877 4119 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Waiting 20h27m37.751760866s for next certificate rotation Feb 16 20:55:45.596196 master-0 kubenswrapper[4119]: E0216 20:55:45.596027 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:45.697151 master-0 kubenswrapper[4119]: E0216 20:55:45.697006 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:45.797826 master-0 kubenswrapper[4119]: E0216 20:55:45.797705 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:45.898276 master-0 kubenswrapper[4119]: E0216 20:55:45.898081 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:45.999200 master-0 kubenswrapper[4119]: E0216 20:55:45.999114 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:46.099894 master-0 kubenswrapper[4119]: E0216 20:55:46.099796 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:46.200598 master-0 kubenswrapper[4119]: E0216 20:55:46.200327 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:46.301055 master-0 kubenswrapper[4119]: E0216 20:55:46.300907 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:46.401978 master-0 kubenswrapper[4119]: E0216 20:55:46.401885 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:46.503043 master-0 kubenswrapper[4119]: E0216 20:55:46.502932 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:46.533993 master-0 
kubenswrapper[4119]: I0216 20:55:46.533872 4119 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-17 20:47:22 +0000 UTC, rotation deadline is 2026-02-17 16:23:07.58730144 +0000 UTC Feb 16 20:55:46.533993 master-0 kubenswrapper[4119]: I0216 20:55:46.533953 4119 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h27m21.053351556s for next certificate rotation Feb 16 20:55:46.603275 master-0 kubenswrapper[4119]: E0216 20:55:46.603147 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:46.704270 master-0 kubenswrapper[4119]: E0216 20:55:46.704182 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:46.804761 master-0 kubenswrapper[4119]: E0216 20:55:46.804554 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:46.905379 master-0 kubenswrapper[4119]: E0216 20:55:46.905309 4119 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:55:46.987378 master-0 kubenswrapper[4119]: I0216 20:55:46.987277 4119 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 20:55:47.181078 master-0 kubenswrapper[4119]: I0216 20:55:47.180842 4119 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 20:55:47.734913 master-0 kubenswrapper[4119]: I0216 20:55:47.734844 4119 apiserver.go:52] "Watching apiserver" Feb 16 20:55:47.740706 master-0 kubenswrapper[4119]: I0216 20:55:47.740576 4119 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 20:55:47.741071 master-0 kubenswrapper[4119]: I0216 20:55:47.740816 4119 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["assisted-installer/assisted-installer-controller-6llwf","openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw","openshift-network-operator/network-operator-6fcf4c966-n4hfs"] Feb 16 20:55:47.741978 master-0 kubenswrapper[4119]: I0216 20:55:47.741310 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-6llwf" Feb 16 20:55:47.741978 master-0 kubenswrapper[4119]: I0216 20:55:47.741457 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:55:47.741978 master-0 kubenswrapper[4119]: I0216 20:55:47.741461 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" Feb 16 20:55:47.744068 master-0 kubenswrapper[4119]: I0216 20:55:47.743966 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 20:55:47.744865 master-0 kubenswrapper[4119]: I0216 20:55:47.744812 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 20:55:47.745213 master-0 kubenswrapper[4119]: I0216 20:55:47.745162 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config" Feb 16 20:55:47.746184 master-0 kubenswrapper[4119]: I0216 20:55:47.746133 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 20:55:47.746720 master-0 kubenswrapper[4119]: I0216 20:55:47.746644 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 20:55:47.746872 master-0 kubenswrapper[4119]: I0216 20:55:47.746745 4119 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 20:55:47.746872 master-0 kubenswrapper[4119]: I0216 20:55:47.746836 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt" Feb 16 20:55:47.747081 master-0 kubenswrapper[4119]: I0216 20:55:47.747034 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt" Feb 16 20:55:47.747197 master-0 kubenswrapper[4119]: I0216 20:55:47.747176 4119 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret" Feb 16 20:55:47.752139 master-0 kubenswrapper[4119]: I0216 20:55:47.748109 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 20:55:47.828930 master-0 kubenswrapper[4119]: I0216 20:55:47.828830 4119 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Feb 16 20:55:47.919456 master-0 kubenswrapper[4119]: I0216 20:55:47.919357 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7pk6\" (UniqueName: \"kubernetes.io/projected/1b61063e-775e-421d-bf73-a6ef134293a0-kube-api-access-x7pk6\") pod \"network-operator-6fcf4c966-n4hfs\" (UID: \"1b61063e-775e-421d-bf73-a6ef134293a0\") " pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" Feb 16 20:55:47.919456 master-0 kubenswrapper[4119]: I0216 20:55:47.919436 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a012b98-9341-41a3-9321-0a099f8bb9da-service-ca\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:55:47.919887 master-0 kubenswrapper[4119]: I0216 20:55:47.919528 
4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-host-ca-bundle\") pod \"assisted-installer-controller-6llwf\" (UID: \"700bc24c-4b00-44f0-90b0-aa555fe5c7a8\") " pod="assisted-installer/assisted-installer-controller-6llwf" Feb 16 20:55:47.919887 master-0 kubenswrapper[4119]: I0216 20:55:47.919596 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a012b98-9341-41a3-9321-0a099f8bb9da-kube-api-access\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:55:47.919887 master-0 kubenswrapper[4119]: I0216 20:55:47.919634 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:55:47.919887 master-0 kubenswrapper[4119]: I0216 20:55:47.919708 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-host-resolv-conf\") pod \"assisted-installer-controller-6llwf\" (UID: \"700bc24c-4b00-44f0-90b0-aa555fe5c7a8\") " pod="assisted-installer/assisted-installer-controller-6llwf" Feb 16 20:55:47.919887 master-0 kubenswrapper[4119]: I0216 20:55:47.919793 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/1b61063e-775e-421d-bf73-a6ef134293a0-host-etc-kube\") pod \"network-operator-6fcf4c966-n4hfs\" (UID: \"1b61063e-775e-421d-bf73-a6ef134293a0\") " pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" Feb 16 20:55:47.919887 master-0 kubenswrapper[4119]: I0216 20:55:47.919833 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-sno-bootstrap-files\") pod \"assisted-installer-controller-6llwf\" (UID: \"700bc24c-4b00-44f0-90b0-aa555fe5c7a8\") " pod="assisted-installer/assisted-installer-controller-6llwf" Feb 16 20:55:47.920343 master-0 kubenswrapper[4119]: I0216 20:55:47.919949 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b61063e-775e-421d-bf73-a6ef134293a0-metrics-tls\") pod \"network-operator-6fcf4c966-n4hfs\" (UID: \"1b61063e-775e-421d-bf73-a6ef134293a0\") " pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" Feb 16 20:55:47.920343 master-0 kubenswrapper[4119]: I0216 20:55:47.920047 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3a012b98-9341-41a3-9321-0a099f8bb9da-etc-cvo-updatepayloads\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:55:47.920343 master-0 kubenswrapper[4119]: I0216 20:55:47.920100 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-host-var-run-resolv-conf\") pod \"assisted-installer-controller-6llwf\" (UID: 
\"700bc24c-4b00-44f0-90b0-aa555fe5c7a8\") " pod="assisted-installer/assisted-installer-controller-6llwf" Feb 16 20:55:47.920343 master-0 kubenswrapper[4119]: I0216 20:55:47.920142 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5td42\" (UniqueName: \"kubernetes.io/projected/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-kube-api-access-5td42\") pod \"assisted-installer-controller-6llwf\" (UID: \"700bc24c-4b00-44f0-90b0-aa555fe5c7a8\") " pod="assisted-installer/assisted-installer-controller-6llwf" Feb 16 20:55:47.920343 master-0 kubenswrapper[4119]: I0216 20:55:47.920234 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3a012b98-9341-41a3-9321-0a099f8bb9da-etc-ssl-certs\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:55:48.020972 master-0 kubenswrapper[4119]: I0216 20:55:48.020874 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-host-ca-bundle\") pod \"assisted-installer-controller-6llwf\" (UID: \"700bc24c-4b00-44f0-90b0-aa555fe5c7a8\") " pod="assisted-installer/assisted-installer-controller-6llwf" Feb 16 20:55:48.020972 master-0 kubenswrapper[4119]: I0216 20:55:48.020957 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a012b98-9341-41a3-9321-0a099f8bb9da-kube-api-access\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:55:48.020972 master-0 kubenswrapper[4119]: I0216 20:55:48.020980 4119 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-host-ca-bundle\") pod \"assisted-installer-controller-6llwf\" (UID: \"700bc24c-4b00-44f0-90b0-aa555fe5c7a8\") " pod="assisted-installer/assisted-installer-controller-6llwf" Feb 16 20:55:48.021356 master-0 kubenswrapper[4119]: I0216 20:55:48.021020 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-host-resolv-conf\") pod \"assisted-installer-controller-6llwf\" (UID: \"700bc24c-4b00-44f0-90b0-aa555fe5c7a8\") " pod="assisted-installer/assisted-installer-controller-6llwf" Feb 16 20:55:48.021356 master-0 kubenswrapper[4119]: I0216 20:55:48.021067 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-host-resolv-conf\") pod \"assisted-installer-controller-6llwf\" (UID: \"700bc24c-4b00-44f0-90b0-aa555fe5c7a8\") " pod="assisted-installer/assisted-installer-controller-6llwf" Feb 16 20:55:48.021356 master-0 kubenswrapper[4119]: I0216 20:55:48.021081 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1b61063e-775e-421d-bf73-a6ef134293a0-host-etc-kube\") pod \"network-operator-6fcf4c966-n4hfs\" (UID: \"1b61063e-775e-421d-bf73-a6ef134293a0\") " pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" Feb 16 20:55:48.021356 master-0 kubenswrapper[4119]: I0216 20:55:48.021149 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " 
pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:55:48.021356 master-0 kubenswrapper[4119]: I0216 20:55:48.021151 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1b61063e-775e-421d-bf73-a6ef134293a0-host-etc-kube\") pod \"network-operator-6fcf4c966-n4hfs\" (UID: \"1b61063e-775e-421d-bf73-a6ef134293a0\") " pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" Feb 16 20:55:48.021356 master-0 kubenswrapper[4119]: I0216 20:55:48.021340 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-sno-bootstrap-files\") pod \"assisted-installer-controller-6llwf\" (UID: \"700bc24c-4b00-44f0-90b0-aa555fe5c7a8\") " pod="assisted-installer/assisted-installer-controller-6llwf" Feb 16 20:55:48.021889 master-0 kubenswrapper[4119]: I0216 20:55:48.021370 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b61063e-775e-421d-bf73-a6ef134293a0-metrics-tls\") pod \"network-operator-6fcf4c966-n4hfs\" (UID: \"1b61063e-775e-421d-bf73-a6ef134293a0\") " pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" Feb 16 20:55:48.021889 master-0 kubenswrapper[4119]: I0216 20:55:48.021395 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3a012b98-9341-41a3-9321-0a099f8bb9da-etc-cvo-updatepayloads\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:55:48.021889 master-0 kubenswrapper[4119]: I0216 20:55:48.021419 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: 
\"kubernetes.io/host-path/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-host-var-run-resolv-conf\") pod \"assisted-installer-controller-6llwf\" (UID: \"700bc24c-4b00-44f0-90b0-aa555fe5c7a8\") " pod="assisted-installer/assisted-installer-controller-6llwf" Feb 16 20:55:48.021889 master-0 kubenswrapper[4119]: I0216 20:55:48.021443 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5td42\" (UniqueName: \"kubernetes.io/projected/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-kube-api-access-5td42\") pod \"assisted-installer-controller-6llwf\" (UID: \"700bc24c-4b00-44f0-90b0-aa555fe5c7a8\") " pod="assisted-installer/assisted-installer-controller-6llwf" Feb 16 20:55:48.021889 master-0 kubenswrapper[4119]: I0216 20:55:48.021469 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3a012b98-9341-41a3-9321-0a099f8bb9da-etc-ssl-certs\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:55:48.021889 master-0 kubenswrapper[4119]: I0216 20:55:48.021498 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7pk6\" (UniqueName: \"kubernetes.io/projected/1b61063e-775e-421d-bf73-a6ef134293a0-kube-api-access-x7pk6\") pod \"network-operator-6fcf4c966-n4hfs\" (UID: \"1b61063e-775e-421d-bf73-a6ef134293a0\") " pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" Feb 16 20:55:48.021889 master-0 kubenswrapper[4119]: I0216 20:55:48.021519 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a012b98-9341-41a3-9321-0a099f8bb9da-service-ca\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" 
Feb 16 20:55:48.022418 master-0 kubenswrapper[4119]: I0216 20:55:48.021898 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-host-var-run-resolv-conf\") pod \"assisted-installer-controller-6llwf\" (UID: \"700bc24c-4b00-44f0-90b0-aa555fe5c7a8\") " pod="assisted-installer/assisted-installer-controller-6llwf" Feb 16 20:55:48.022418 master-0 kubenswrapper[4119]: I0216 20:55:48.021949 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3a012b98-9341-41a3-9321-0a099f8bb9da-etc-ssl-certs\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:55:48.022418 master-0 kubenswrapper[4119]: I0216 20:55:48.022001 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-sno-bootstrap-files\") pod \"assisted-installer-controller-6llwf\" (UID: \"700bc24c-4b00-44f0-90b0-aa555fe5c7a8\") " pod="assisted-installer/assisted-installer-controller-6llwf" Feb 16 20:55:48.022418 master-0 kubenswrapper[4119]: I0216 20:55:48.022031 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3a012b98-9341-41a3-9321-0a099f8bb9da-etc-cvo-updatepayloads\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:55:48.022418 master-0 kubenswrapper[4119]: E0216 20:55:48.022190 4119 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 
20:55:48.022418 master-0 kubenswrapper[4119]: E0216 20:55:48.022314 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert podName:3a012b98-9341-41a3-9321-0a099f8bb9da nodeName:}" failed. No retries permitted until 2026-02-16 20:55:48.522276849 +0000 UTC m=+44.452203157 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert") pod "cluster-version-operator-76959b6567-7jlsw" (UID: "3a012b98-9341-41a3-9321-0a099f8bb9da") : secret "cluster-version-operator-serving-cert" not found Feb 16 20:55:48.022418 master-0 kubenswrapper[4119]: I0216 20:55:48.022330 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a012b98-9341-41a3-9321-0a099f8bb9da-service-ca\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:55:48.023733 master-0 kubenswrapper[4119]: I0216 20:55:48.023614 4119 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 16 20:55:48.030683 master-0 kubenswrapper[4119]: I0216 20:55:48.030577 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b61063e-775e-421d-bf73-a6ef134293a0-metrics-tls\") pod \"network-operator-6fcf4c966-n4hfs\" (UID: \"1b61063e-775e-421d-bf73-a6ef134293a0\") " pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" Feb 16 20:55:48.051375 master-0 kubenswrapper[4119]: I0216 20:55:48.051315 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5td42\" (UniqueName: \"kubernetes.io/projected/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-kube-api-access-5td42\") pod \"assisted-installer-controller-6llwf\" (UID: \"700bc24c-4b00-44f0-90b0-aa555fe5c7a8\") " pod="assisted-installer/assisted-installer-controller-6llwf" Feb 16 20:55:48.052099 master-0 kubenswrapper[4119]: I0216 20:55:48.052039 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7pk6\" (UniqueName: \"kubernetes.io/projected/1b61063e-775e-421d-bf73-a6ef134293a0-kube-api-access-x7pk6\") pod \"network-operator-6fcf4c966-n4hfs\" (UID: \"1b61063e-775e-421d-bf73-a6ef134293a0\") " pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" Feb 16 20:55:48.052323 master-0 kubenswrapper[4119]: I0216 20:55:48.052253 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a012b98-9341-41a3-9321-0a099f8bb9da-kube-api-access\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:55:48.072721 master-0 kubenswrapper[4119]: I0216 20:55:48.072432 4119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-6llwf" Feb 16 20:55:48.091567 master-0 kubenswrapper[4119]: I0216 20:55:48.091484 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" Feb 16 20:55:48.429621 master-0 kubenswrapper[4119]: I0216 20:55:48.428981 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" event={"ID":"1b61063e-775e-421d-bf73-a6ef134293a0","Type":"ContainerStarted","Data":"957c111d10e2d292281a50f8cc278f441c1f3165b491de07cd91b63ab4d96530"} Feb 16 20:55:48.430105 master-0 kubenswrapper[4119]: I0216 20:55:48.430060 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-6llwf" event={"ID":"700bc24c-4b00-44f0-90b0-aa555fe5c7a8","Type":"ContainerStarted","Data":"2e5b179a0033062cd2b178034bcb5784ab1edcaef771f5cac5fd7b9ba67359d1"} Feb 16 20:55:48.525078 master-0 kubenswrapper[4119]: I0216 20:55:48.524977 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:55:48.525409 master-0 kubenswrapper[4119]: E0216 20:55:48.525121 4119 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 20:55:48.525409 master-0 kubenswrapper[4119]: E0216 20:55:48.525171 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert podName:3a012b98-9341-41a3-9321-0a099f8bb9da nodeName:}" failed. 
No retries permitted until 2026-02-16 20:55:49.525156661 +0000 UTC m=+45.455082679 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert") pod "cluster-version-operator-76959b6567-7jlsw" (UID: "3a012b98-9341-41a3-9321-0a099f8bb9da") : secret "cluster-version-operator-serving-cert" not found Feb 16 20:55:49.532179 master-0 kubenswrapper[4119]: I0216 20:55:49.532111 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:55:49.532788 master-0 kubenswrapper[4119]: E0216 20:55:49.532263 4119 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 20:55:49.532788 master-0 kubenswrapper[4119]: E0216 20:55:49.532323 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert podName:3a012b98-9341-41a3-9321-0a099f8bb9da nodeName:}" failed. No retries permitted until 2026-02-16 20:55:51.532304472 +0000 UTC m=+47.462230490 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert") pod "cluster-version-operator-76959b6567-7jlsw" (UID: "3a012b98-9341-41a3-9321-0a099f8bb9da") : secret "cluster-version-operator-serving-cert" not found Feb 16 20:55:51.548614 master-0 kubenswrapper[4119]: I0216 20:55:51.548533 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:55:51.549277 master-0 kubenswrapper[4119]: E0216 20:55:51.548752 4119 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 20:55:51.549277 master-0 kubenswrapper[4119]: E0216 20:55:51.548858 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert podName:3a012b98-9341-41a3-9321-0a099f8bb9da nodeName:}" failed. No retries permitted until 2026-02-16 20:55:55.54883792 +0000 UTC m=+51.478763938 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert") pod "cluster-version-operator-76959b6567-7jlsw" (UID: "3a012b98-9341-41a3-9321-0a099f8bb9da") : secret "cluster-version-operator-serving-cert" not found Feb 16 20:55:52.441073 master-0 kubenswrapper[4119]: I0216 20:55:52.440985 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" event={"ID":"1b61063e-775e-421d-bf73-a6ef134293a0","Type":"ContainerStarted","Data":"22ac853b44d567411363f432db892ab502ff1733ca2ac03896be62f2c9a7c4fc"} Feb 16 20:55:52.479508 master-0 kubenswrapper[4119]: I0216 20:55:52.479369 4119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" podStartSLOduration=9.68938547 podStartE2EDuration="13.479339176s" podCreationTimestamp="2026-02-16 20:55:39 +0000 UTC" firstStartedPulling="2026-02-16 20:55:48.11202376 +0000 UTC m=+44.041949818" lastFinishedPulling="2026-02-16 20:55:51.901977506 +0000 UTC m=+47.831903524" observedRunningTime="2026-02-16 20:55:52.463309364 +0000 UTC m=+48.393235422" watchObservedRunningTime="2026-02-16 20:55:52.479339176 +0000 UTC m=+48.409265224" Feb 16 20:55:54.786075 master-0 kubenswrapper[4119]: I0216 20:55:54.786018 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-zmqd7"] Feb 16 20:55:54.787430 master-0 kubenswrapper[4119]: I0216 20:55:54.787091 4119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-zmqd7" Feb 16 20:55:54.870152 master-0 kubenswrapper[4119]: I0216 20:55:54.870107 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phrd4\" (UniqueName: \"kubernetes.io/projected/cc1d7efb-93cd-4f49-ace0-2144532cae9e-kube-api-access-phrd4\") pod \"mtu-prober-zmqd7\" (UID: \"cc1d7efb-93cd-4f49-ace0-2144532cae9e\") " pod="openshift-network-operator/mtu-prober-zmqd7" Feb 16 20:55:54.894808 master-0 kubenswrapper[4119]: I0216 20:55:54.894304 4119 scope.go:117] "RemoveContainer" containerID="34cedb032f29de87a57c244cfdac89c6368a83bd489ea19dfd7e57624682d8a7" Feb 16 20:55:54.894808 master-0 kubenswrapper[4119]: I0216 20:55:54.894446 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"] Feb 16 20:55:54.970957 master-0 kubenswrapper[4119]: I0216 20:55:54.970804 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phrd4\" (UniqueName: \"kubernetes.io/projected/cc1d7efb-93cd-4f49-ace0-2144532cae9e-kube-api-access-phrd4\") pod \"mtu-prober-zmqd7\" (UID: \"cc1d7efb-93cd-4f49-ace0-2144532cae9e\") " pod="openshift-network-operator/mtu-prober-zmqd7" Feb 16 20:55:54.989692 master-0 kubenswrapper[4119]: I0216 20:55:54.989580 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phrd4\" (UniqueName: \"kubernetes.io/projected/cc1d7efb-93cd-4f49-ace0-2144532cae9e-kube-api-access-phrd4\") pod \"mtu-prober-zmqd7\" (UID: \"cc1d7efb-93cd-4f49-ace0-2144532cae9e\") " pod="openshift-network-operator/mtu-prober-zmqd7" Feb 16 20:55:55.098212 master-0 kubenswrapper[4119]: I0216 20:55:55.098068 4119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-zmqd7" Feb 16 20:55:55.113621 master-0 kubenswrapper[4119]: W0216 20:55:55.113529 4119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc1d7efb_93cd_4f49_ace0_2144532cae9e.slice/crio-27cab70a8212b927f3a896bd289c92658d551d4d6062085d094a691e761d0282 WatchSource:0}: Error finding container 27cab70a8212b927f3a896bd289c92658d551d4d6062085d094a691e761d0282: Status 404 returned error can't find the container with id 27cab70a8212b927f3a896bd289c92658d551d4d6062085d094a691e761d0282 Feb 16 20:55:55.450678 master-0 kubenswrapper[4119]: I0216 20:55:55.450586 4119 generic.go:334] "Generic (PLEG): container finished" podID="cc1d7efb-93cd-4f49-ace0-2144532cae9e" containerID="ffb676f67b4284795ed9016656d43ca3b8d0c5d83ea808c4b84c0f1bccf3bdd0" exitCode=0 Feb 16 20:55:55.450777 master-0 kubenswrapper[4119]: I0216 20:55:55.450721 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-zmqd7" event={"ID":"cc1d7efb-93cd-4f49-ace0-2144532cae9e","Type":"ContainerDied","Data":"ffb676f67b4284795ed9016656d43ca3b8d0c5d83ea808c4b84c0f1bccf3bdd0"} Feb 16 20:55:55.450842 master-0 kubenswrapper[4119]: I0216 20:55:55.450807 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-zmqd7" event={"ID":"cc1d7efb-93cd-4f49-ace0-2144532cae9e","Type":"ContainerStarted","Data":"27cab70a8212b927f3a896bd289c92658d551d4d6062085d094a691e761d0282"} Feb 16 20:55:55.453444 master-0 kubenswrapper[4119]: I0216 20:55:55.453400 4119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/2.log" Feb 16 20:55:55.455055 master-0 kubenswrapper[4119]: I0216 20:55:55.454994 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"9ea7eb4c5b7177a7e2ac3c5dca26fbf5f811d30a8d29e8b826572146fe10d264"} Feb 16 20:55:55.458145 master-0 kubenswrapper[4119]: I0216 20:55:55.458104 4119 generic.go:334] "Generic (PLEG): container finished" podID="700bc24c-4b00-44f0-90b0-aa555fe5c7a8" containerID="fa302e5e493b2dfa58bae20f0ca7e4cc187d6d95bf769b99faf796dd889e114f" exitCode=0 Feb 16 20:55:55.458211 master-0 kubenswrapper[4119]: I0216 20:55:55.458153 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-6llwf" event={"ID":"700bc24c-4b00-44f0-90b0-aa555fe5c7a8","Type":"ContainerDied","Data":"fa302e5e493b2dfa58bae20f0ca7e4cc187d6d95bf769b99faf796dd889e114f"} Feb 16 20:55:55.508004 master-0 kubenswrapper[4119]: I0216 20:55:55.507883 4119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=1.50784629 podStartE2EDuration="1.50784629s" podCreationTimestamp="2026-02-16 20:55:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:55:55.507600544 +0000 UTC m=+51.437526562" watchObservedRunningTime="2026-02-16 20:55:55.50784629 +0000 UTC m=+51.437772338" Feb 16 20:55:55.575455 master-0 kubenswrapper[4119]: I0216 20:55:55.575209 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:55:55.575809 master-0 kubenswrapper[4119]: E0216 20:55:55.575459 4119 secret.go:189] Couldn't get secret 
openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 20:55:55.575809 master-0 kubenswrapper[4119]: E0216 20:55:55.575556 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert podName:3a012b98-9341-41a3-9321-0a099f8bb9da nodeName:}" failed. No retries permitted until 2026-02-16 20:56:03.575525143 +0000 UTC m=+59.505451202 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert") pod "cluster-version-operator-76959b6567-7jlsw" (UID: "3a012b98-9341-41a3-9321-0a099f8bb9da") : secret "cluster-version-operator-serving-cert" not found Feb 16 20:55:56.496255 master-0 kubenswrapper[4119]: I0216 20:55:56.495577 4119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-zmqd7" Feb 16 20:55:56.505704 master-0 kubenswrapper[4119]: I0216 20:55:56.504459 4119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-6llwf" Feb 16 20:55:56.684220 master-0 kubenswrapper[4119]: I0216 20:55:56.684114 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-host-resolv-conf\") pod \"700bc24c-4b00-44f0-90b0-aa555fe5c7a8\" (UID: \"700bc24c-4b00-44f0-90b0-aa555fe5c7a8\") " Feb 16 20:55:56.684220 master-0 kubenswrapper[4119]: I0216 20:55:56.684210 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-host-var-run-resolv-conf\") pod \"700bc24c-4b00-44f0-90b0-aa555fe5c7a8\" (UID: \"700bc24c-4b00-44f0-90b0-aa555fe5c7a8\") " Feb 16 20:55:56.684539 master-0 kubenswrapper[4119]: I0216 20:55:56.684272 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5td42\" (UniqueName: \"kubernetes.io/projected/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-kube-api-access-5td42\") pod \"700bc24c-4b00-44f0-90b0-aa555fe5c7a8\" (UID: \"700bc24c-4b00-44f0-90b0-aa555fe5c7a8\") " Feb 16 20:55:56.684539 master-0 kubenswrapper[4119]: I0216 20:55:56.684310 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-sno-bootstrap-files\") pod \"700bc24c-4b00-44f0-90b0-aa555fe5c7a8\" (UID: \"700bc24c-4b00-44f0-90b0-aa555fe5c7a8\") " Feb 16 20:55:56.684539 master-0 kubenswrapper[4119]: I0216 20:55:56.684302 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "700bc24c-4b00-44f0-90b0-aa555fe5c7a8" (UID: "700bc24c-4b00-44f0-90b0-aa555fe5c7a8"). 
InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:55:56.684539 master-0 kubenswrapper[4119]: I0216 20:55:56.684340 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-host-ca-bundle\") pod \"700bc24c-4b00-44f0-90b0-aa555fe5c7a8\" (UID: \"700bc24c-4b00-44f0-90b0-aa555fe5c7a8\") " Feb 16 20:55:56.684539 master-0 kubenswrapper[4119]: I0216 20:55:56.684416 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "700bc24c-4b00-44f0-90b0-aa555fe5c7a8" (UID: "700bc24c-4b00-44f0-90b0-aa555fe5c7a8"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:55:56.684539 master-0 kubenswrapper[4119]: I0216 20:55:56.684405 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "700bc24c-4b00-44f0-90b0-aa555fe5c7a8" (UID: "700bc24c-4b00-44f0-90b0-aa555fe5c7a8"). InnerVolumeSpecName "host-var-run-resolv-conf". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:55:56.684539 master-0 kubenswrapper[4119]: I0216 20:55:56.684458 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phrd4\" (UniqueName: \"kubernetes.io/projected/cc1d7efb-93cd-4f49-ace0-2144532cae9e-kube-api-access-phrd4\") pod \"cc1d7efb-93cd-4f49-ace0-2144532cae9e\" (UID: \"cc1d7efb-93cd-4f49-ace0-2144532cae9e\") " Feb 16 20:55:56.684865 master-0 kubenswrapper[4119]: I0216 20:55:56.684521 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "700bc24c-4b00-44f0-90b0-aa555fe5c7a8" (UID: "700bc24c-4b00-44f0-90b0-aa555fe5c7a8"). InnerVolumeSpecName "sno-bootstrap-files". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:55:56.684865 master-0 kubenswrapper[4119]: I0216 20:55:56.684752 4119 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\"" Feb 16 20:55:56.684865 master-0 kubenswrapper[4119]: I0216 20:55:56.684783 4119 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-host-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 20:55:56.684865 master-0 kubenswrapper[4119]: I0216 20:55:56.684804 4119 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-host-resolv-conf\") on node \"master-0\" DevicePath \"\"" Feb 16 20:55:56.684865 master-0 kubenswrapper[4119]: I0216 20:55:56.684829 4119 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: 
\"kubernetes.io/host-path/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\"" Feb 16 20:55:56.690123 master-0 kubenswrapper[4119]: I0216 20:55:56.690060 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc1d7efb-93cd-4f49-ace0-2144532cae9e-kube-api-access-phrd4" (OuterVolumeSpecName: "kube-api-access-phrd4") pod "cc1d7efb-93cd-4f49-ace0-2144532cae9e" (UID: "cc1d7efb-93cd-4f49-ace0-2144532cae9e"). InnerVolumeSpecName "kube-api-access-phrd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:55:56.690408 master-0 kubenswrapper[4119]: I0216 20:55:56.690344 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-kube-api-access-5td42" (OuterVolumeSpecName: "kube-api-access-5td42") pod "700bc24c-4b00-44f0-90b0-aa555fe5c7a8" (UID: "700bc24c-4b00-44f0-90b0-aa555fe5c7a8"). InnerVolumeSpecName "kube-api-access-5td42". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:55:56.785668 master-0 kubenswrapper[4119]: I0216 20:55:56.785605 4119 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phrd4\" (UniqueName: \"kubernetes.io/projected/cc1d7efb-93cd-4f49-ace0-2144532cae9e-kube-api-access-phrd4\") on node \"master-0\" DevicePath \"\"" Feb 16 20:55:56.785668 master-0 kubenswrapper[4119]: I0216 20:55:56.785667 4119 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5td42\" (UniqueName: \"kubernetes.io/projected/700bc24c-4b00-44f0-90b0-aa555fe5c7a8-kube-api-access-5td42\") on node \"master-0\" DevicePath \"\"" Feb 16 20:55:57.464973 master-0 kubenswrapper[4119]: I0216 20:55:57.464626 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-zmqd7" event={"ID":"cc1d7efb-93cd-4f49-ace0-2144532cae9e","Type":"ContainerDied","Data":"27cab70a8212b927f3a896bd289c92658d551d4d6062085d094a691e761d0282"} Feb 16 20:55:57.464973 master-0 kubenswrapper[4119]: I0216 20:55:57.464946 4119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27cab70a8212b927f3a896bd289c92658d551d4d6062085d094a691e761d0282" Feb 16 20:55:57.464973 master-0 kubenswrapper[4119]: I0216 20:55:57.464755 4119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-zmqd7" Feb 16 20:55:57.466363 master-0 kubenswrapper[4119]: I0216 20:55:57.466288 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-6llwf" event={"ID":"700bc24c-4b00-44f0-90b0-aa555fe5c7a8","Type":"ContainerDied","Data":"2e5b179a0033062cd2b178034bcb5784ab1edcaef771f5cac5fd7b9ba67359d1"} Feb 16 20:55:57.466363 master-0 kubenswrapper[4119]: I0216 20:55:57.466319 4119 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e5b179a0033062cd2b178034bcb5784ab1edcaef771f5cac5fd7b9ba67359d1" Feb 16 20:55:57.466497 master-0 kubenswrapper[4119]: I0216 20:55:57.466454 4119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-6llwf" Feb 16 20:55:59.783979 master-0 kubenswrapper[4119]: I0216 20:55:59.783914 4119 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-zmqd7"] Feb 16 20:55:59.789844 master-0 kubenswrapper[4119]: I0216 20:55:59.789752 4119 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-zmqd7"] Feb 16 20:56:00.879507 master-0 kubenswrapper[4119]: I0216 20:56:00.879394 4119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc1d7efb-93cd-4f49-ace0-2144532cae9e" path="/var/lib/kubelet/pods/cc1d7efb-93cd-4f49-ace0-2144532cae9e/volumes" Feb 16 20:56:03.640628 master-0 kubenswrapper[4119]: I0216 20:56:03.640544 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:56:03.641849 master-0 kubenswrapper[4119]: E0216 20:56:03.640859 
4119 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 20:56:03.641849 master-0 kubenswrapper[4119]: E0216 20:56:03.641450 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert podName:3a012b98-9341-41a3-9321-0a099f8bb9da nodeName:}" failed. No retries permitted until 2026-02-16 20:56:19.641393442 +0000 UTC m=+75.571319490 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert") pod "cluster-version-operator-76959b6567-7jlsw" (UID: "3a012b98-9341-41a3-9321-0a099f8bb9da") : secret "cluster-version-operator-serving-cert" not found Feb 16 20:56:04.671726 master-0 kubenswrapper[4119]: I0216 20:56:04.671585 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-65zz6"] Feb 16 20:56:04.672799 master-0 kubenswrapper[4119]: E0216 20:56:04.671801 4119 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="700bc24c-4b00-44f0-90b0-aa555fe5c7a8" containerName="assisted-installer-controller" Feb 16 20:56:04.672799 master-0 kubenswrapper[4119]: I0216 20:56:04.671836 4119 state_mem.go:107] "Deleted CPUSet assignment" podUID="700bc24c-4b00-44f0-90b0-aa555fe5c7a8" containerName="assisted-installer-controller" Feb 16 20:56:04.672799 master-0 kubenswrapper[4119]: E0216 20:56:04.671862 4119 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc1d7efb-93cd-4f49-ace0-2144532cae9e" containerName="prober" Feb 16 20:56:04.672799 master-0 kubenswrapper[4119]: I0216 20:56:04.671880 4119 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc1d7efb-93cd-4f49-ace0-2144532cae9e" containerName="prober" Feb 16 20:56:04.672799 master-0 kubenswrapper[4119]: I0216 20:56:04.671940 4119 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="700bc24c-4b00-44f0-90b0-aa555fe5c7a8" containerName="assisted-installer-controller" Feb 16 20:56:04.672799 master-0 kubenswrapper[4119]: I0216 20:56:04.671963 4119 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc1d7efb-93cd-4f49-ace0-2144532cae9e" containerName="prober" Feb 16 20:56:04.672799 master-0 kubenswrapper[4119]: I0216 20:56:04.672382 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.675784 master-0 kubenswrapper[4119]: I0216 20:56:04.675707 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 20:56:04.675784 master-0 kubenswrapper[4119]: I0216 20:56:04.675767 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 20:56:04.675989 master-0 kubenswrapper[4119]: I0216 20:56:04.675777 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 20:56:04.675989 master-0 kubenswrapper[4119]: I0216 20:56:04.675978 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 20:56:04.749731 master-0 kubenswrapper[4119]: I0216 20:56:04.749602 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-netns\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.749731 master-0 kubenswrapper[4119]: I0216 20:56:04.749735 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-system-cni-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " 
pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.749989 master-0 kubenswrapper[4119]: I0216 20:56:04.749778 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-cnibin\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.749989 master-0 kubenswrapper[4119]: I0216 20:56:04.749817 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-etc-kubernetes\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.749989 master-0 kubenswrapper[4119]: I0216 20:56:04.749932 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-cni-binary-copy\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.750105 master-0 kubenswrapper[4119]: I0216 20:56:04.750010 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-cni-bin\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.750105 master-0 kubenswrapper[4119]: I0216 20:56:04.750047 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-kubelet\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " 
pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.750105 master-0 kubenswrapper[4119]: I0216 20:56:04.750083 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmvtk\" (UniqueName: \"kubernetes.io/projected/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-kube-api-access-zmvtk\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.750211 master-0 kubenswrapper[4119]: I0216 20:56:04.750120 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-cni-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.750255 master-0 kubenswrapper[4119]: I0216 20:56:04.750198 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-k8s-cni-cncf-io\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.750603 master-0 kubenswrapper[4119]: I0216 20:56:04.750247 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-hostroot\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.750603 master-0 kubenswrapper[4119]: I0216 20:56:04.750287 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-daemon-config\") pod \"multus-65zz6\" (UID: 
\"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.750603 master-0 kubenswrapper[4119]: I0216 20:56:04.750324 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-multus-certs\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.750603 master-0 kubenswrapper[4119]: I0216 20:56:04.750363 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-conf-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.750803 master-0 kubenswrapper[4119]: I0216 20:56:04.750546 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-os-release\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.750803 master-0 kubenswrapper[4119]: I0216 20:56:04.750709 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-socket-dir-parent\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.750803 master-0 kubenswrapper[4119]: I0216 20:56:04.750773 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-cni-multus\") pod 
\"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.851606 master-0 kubenswrapper[4119]: I0216 20:56:04.851511 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-k8s-cni-cncf-io\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.852547 master-0 kubenswrapper[4119]: I0216 20:56:04.852502 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-kubelet\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.852834 master-0 kubenswrapper[4119]: I0216 20:56:04.852800 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmvtk\" (UniqueName: \"kubernetes.io/projected/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-kube-api-access-zmvtk\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.853063 master-0 kubenswrapper[4119]: I0216 20:56:04.853030 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-cni-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.853522 master-0 kubenswrapper[4119]: I0216 20:56:04.853414 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-cni-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " 
pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.854049 master-0 kubenswrapper[4119]: I0216 20:56:04.852789 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-k8s-cni-cncf-io\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.854160 master-0 kubenswrapper[4119]: I0216 20:56:04.853494 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-daemon-config\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.854318 master-0 kubenswrapper[4119]: I0216 20:56:04.854289 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-multus-certs\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.854465 master-0 kubenswrapper[4119]: I0216 20:56:04.854446 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-hostroot\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.854559 master-0 kubenswrapper[4119]: I0216 20:56:04.854537 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-hostroot\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.854685 master-0 kubenswrapper[4119]: I0216 20:56:04.852861 
4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-kubelet\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.854795 master-0 kubenswrapper[4119]: I0216 20:56:04.854569 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-conf-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.854909 master-0 kubenswrapper[4119]: I0216 20:56:04.854892 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-socket-dir-parent\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.854976 master-0 kubenswrapper[4119]: I0216 20:56:04.854392 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-multus-certs\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.855040 master-0 kubenswrapper[4119]: I0216 20:56:04.854601 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-conf-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.855117 master-0 kubenswrapper[4119]: I0216 20:56:04.855074 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-socket-dir-parent\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.855173 master-0 kubenswrapper[4119]: I0216 20:56:04.855100 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-cni-multus\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.855173 master-0 kubenswrapper[4119]: I0216 20:56:04.855133 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-cni-multus\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.855173 master-0 kubenswrapper[4119]: I0216 20:56:04.855164 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-os-release\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.855263 master-0 kubenswrapper[4119]: I0216 20:56:04.855193 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-netns\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.855263 master-0 kubenswrapper[4119]: I0216 20:56:04.855227 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-system-cni-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.855263 master-0 kubenswrapper[4119]: I0216 20:56:04.855245 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-cnibin\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.855263 master-0 kubenswrapper[4119]: I0216 20:56:04.855261 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-etc-kubernetes\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.855423 master-0 kubenswrapper[4119]: I0216 20:56:04.855276 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-cni-bin\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.855423 master-0 kubenswrapper[4119]: I0216 20:56:04.855298 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-cni-binary-copy\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.855423 master-0 kubenswrapper[4119]: I0216 20:56:04.855309 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-os-release\") pod \"multus-65zz6\" (UID: 
\"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.855423 master-0 kubenswrapper[4119]: I0216 20:56:04.855386 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-etc-kubernetes\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.855423 master-0 kubenswrapper[4119]: I0216 20:56:04.855423 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-cnibin\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.855588 master-0 kubenswrapper[4119]: I0216 20:56:04.855427 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-cni-bin\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.855588 master-0 kubenswrapper[4119]: I0216 20:56:04.855488 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-netns\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.855588 master-0 kubenswrapper[4119]: I0216 20:56:04.855533 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-system-cni-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.856498 master-0 kubenswrapper[4119]: I0216 20:56:04.856097 4119 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 20:56:04.856766 master-0 kubenswrapper[4119]: I0216 20:56:04.856713 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 20:56:04.864299 master-0 kubenswrapper[4119]: I0216 20:56:04.864245 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-8zsx4"] Feb 16 20:56:04.865054 master-0 kubenswrapper[4119]: I0216 20:56:04.864842 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-daemon-config\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.865054 master-0 kubenswrapper[4119]: I0216 20:56:04.864874 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:04.866170 master-0 kubenswrapper[4119]: I0216 20:56:04.866035 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-cni-binary-copy\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.867864 master-0 kubenswrapper[4119]: I0216 20:56:04.867801 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 20:56:04.868302 master-0 kubenswrapper[4119]: I0216 20:56:04.868225 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 16 20:56:04.869531 master-0 kubenswrapper[4119]: I0216 20:56:04.869465 4119 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"kube-root-ca.crt" Feb 16 20:56:04.878833 master-0 kubenswrapper[4119]: I0216 20:56:04.878794 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 20:56:04.892746 master-0 kubenswrapper[4119]: I0216 20:56:04.892665 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmvtk\" (UniqueName: \"kubernetes.io/projected/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-kube-api-access-zmvtk\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:56:04.956535 master-0 kubenswrapper[4119]: I0216 20:56:04.956364 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-cni-binary-copy\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:04.956535 master-0 kubenswrapper[4119]: I0216 20:56:04.956419 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-cnibin\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:04.956535 master-0 kubenswrapper[4119]: I0216 20:56:04.956445 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-system-cni-dir\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:04.956535 master-0 kubenswrapper[4119]: I0216 
20:56:04.956469 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:04.956942 master-0 kubenswrapper[4119]: I0216 20:56:04.956599 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sq4t\" (UniqueName: \"kubernetes.io/projected/62935559-041f-4694-9d36-adc809d079b4-kube-api-access-6sq4t\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:04.956942 master-0 kubenswrapper[4119]: I0216 20:56:04.956738 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:04.956942 master-0 kubenswrapper[4119]: I0216 20:56:04.956778 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-os-release\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:04.956942 master-0 kubenswrapper[4119]: I0216 20:56:04.956819 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" (UniqueName: 
\"kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-whereabouts-configmap\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:04.996584 master-0 kubenswrapper[4119]: I0216 20:56:04.996490 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-65zz6" Feb 16 20:56:05.012106 master-0 kubenswrapper[4119]: W0216 20:56:05.012024 4119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb27e0202_8bdb_4a36_8c3e_0c203f7665b8.slice/crio-c8c3670530b0c671383aade45325850e12f9fcf9f76178c2929f043d5a9b72a3 WatchSource:0}: Error finding container c8c3670530b0c671383aade45325850e12f9fcf9f76178c2929f043d5a9b72a3: Status 404 returned error can't find the container with id c8c3670530b0c671383aade45325850e12f9fcf9f76178c2929f043d5a9b72a3 Feb 16 20:56:05.057730 master-0 kubenswrapper[4119]: I0216 20:56:05.057611 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-whereabouts-configmap\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:05.058015 master-0 kubenswrapper[4119]: I0216 20:56:05.057820 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-cni-binary-copy\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:05.058015 master-0 kubenswrapper[4119]: I0216 20:56:05.057846 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cnibin\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-cnibin\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:05.058015 master-0 kubenswrapper[4119]: I0216 20:56:05.057866 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-system-cni-dir\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:05.058149 master-0 kubenswrapper[4119]: I0216 20:56:05.058029 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:05.058149 master-0 kubenswrapper[4119]: I0216 20:56:05.058086 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-system-cni-dir\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:05.058149 master-0 kubenswrapper[4119]: I0216 20:56:05.058089 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sq4t\" (UniqueName: \"kubernetes.io/projected/62935559-041f-4694-9d36-adc809d079b4-kube-api-access-6sq4t\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:05.058305 master-0 
kubenswrapper[4119]: I0216 20:56:05.058257 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:05.058351 master-0 kubenswrapper[4119]: I0216 20:56:05.058325 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-os-release\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:05.058428 master-0 kubenswrapper[4119]: I0216 20:56:05.058391 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-os-release\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:05.058710 master-0 kubenswrapper[4119]: I0216 20:56:05.058630 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-cni-binary-copy\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:05.058710 master-0 kubenswrapper[4119]: I0216 20:56:05.058679 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-whereabouts-configmap\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " 
pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:05.058896 master-0 kubenswrapper[4119]: I0216 20:56:05.058858 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:05.059073 master-0 kubenswrapper[4119]: I0216 20:56:05.059003 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-cnibin\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:05.059205 master-0 kubenswrapper[4119]: I0216 20:56:05.059171 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:05.078995 master-0 kubenswrapper[4119]: I0216 20:56:05.078930 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sq4t\" (UniqueName: \"kubernetes.io/projected/62935559-041f-4694-9d36-adc809d079b4-kube-api-access-6sq4t\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:05.182036 master-0 kubenswrapper[4119]: I0216 20:56:05.181930 4119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:56:05.490224 master-0 kubenswrapper[4119]: I0216 20:56:05.489959 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8zsx4" event={"ID":"62935559-041f-4694-9d36-adc809d079b4","Type":"ContainerStarted","Data":"0dfbee9f7528fe042540e180164336ecf2ece621fbebd18d9dde03c5a49a8d3a"} Feb 16 20:56:05.490597 master-0 kubenswrapper[4119]: I0216 20:56:05.490562 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-65zz6" event={"ID":"b27e0202-8bdb-4a36-8c3e-0c203f7665b8","Type":"ContainerStarted","Data":"c8c3670530b0c671383aade45325850e12f9fcf9f76178c2929f043d5a9b72a3"} Feb 16 20:56:05.653580 master-0 kubenswrapper[4119]: I0216 20:56:05.653507 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-42bw7"] Feb 16 20:56:05.654071 master-0 kubenswrapper[4119]: I0216 20:56:05.654021 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:05.654165 master-0 kubenswrapper[4119]: E0216 20:56:05.654137 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c" Feb 16 20:56:05.762830 master-0 kubenswrapper[4119]: I0216 20:56:05.762758 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs\") pod \"network-metrics-daemon-42bw7\" (UID: \"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:05.762830 master-0 kubenswrapper[4119]: I0216 20:56:05.762829 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59kpw\" (UniqueName: \"kubernetes.io/projected/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-kube-api-access-59kpw\") pod \"network-metrics-daemon-42bw7\" (UID: \"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:05.864066 master-0 kubenswrapper[4119]: I0216 20:56:05.863979 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs\") pod \"network-metrics-daemon-42bw7\" (UID: \"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:05.864066 master-0 kubenswrapper[4119]: I0216 20:56:05.864050 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59kpw\" (UniqueName: \"kubernetes.io/projected/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-kube-api-access-59kpw\") pod \"network-metrics-daemon-42bw7\" (UID: \"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:05.864333 master-0 kubenswrapper[4119]: E0216 20:56:05.864184 4119 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not 
registered Feb 16 20:56:05.864333 master-0 kubenswrapper[4119]: E0216 20:56:05.864281 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs podName:1d453639-52ed-4a14-a2ee-02cf9acc2f7c nodeName:}" failed. No retries permitted until 2026-02-16 20:56:06.364257685 +0000 UTC m=+62.294183703 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs") pod "network-metrics-daemon-42bw7" (UID: "1d453639-52ed-4a14-a2ee-02cf9acc2f7c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:56:05.883511 master-0 kubenswrapper[4119]: I0216 20:56:05.883454 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59kpw\" (UniqueName: \"kubernetes.io/projected/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-kube-api-access-59kpw\") pod \"network-metrics-daemon-42bw7\" (UID: \"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:06.369778 master-0 kubenswrapper[4119]: I0216 20:56:06.369692 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs\") pod \"network-metrics-daemon-42bw7\" (UID: \"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:06.370021 master-0 kubenswrapper[4119]: E0216 20:56:06.369947 4119 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:56:06.370105 master-0 kubenswrapper[4119]: E0216 20:56:06.370077 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs podName:1d453639-52ed-4a14-a2ee-02cf9acc2f7c nodeName:}" 
failed. No retries permitted until 2026-02-16 20:56:07.370051022 +0000 UTC m=+63.299977040 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs") pod "network-metrics-daemon-42bw7" (UID: "1d453639-52ed-4a14-a2ee-02cf9acc2f7c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:56:06.874745 master-0 kubenswrapper[4119]: I0216 20:56:06.874683 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:06.875919 master-0 kubenswrapper[4119]: E0216 20:56:06.874829 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c" Feb 16 20:56:07.378400 master-0 kubenswrapper[4119]: I0216 20:56:07.378316 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs\") pod \"network-metrics-daemon-42bw7\" (UID: \"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:07.378700 master-0 kubenswrapper[4119]: E0216 20:56:07.378483 4119 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:56:07.378700 master-0 kubenswrapper[4119]: E0216 20:56:07.378552 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs podName:1d453639-52ed-4a14-a2ee-02cf9acc2f7c nodeName:}" failed. 
No retries permitted until 2026-02-16 20:56:09.378537088 +0000 UTC m=+65.308463106 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs") pod "network-metrics-daemon-42bw7" (UID: "1d453639-52ed-4a14-a2ee-02cf9acc2f7c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:56:08.499376 master-0 kubenswrapper[4119]: I0216 20:56:08.499310 4119 generic.go:334] "Generic (PLEG): container finished" podID="62935559-041f-4694-9d36-adc809d079b4" containerID="2485cbe452aed6f7043c33dccc17caa48675a3e464f4b79370075f51c4973793" exitCode=0 Feb 16 20:56:08.499376 master-0 kubenswrapper[4119]: I0216 20:56:08.499359 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8zsx4" event={"ID":"62935559-041f-4694-9d36-adc809d079b4","Type":"ContainerDied","Data":"2485cbe452aed6f7043c33dccc17caa48675a3e464f4b79370075f51c4973793"} Feb 16 20:56:08.875473 master-0 kubenswrapper[4119]: I0216 20:56:08.875323 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:08.875473 master-0 kubenswrapper[4119]: E0216 20:56:08.875466 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c"
Feb 16 20:56:09.398626 master-0 kubenswrapper[4119]: I0216 20:56:09.398523 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs\") pod \"network-metrics-daemon-42bw7\" (UID: \"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7"
Feb 16 20:56:09.398976 master-0 kubenswrapper[4119]: E0216 20:56:09.398751 4119 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 20:56:09.398976 master-0 kubenswrapper[4119]: E0216 20:56:09.398856 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs podName:1d453639-52ed-4a14-a2ee-02cf9acc2f7c nodeName:}" failed. No retries permitted until 2026-02-16 20:56:13.398832603 +0000 UTC m=+69.328758621 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs") pod "network-metrics-daemon-42bw7" (UID: "1d453639-52ed-4a14-a2ee-02cf9acc2f7c") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 20:56:10.874608 master-0 kubenswrapper[4119]: I0216 20:56:10.874544 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7"
Feb 16 20:56:10.875176 master-0 kubenswrapper[4119]: E0216 20:56:10.874692 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c"
Feb 16 20:56:12.874954 master-0 kubenswrapper[4119]: I0216 20:56:12.874873 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7"
Feb 16 20:56:12.875501 master-0 kubenswrapper[4119]: E0216 20:56:12.875044 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c"
Feb 16 20:56:13.428956 master-0 kubenswrapper[4119]: I0216 20:56:13.428897 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs\") pod \"network-metrics-daemon-42bw7\" (UID: \"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7"
Feb 16 20:56:13.429170 master-0 kubenswrapper[4119]: E0216 20:56:13.429052 4119 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 20:56:13.429170 master-0 kubenswrapper[4119]: E0216 20:56:13.429117 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs podName:1d453639-52ed-4a14-a2ee-02cf9acc2f7c nodeName:}" failed. No retries permitted until 2026-02-16 20:56:21.429097508 +0000 UTC m=+77.359023526 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs") pod "network-metrics-daemon-42bw7" (UID: "1d453639-52ed-4a14-a2ee-02cf9acc2f7c") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 20:56:14.875170 master-0 kubenswrapper[4119]: I0216 20:56:14.875104 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7"
Feb 16 20:56:14.875777 master-0 kubenswrapper[4119]: E0216 20:56:14.875482 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c"
Feb 16 20:56:16.875026 master-0 kubenswrapper[4119]: I0216 20:56:16.874983 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7"
Feb 16 20:56:16.875675 master-0 kubenswrapper[4119]: E0216 20:56:16.875110 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c"
Feb 16 20:56:17.128305 master-0 kubenswrapper[4119]: I0216 20:56:17.125860 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd"]
Feb 16 20:56:17.128305 master-0 kubenswrapper[4119]: I0216 20:56:17.126358 4119 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd"
Feb 16 20:56:17.128305 master-0 kubenswrapper[4119]: I0216 20:56:17.128297 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 16 20:56:17.128812 master-0 kubenswrapper[4119]: I0216 20:56:17.128797 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 16 20:56:17.128877 master-0 kubenswrapper[4119]: I0216 20:56:17.128836 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 16 20:56:17.128967 master-0 kubenswrapper[4119]: I0216 20:56:17.128946 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 16 20:56:17.129689 master-0 kubenswrapper[4119]: I0216 20:56:17.129666 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 16 20:56:17.156546 master-0 kubenswrapper[4119]: I0216 20:56:17.156491 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/484154d0-66c8-4d0e-bf1b-f48d0abfe628-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd"
Feb 16 20:56:17.156734 master-0 kubenswrapper[4119]: I0216 20:56:17.156560 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6wng\" (UniqueName: \"kubernetes.io/projected/484154d0-66c8-4d0e-bf1b-f48d0abfe628-kube-api-access-b6wng\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") "
pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd"
Feb 16 20:56:17.156734 master-0 kubenswrapper[4119]: I0216 20:56:17.156601 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/484154d0-66c8-4d0e-bf1b-f48d0abfe628-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd"
Feb 16 20:56:17.156879 master-0 kubenswrapper[4119]: I0216 20:56:17.156840 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/484154d0-66c8-4d0e-bf1b-f48d0abfe628-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd"
Feb 16 20:56:17.258175 master-0 kubenswrapper[4119]: I0216 20:56:17.258133 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/484154d0-66c8-4d0e-bf1b-f48d0abfe628-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd"
Feb 16 20:56:17.258175 master-0 kubenswrapper[4119]: I0216 20:56:17.258175 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/484154d0-66c8-4d0e-bf1b-f48d0abfe628-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd"
Feb 16 20:56:17.258456 master-0 kubenswrapper[4119]: I0216 20:56:17.258349 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/484154d0-66c8-4d0e-bf1b-f48d0abfe628-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd"
Feb 16 20:56:17.258984 master-0 kubenswrapper[4119]: I0216 20:56:17.258590 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6wng\" (UniqueName: \"kubernetes.io/projected/484154d0-66c8-4d0e-bf1b-f48d0abfe628-kube-api-access-b6wng\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd"
Feb 16 20:56:17.259152 master-0 kubenswrapper[4119]: I0216 20:56:17.259117 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/484154d0-66c8-4d0e-bf1b-f48d0abfe628-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd"
Feb 16 20:56:17.259192 master-0 kubenswrapper[4119]: I0216 20:56:17.259147 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/484154d0-66c8-4d0e-bf1b-f48d0abfe628-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd"
Feb 16 20:56:17.264173 master-0 kubenswrapper[4119]: I0216 20:56:17.264118 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/484154d0-66c8-4d0e-bf1b-f48d0abfe628-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") "
pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd"
Feb 16 20:56:17.294163 master-0 kubenswrapper[4119]: I0216 20:56:17.294115 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6wng\" (UniqueName: \"kubernetes.io/projected/484154d0-66c8-4d0e-bf1b-f48d0abfe628-kube-api-access-b6wng\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd"
Feb 16 20:56:17.404854 master-0 kubenswrapper[4119]: I0216 20:56:17.404381 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lprkk"]
Feb 16 20:56:17.405552 master-0 kubenswrapper[4119]: I0216 20:56:17.405520 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.407483 master-0 kubenswrapper[4119]: I0216 20:56:17.407294 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 16 20:56:17.408213 master-0 kubenswrapper[4119]: I0216 20:56:17.408168 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 16 20:56:17.443660 master-0 kubenswrapper[4119]: I0216 20:56:17.443560 4119 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd"
Feb 16 20:56:17.460407 master-0 kubenswrapper[4119]: I0216 20:56:17.460361 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-kubelet\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.460487 master-0 kubenswrapper[4119]: I0216 20:56:17.460419 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.460487 master-0 kubenswrapper[4119]: I0216 20:56:17.460457 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-run-openvswitch\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.460554 master-0 kubenswrapper[4119]: I0216 20:56:17.460514 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-slash\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.460554 master-0 kubenswrapper[4119]: I0216 20:56:17.460537 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\"
(UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-run-netns\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.460608 master-0 kubenswrapper[4119]: I0216 20:56:17.460560 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-systemd-units\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.460608 master-0 kubenswrapper[4119]: I0216 20:56:17.460583 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-ovn-node-metrics-cert\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.460683 master-0 kubenswrapper[4119]: I0216 20:56:17.460611 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-node-log\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.460683 master-0 kubenswrapper[4119]: I0216 20:56:17.460633 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-run-ovn\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.460749 master-0 kubenswrapper[4119]: I0216 20:56:17.460682 4119 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-etc-openvswitch\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.460749 master-0 kubenswrapper[4119]: I0216 20:56:17.460707 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-cni-bin\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.460749 master-0 kubenswrapper[4119]: I0216 20:56:17.460737 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-var-lib-openvswitch\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.460927 master-0 kubenswrapper[4119]: I0216 20:56:17.460872 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-log-socket\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.460995 master-0 kubenswrapper[4119]: I0216 20:56:17.460968 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-run-ovn-kubernetes\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" Feb 16
20:56:17.461029 master-0 kubenswrapper[4119]: I0216 20:56:17.461007 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwhxb\" (UniqueName: \"kubernetes.io/projected/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-kube-api-access-cwhxb\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.461065 master-0 kubenswrapper[4119]: I0216 20:56:17.461044 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-ovnkube-config\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.461140 master-0 kubenswrapper[4119]: I0216 20:56:17.461110 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-ovnkube-script-lib\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.461179 master-0 kubenswrapper[4119]: I0216 20:56:17.461158 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-cni-netd\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.461260 master-0 kubenswrapper[4119]: I0216 20:56:17.461234 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-run-systemd\") pod \"ovnkube-node-lprkk\" (UID:
\"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.461307 master-0 kubenswrapper[4119]: I0216 20:56:17.461270 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-env-overrides\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.562665 master-0 kubenswrapper[4119]: I0216 20:56:17.562549 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-run-openvswitch\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.562665 master-0 kubenswrapper[4119]: I0216 20:56:17.562597 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-slash\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.562665 master-0 kubenswrapper[4119]: I0216 20:56:17.562612 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-run-netns\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.562983 master-0 kubenswrapper[4119]: I0216 20:56:17.562739 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-run-openvswitch\") pod \"ovnkube-node-lprkk\" (UID:
\"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.562983 master-0 kubenswrapper[4119]: I0216 20:56:17.562895 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-systemd-units\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.563044 master-0 kubenswrapper[4119]: I0216 20:56:17.562998 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-run-netns\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.563044 master-0 kubenswrapper[4119]: I0216 20:56:17.563006 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-ovn-node-metrics-cert\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.564012 master-0 kubenswrapper[4119]: I0216 20:56:17.563069 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-node-log\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.564012 master-0 kubenswrapper[4119]: I0216 20:56:17.563096 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-run-ovn\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") "
pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.564012 master-0 kubenswrapper[4119]: I0216 20:56:17.563103 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-slash\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.564012 master-0 kubenswrapper[4119]: I0216 20:56:17.563132 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-systemd-units\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.564012 master-0 kubenswrapper[4119]: I0216 20:56:17.563176 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-run-ovn\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.564012 master-0 kubenswrapper[4119]: I0216 20:56:17.563123 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-etc-openvswitch\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.564012 master-0 kubenswrapper[4119]: I0216 20:56:17.563272 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-etc-openvswitch\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" Feb 16
20:56:17.564012 master-0 kubenswrapper[4119]: I0216 20:56:17.563291 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-node-log\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.564012 master-0 kubenswrapper[4119]: I0216 20:56:17.563228 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-cni-bin\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.564012 master-0 kubenswrapper[4119]: I0216 20:56:17.563333 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-var-lib-openvswitch\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.564012 master-0 kubenswrapper[4119]: I0216 20:56:17.563349 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-cni-bin\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.564012 master-0 kubenswrapper[4119]: I0216 20:56:17.563382 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-log-socket\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.564012 master-0 kubenswrapper[4119]: I0216
20:56:17.563400 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-var-lib-openvswitch\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.564012 master-0 kubenswrapper[4119]: I0216 20:56:17.563403 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-run-ovn-kubernetes\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.564012 master-0 kubenswrapper[4119]: I0216 20:56:17.563493 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-run-ovn-kubernetes\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.564012 master-0 kubenswrapper[4119]: I0216 20:56:17.563552 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwhxb\" (UniqueName: \"kubernetes.io/projected/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-kube-api-access-cwhxb\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.564012 master-0 kubenswrapper[4119]: I0216 20:56:17.563620 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-log-socket\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.564808 master-0
kubenswrapper[4119]: I0216 20:56:17.564235 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-ovnkube-config\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.564808 master-0 kubenswrapper[4119]: I0216 20:56:17.564314 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-ovnkube-script-lib\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.564808 master-0 kubenswrapper[4119]: I0216 20:56:17.564357 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-cni-netd\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.564929 master-0 kubenswrapper[4119]: I0216 20:56:17.564858 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-cni-netd\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.565090 master-0 kubenswrapper[4119]: I0216 20:56:17.564968 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-run-systemd\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk"
Feb 16 20:56:17.565309 master-0 kubenswrapper[4119]: I0216
20:56:17.565174 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-env-overrides\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" Feb 16 20:56:17.565392 master-0 kubenswrapper[4119]: I0216 20:56:17.565356 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-kubelet\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" Feb 16 20:56:17.565502 master-0 kubenswrapper[4119]: I0216 20:56:17.565426 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" Feb 16 20:56:17.566301 master-0 kubenswrapper[4119]: I0216 20:56:17.566259 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" Feb 16 20:56:17.566301 master-0 kubenswrapper[4119]: I0216 20:56:17.565662 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-run-systemd\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" 
Feb 16 20:56:17.566663 master-0 kubenswrapper[4119]: I0216 20:56:17.566599 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-ovnkube-config\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" Feb 16 20:56:17.567489 master-0 kubenswrapper[4119]: I0216 20:56:17.567426 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-kubelet\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" Feb 16 20:56:17.567489 master-0 kubenswrapper[4119]: I0216 20:56:17.567435 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-ovnkube-script-lib\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" Feb 16 20:56:17.567693 master-0 kubenswrapper[4119]: I0216 20:56:17.567607 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-env-overrides\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" Feb 16 20:56:17.569103 master-0 kubenswrapper[4119]: I0216 20:56:17.569066 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-ovn-node-metrics-cert\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" Feb 16 20:56:17.589255 master-0 
kubenswrapper[4119]: I0216 20:56:17.589166 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwhxb\" (UniqueName: \"kubernetes.io/projected/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-kube-api-access-cwhxb\") pod \"ovnkube-node-lprkk\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" Feb 16 20:56:17.718799 master-0 kubenswrapper[4119]: I0216 20:56:17.718590 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" Feb 16 20:56:18.874763 master-0 kubenswrapper[4119]: I0216 20:56:18.874692 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:18.875226 master-0 kubenswrapper[4119]: E0216 20:56:18.874836 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c" Feb 16 20:56:19.525373 master-0 kubenswrapper[4119]: I0216 20:56:19.525308 4119 generic.go:334] "Generic (PLEG): container finished" podID="62935559-041f-4694-9d36-adc809d079b4" containerID="764147f0ae46dce8cfdba6d43c9720c0e223cc03d6732303325fb33cc0d7abd0" exitCode=0 Feb 16 20:56:19.525605 master-0 kubenswrapper[4119]: I0216 20:56:19.525381 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8zsx4" event={"ID":"62935559-041f-4694-9d36-adc809d079b4","Type":"ContainerDied","Data":"764147f0ae46dce8cfdba6d43c9720c0e223cc03d6732303325fb33cc0d7abd0"} Feb 16 20:56:19.531201 master-0 kubenswrapper[4119]: I0216 20:56:19.530983 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd" event={"ID":"484154d0-66c8-4d0e-bf1b-f48d0abfe628","Type":"ContainerStarted","Data":"886e279fd9c1934388e680cd4a0350ba2f292d514ac9e97bbae0f912d11a2b10"} Feb 16 20:56:19.531201 master-0 kubenswrapper[4119]: I0216 20:56:19.531023 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd" event={"ID":"484154d0-66c8-4d0e-bf1b-f48d0abfe628","Type":"ContainerStarted","Data":"ebc8d1a24100c636c9029b0eba8d5b6521b906cdbb84675057a80b42a0273bbc"} Feb 16 20:56:19.533179 master-0 kubenswrapper[4119]: I0216 20:56:19.533123 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-65zz6" event={"ID":"b27e0202-8bdb-4a36-8c3e-0c203f7665b8","Type":"ContainerStarted","Data":"911511d61b149b2a70f165a79454e8a52d97f53e4b9bed2f57b34efa4fd727a0"} Feb 16 20:56:19.534796 master-0 kubenswrapper[4119]: I0216 20:56:19.534751 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" 
event={"ID":"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4","Type":"ContainerStarted","Data":"b6dea92e798df20bff3cbf3fd8ad2002fbf085941657704cd0d299b16f8d448b"} Feb 16 20:56:19.571546 master-0 kubenswrapper[4119]: I0216 20:56:19.570986 4119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-65zz6" podStartSLOduration=1.340008513 podStartE2EDuration="15.570966371s" podCreationTimestamp="2026-02-16 20:56:04 +0000 UTC" firstStartedPulling="2026-02-16 20:56:05.015504914 +0000 UTC m=+60.945430942" lastFinishedPulling="2026-02-16 20:56:19.246462772 +0000 UTC m=+75.176388800" observedRunningTime="2026-02-16 20:56:19.570443536 +0000 UTC m=+75.500369554" watchObservedRunningTime="2026-02-16 20:56:19.570966371 +0000 UTC m=+75.500892389" Feb 16 20:56:19.699359 master-0 kubenswrapper[4119]: I0216 20:56:19.699282 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:56:19.699597 master-0 kubenswrapper[4119]: E0216 20:56:19.699480 4119 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 20:56:19.699597 master-0 kubenswrapper[4119]: E0216 20:56:19.699550 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert podName:3a012b98-9341-41a3-9321-0a099f8bb9da nodeName:}" failed. No retries permitted until 2026-02-16 20:56:51.699532171 +0000 UTC m=+107.629458189 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert") pod "cluster-version-operator-76959b6567-7jlsw" (UID: "3a012b98-9341-41a3-9321-0a099f8bb9da") : secret "cluster-version-operator-serving-cert" not found Feb 16 20:56:20.249530 master-0 kubenswrapper[4119]: I0216 20:56:20.249474 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-68c25"] Feb 16 20:56:20.250465 master-0 kubenswrapper[4119]: I0216 20:56:20.249809 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:20.250465 master-0 kubenswrapper[4119]: E0216 20:56:20.249866 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-68c25" podUID="0d903d23-8e0b-424b-bcd0-e0a00f306e49" Feb 16 20:56:20.306124 master-0 kubenswrapper[4119]: I0216 20:56:20.306041 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcp5t\" (UniqueName: \"kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t\") pod \"network-check-target-68c25\" (UID: \"0d903d23-8e0b-424b-bcd0-e0a00f306e49\") " pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:20.407305 master-0 kubenswrapper[4119]: I0216 20:56:20.407244 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcp5t\" (UniqueName: \"kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t\") pod \"network-check-target-68c25\" (UID: \"0d903d23-8e0b-424b-bcd0-e0a00f306e49\") " pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:20.421586 master-0 kubenswrapper[4119]: E0216 20:56:20.421535 4119 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:20.421586 master-0 kubenswrapper[4119]: E0216 20:56:20.421575 4119 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:20.421586 master-0 kubenswrapper[4119]: E0216 20:56:20.421589 4119 projected.go:194] Error preparing data for projected volume kube-api-access-kcp5t for pod openshift-network-diagnostics/network-check-target-68c25: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:20.421875 master-0 kubenswrapper[4119]: E0216 20:56:20.421684 4119 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t podName:0d903d23-8e0b-424b-bcd0-e0a00f306e49 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:20.92163924 +0000 UTC m=+76.851565258 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kcp5t" (UniqueName: "kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t") pod "network-check-target-68c25" (UID: "0d903d23-8e0b-424b-bcd0-e0a00f306e49") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:20.899244 master-0 kubenswrapper[4119]: I0216 20:56:20.899122 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:20.899558 master-0 kubenswrapper[4119]: E0216 20:56:20.899352 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c" Feb 16 20:56:21.013368 master-0 kubenswrapper[4119]: I0216 20:56:21.013311 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcp5t\" (UniqueName: \"kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t\") pod \"network-check-target-68c25\" (UID: \"0d903d23-8e0b-424b-bcd0-e0a00f306e49\") " pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:21.013709 master-0 kubenswrapper[4119]: E0216 20:56:21.013686 4119 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:21.013763 master-0 kubenswrapper[4119]: E0216 20:56:21.013723 4119 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:21.013763 master-0 kubenswrapper[4119]: E0216 20:56:21.013738 4119 projected.go:194] Error preparing data for projected volume kube-api-access-kcp5t for pod openshift-network-diagnostics/network-check-target-68c25: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:21.013826 master-0 kubenswrapper[4119]: E0216 20:56:21.013798 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t podName:0d903d23-8e0b-424b-bcd0-e0a00f306e49 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:22.013779872 +0000 UTC m=+77.943705890 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kcp5t" (UniqueName: "kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t") pod "network-check-target-68c25" (UID: "0d903d23-8e0b-424b-bcd0-e0a00f306e49") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:21.521152 master-0 kubenswrapper[4119]: I0216 20:56:21.521029 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs\") pod \"network-metrics-daemon-42bw7\" (UID: \"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:21.521698 master-0 kubenswrapper[4119]: E0216 20:56:21.521191 4119 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:56:21.521698 master-0 kubenswrapper[4119]: E0216 20:56:21.521250 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs podName:1d453639-52ed-4a14-a2ee-02cf9acc2f7c nodeName:}" failed. No retries permitted until 2026-02-16 20:56:37.521231281 +0000 UTC m=+93.451157299 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs") pod "network-metrics-daemon-42bw7" (UID: "1d453639-52ed-4a14-a2ee-02cf9acc2f7c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:56:21.875169 master-0 kubenswrapper[4119]: I0216 20:56:21.875041 4119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:21.875393 master-0 kubenswrapper[4119]: E0216 20:56:21.875167 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-68c25" podUID="0d903d23-8e0b-424b-bcd0-e0a00f306e49" Feb 16 20:56:22.025243 master-0 kubenswrapper[4119]: I0216 20:56:22.025170 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcp5t\" (UniqueName: \"kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t\") pod \"network-check-target-68c25\" (UID: \"0d903d23-8e0b-424b-bcd0-e0a00f306e49\") " pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:22.025469 master-0 kubenswrapper[4119]: E0216 20:56:22.025379 4119 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:22.025469 master-0 kubenswrapper[4119]: E0216 20:56:22.025399 4119 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:22.025469 master-0 kubenswrapper[4119]: E0216 20:56:22.025411 4119 projected.go:194] Error preparing data for projected volume kube-api-access-kcp5t for pod openshift-network-diagnostics/network-check-target-68c25: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:22.025616 master-0 kubenswrapper[4119]: E0216 20:56:22.025476 4119 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t podName:0d903d23-8e0b-424b-bcd0-e0a00f306e49 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:24.02545625 +0000 UTC m=+79.955382268 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-kcp5t" (UniqueName: "kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t") pod "network-check-target-68c25" (UID: "0d903d23-8e0b-424b-bcd0-e0a00f306e49") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:22.874624 master-0 kubenswrapper[4119]: I0216 20:56:22.874574 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:22.875157 master-0 kubenswrapper[4119]: E0216 20:56:22.874734 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c" Feb 16 20:56:23.040841 master-0 kubenswrapper[4119]: I0216 20:56:23.039804 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-tpj6f"] Feb 16 20:56:23.040841 master-0 kubenswrapper[4119]: I0216 20:56:23.040335 4119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-tpj6f" Feb 16 20:56:23.042362 master-0 kubenswrapper[4119]: I0216 20:56:23.042319 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 20:56:23.043099 master-0 kubenswrapper[4119]: I0216 20:56:23.042858 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 20:56:23.043099 master-0 kubenswrapper[4119]: I0216 20:56:23.042888 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 20:56:23.043254 master-0 kubenswrapper[4119]: I0216 20:56:23.043234 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 20:56:23.044213 master-0 kubenswrapper[4119]: I0216 20:56:23.043801 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 20:56:23.135436 master-0 kubenswrapper[4119]: I0216 20:56:23.135038 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/88f19cea-60ed-4977-a906-75deec51fc3d-webhook-cert\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f" Feb 16 20:56:23.135436 master-0 kubenswrapper[4119]: I0216 20:56:23.135087 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/88f19cea-60ed-4977-a906-75deec51fc3d-env-overrides\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f" Feb 16 20:56:23.135436 master-0 
kubenswrapper[4119]: I0216 20:56:23.135219 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/88f19cea-60ed-4977-a906-75deec51fc3d-ovnkube-identity-cm\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f" Feb 16 20:56:23.135436 master-0 kubenswrapper[4119]: I0216 20:56:23.135270 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x85fb\" (UniqueName: \"kubernetes.io/projected/88f19cea-60ed-4977-a906-75deec51fc3d-kube-api-access-x85fb\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f" Feb 16 20:56:23.239007 master-0 kubenswrapper[4119]: I0216 20:56:23.238722 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/88f19cea-60ed-4977-a906-75deec51fc3d-webhook-cert\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f" Feb 16 20:56:23.239007 master-0 kubenswrapper[4119]: I0216 20:56:23.238837 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/88f19cea-60ed-4977-a906-75deec51fc3d-env-overrides\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f" Feb 16 20:56:23.239007 master-0 kubenswrapper[4119]: E0216 20:56:23.238880 4119 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: secret "network-node-identity-cert" not found Feb 16 20:56:23.239007 master-0 kubenswrapper[4119]: E0216 
20:56:23.238948 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88f19cea-60ed-4977-a906-75deec51fc3d-webhook-cert podName:88f19cea-60ed-4977-a906-75deec51fc3d nodeName:}" failed. No retries permitted until 2026-02-16 20:56:23.738922773 +0000 UTC m=+79.668848791 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/88f19cea-60ed-4977-a906-75deec51fc3d-webhook-cert") pod "network-node-identity-tpj6f" (UID: "88f19cea-60ed-4977-a906-75deec51fc3d") : secret "network-node-identity-cert" not found Feb 16 20:56:23.239173 master-0 kubenswrapper[4119]: I0216 20:56:23.239060 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/88f19cea-60ed-4977-a906-75deec51fc3d-ovnkube-identity-cm\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f" Feb 16 20:56:23.239243 master-0 kubenswrapper[4119]: I0216 20:56:23.239218 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x85fb\" (UniqueName: \"kubernetes.io/projected/88f19cea-60ed-4977-a906-75deec51fc3d-kube-api-access-x85fb\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f" Feb 16 20:56:23.239975 master-0 kubenswrapper[4119]: I0216 20:56:23.239940 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/88f19cea-60ed-4977-a906-75deec51fc3d-env-overrides\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f" Feb 16 20:56:23.240117 master-0 kubenswrapper[4119]: I0216 20:56:23.240082 4119 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/88f19cea-60ed-4977-a906-75deec51fc3d-ovnkube-identity-cm\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f" Feb 16 20:56:23.258032 master-0 kubenswrapper[4119]: I0216 20:56:23.257958 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x85fb\" (UniqueName: \"kubernetes.io/projected/88f19cea-60ed-4977-a906-75deec51fc3d-kube-api-access-x85fb\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f" Feb 16 20:56:23.545856 master-0 kubenswrapper[4119]: I0216 20:56:23.545751 4119 generic.go:334] "Generic (PLEG): container finished" podID="62935559-041f-4694-9d36-adc809d079b4" containerID="181fe628d311f1cd1061bd5a4ed240a9f0bc9297d01fb093f8d0beb40911a4e0" exitCode=0 Feb 16 20:56:23.545856 master-0 kubenswrapper[4119]: I0216 20:56:23.545794 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8zsx4" event={"ID":"62935559-041f-4694-9d36-adc809d079b4","Type":"ContainerDied","Data":"181fe628d311f1cd1061bd5a4ed240a9f0bc9297d01fb093f8d0beb40911a4e0"} Feb 16 20:56:23.743796 master-0 kubenswrapper[4119]: I0216 20:56:23.743728 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/88f19cea-60ed-4977-a906-75deec51fc3d-webhook-cert\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f" Feb 16 20:56:23.748311 master-0 kubenswrapper[4119]: I0216 20:56:23.748269 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/88f19cea-60ed-4977-a906-75deec51fc3d-webhook-cert\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f" Feb 16 20:56:23.875116 master-0 kubenswrapper[4119]: I0216 20:56:23.875001 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:23.875116 master-0 kubenswrapper[4119]: E0216 20:56:23.875094 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-68c25" podUID="0d903d23-8e0b-424b-bcd0-e0a00f306e49" Feb 16 20:56:23.954308 master-0 kubenswrapper[4119]: I0216 20:56:23.954240 4119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-tpj6f" Feb 16 20:56:23.964469 master-0 kubenswrapper[4119]: W0216 20:56:23.964407 4119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88f19cea_60ed_4977_a906_75deec51fc3d.slice/crio-76e543cc5345eb5c53417c9f0b565400b03593c03aa3a1637483c029bb868ef3 WatchSource:0}: Error finding container 76e543cc5345eb5c53417c9f0b565400b03593c03aa3a1637483c029bb868ef3: Status 404 returned error can't find the container with id 76e543cc5345eb5c53417c9f0b565400b03593c03aa3a1637483c029bb868ef3 Feb 16 20:56:24.046809 master-0 kubenswrapper[4119]: I0216 20:56:24.046727 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcp5t\" (UniqueName: \"kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t\") pod \"network-check-target-68c25\" (UID: \"0d903d23-8e0b-424b-bcd0-e0a00f306e49\") " pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:24.047177 master-0 kubenswrapper[4119]: E0216 20:56:24.046986 4119 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:24.047177 master-0 kubenswrapper[4119]: E0216 20:56:24.047046 4119 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:24.047177 master-0 kubenswrapper[4119]: E0216 20:56:24.047060 4119 projected.go:194] Error preparing data for projected volume kube-api-access-kcp5t for pod openshift-network-diagnostics/network-check-target-68c25: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:24.047177 master-0 
kubenswrapper[4119]: E0216 20:56:24.047137 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t podName:0d903d23-8e0b-424b-bcd0-e0a00f306e49 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:28.047119999 +0000 UTC m=+83.977046017 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-kcp5t" (UniqueName: "kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t") pod "network-check-target-68c25" (UID: "0d903d23-8e0b-424b-bcd0-e0a00f306e49") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:24.550561 master-0 kubenswrapper[4119]: I0216 20:56:24.550463 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-tpj6f" event={"ID":"88f19cea-60ed-4977-a906-75deec51fc3d","Type":"ContainerStarted","Data":"76e543cc5345eb5c53417c9f0b565400b03593c03aa3a1637483c029bb868ef3"} Feb 16 20:56:24.875343 master-0 kubenswrapper[4119]: I0216 20:56:24.875205 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:24.876181 master-0 kubenswrapper[4119]: E0216 20:56:24.876146 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c" Feb 16 20:56:25.009724 master-0 kubenswrapper[4119]: W0216 20:56:25.009588 4119 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Feb 16 20:56:25.010612 master-0 kubenswrapper[4119]: I0216 20:56:25.010567 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 16 20:56:25.555353 master-0 kubenswrapper[4119]: I0216 20:56:25.554869 4119 generic.go:334] "Generic (PLEG): container finished" podID="62935559-041f-4694-9d36-adc809d079b4" containerID="4c7a7e08f576cfd5e11632a9ba0076da03fa44265bff3bddab5c897154cfdd10" exitCode=0 Feb 16 20:56:25.555353 master-0 kubenswrapper[4119]: I0216 20:56:25.554951 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8zsx4" event={"ID":"62935559-041f-4694-9d36-adc809d079b4","Type":"ContainerDied","Data":"4c7a7e08f576cfd5e11632a9ba0076da03fa44265bff3bddab5c897154cfdd10"} Feb 16 20:56:25.874980 master-0 kubenswrapper[4119]: I0216 20:56:25.874551 4119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:25.874980 master-0 kubenswrapper[4119]: E0216 20:56:25.874786 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-68c25" podUID="0d903d23-8e0b-424b-bcd0-e0a00f306e49" Feb 16 20:56:26.874891 master-0 kubenswrapper[4119]: I0216 20:56:26.874771 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:26.875464 master-0 kubenswrapper[4119]: E0216 20:56:26.874931 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c" Feb 16 20:56:27.874853 master-0 kubenswrapper[4119]: I0216 20:56:27.874765 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:27.875117 master-0 kubenswrapper[4119]: E0216 20:56:27.874894 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-68c25" podUID="0d903d23-8e0b-424b-bcd0-e0a00f306e49" Feb 16 20:56:28.083404 master-0 kubenswrapper[4119]: I0216 20:56:28.083355 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcp5t\" (UniqueName: \"kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t\") pod \"network-check-target-68c25\" (UID: \"0d903d23-8e0b-424b-bcd0-e0a00f306e49\") " pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:28.083605 master-0 kubenswrapper[4119]: E0216 20:56:28.083491 4119 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:28.083605 master-0 kubenswrapper[4119]: E0216 20:56:28.083509 4119 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:28.083605 master-0 kubenswrapper[4119]: E0216 20:56:28.083521 4119 projected.go:194] Error preparing data for projected volume kube-api-access-kcp5t for pod openshift-network-diagnostics/network-check-target-68c25: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:28.083605 master-0 kubenswrapper[4119]: E0216 20:56:28.083574 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t podName:0d903d23-8e0b-424b-bcd0-e0a00f306e49 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:36.083558704 +0000 UTC m=+92.013484732 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kcp5t" (UniqueName: "kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t") pod "network-check-target-68c25" (UID: "0d903d23-8e0b-424b-bcd0-e0a00f306e49") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:28.875051 master-0 kubenswrapper[4119]: I0216 20:56:28.874979 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:28.875292 master-0 kubenswrapper[4119]: E0216 20:56:28.875149 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c" Feb 16 20:56:29.875336 master-0 kubenswrapper[4119]: I0216 20:56:29.875246 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:29.875874 master-0 kubenswrapper[4119]: E0216 20:56:29.875483 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-68c25" podUID="0d903d23-8e0b-424b-bcd0-e0a00f306e49" Feb 16 20:56:30.012482 master-0 kubenswrapper[4119]: I0216 20:56:30.012347 4119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=5.012314442 podStartE2EDuration="5.012314442s" podCreationTimestamp="2026-02-16 20:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:56:25.584571059 +0000 UTC m=+81.514497067" watchObservedRunningTime="2026-02-16 20:56:30.012314442 +0000 UTC m=+85.942240500" Feb 16 20:56:30.013890 master-0 kubenswrapper[4119]: I0216 20:56:30.013844 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Feb 16 20:56:30.875050 master-0 kubenswrapper[4119]: I0216 20:56:30.874957 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:30.875606 master-0 kubenswrapper[4119]: E0216 20:56:30.875090 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c" Feb 16 20:56:31.874922 master-0 kubenswrapper[4119]: I0216 20:56:31.874869 4119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:31.875164 master-0 kubenswrapper[4119]: E0216 20:56:31.875002 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-68c25" podUID="0d903d23-8e0b-424b-bcd0-e0a00f306e49" Feb 16 20:56:32.874895 master-0 kubenswrapper[4119]: I0216 20:56:32.874827 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:32.875384 master-0 kubenswrapper[4119]: E0216 20:56:32.875009 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c" Feb 16 20:56:33.875271 master-0 kubenswrapper[4119]: I0216 20:56:33.875222 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:33.875882 master-0 kubenswrapper[4119]: E0216 20:56:33.875382 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-68c25" podUID="0d903d23-8e0b-424b-bcd0-e0a00f306e49" Feb 16 20:56:33.887277 master-0 kubenswrapper[4119]: I0216 20:56:33.887233 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 16 20:56:34.884386 master-0 kubenswrapper[4119]: I0216 20:56:34.884323 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:34.885314 master-0 kubenswrapper[4119]: E0216 20:56:34.885240 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c" Feb 16 20:56:35.051583 master-0 kubenswrapper[4119]: I0216 20:56:35.051463 4119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=7.051408821 podStartE2EDuration="7.051408821s" podCreationTimestamp="2026-02-16 20:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:56:35.04943362 +0000 UTC m=+90.979359638" watchObservedRunningTime="2026-02-16 20:56:35.051408821 +0000 UTC m=+90.981334839" Feb 16 20:56:35.071241 master-0 kubenswrapper[4119]: I0216 20:56:35.071156 4119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=2.071136939 podStartE2EDuration="2.071136939s" podCreationTimestamp="2026-02-16 20:56:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 20:56:35.068975983 +0000 UTC m=+90.998902011" watchObservedRunningTime="2026-02-16 20:56:35.071136939 +0000 UTC m=+91.001062967" Feb 16 20:56:35.874863 master-0 kubenswrapper[4119]: I0216 20:56:35.874812 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:35.875094 master-0 kubenswrapper[4119]: E0216 20:56:35.874948 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-68c25" podUID="0d903d23-8e0b-424b-bcd0-e0a00f306e49" Feb 16 20:56:36.092642 master-0 kubenswrapper[4119]: I0216 20:56:36.092577 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcp5t\" (UniqueName: \"kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t\") pod \"network-check-target-68c25\" (UID: \"0d903d23-8e0b-424b-bcd0-e0a00f306e49\") " pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:36.093109 master-0 kubenswrapper[4119]: E0216 20:56:36.092765 4119 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:36.093109 master-0 kubenswrapper[4119]: E0216 20:56:36.092784 4119 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:36.093109 master-0 kubenswrapper[4119]: E0216 20:56:36.092794 4119 projected.go:194] Error preparing data for projected volume kube-api-access-kcp5t for pod 
openshift-network-diagnostics/network-check-target-68c25: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:36.093109 master-0 kubenswrapper[4119]: E0216 20:56:36.092848 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t podName:0d903d23-8e0b-424b-bcd0-e0a00f306e49 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:52.092835115 +0000 UTC m=+108.022761123 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-kcp5t" (UniqueName: "kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t") pod "network-check-target-68c25" (UID: "0d903d23-8e0b-424b-bcd0-e0a00f306e49") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:36.874816 master-0 kubenswrapper[4119]: I0216 20:56:36.874744 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:36.875143 master-0 kubenswrapper[4119]: E0216 20:56:36.874906 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c" Feb 16 20:56:37.731852 master-0 kubenswrapper[4119]: I0216 20:56:37.731609 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs\") pod \"network-metrics-daemon-42bw7\" (UID: \"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:37.732900 master-0 kubenswrapper[4119]: E0216 20:56:37.732825 4119 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:56:37.734047 master-0 kubenswrapper[4119]: E0216 20:56:37.733035 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs podName:1d453639-52ed-4a14-a2ee-02cf9acc2f7c nodeName:}" failed. No retries permitted until 2026-02-16 20:57:09.732986149 +0000 UTC m=+125.662912167 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs") pod "network-metrics-daemon-42bw7" (UID: "1d453639-52ed-4a14-a2ee-02cf9acc2f7c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 20:56:37.874508 master-0 kubenswrapper[4119]: I0216 20:56:37.874421 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:37.874749 master-0 kubenswrapper[4119]: E0216 20:56:37.874611 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-68c25" podUID="0d903d23-8e0b-424b-bcd0-e0a00f306e49" Feb 16 20:56:38.742690 master-0 kubenswrapper[4119]: I0216 20:56:38.742469 4119 generic.go:334] "Generic (PLEG): container finished" podID="62935559-041f-4694-9d36-adc809d079b4" containerID="c4606e99d38ef423f540d128546208027e050c83b7e8385117d1ac9efe8a49dd" exitCode=0 Feb 16 20:56:38.742690 master-0 kubenswrapper[4119]: I0216 20:56:38.742549 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8zsx4" event={"ID":"62935559-041f-4694-9d36-adc809d079b4","Type":"ContainerDied","Data":"c4606e99d38ef423f540d128546208027e050c83b7e8385117d1ac9efe8a49dd"} Feb 16 20:56:38.746120 master-0 kubenswrapper[4119]: I0216 20:56:38.746069 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd" event={"ID":"484154d0-66c8-4d0e-bf1b-f48d0abfe628","Type":"ContainerStarted","Data":"fd75cc94a5c6af861419130cf9adb9c00eea8b412cbb5bebb25e798a841c1376"} Feb 16 20:56:38.748036 master-0 kubenswrapper[4119]: I0216 20:56:38.747974 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" event={"ID":"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4","Type":"ContainerDied","Data":"db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658"} Feb 16 20:56:38.749030 master-0 kubenswrapper[4119]: I0216 20:56:38.747879 4119 generic.go:334] "Generic (PLEG): container finished" podID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerID="db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658" exitCode=0 Feb 16 20:56:38.781728 master-0 kubenswrapper[4119]: I0216 20:56:38.781309 4119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd" podStartSLOduration=3.16807103 podStartE2EDuration="21.781274019s" podCreationTimestamp="2026-02-16 20:56:17 +0000 UTC" 
firstStartedPulling="2026-02-16 20:56:19.359512404 +0000 UTC m=+75.289438422" lastFinishedPulling="2026-02-16 20:56:37.972715393 +0000 UTC m=+93.902641411" observedRunningTime="2026-02-16 20:56:38.77857481 +0000 UTC m=+94.708500828" watchObservedRunningTime="2026-02-16 20:56:38.781274019 +0000 UTC m=+94.711200037" Feb 16 20:56:38.875005 master-0 kubenswrapper[4119]: I0216 20:56:38.874751 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:38.875110 master-0 kubenswrapper[4119]: E0216 20:56:38.875079 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c" Feb 16 20:56:39.756810 master-0 kubenswrapper[4119]: I0216 20:56:39.756763 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" event={"ID":"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4","Type":"ContainerStarted","Data":"dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e"} Feb 16 20:56:39.756810 master-0 kubenswrapper[4119]: I0216 20:56:39.756811 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" event={"ID":"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4","Type":"ContainerStarted","Data":"f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997"} Feb 16 20:56:39.756810 master-0 kubenswrapper[4119]: I0216 20:56:39.756820 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" event={"ID":"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4","Type":"ContainerStarted","Data":"ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4"} Feb 16 
20:56:39.758403 master-0 kubenswrapper[4119]: I0216 20:56:39.756830 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" event={"ID":"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4","Type":"ContainerStarted","Data":"03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241"} Feb 16 20:56:39.758403 master-0 kubenswrapper[4119]: I0216 20:56:39.756838 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" event={"ID":"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4","Type":"ContainerStarted","Data":"1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4"} Feb 16 20:56:39.758403 master-0 kubenswrapper[4119]: I0216 20:56:39.756846 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" event={"ID":"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4","Type":"ContainerStarted","Data":"cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02"} Feb 16 20:56:39.760207 master-0 kubenswrapper[4119]: I0216 20:56:39.760163 4119 generic.go:334] "Generic (PLEG): container finished" podID="62935559-041f-4694-9d36-adc809d079b4" containerID="0213e2c5badfad1c445275191896cc5e9028427f3090c086deb48f44170a8559" exitCode=0 Feb 16 20:56:39.761091 master-0 kubenswrapper[4119]: I0216 20:56:39.761050 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8zsx4" event={"ID":"62935559-041f-4694-9d36-adc809d079b4","Type":"ContainerDied","Data":"0213e2c5badfad1c445275191896cc5e9028427f3090c086deb48f44170a8559"} Feb 16 20:56:39.875359 master-0 kubenswrapper[4119]: I0216 20:56:39.874836 4119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:39.875359 master-0 kubenswrapper[4119]: E0216 20:56:39.874988 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-68c25" podUID="0d903d23-8e0b-424b-bcd0-e0a00f306e49" Feb 16 20:56:40.772808 master-0 kubenswrapper[4119]: I0216 20:56:40.772321 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8zsx4" event={"ID":"62935559-041f-4694-9d36-adc809d079b4","Type":"ContainerStarted","Data":"f002cef497bb8bbd088c37fab5b84fc213593b368b6c57fc1b2ebfc210f79c29"} Feb 16 20:56:40.774713 master-0 kubenswrapper[4119]: I0216 20:56:40.774429 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-tpj6f" event={"ID":"88f19cea-60ed-4977-a906-75deec51fc3d","Type":"ContainerStarted","Data":"d0734d0596c43a54e8c5763783b157c38da058f6ee7d80add1702898fd0efe5d"} Feb 16 20:56:40.774713 master-0 kubenswrapper[4119]: I0216 20:56:40.774491 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-tpj6f" event={"ID":"88f19cea-60ed-4977-a906-75deec51fc3d","Type":"ContainerStarted","Data":"f0fb0335aec7d732c2c504647e8162c4e320963f1f173436478e3f5209ced684"} Feb 16 20:56:40.874991 master-0 kubenswrapper[4119]: I0216 20:56:40.874928 4119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:40.875294 master-0 kubenswrapper[4119]: E0216 20:56:40.875053 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c" Feb 16 20:56:41.014452 master-0 kubenswrapper[4119]: I0216 20:56:41.014304 4119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-8zsx4" podStartSLOduration=4.299369835 podStartE2EDuration="37.014277903s" podCreationTimestamp="2026-02-16 20:56:04 +0000 UTC" firstStartedPulling="2026-02-16 20:56:05.198430796 +0000 UTC m=+61.128356814" lastFinishedPulling="2026-02-16 20:56:37.913338864 +0000 UTC m=+93.843264882" observedRunningTime="2026-02-16 20:56:41.014059558 +0000 UTC m=+96.943985596" watchObservedRunningTime="2026-02-16 20:56:41.014277903 +0000 UTC m=+96.944203961" Feb 16 20:56:41.161930 master-0 kubenswrapper[4119]: I0216 20:56:41.160491 4119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-tpj6f" podStartSLOduration=2.4946611819999998 podStartE2EDuration="18.160469328s" podCreationTimestamp="2026-02-16 20:56:23 +0000 UTC" firstStartedPulling="2026-02-16 20:56:23.966965575 +0000 UTC m=+79.896891603" lastFinishedPulling="2026-02-16 20:56:39.632773691 +0000 UTC m=+95.562699749" observedRunningTime="2026-02-16 20:56:41.160165541 +0000 UTC m=+97.090091559" watchObservedRunningTime="2026-02-16 20:56:41.160469328 +0000 UTC m=+97.090395346" Feb 16 20:56:41.781559 master-0 kubenswrapper[4119]: I0216 20:56:41.781516 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" event={"ID":"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4","Type":"ContainerStarted","Data":"c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab"} Feb 16 20:56:41.874606 master-0 kubenswrapper[4119]: I0216 20:56:41.874532 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:41.874881 master-0 kubenswrapper[4119]: E0216 20:56:41.874822 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-68c25" podUID="0d903d23-8e0b-424b-bcd0-e0a00f306e49" Feb 16 20:56:42.114404 master-0 kubenswrapper[4119]: I0216 20:56:42.114229 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 16 20:56:42.876129 master-0 kubenswrapper[4119]: I0216 20:56:42.876008 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:42.877219 master-0 kubenswrapper[4119]: E0216 20:56:42.876274 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c" Feb 16 20:56:43.654271 master-0 kubenswrapper[4119]: I0216 20:56:43.654196 4119 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lprkk"] Feb 16 20:56:43.875415 master-0 kubenswrapper[4119]: I0216 20:56:43.875338 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:43.875663 master-0 kubenswrapper[4119]: E0216 20:56:43.875533 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-68c25" podUID="0d903d23-8e0b-424b-bcd0-e0a00f306e49" Feb 16 20:56:44.797395 master-0 kubenswrapper[4119]: I0216 20:56:44.797324 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" event={"ID":"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4","Type":"ContainerStarted","Data":"6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e"} Feb 16 20:56:44.798369 master-0 kubenswrapper[4119]: I0216 20:56:44.797673 4119 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="sbdb" containerID="cri-o://c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab" gracePeriod=30 Feb 16 20:56:44.798369 master-0 kubenswrapper[4119]: I0216 20:56:44.797680 4119 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="ovn-controller" 
containerID="cri-o://cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02" gracePeriod=30 Feb 16 20:56:44.798369 master-0 kubenswrapper[4119]: I0216 20:56:44.797730 4119 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="nbdb" containerID="cri-o://dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e" gracePeriod=30 Feb 16 20:56:44.798369 master-0 kubenswrapper[4119]: I0216 20:56:44.797691 4119 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4" gracePeriod=30 Feb 16 20:56:44.798369 master-0 kubenswrapper[4119]: I0216 20:56:44.797768 4119 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="kube-rbac-proxy-node" containerID="cri-o://03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241" gracePeriod=30 Feb 16 20:56:44.798369 master-0 kubenswrapper[4119]: I0216 20:56:44.797835 4119 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="ovn-acl-logging" containerID="cri-o://1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4" gracePeriod=30 Feb 16 20:56:44.798369 master-0 kubenswrapper[4119]: I0216 20:56:44.797882 4119 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="northd" containerID="cri-o://f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997" gracePeriod=30 
Feb 16 20:56:44.798369 master-0 kubenswrapper[4119]: I0216 20:56:44.797930 4119 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" Feb 16 20:56:44.798369 master-0 kubenswrapper[4119]: I0216 20:56:44.798079 4119 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" Feb 16 20:56:44.798369 master-0 kubenswrapper[4119]: I0216 20:56:44.798092 4119 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" Feb 16 20:56:44.802409 master-0 kubenswrapper[4119]: E0216 20:56:44.801880 4119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 16 20:56:44.802409 master-0 kubenswrapper[4119]: E0216 20:56:44.802199 4119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 16 20:56:44.804338 master-0 kubenswrapper[4119]: E0216 20:56:44.804036 4119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 16 20:56:44.804338 master-0 kubenswrapper[4119]: E0216 20:56:44.804276 4119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 16 20:56:44.808413 master-0 kubenswrapper[4119]: E0216 20:56:44.806599 4119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 16 20:56:44.808413 master-0 kubenswrapper[4119]: E0216 20:56:44.806701 4119 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 16 20:56:44.808413 master-0 kubenswrapper[4119]: E0216 20:56:44.806768 4119 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="sbdb" Feb 16 20:56:44.808413 master-0 kubenswrapper[4119]: E0216 20:56:44.806706 4119 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="nbdb" Feb 16 20:56:44.852726 master-0 kubenswrapper[4119]: I0216 20:56:44.833762 4119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" podStartSLOduration=8.98568186 podStartE2EDuration="27.833733879s" podCreationTimestamp="2026-02-16 20:56:17 +0000 UTC" firstStartedPulling="2026-02-16 20:56:19.149033572 +0000 UTC m=+75.078959620" lastFinishedPulling="2026-02-16 20:56:37.997085621 +0000 UTC m=+93.927011639" observedRunningTime="2026-02-16 20:56:44.833272566 +0000 UTC m=+100.763198594" watchObservedRunningTime="2026-02-16 20:56:44.833733879 +0000 UTC m=+100.763659917" Feb 16 20:56:44.859309 master-0 kubenswrapper[4119]: I0216 20:56:44.859206 4119 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="ovnkube-controller" containerID="cri-o://6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e" gracePeriod=30 Feb 16 20:56:44.861413 master-0 kubenswrapper[4119]: I0216 20:56:44.860780 4119 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=3.860753605 podStartE2EDuration="3.860753605s" podCreationTimestamp="2026-02-16 20:56:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:56:44.859466711 +0000 UTC m=+100.789392739" watchObservedRunningTime="2026-02-16 20:56:44.860753605 +0000 UTC m=+100.790679643" Feb 16 20:56:44.874548 master-0 kubenswrapper[4119]: I0216 20:56:44.874476 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:44.875393 master-0 kubenswrapper[4119]: E0216 20:56:44.875323 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c" Feb 16 20:56:45.735490 master-0 kubenswrapper[4119]: I0216 20:56:45.734885 4119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lprkk_7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4/ovnkube-controller/0.log" Feb 16 20:56:45.737973 master-0 kubenswrapper[4119]: I0216 20:56:45.737921 4119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lprkk_7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4/kube-rbac-proxy-ovn-metrics/0.log" Feb 16 20:56:45.738562 master-0 kubenswrapper[4119]: I0216 20:56:45.738527 4119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lprkk_7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4/kube-rbac-proxy-node/0.log" Feb 16 20:56:45.739040 master-0 kubenswrapper[4119]: I0216 20:56:45.739005 4119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lprkk_7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4/ovn-acl-logging/0.log" Feb 16 20:56:45.739592 master-0 kubenswrapper[4119]: I0216 20:56:45.739553 4119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lprkk_7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4/ovn-controller/0.log" Feb 16 20:56:45.740186 master-0 kubenswrapper[4119]: I0216 20:56:45.740137 4119 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" Feb 16 20:56:45.797183 master-0 kubenswrapper[4119]: I0216 20:56:45.797131 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z8h4n"] Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: E0216 20:56:45.797223 4119 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="ovn-controller" Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: I0216 20:56:45.797235 4119 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="ovn-controller" Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: E0216 20:56:45.797253 4119 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="ovn-acl-logging" Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: I0216 20:56:45.797259 4119 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="ovn-acl-logging" Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: E0216 20:56:45.797266 4119 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="nbdb" Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: I0216 20:56:45.797273 4119 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="nbdb" Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: E0216 20:56:45.797280 4119 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="ovnkube-controller" Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: I0216 20:56:45.797286 4119 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="ovnkube-controller" Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: E0216 20:56:45.797292 4119 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="kube-rbac-proxy-node" Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: I0216 20:56:45.797298 4119 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="kube-rbac-proxy-node" Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: E0216 20:56:45.797305 4119 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="sbdb" Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: I0216 20:56:45.797311 4119 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="sbdb" Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: E0216 20:56:45.797318 4119 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="kubecfg-setup" Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: I0216 20:56:45.797323 4119 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="kubecfg-setup" Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: E0216 20:56:45.797329 4119 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="northd" Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: I0216 20:56:45.797335 4119 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="northd" Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: E0216 20:56:45.797341 4119 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: I0216 20:56:45.797346 4119 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" 
containerName="kube-rbac-proxy-ovn-metrics" Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: I0216 20:56:45.797381 4119 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="ovn-controller" Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: I0216 20:56:45.797391 4119 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="kube-rbac-proxy-node" Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: I0216 20:56:45.797397 4119 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: I0216 20:56:45.797403 4119 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="northd" Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: I0216 20:56:45.797409 4119 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="ovnkube-controller" Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: I0216 20:56:45.797414 4119 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="sbdb" Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: I0216 20:56:45.797419 4119 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="nbdb" Feb 16 20:56:45.797498 master-0 kubenswrapper[4119]: I0216 20:56:45.797426 4119 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerName="ovn-acl-logging" Feb 16 20:56:45.799248 master-0 kubenswrapper[4119]: I0216 20:56:45.798028 4119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.801333 master-0 kubenswrapper[4119]: I0216 20:56:45.801303 4119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lprkk_7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4/ovnkube-controller/0.log" Feb 16 20:56:45.805774 master-0 kubenswrapper[4119]: I0216 20:56:45.805743 4119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lprkk_7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4/kube-rbac-proxy-ovn-metrics/0.log" Feb 16 20:56:45.806347 master-0 kubenswrapper[4119]: I0216 20:56:45.806310 4119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lprkk_7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4/kube-rbac-proxy-node/0.log" Feb 16 20:56:45.806867 master-0 kubenswrapper[4119]: I0216 20:56:45.806841 4119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lprkk_7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4/ovn-acl-logging/0.log" Feb 16 20:56:45.807108 master-0 kubenswrapper[4119]: I0216 20:56:45.807082 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-cni-netd\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.807189 master-0 kubenswrapper[4119]: I0216 20:56:45.807121 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-log-socket\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.807189 master-0 kubenswrapper[4119]: I0216 20:56:45.807167 4119 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-var-lib-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.807261 master-0 kubenswrapper[4119]: I0216 20:56:45.807190 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.807261 master-0 kubenswrapper[4119]: I0216 20:56:45.807224 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqm46\" (UniqueName: \"kubernetes.io/projected/69785167-b4ae-415b-bdcb-029f62effe78-kube-api-access-dqm46\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.807335 master-0 kubenswrapper[4119]: I0216 20:56:45.807307 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-kubelet\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.807412 master-0 kubenswrapper[4119]: I0216 20:56:45.807380 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-run-netns\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 
20:56:45.807475 master-0 kubenswrapper[4119]: I0216 20:56:45.807419 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-systemd\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.807475 master-0 kubenswrapper[4119]: I0216 20:56:45.807446 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/69785167-b4ae-415b-bdcb-029f62effe78-ovn-node-metrics-cert\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.807555 master-0 kubenswrapper[4119]: I0216 20:56:45.807477 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-ovn\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.807555 master-0 kubenswrapper[4119]: I0216 20:56:45.807499 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-etc-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.807555 master-0 kubenswrapper[4119]: I0216 20:56:45.807519 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-env-overrides\") pod \"ovnkube-node-z8h4n\" (UID: 
\"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.807721 master-0 kubenswrapper[4119]: I0216 20:56:45.807683 4119 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lprkk_7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4/ovn-controller/0.log" Feb 16 20:56:45.807804 master-0 kubenswrapper[4119]: I0216 20:56:45.807695 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-systemd-units\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.807882 master-0 kubenswrapper[4119]: I0216 20:56:45.807822 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-ovnkube-config\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.807882 master-0 kubenswrapper[4119]: I0216 20:56:45.807856 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-slash\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.807961 master-0 kubenswrapper[4119]: I0216 20:56:45.807877 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-ovnkube-script-lib\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 
16 20:56:45.807961 master-0 kubenswrapper[4119]: I0216 20:56:45.807909 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-node-log\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.807961 master-0 kubenswrapper[4119]: I0216 20:56:45.807930 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-cni-bin\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.807961 master-0 kubenswrapper[4119]: I0216 20:56:45.807951 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.808099 master-0 kubenswrapper[4119]: I0216 20:56:45.808019 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-run-ovn-kubernetes\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.812056 master-0 kubenswrapper[4119]: I0216 20:56:45.811999 4119 generic.go:334] "Generic (PLEG): container finished" podID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerID="6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e" exitCode=1 Feb 16 
20:56:45.812056 master-0 kubenswrapper[4119]: I0216 20:56:45.812036 4119 generic.go:334] "Generic (PLEG): container finished" podID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerID="c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab" exitCode=0 Feb 16 20:56:45.812056 master-0 kubenswrapper[4119]: I0216 20:56:45.812044 4119 generic.go:334] "Generic (PLEG): container finished" podID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerID="dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e" exitCode=0 Feb 16 20:56:45.812056 master-0 kubenswrapper[4119]: I0216 20:56:45.812054 4119 generic.go:334] "Generic (PLEG): container finished" podID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerID="f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997" exitCode=0 Feb 16 20:56:45.812056 master-0 kubenswrapper[4119]: I0216 20:56:45.812060 4119 generic.go:334] "Generic (PLEG): container finished" podID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerID="ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4" exitCode=143 Feb 16 20:56:45.812056 master-0 kubenswrapper[4119]: I0216 20:56:45.812068 4119 generic.go:334] "Generic (PLEG): container finished" podID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerID="03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241" exitCode=143 Feb 16 20:56:45.812358 master-0 kubenswrapper[4119]: I0216 20:56:45.812078 4119 generic.go:334] "Generic (PLEG): container finished" podID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerID="1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4" exitCode=143 Feb 16 20:56:45.812358 master-0 kubenswrapper[4119]: I0216 20:56:45.812085 4119 generic.go:334] "Generic (PLEG): container finished" podID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" containerID="cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02" exitCode=143 Feb 16 20:56:45.812358 master-0 kubenswrapper[4119]: I0216 20:56:45.812110 4119 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" event={"ID":"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4","Type":"ContainerDied","Data":"6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e"} Feb 16 20:56:45.812358 master-0 kubenswrapper[4119]: I0216 20:56:45.812150 4119 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" Feb 16 20:56:45.812358 master-0 kubenswrapper[4119]: I0216 20:56:45.812170 4119 scope.go:117] "RemoveContainer" containerID="6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e" Feb 16 20:56:45.812358 master-0 kubenswrapper[4119]: I0216 20:56:45.812154 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" event={"ID":"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4","Type":"ContainerDied","Data":"c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab"} Feb 16 20:56:45.812358 master-0 kubenswrapper[4119]: I0216 20:56:45.812347 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" event={"ID":"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4","Type":"ContainerDied","Data":"dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e"} Feb 16 20:56:45.812358 master-0 kubenswrapper[4119]: I0216 20:56:45.812359 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" event={"ID":"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4","Type":"ContainerDied","Data":"f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997"} Feb 16 20:56:45.812358 master-0 kubenswrapper[4119]: I0216 20:56:45.812370 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" event={"ID":"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4","Type":"ContainerDied","Data":"ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812381 
4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" event={"ID":"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4","Type":"ContainerDied","Data":"03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812395 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812480 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812486 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812494 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" event={"ID":"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4","Type":"ContainerDied","Data":"1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812501 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812509 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812514 4119 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812519 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812524 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812530 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812535 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812540 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812546 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812553 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" 
event={"ID":"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4","Type":"ContainerDied","Data":"cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812560 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812566 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812572 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812577 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812584 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812590 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812596 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4"} Feb 16 20:56:45.812701 
master-0 kubenswrapper[4119]: I0216 20:56:45.812601 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812607 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812613 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lprkk" event={"ID":"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4","Type":"ContainerDied","Data":"b6dea92e798df20bff3cbf3fd8ad2002fbf085941657704cd0d299b16f8d448b"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812622 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e"} Feb 16 20:56:45.812701 master-0 kubenswrapper[4119]: I0216 20:56:45.812628 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab"} Feb 16 20:56:45.814266 master-0 kubenswrapper[4119]: I0216 20:56:45.812635 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e"} Feb 16 20:56:45.814266 master-0 kubenswrapper[4119]: I0216 20:56:45.812640 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997"} Feb 16 20:56:45.814266 master-0 kubenswrapper[4119]: I0216 20:56:45.812666 4119 pod_container_deletor.go:114] "Failed to issue 
the request to remove container" containerID={"Type":"cri-o","ID":"ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4"} Feb 16 20:56:45.814266 master-0 kubenswrapper[4119]: I0216 20:56:45.812673 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241"} Feb 16 20:56:45.814266 master-0 kubenswrapper[4119]: I0216 20:56:45.812679 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4"} Feb 16 20:56:45.814266 master-0 kubenswrapper[4119]: I0216 20:56:45.812684 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02"} Feb 16 20:56:45.814266 master-0 kubenswrapper[4119]: I0216 20:56:45.812691 4119 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658"} Feb 16 20:56:45.831467 master-0 kubenswrapper[4119]: I0216 20:56:45.831415 4119 scope.go:117] "RemoveContainer" containerID="c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab" Feb 16 20:56:45.845181 master-0 kubenswrapper[4119]: I0216 20:56:45.845142 4119 scope.go:117] "RemoveContainer" containerID="dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e" Feb 16 20:56:45.857643 master-0 kubenswrapper[4119]: I0216 20:56:45.857600 4119 scope.go:117] "RemoveContainer" containerID="f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997" Feb 16 20:56:45.871615 master-0 kubenswrapper[4119]: I0216 20:56:45.871562 4119 scope.go:117] "RemoveContainer" containerID="ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4" Feb 16 20:56:45.875253 master-0 kubenswrapper[4119]: 
I0216 20:56:45.874946 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:45.875253 master-0 kubenswrapper[4119]: E0216 20:56:45.875083 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-68c25" podUID="0d903d23-8e0b-424b-bcd0-e0a00f306e49" Feb 16 20:56:45.885824 master-0 kubenswrapper[4119]: I0216 20:56:45.885792 4119 scope.go:117] "RemoveContainer" containerID="03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241" Feb 16 20:56:45.896582 master-0 kubenswrapper[4119]: I0216 20:56:45.896534 4119 scope.go:117] "RemoveContainer" containerID="1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4" Feb 16 20:56:45.907809 master-0 kubenswrapper[4119]: I0216 20:56:45.907749 4119 scope.go:117] "RemoveContainer" containerID="cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02" Feb 16 20:56:45.911102 master-0 kubenswrapper[4119]: I0216 20:56:45.910221 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-run-openvswitch\") pod \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " Feb 16 20:56:45.911102 master-0 kubenswrapper[4119]: I0216 20:56:45.910305 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-run-systemd\") pod \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " Feb 16 20:56:45.911102 master-0 kubenswrapper[4119]: 
I0216 20:56:45.910348 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-etc-openvswitch\") pod \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " Feb 16 20:56:45.911102 master-0 kubenswrapper[4119]: I0216 20:56:45.910390 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-cni-bin\") pod \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " Feb 16 20:56:45.911102 master-0 kubenswrapper[4119]: I0216 20:56:45.910440 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-ovn-node-metrics-cert\") pod \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " Feb 16 20:56:45.911102 master-0 kubenswrapper[4119]: I0216 20:56:45.910478 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-run-ovn-kubernetes\") pod \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " Feb 16 20:56:45.911102 master-0 kubenswrapper[4119]: I0216 20:56:45.910518 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwhxb\" (UniqueName: \"kubernetes.io/projected/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-kube-api-access-cwhxb\") pod \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " Feb 16 20:56:45.911102 master-0 kubenswrapper[4119]: I0216 20:56:45.910597 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " Feb 16 20:56:45.911102 master-0 kubenswrapper[4119]: I0216 20:56:45.910636 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-node-log\") pod \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " Feb 16 20:56:45.911102 master-0 kubenswrapper[4119]: I0216 20:56:45.910691 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-run-ovn\") pod \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " Feb 16 20:56:45.911102 master-0 kubenswrapper[4119]: I0216 20:56:45.910725 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-systemd-units\") pod \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " Feb 16 20:56:45.911102 master-0 kubenswrapper[4119]: I0216 20:56:45.910764 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-env-overrides\") pod \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " Feb 16 20:56:45.911102 master-0 kubenswrapper[4119]: I0216 20:56:45.910797 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-cni-netd\") pod 
\"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " Feb 16 20:56:45.911102 master-0 kubenswrapper[4119]: I0216 20:56:45.910826 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-var-lib-openvswitch\") pod \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " Feb 16 20:56:45.911102 master-0 kubenswrapper[4119]: I0216 20:56:45.910862 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-slash\") pod \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " Feb 16 20:56:45.911102 master-0 kubenswrapper[4119]: I0216 20:56:45.910900 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-run-netns\") pod \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " Feb 16 20:56:45.911102 master-0 kubenswrapper[4119]: I0216 20:56:45.910906 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" (UID: "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:56:45.911102 master-0 kubenswrapper[4119]: I0216 20:56:45.910923 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-kubelet\") pod \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " Feb 16 20:56:45.911872 master-0 kubenswrapper[4119]: I0216 20:56:45.911021 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" (UID: "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:56:45.911872 master-0 kubenswrapper[4119]: I0216 20:56:45.911024 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-ovnkube-config\") pod \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " Feb 16 20:56:45.911872 master-0 kubenswrapper[4119]: I0216 20:56:45.911091 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-log-socket\") pod \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " Feb 16 20:56:45.911872 master-0 kubenswrapper[4119]: I0216 20:56:45.911122 4119 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-ovnkube-script-lib\") pod \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\" (UID: \"7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4\") " Feb 16 20:56:45.911872 master-0 
kubenswrapper[4119]: I0216 20:56:45.911355 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-var-lib-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.911872 master-0 kubenswrapper[4119]: I0216 20:56:45.911482 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.911872 master-0 kubenswrapper[4119]: I0216 20:56:45.911525 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqm46\" (UniqueName: \"kubernetes.io/projected/69785167-b4ae-415b-bdcb-029f62effe78-kube-api-access-dqm46\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.911872 master-0 kubenswrapper[4119]: I0216 20:56:45.911525 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" (UID: "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:56:45.911872 master-0 kubenswrapper[4119]: I0216 20:56:45.911569 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" (UID: "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4"). 
InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:56:45.911872 master-0 kubenswrapper[4119]: I0216 20:56:45.911591 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" (UID: "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:56:45.911872 master-0 kubenswrapper[4119]: I0216 20:56:45.911621 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" (UID: "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:56:45.911872 master-0 kubenswrapper[4119]: I0216 20:56:45.911702 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-kubelet\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.911872 master-0 kubenswrapper[4119]: I0216 20:56:45.911775 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/69785167-b4ae-415b-bdcb-029f62effe78-ovn-node-metrics-cert\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.911872 master-0 kubenswrapper[4119]: I0216 20:56:45.911804 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-run-netns\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.911872 master-0 kubenswrapper[4119]: I0216 20:56:45.911863 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-systemd\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.912305 master-0 kubenswrapper[4119]: I0216 20:56:45.911905 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-ovn\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.912305 master-0 kubenswrapper[4119]: I0216 20:56:45.911969 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-etc-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.912305 master-0 kubenswrapper[4119]: I0216 20:56:45.911993 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-env-overrides\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.912305 master-0 kubenswrapper[4119]: I0216 20:56:45.912137 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-systemd-units\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.912305 master-0 kubenswrapper[4119]: I0216 20:56:45.912227 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-ovnkube-config\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.912305 master-0 kubenswrapper[4119]: I0216 20:56:45.912267 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-slash\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.912493 master-0 kubenswrapper[4119]: I0216 20:56:45.912315 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-ovnkube-script-lib\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.912493 master-0 kubenswrapper[4119]: I0216 20:56:45.912346 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-node-log\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.912493 master-0 kubenswrapper[4119]: I0216 20:56:45.912388 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-cni-bin\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.912493 master-0 kubenswrapper[4119]: I0216 20:56:45.912443 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.912633 master-0 kubenswrapper[4119]: I0216 20:56:45.912523 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-run-ovn-kubernetes\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.912633 master-0 kubenswrapper[4119]: I0216 20:56:45.912570 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-cni-netd\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.912633 master-0 kubenswrapper[4119]: I0216 20:56:45.912606 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-log-socket\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.912859 master-0 kubenswrapper[4119]: I0216 20:56:45.912772 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" (UID: "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:56:45.912859 master-0 kubenswrapper[4119]: I0216 20:56:45.912830 4119 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-systemd-units\") on node \"master-0\" DevicePath \"\"" Feb 16 20:56:45.912925 master-0 kubenswrapper[4119]: I0216 20:56:45.912862 4119 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-kubelet\") on node \"master-0\" DevicePath \"\"" Feb 16 20:56:45.912925 master-0 kubenswrapper[4119]: I0216 20:56:45.912884 4119 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-ovnkube-config\") on node \"master-0\" DevicePath \"\"" Feb 16 20:56:45.912925 master-0 kubenswrapper[4119]: I0216 20:56:45.912908 4119 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-run-openvswitch\") on node \"master-0\" DevicePath \"\"" Feb 16 20:56:45.913048 master-0 kubenswrapper[4119]: I0216 20:56:45.912938 4119 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-etc-openvswitch\") on node \"master-0\" DevicePath \"\"" Feb 16 20:56:45.913048 master-0 kubenswrapper[4119]: I0216 20:56:45.912961 4119 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-cni-bin\") on node \"master-0\" DevicePath \"\"" Feb 16 20:56:45.913048 master-0 kubenswrapper[4119]: I0216 20:56:45.912891 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" (UID: "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:56:45.913420 master-0 kubenswrapper[4119]: I0216 20:56:45.913340 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-systemd\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.913420 master-0 kubenswrapper[4119]: I0216 20:56:45.913368 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-ovn\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.913485 master-0 kubenswrapper[4119]: I0216 20:56:45.913459 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" (UID: "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:56:45.913517 master-0 kubenswrapper[4119]: I0216 20:56:45.913482 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-etc-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.913547 master-0 kubenswrapper[4119]: I0216 20:56:45.913520 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" (UID: "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:56:45.913673 master-0 kubenswrapper[4119]: I0216 20:56:45.913606 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-slash" (OuterVolumeSpecName: "host-slash") pod "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" (UID: "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:56:45.913673 master-0 kubenswrapper[4119]: I0216 20:56:45.913632 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-slash\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.913740 master-0 kubenswrapper[4119]: I0216 20:56:45.913672 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-kubelet\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:45.913740 master-0 kubenswrapper[4119]: I0216 20:56:45.913682 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" (UID: "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:56:45.913740 master-0 kubenswrapper[4119]: I0216 20:56:45.913707 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-log-socket" (OuterVolumeSpecName: "log-socket") pod "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" (UID: "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 20:56:45.913740 master-0 kubenswrapper[4119]: I0216 20:56:45.913731 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-systemd-units\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:56:45.913843 master-0 kubenswrapper[4119]: I0216 20:56:45.913745 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:56:45.913843 master-0 kubenswrapper[4119]: I0216 20:56:45.913770 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-run-netns\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:56:45.913933 master-0 kubenswrapper[4119]: I0216 20:56:45.913845 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-node-log" (OuterVolumeSpecName: "node-log") pod "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" (UID: "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 20:56:45.913933 master-0 kubenswrapper[4119]: I0216 20:56:45.913873 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" (UID: "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 20:56:45.915416 master-0 kubenswrapper[4119]: I0216 20:56:45.914907 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-env-overrides\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:56:45.915416 master-0 kubenswrapper[4119]: I0216 20:56:45.915386 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-node-log\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:56:45.916376 master-0 kubenswrapper[4119]: I0216 20:56:45.915461 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" (UID: "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:56:45.916376 master-0 kubenswrapper[4119]: I0216 20:56:45.915484 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:56:45.916376 master-0 kubenswrapper[4119]: I0216 20:56:45.915536 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" (UID: "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:56:45.916376 master-0 kubenswrapper[4119]: I0216 20:56:45.915574 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-cni-bin\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:56:45.916376 master-0 kubenswrapper[4119]: I0216 20:56:45.915591 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-run-ovn-kubernetes\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:56:45.916376 master-0 kubenswrapper[4119]: I0216 20:56:45.915682 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-var-lib-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:56:45.916376 master-0 kubenswrapper[4119]: I0216 20:56:45.915730 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-cni-netd\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:56:45.916376 master-0 kubenswrapper[4119]: I0216 20:56:45.915798 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-log-socket\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:56:45.917285 master-0 kubenswrapper[4119]: I0216 20:56:45.917091 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-ovnkube-config\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:56:45.917356 master-0 kubenswrapper[4119]: I0216 20:56:45.917315 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-ovnkube-script-lib\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:56:45.917666 master-0 kubenswrapper[4119]: I0216 20:56:45.917603 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" (UID: "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 20:56:45.917843 master-0 kubenswrapper[4119]: I0216 20:56:45.917811 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-kube-api-access-cwhxb" (OuterVolumeSpecName: "kube-api-access-cwhxb") pod "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" (UID: "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4"). InnerVolumeSpecName "kube-api-access-cwhxb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 20:56:45.921443 master-0 kubenswrapper[4119]: I0216 20:56:45.921408 4119 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" (UID: "7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 20:56:45.926091 master-0 kubenswrapper[4119]: I0216 20:56:45.926042 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/69785167-b4ae-415b-bdcb-029f62effe78-ovn-node-metrics-cert\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:56:45.934529 master-0 kubenswrapper[4119]: I0216 20:56:45.934468 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqm46\" (UniqueName: \"kubernetes.io/projected/69785167-b4ae-415b-bdcb-029f62effe78-kube-api-access-dqm46\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:56:45.935739 master-0 kubenswrapper[4119]: I0216 20:56:45.935684 4119 scope.go:117] "RemoveContainer" containerID="db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658"
Feb 16 20:56:45.948336 master-0 kubenswrapper[4119]: I0216 20:56:45.948104 4119 scope.go:117] "RemoveContainer" containerID="6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e"
Feb 16 20:56:45.948717 master-0 kubenswrapper[4119]: E0216 20:56:45.948665 4119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e\": container with ID starting with 6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e not found: ID does not exist" containerID="6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e"
Feb 16 20:56:45.948801 master-0 kubenswrapper[4119]: I0216 20:56:45.948734 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e"} err="failed to get container status \"6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e\": rpc error: code = NotFound desc = could not find container \"6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e\": container with ID starting with 6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e not found: ID does not exist"
Feb 16 20:56:45.948801 master-0 kubenswrapper[4119]: I0216 20:56:45.948783 4119 scope.go:117] "RemoveContainer" containerID="c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab"
Feb 16 20:56:45.949620 master-0 kubenswrapper[4119]: E0216 20:56:45.949554 4119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab\": container with ID starting with c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab not found: ID does not exist" containerID="c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab"
Feb 16 20:56:45.949702 master-0 kubenswrapper[4119]: I0216 20:56:45.949620 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab"} err="failed to get container status \"c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab\": rpc error: code = NotFound desc = could not find container \"c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab\": container with ID starting with c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab not found: ID does not exist"
Feb 16 20:56:45.949702 master-0 kubenswrapper[4119]: I0216 20:56:45.949695 4119 scope.go:117] "RemoveContainer" containerID="dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e"
Feb 16 20:56:45.950079 master-0 kubenswrapper[4119]: E0216 20:56:45.950034 4119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e\": container with ID starting with dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e not found: ID does not exist" containerID="dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e"
Feb 16 20:56:45.950143 master-0 kubenswrapper[4119]: I0216 20:56:45.950074 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e"} err="failed to get container status \"dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e\": rpc error: code = NotFound desc = could not find container \"dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e\": container with ID starting with dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e not found: ID does not exist"
Feb 16 20:56:45.950143 master-0 kubenswrapper[4119]: I0216 20:56:45.950098 4119 scope.go:117] "RemoveContainer" containerID="f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997"
Feb 16 20:56:45.950718 master-0 kubenswrapper[4119]: E0216 20:56:45.950677 4119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997\": container with ID starting with f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997 not found: ID does not exist" containerID="f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997"
Feb 16 20:56:45.950776 master-0 kubenswrapper[4119]: I0216 20:56:45.950719 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997"} err="failed to get container status \"f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997\": rpc error: code = NotFound desc = could not find container \"f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997\": container with ID starting with f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997 not found: ID does not exist"
Feb 16 20:56:45.950776 master-0 kubenswrapper[4119]: I0216 20:56:45.950746 4119 scope.go:117] "RemoveContainer" containerID="ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4"
Feb 16 20:56:45.951337 master-0 kubenswrapper[4119]: E0216 20:56:45.951292 4119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4\": container with ID starting with ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4 not found: ID does not exist" containerID="ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4"
Feb 16 20:56:45.951373 master-0 kubenswrapper[4119]: I0216 20:56:45.951333 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4"} err="failed to get container status \"ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4\": rpc error: code = NotFound desc = could not find container \"ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4\": container with ID starting with ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4 not found: ID does not exist"
Feb 16 20:56:45.951373 master-0 kubenswrapper[4119]: I0216 20:56:45.951363 4119 scope.go:117] "RemoveContainer" containerID="03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241"
Feb 16 20:56:45.951729 master-0 kubenswrapper[4119]: E0216 20:56:45.951625 4119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241\": container with ID starting with 03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241 not found: ID does not exist" containerID="03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241"
Feb 16 20:56:45.951771 master-0 kubenswrapper[4119]: I0216 20:56:45.951728 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241"} err="failed to get container status \"03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241\": rpc error: code = NotFound desc = could not find container \"03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241\": container with ID starting with 03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241 not found: ID does not exist"
Feb 16 20:56:45.951771 master-0 kubenswrapper[4119]: I0216 20:56:45.951755 4119 scope.go:117] "RemoveContainer" containerID="1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4"
Feb 16 20:56:45.952087 master-0 kubenswrapper[4119]: E0216 20:56:45.952045 4119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4\": container with ID starting with 1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4 not found: ID does not exist" containerID="1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4"
Feb 16 20:56:45.952122 master-0 kubenswrapper[4119]: I0216 20:56:45.952083 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4"} err="failed to get container status \"1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4\": rpc error: code = NotFound desc = could not find container \"1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4\": container with ID starting with 1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4 not found: ID does not exist"
Feb 16 20:56:45.952122 master-0 kubenswrapper[4119]: I0216 20:56:45.952109 4119 scope.go:117] "RemoveContainer" containerID="cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02"
Feb 16 20:56:45.952413 master-0 kubenswrapper[4119]: E0216 20:56:45.952373 4119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02\": container with ID starting with cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02 not found: ID does not exist" containerID="cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02"
Feb 16 20:56:45.952444 master-0 kubenswrapper[4119]: I0216 20:56:45.952412 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02"} err="failed to get container status \"cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02\": rpc error: code = NotFound desc = could not find container \"cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02\": container with ID starting with cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02 not found: ID does not exist"
Feb 16 20:56:45.952444 master-0 kubenswrapper[4119]: I0216 20:56:45.952437 4119 scope.go:117] "RemoveContainer" containerID="db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658"
Feb 16 20:56:45.952782 master-0 kubenswrapper[4119]: E0216 20:56:45.952738 4119 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658\": container with ID starting with db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658 not found: ID does not exist" containerID="db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658"
Feb 16 20:56:45.952830 master-0 kubenswrapper[4119]: I0216 20:56:45.952780 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658"} err="failed to get container status \"db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658\": rpc error: code = NotFound desc = could not find container \"db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658\": container with ID starting with db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658 not found: ID does not exist"
Feb 16 20:56:45.952830 master-0 kubenswrapper[4119]: I0216 20:56:45.952808 4119 scope.go:117] "RemoveContainer" containerID="6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e"
Feb 16 20:56:45.953353 master-0 kubenswrapper[4119]: I0216 20:56:45.953308 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e"} err="failed to get container status \"6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e\": rpc error: code = NotFound desc = could not find container \"6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e\": container with ID starting with 6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e not found: ID does not exist"
Feb 16 20:56:45.953353 master-0 kubenswrapper[4119]: I0216 20:56:45.953345 4119 scope.go:117] "RemoveContainer" containerID="c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab"
Feb 16 20:56:45.953890 master-0 kubenswrapper[4119]: I0216 20:56:45.953836 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab"} err="failed to get container status \"c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab\": rpc error: code = NotFound desc = could not find container \"c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab\": container with ID starting with c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab not found: ID does not exist"
Feb 16 20:56:45.953890 master-0 kubenswrapper[4119]: I0216 20:56:45.953878 4119 scope.go:117] "RemoveContainer" containerID="dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e"
Feb 16 20:56:45.954445 master-0 kubenswrapper[4119]: I0216 20:56:45.954388 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e"} err="failed to get container status \"dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e\": rpc error: code = NotFound desc = could not find container \"dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e\": container with ID starting with dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e not found: ID does not exist"
Feb 16 20:56:45.954445 master-0 kubenswrapper[4119]: I0216 20:56:45.954425 4119 scope.go:117] "RemoveContainer" containerID="f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997"
Feb 16 20:56:45.954736 master-0 kubenswrapper[4119]: I0216 20:56:45.954714 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997"} err="failed to get container status \"f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997\": rpc error: code = NotFound desc = could not find container \"f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997\": container with ID starting with f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997 not found: ID does not exist"
Feb 16 20:56:45.954790 master-0 kubenswrapper[4119]: I0216 20:56:45.954741 4119 scope.go:117] "RemoveContainer" containerID="ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4"
Feb 16 20:56:45.955290 master-0 kubenswrapper[4119]: I0216 20:56:45.955236 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4"} err="failed to get container status \"ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4\": rpc error: code = NotFound desc = could not find container \"ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4\": container with ID starting with ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4 not found: ID does not exist"
Feb 16 20:56:45.955290 master-0 kubenswrapper[4119]: I0216 20:56:45.955275 4119 scope.go:117] "RemoveContainer" containerID="03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241"
Feb 16 20:56:45.955555 master-0 kubenswrapper[4119]: I0216 20:56:45.955505 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241"} err="failed to get container status \"03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241\": rpc error: code = NotFound desc = could not find container \"03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241\": container with ID starting with 03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241 not found: ID does not exist"
Feb 16 20:56:45.955555 master-0 kubenswrapper[4119]: I0216 20:56:45.955542 4119 scope.go:117] "RemoveContainer" containerID="1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4"
Feb 16 20:56:45.956020 master-0 kubenswrapper[4119]: I0216 20:56:45.955968 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4"} err="failed to get container status \"1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4\": rpc error: code = NotFound desc = could not find container \"1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4\": container with ID starting with 1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4 not found: ID does not exist"
Feb 16 20:56:45.956020 master-0 kubenswrapper[4119]: I0216 20:56:45.956006 4119 scope.go:117] "RemoveContainer" containerID="cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02"
Feb 16 20:56:45.956564 master-0 kubenswrapper[4119]: I0216 20:56:45.956497 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02"} err="failed to get container status \"cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02\": rpc error: code = NotFound desc = could not find container \"cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02\": container with ID starting with cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02 not found: ID does not exist"
Feb 16 20:56:45.956564 master-0 kubenswrapper[4119]: I0216 20:56:45.956543 4119 scope.go:117] "RemoveContainer" containerID="db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658"
Feb 16 20:56:45.956856 master-0 kubenswrapper[4119]: I0216 20:56:45.956805 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658"} err="failed to get container status \"db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658\": rpc error: code = NotFound desc = could not find container \"db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658\": container with ID starting with db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658 not found: ID does not exist"
Feb 16 20:56:45.956856 master-0 kubenswrapper[4119]: I0216 20:56:45.956843 4119 scope.go:117] "RemoveContainer" containerID="6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e"
Feb 16 20:56:45.957281 master-0 kubenswrapper[4119]: I0216 20:56:45.957239 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e"} err="failed to get container status \"6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e\": rpc error: code = NotFound desc = could not find container \"6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e\": container with ID starting with 6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e not found: ID does not exist"
Feb 16 20:56:45.957281 master-0 kubenswrapper[4119]: I0216 20:56:45.957274 4119 scope.go:117] "RemoveContainer" containerID="c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab"
Feb 16 20:56:45.958265 master-0 kubenswrapper[4119]: I0216 20:56:45.958194 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab"} err="failed to get container status \"c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab\": rpc error: code = NotFound desc = could not find container \"c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab\": container with ID starting with c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab not found: ID does not exist"
Feb 16 20:56:45.958351 master-0 kubenswrapper[4119]: I0216 20:56:45.958270 4119 scope.go:117] "RemoveContainer" containerID="dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e"
Feb 16 20:56:45.959517 master-0 kubenswrapper[4119]: I0216 20:56:45.959466 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e"} err="failed to get container status \"dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e\": rpc error: code = NotFound desc = could not find container \"dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e\": container with ID starting with dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e not found: ID does not exist"
Feb 16 20:56:45.959517 master-0 kubenswrapper[4119]: I0216 20:56:45.959517 4119 scope.go:117] "RemoveContainer" containerID="f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997"
Feb 16 20:56:45.960109 master-0 kubenswrapper[4119]: I0216 20:56:45.960068 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997"} err="failed to get container status \"f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997\": rpc error: code = NotFound desc = could not find container \"f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997\": container with ID starting with f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997 not found: ID does not exist"
Feb 16 20:56:45.960109 master-0 kubenswrapper[4119]: I0216 20:56:45.960100 4119 scope.go:117] "RemoveContainer" containerID="ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4"
Feb 16 20:56:45.960511 master-0 kubenswrapper[4119]: I0216 20:56:45.960454 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4"} err="failed to get container status \"ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4\": rpc error: code = NotFound desc = could not find container \"ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4\": container with ID starting with ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4 not found: ID does not exist"
Feb 16 20:56:45.960511 master-0 kubenswrapper[4119]: I0216 20:56:45.960492 4119 scope.go:117] "RemoveContainer" containerID="03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241"
Feb 16 20:56:45.961181 master-0 kubenswrapper[4119]: I0216 20:56:45.961019 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241"} err="failed to get container status \"03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241\": rpc error: code = NotFound desc = could not find container \"03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241\": container with ID starting with 03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241 not found: ID does not exist"
Feb 16 20:56:45.961181 master-0 kubenswrapper[4119]: I0216 20:56:45.961073 4119 scope.go:117] "RemoveContainer" containerID="1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4"
Feb 16 20:56:45.961549 master-0 kubenswrapper[4119]: I0216 20:56:45.961444 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4"} err="failed to get container status \"1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4\": rpc error: code = NotFound desc = could not find container \"1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4\": container with ID starting with 1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4 not found: ID does not exist"
Feb 16 20:56:45.961549 master-0 kubenswrapper[4119]: I0216 20:56:45.961481 4119 scope.go:117] "RemoveContainer" containerID="cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02"
Feb 16 20:56:45.962014 master-0 kubenswrapper[4119]: I0216 20:56:45.961795 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02"} err="failed to get container status \"cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02\": rpc error: code = NotFound desc = could not find container \"cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02\": container with ID starting with cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02 not found: ID does not exist"
Feb 16 20:56:45.962014 master-0 kubenswrapper[4119]: I0216 20:56:45.961828 4119 scope.go:117] "RemoveContainer" containerID="db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658"
Feb 16 20:56:45.962421 master-0 kubenswrapper[4119]: I0216 20:56:45.962352 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658"} err="failed to get container status \"db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658\": rpc error: code = NotFound desc = could not find container \"db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658\": container with ID starting with db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658 not found: ID does not exist"
Feb 16 20:56:45.962421 master-0 kubenswrapper[4119]: I0216 20:56:45.962388 4119 scope.go:117] "RemoveContainer" containerID="6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e"
Feb 16 20:56:45.963180 master-0 kubenswrapper[4119]: I0216 20:56:45.963053 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e"} err="failed to get container status \"6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e\": rpc error: code = NotFound desc = could not find container \"6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e\": container with ID starting with 6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e not found: ID does not exist"
Feb 16 20:56:45.963180 master-0 kubenswrapper[4119]: I0216 20:56:45.963084 4119 scope.go:117] "RemoveContainer" containerID="c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab"
Feb 16 20:56:45.963623 master-0 kubenswrapper[4119]: I0216 20:56:45.963585 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab"} err="failed to get container status \"c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab\": rpc error: code = NotFound desc = could not find container \"c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab\": container with ID starting with c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab not found: ID does not exist"
Feb 16 20:56:45.963623 master-0 kubenswrapper[4119]: I0216 20:56:45.963616 4119 scope.go:117] "RemoveContainer" containerID="dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e"
Feb 16 20:56:45.964244 master-0 kubenswrapper[4119]: I0216 20:56:45.964091 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e"} err="failed to get container status \"dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e\": rpc error: code = NotFound desc = could not find container \"dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e\": container with ID starting with dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e not found: ID does not exist"
Feb 16 20:56:45.964244 master-0 kubenswrapper[4119]: I0216 20:56:45.964130 4119 scope.go:117] "RemoveContainer" containerID="f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997"
Feb 16 20:56:45.965314 master-0 kubenswrapper[4119]: I0216 20:56:45.965236 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997"} err="failed to get container status \"f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997\": rpc error: code = NotFound desc = could not find container \"f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997\": container with ID starting with f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997 not found: ID does not exist"
Feb 16 20:56:45.965431 master-0 kubenswrapper[4119]: I0216 20:56:45.965323 4119 scope.go:117] "RemoveContainer" containerID="ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4"
Feb 16 20:56:45.965838 master-0 kubenswrapper[4119]: I0216 20:56:45.965802 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4"} err="failed to get container status \"ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4\": rpc error: code = NotFound desc = could not find container \"ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4\": container with ID starting with ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4 not found: ID does not exist"
Feb 16 20:56:45.965838 master-0 kubenswrapper[4119]: I0216 20:56:45.965835 4119 scope.go:117] "RemoveContainer" containerID="03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241"
Feb 16 20:56:45.966415 master-0 kubenswrapper[4119]: I0216 20:56:45.966296 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241"} err="failed to get container status \"03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241\": rpc error: code = NotFound desc = could not find container \"03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241\": container with ID starting with 03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241 not found: ID does not exist"
Feb 16 20:56:45.966415 master-0 kubenswrapper[4119]: I0216 20:56:45.966324 4119
scope.go:117] "RemoveContainer" containerID="1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4" Feb 16 20:56:45.967699 master-0 kubenswrapper[4119]: I0216 20:56:45.967631 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4"} err="failed to get container status \"1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4\": rpc error: code = NotFound desc = could not find container \"1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4\": container with ID starting with 1633aa67ecb7c1d55dfc529ba6b4efde30dfde6794eca537a618be01e1b6a1e4 not found: ID does not exist" Feb 16 20:56:45.967699 master-0 kubenswrapper[4119]: I0216 20:56:45.967692 4119 scope.go:117] "RemoveContainer" containerID="cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02" Feb 16 20:56:45.968189 master-0 kubenswrapper[4119]: I0216 20:56:45.968138 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02"} err="failed to get container status \"cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02\": rpc error: code = NotFound desc = could not find container \"cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02\": container with ID starting with cf3ef7ff07670c6235cc70940e91bc71a2c0937b891a353fc06cca064b352c02 not found: ID does not exist" Feb 16 20:56:45.968189 master-0 kubenswrapper[4119]: I0216 20:56:45.968175 4119 scope.go:117] "RemoveContainer" containerID="db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658" Feb 16 20:56:45.968524 master-0 kubenswrapper[4119]: I0216 20:56:45.968469 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658"} err="failed to get container status 
\"db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658\": rpc error: code = NotFound desc = could not find container \"db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658\": container with ID starting with db6fd5d74942243b7f89cbfd43af4aad435325f09cda78ce2228a62a9bfb6658 not found: ID does not exist" Feb 16 20:56:45.968524 master-0 kubenswrapper[4119]: I0216 20:56:45.968508 4119 scope.go:117] "RemoveContainer" containerID="6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e" Feb 16 20:56:45.969262 master-0 kubenswrapper[4119]: I0216 20:56:45.969014 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e"} err="failed to get container status \"6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e\": rpc error: code = NotFound desc = could not find container \"6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e\": container with ID starting with 6b2cf35b1434e08d715cefa4a2f6cbbeb24a8eb2ba09ad0af4dc750f401cd14e not found: ID does not exist" Feb 16 20:56:45.969262 master-0 kubenswrapper[4119]: I0216 20:56:45.969119 4119 scope.go:117] "RemoveContainer" containerID="c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab" Feb 16 20:56:45.969716 master-0 kubenswrapper[4119]: I0216 20:56:45.969381 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab"} err="failed to get container status \"c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab\": rpc error: code = NotFound desc = could not find container \"c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab\": container with ID starting with c447180e1d8642b6a431d4a2c54bb2efd073cf584c6f01f6587ad07621933cab not found: ID does not exist" Feb 16 20:56:45.969716 master-0 kubenswrapper[4119]: I0216 20:56:45.969408 4119 
scope.go:117] "RemoveContainer" containerID="dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e" Feb 16 20:56:45.969986 master-0 kubenswrapper[4119]: I0216 20:56:45.969951 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e"} err="failed to get container status \"dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e\": rpc error: code = NotFound desc = could not find container \"dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e\": container with ID starting with dc6d548c0a893542bb2112f27f55208328680bbd64d7d2b1e4a616409ceb0a2e not found: ID does not exist" Feb 16 20:56:45.969986 master-0 kubenswrapper[4119]: I0216 20:56:45.969980 4119 scope.go:117] "RemoveContainer" containerID="f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997" Feb 16 20:56:45.970390 master-0 kubenswrapper[4119]: I0216 20:56:45.970329 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997"} err="failed to get container status \"f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997\": rpc error: code = NotFound desc = could not find container \"f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997\": container with ID starting with f122b42052e1f9e9edda51e4393b4baf3b4c7a64f75352a452b8a1a817456997 not found: ID does not exist" Feb 16 20:56:45.970390 master-0 kubenswrapper[4119]: I0216 20:56:45.970372 4119 scope.go:117] "RemoveContainer" containerID="ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4" Feb 16 20:56:45.970874 master-0 kubenswrapper[4119]: I0216 20:56:45.970824 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4"} err="failed to get container status 
\"ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4\": rpc error: code = NotFound desc = could not find container \"ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4\": container with ID starting with ddd0bb0c50882befdc092ca5a8225e23845140bbc5d679706a48573fe95ad8c4 not found: ID does not exist" Feb 16 20:56:45.970874 master-0 kubenswrapper[4119]: I0216 20:56:45.970861 4119 scope.go:117] "RemoveContainer" containerID="03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241" Feb 16 20:56:45.971212 master-0 kubenswrapper[4119]: I0216 20:56:45.971161 4119 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241"} err="failed to get container status \"03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241\": rpc error: code = NotFound desc = could not find container \"03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241\": container with ID starting with 03b8d1bc424f2de44d2bd7f1ae5be32b34c7321baa8dd869b12be7982656b241 not found: ID does not exist" Feb 16 20:56:46.013920 master-0 kubenswrapper[4119]: I0216 20:56:46.013844 4119 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-cni-netd\") on node \"master-0\" DevicePath \"\"" Feb 16 20:56:46.013920 master-0 kubenswrapper[4119]: I0216 20:56:46.013895 4119 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-env-overrides\") on node \"master-0\" DevicePath \"\"" Feb 16 20:56:46.013920 master-0 kubenswrapper[4119]: I0216 20:56:46.013910 4119 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\"" Feb 16 
20:56:46.013920 master-0 kubenswrapper[4119]: I0216 20:56:46.013930 4119 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-slash\") on node \"master-0\" DevicePath \"\"" Feb 16 20:56:46.013920 master-0 kubenswrapper[4119]: I0216 20:56:46.013947 4119 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-run-netns\") on node \"master-0\" DevicePath \"\"" Feb 16 20:56:46.014416 master-0 kubenswrapper[4119]: I0216 20:56:46.013962 4119 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-log-socket\") on node \"master-0\" DevicePath \"\"" Feb 16 20:56:46.014416 master-0 kubenswrapper[4119]: I0216 20:56:46.013976 4119 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\"" Feb 16 20:56:46.014416 master-0 kubenswrapper[4119]: I0216 20:56:46.013990 4119 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-run-systemd\") on node \"master-0\" DevicePath \"\"" Feb 16 20:56:46.014416 master-0 kubenswrapper[4119]: I0216 20:56:46.014002 4119 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 20:56:46.014416 master-0 kubenswrapper[4119]: I0216 20:56:46.014015 4119 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath 
\"\"" Feb 16 20:56:46.014416 master-0 kubenswrapper[4119]: I0216 20:56:46.014030 4119 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwhxb\" (UniqueName: \"kubernetes.io/projected/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-kube-api-access-cwhxb\") on node \"master-0\" DevicePath \"\"" Feb 16 20:56:46.014416 master-0 kubenswrapper[4119]: I0216 20:56:46.014045 4119 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Feb 16 20:56:46.014416 master-0 kubenswrapper[4119]: I0216 20:56:46.014062 4119 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-node-log\") on node \"master-0\" DevicePath \"\"" Feb 16 20:56:46.014416 master-0 kubenswrapper[4119]: I0216 20:56:46.014075 4119 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4-run-ovn\") on node \"master-0\" DevicePath \"\"" Feb 16 20:56:46.115167 master-0 kubenswrapper[4119]: I0216 20:56:46.115050 4119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:46.154191 master-0 kubenswrapper[4119]: I0216 20:56:46.154084 4119 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lprkk"] Feb 16 20:56:46.160088 master-0 kubenswrapper[4119]: I0216 20:56:46.160028 4119 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lprkk"] Feb 16 20:56:46.818192 master-0 kubenswrapper[4119]: I0216 20:56:46.818087 4119 generic.go:334] "Generic (PLEG): container finished" podID="69785167-b4ae-415b-bdcb-029f62effe78" containerID="d7022d510b5111f523030386d2b2e3f81b8551ed9e8be0ecf6a80ac34378ca5e" exitCode=0 Feb 16 20:56:46.818192 master-0 kubenswrapper[4119]: I0216 20:56:46.818181 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" event={"ID":"69785167-b4ae-415b-bdcb-029f62effe78","Type":"ContainerDied","Data":"d7022d510b5111f523030386d2b2e3f81b8551ed9e8be0ecf6a80ac34378ca5e"} Feb 16 20:56:46.820353 master-0 kubenswrapper[4119]: I0216 20:56:46.818249 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" event={"ID":"69785167-b4ae-415b-bdcb-029f62effe78","Type":"ContainerStarted","Data":"9e9fb9a8fc61dba0936cd38d7b843d3efbdecc6ba9ec73f7423569f6305a4740"} Feb 16 20:56:46.874724 master-0 kubenswrapper[4119]: I0216 20:56:46.874616 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:46.874884 master-0 kubenswrapper[4119]: E0216 20:56:46.874793 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c" Feb 16 20:56:46.882393 master-0 kubenswrapper[4119]: I0216 20:56:46.882333 4119 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4" path="/var/lib/kubelet/pods/7d424b00-e8c1-41d7-92e5-5d02a7a7c2d4/volumes" Feb 16 20:56:47.830557 master-0 kubenswrapper[4119]: I0216 20:56:47.830130 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" event={"ID":"69785167-b4ae-415b-bdcb-029f62effe78","Type":"ContainerStarted","Data":"c630a4ba806244d201f002a158513dc016fe5c3b6daba273e1f23f6333686b88"} Feb 16 20:56:47.830557 master-0 kubenswrapper[4119]: I0216 20:56:47.830522 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" event={"ID":"69785167-b4ae-415b-bdcb-029f62effe78","Type":"ContainerStarted","Data":"7939368a66df752fec666f55357f94fd22b560b8a120e0b62d09790f086413b5"} Feb 16 20:56:47.830557 master-0 kubenswrapper[4119]: I0216 20:56:47.830547 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" event={"ID":"69785167-b4ae-415b-bdcb-029f62effe78","Type":"ContainerStarted","Data":"a3f154ed1fbadfde6c14b6c55646e156b1487b3d1f2a2888af5abc441cb159f2"} Feb 16 20:56:47.830557 master-0 kubenswrapper[4119]: I0216 20:56:47.830568 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" event={"ID":"69785167-b4ae-415b-bdcb-029f62effe78","Type":"ContainerStarted","Data":"c109743f9e34f4b558b3bf44dfa939dc541314f56b2be407503cf4a64de5777a"} Feb 16 20:56:47.830557 master-0 kubenswrapper[4119]: I0216 20:56:47.830584 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" 
event={"ID":"69785167-b4ae-415b-bdcb-029f62effe78","Type":"ContainerStarted","Data":"a6cd57c4b0fc1e7c2d930e1dc1ce1a766a873d8f44fcc9636a87f988589d8813"} Feb 16 20:56:47.830557 master-0 kubenswrapper[4119]: I0216 20:56:47.830597 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" event={"ID":"69785167-b4ae-415b-bdcb-029f62effe78","Type":"ContainerStarted","Data":"4f1cce04d21916f0d92a98ba8e6b09901027aaa8cc2b129f507dcc8d25ef4a4d"} Feb 16 20:56:47.874409 master-0 kubenswrapper[4119]: I0216 20:56:47.874321 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:47.874818 master-0 kubenswrapper[4119]: E0216 20:56:47.874528 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-68c25" podUID="0d903d23-8e0b-424b-bcd0-e0a00f306e49" Feb 16 20:56:48.875704 master-0 kubenswrapper[4119]: I0216 20:56:48.875566 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:48.877035 master-0 kubenswrapper[4119]: E0216 20:56:48.875910 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c" Feb 16 20:56:49.847933 master-0 kubenswrapper[4119]: I0216 20:56:49.847687 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" event={"ID":"69785167-b4ae-415b-bdcb-029f62effe78","Type":"ContainerStarted","Data":"3ded81fde954498e8c659f19e567426ed192fd804a885a7ac139c978535050d2"} Feb 16 20:56:49.875493 master-0 kubenswrapper[4119]: I0216 20:56:49.875382 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:49.876486 master-0 kubenswrapper[4119]: E0216 20:56:49.875623 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-68c25" podUID="0d903d23-8e0b-424b-bcd0-e0a00f306e49" Feb 16 20:56:50.874738 master-0 kubenswrapper[4119]: I0216 20:56:50.874550 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:50.875167 master-0 kubenswrapper[4119]: E0216 20:56:50.874796 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c" Feb 16 20:56:51.774161 master-0 kubenswrapper[4119]: I0216 20:56:51.774024 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:56:51.775037 master-0 kubenswrapper[4119]: E0216 20:56:51.774274 4119 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 20:56:51.775037 master-0 kubenswrapper[4119]: E0216 20:56:51.774453 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert podName:3a012b98-9341-41a3-9321-0a099f8bb9da nodeName:}" failed. No retries permitted until 2026-02-16 20:57:55.774363814 +0000 UTC m=+171.704289862 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert") pod "cluster-version-operator-76959b6567-7jlsw" (UID: "3a012b98-9341-41a3-9321-0a099f8bb9da") : secret "cluster-version-operator-serving-cert" not found Feb 16 20:56:51.875398 master-0 kubenswrapper[4119]: I0216 20:56:51.875315 4119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:51.875634 master-0 kubenswrapper[4119]: E0216 20:56:51.875542 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-68c25" podUID="0d903d23-8e0b-424b-bcd0-e0a00f306e49" Feb 16 20:56:52.176874 master-0 kubenswrapper[4119]: I0216 20:56:52.176756 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcp5t\" (UniqueName: \"kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t\") pod \"network-check-target-68c25\" (UID: \"0d903d23-8e0b-424b-bcd0-e0a00f306e49\") " pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:52.177016 master-0 kubenswrapper[4119]: E0216 20:56:52.176944 4119 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 20:56:52.177016 master-0 kubenswrapper[4119]: E0216 20:56:52.176971 4119 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 20:56:52.177016 master-0 kubenswrapper[4119]: E0216 20:56:52.176986 4119 projected.go:194] Error preparing data for projected volume kube-api-access-kcp5t for pod openshift-network-diagnostics/network-check-target-68c25: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:52.177170 master-0 kubenswrapper[4119]: E0216 20:56:52.177057 4119 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t podName:0d903d23-8e0b-424b-bcd0-e0a00f306e49 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:24.177034385 +0000 UTC m=+140.106960413 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-kcp5t" (UniqueName: "kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t") pod "network-check-target-68c25" (UID: "0d903d23-8e0b-424b-bcd0-e0a00f306e49") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 20:56:52.872142 master-0 kubenswrapper[4119]: I0216 20:56:52.870772 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" event={"ID":"69785167-b4ae-415b-bdcb-029f62effe78","Type":"ContainerStarted","Data":"655a7675b66526d164c70e9b200b05c778827418e4c84a28b4e335f8dfc72ff8"} Feb 16 20:56:52.872142 master-0 kubenswrapper[4119]: I0216 20:56:52.871318 4119 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:52.872142 master-0 kubenswrapper[4119]: I0216 20:56:52.871348 4119 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:52.875627 master-0 kubenswrapper[4119]: I0216 20:56:52.875095 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:56:52.875627 master-0 kubenswrapper[4119]: E0216 20:56:52.875339 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c" Feb 16 20:56:52.953969 master-0 kubenswrapper[4119]: I0216 20:56:52.953887 4119 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:56:53.002758 master-0 kubenswrapper[4119]: I0216 20:56:53.002251 4119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" podStartSLOduration=8.00222678 podStartE2EDuration="8.00222678s" podCreationTimestamp="2026-02-16 20:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:56:52.903746113 +0000 UTC m=+108.833672181" watchObservedRunningTime="2026-02-16 20:56:53.00222678 +0000 UTC m=+108.932152808" Feb 16 20:56:53.438112 master-0 kubenswrapper[4119]: I0216 20:56:53.437565 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-68c25"] Feb 16 20:56:53.438418 master-0 kubenswrapper[4119]: I0216 20:56:53.438161 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:53.438418 master-0 kubenswrapper[4119]: E0216 20:56:53.438252 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-68c25" podUID="0d903d23-8e0b-424b-bcd0-e0a00f306e49"
Feb 16 20:56:53.442006 master-0 kubenswrapper[4119]: I0216 20:56:53.441935 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-42bw7"]
Feb 16 20:56:53.874788 master-0 kubenswrapper[4119]: I0216 20:56:53.874691 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7"
Feb 16 20:56:53.875849 master-0 kubenswrapper[4119]: E0216 20:56:53.874917 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c"
Feb 16 20:56:53.875849 master-0 kubenswrapper[4119]: I0216 20:56:53.875396 4119 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:56:53.910553 master-0 kubenswrapper[4119]: I0216 20:56:53.910473 4119 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:56:54.875334 master-0 kubenswrapper[4119]: I0216 20:56:54.875286 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-68c25"
Feb 16 20:56:54.876959 master-0 kubenswrapper[4119]: E0216 20:56:54.876867 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-68c25" podUID="0d903d23-8e0b-424b-bcd0-e0a00f306e49"
Feb 16 20:56:55.874822 master-0 kubenswrapper[4119]: I0216 20:56:55.874744 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7"
Feb 16 20:56:55.875093 master-0 kubenswrapper[4119]: E0216 20:56:55.874893 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c"
Feb 16 20:56:56.874861 master-0 kubenswrapper[4119]: I0216 20:56:56.874701 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-68c25"
Feb 16 20:56:56.874861 master-0 kubenswrapper[4119]: E0216 20:56:56.874850 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-68c25" podUID="0d903d23-8e0b-424b-bcd0-e0a00f306e49"
Feb 16 20:56:57.875143 master-0 kubenswrapper[4119]: I0216 20:56:57.874539 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7"
Feb 16 20:56:57.876156 master-0 kubenswrapper[4119]: E0216 20:56:57.875267 4119 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-42bw7" podUID="1d453639-52ed-4a14-a2ee-02cf9acc2f7c"
Feb 16 20:56:57.904276 master-0 kubenswrapper[4119]: I0216 20:56:57.904190 4119 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady"
Feb 16 20:56:57.904428 master-0 kubenswrapper[4119]: I0216 20:56:57.904390 4119 kubelet_node_status.go:538] "Fast updating node status as it just became ready"
Feb 16 20:56:57.949702 master-0 kubenswrapper[4119]: I0216 20:56:57.949599 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq"]
Feb 16 20:56:57.950288 master-0 kubenswrapper[4119]: I0216 20:56:57.950234 4119 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq"
Feb 16 20:56:57.952084 master-0 kubenswrapper[4119]: I0216 20:56:57.952009 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 16 20:56:57.953237 master-0 kubenswrapper[4119]: I0216 20:56:57.953181 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 16 20:56:57.954781 master-0 kubenswrapper[4119]: I0216 20:56:57.954731 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 16 20:56:57.958019 master-0 kubenswrapper[4119]: I0216 20:56:57.957795 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-86b8869b79-cdltb"]
Feb 16 20:56:57.961692 master-0 kubenswrapper[4119]: I0216 20:56:57.958473 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb"
Feb 16 20:56:57.961692 master-0 kubenswrapper[4119]: I0216 20:56:57.960294 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8"]
Feb 16 20:56:57.961692 master-0 kubenswrapper[4119]: I0216 20:56:57.960921 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8"
Feb 16 20:56:57.961692 master-0 kubenswrapper[4119]: I0216 20:56:57.961233 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld"]
Feb 16 20:56:57.961989 master-0 kubenswrapper[4119]: I0216 20:56:57.961934 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p"]
Feb 16 20:56:57.963413 master-0 kubenswrapper[4119]: I0216 20:56:57.962252 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld"
Feb 16 20:56:57.963413 master-0 kubenswrapper[4119]: I0216 20:56:57.962486 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p"
Feb 16 20:56:57.963413 master-0 kubenswrapper[4119]: I0216 20:56:57.962556 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 16 20:56:57.963413 master-0 kubenswrapper[4119]: I0216 20:56:57.962720 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz"]
Feb 16 20:56:57.963413 master-0 kubenswrapper[4119]: I0216 20:56:57.962789 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 16 20:56:57.963413 master-0 kubenswrapper[4119]: I0216 20:56:57.962913 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 16 20:56:57.965481 master-0 kubenswrapper[4119]: I0216 20:56:57.965420 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz"
Feb 16 20:56:57.967534 master-0 kubenswrapper[4119]: I0216 20:56:57.967485 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 16 20:56:57.978835 master-0 kubenswrapper[4119]: I0216 20:56:57.974429 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 16 20:56:57.978835 master-0 kubenswrapper[4119]: I0216 20:56:57.974473 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 16 20:56:57.978835 master-0 kubenswrapper[4119]: I0216 20:56:57.974720 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 16 20:56:57.978835 master-0 kubenswrapper[4119]: I0216 20:56:57.974900 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 16 20:56:57.978835 master-0 kubenswrapper[4119]: I0216 20:56:57.975364 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 16 20:56:57.978835 master-0 kubenswrapper[4119]: I0216 20:56:57.975685 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 16 20:56:57.978835 master-0 kubenswrapper[4119]: I0216 20:56:57.976050 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 16 20:56:57.978835 master-0 kubenswrapper[4119]: I0216 20:56:57.976057 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 16 20:56:57.978835 master-0 kubenswrapper[4119]: I0216
20:56:57.976309 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn"]
Feb 16 20:56:57.978835 master-0 kubenswrapper[4119]: I0216 20:56:57.976353 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 16 20:56:57.978835 master-0 kubenswrapper[4119]: I0216 20:56:57.976727 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 16 20:56:57.978835 master-0 kubenswrapper[4119]: I0216 20:56:57.976867 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 16 20:56:57.978835 master-0 kubenswrapper[4119]: I0216 20:56:57.977117 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 16 20:56:57.978835 master-0 kubenswrapper[4119]: I0216 20:56:57.977708 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4"]
Feb 16 20:56:57.978835 master-0 kubenswrapper[4119]: I0216 20:56:57.978723 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn"
Feb 16 20:56:57.978835 master-0 kubenswrapper[4119]: I0216 20:56:57.978772 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4"
Feb 16 20:56:57.979724 master-0 kubenswrapper[4119]: I0216 20:56:57.979477 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 16 20:56:57.981713 master-0 kubenswrapper[4119]: I0216 20:56:57.979838 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6"]
Feb 16 20:56:57.981713 master-0 kubenswrapper[4119]: I0216 20:56:57.980149 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 16 20:56:57.981713 master-0 kubenswrapper[4119]: I0216 20:56:57.980153 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 16 20:56:57.981713 master-0 kubenswrapper[4119]: I0216 20:56:57.980427 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 16 20:56:57.981713 master-0 kubenswrapper[4119]: I0216 20:56:57.980699 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 16 20:56:57.990455 master-0 kubenswrapper[4119]: I0216 20:56:57.990363 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6"
Feb 16 20:56:57.992692 master-0 kubenswrapper[4119]: I0216 20:56:57.991821 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft"]
Feb 16 20:56:57.992692 master-0 kubenswrapper[4119]: I0216 20:56:57.992339 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft"
Feb 16 20:56:57.992859 master-0 kubenswrapper[4119]: I0216 20:56:57.992755 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 16 20:56:57.999386 master-0 kubenswrapper[4119]: I0216 20:56:57.998560 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9"]
Feb 16 20:56:57.999386 master-0 kubenswrapper[4119]: I0216 20:56:57.999250 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9"
Feb 16 20:56:58.002443 master-0 kubenswrapper[4119]: I0216 20:56:58.000559 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq"]
Feb 16 20:56:58.002443 master-0 kubenswrapper[4119]: I0216 20:56:58.001303 4119 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq"
Feb 16 20:56:58.002443 master-0 kubenswrapper[4119]: I0216 20:56:58.001357 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 16 20:56:58.002443 master-0 kubenswrapper[4119]: I0216 20:56:58.001530 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 16 20:56:58.002443 master-0 kubenswrapper[4119]: I0216 20:56:58.001827 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 16 20:56:58.002443 master-0 kubenswrapper[4119]: I0216 20:56:58.002082 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 16 20:56:58.002443 master-0 kubenswrapper[4119]: I0216 20:56:58.002278 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Feb 16 20:56:58.002849 master-0 kubenswrapper[4119]: I0216 20:56:58.002480 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Feb 16 20:56:58.002849 master-0 kubenswrapper[4119]: I0216 20:56:58.002579 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw"]
Feb 16 20:56:58.002849 master-0 kubenswrapper[4119]: I0216 20:56:58.002783 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Feb 16 20:56:58.004892 master-0 kubenswrapper[4119]: I0216 20:56:58.002982 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Feb 16 20:56:58.004892 master-0 kubenswrapper[4119]: I0216 20:56:58.003158 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw"
Feb 16 20:56:58.004892 master-0 kubenswrapper[4119]: I0216 20:56:58.003185 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 16 20:56:58.004892 master-0 kubenswrapper[4119]: I0216 20:56:58.003428 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 16 20:56:58.004892 master-0 kubenswrapper[4119]: I0216 20:56:58.003707 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 16 20:56:58.004892 master-0 kubenswrapper[4119]: I0216 20:56:58.003889 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 16 20:56:58.004892 master-0 kubenswrapper[4119]: I0216 20:56:58.004070 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 20:56:58.004892 master-0 kubenswrapper[4119]: I0216 20:56:58.004287 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 16 20:56:58.006079 master-0 kubenswrapper[4119]: I0216 20:56:58.005891 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 16 20:56:58.006138 master-0 kubenswrapper[4119]: I0216 20:56:58.006002 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 16 20:56:58.009681 master-0 kubenswrapper[4119]: I0216 20:56:58.006441 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 16 20:56:58.009681 master-0 kubenswrapper[4119]: I0216 20:56:58.009483 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 16 20:56:58.009811 master-0 kubenswrapper[4119]: I0216 20:56:58.009768 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 16 20:56:58.010987 master-0 kubenswrapper[4119]: I0216 20:56:58.010073 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 16 20:56:58.010987 master-0 kubenswrapper[4119]: I0216 20:56:58.010095 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 16 20:56:58.010987 master-0 kubenswrapper[4119]: I0216 20:56:58.010169 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 20:56:58.010987 master-0 kubenswrapper[4119]: I0216 20:56:58.010184 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 16 20:56:58.015116 master-0 kubenswrapper[4119]: I0216 20:56:58.015076 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 16 20:56:58.015433 master-0 kubenswrapper[4119]: I0216 20:56:58.015399 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-7c64d55f8-z46jt"]
Feb 16 20:56:58.017714 master-0 kubenswrapper[4119]: I0216 20:56:58.016167 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-755d954778-8gnq5"]
Feb 16 20:56:58.017714 master-0 kubenswrapper[4119]: I0216 20:56:58.016363 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt"
Feb 16 20:56:58.017714 master-0 kubenswrapper[4119]: I0216 20:56:58.017284 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5"
Feb 16 20:56:58.021340 master-0 kubenswrapper[4119]: I0216 20:56:58.018777 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb"]
Feb 16 20:56:58.021340 master-0 kubenswrapper[4119]: I0216 20:56:58.019245 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb"
Feb 16 20:56:58.021340 master-0 kubenswrapper[4119]: I0216 20:56:58.021314 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 16 20:56:58.021496 master-0 kubenswrapper[4119]: I0216 20:56:58.021375 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 16 20:56:58.021561 master-0 kubenswrapper[4119]: I0216 20:56:58.021331 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 16 20:56:58.021727 master-0 kubenswrapper[4119]: I0216 20:56:58.021684 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 16 20:56:58.021795 master-0 kubenswrapper[4119]: I0216 20:56:58.021754 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 16 20:56:58.021882 master-0 kubenswrapper[4119]: I0216 20:56:58.021856 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 16 20:56:58.023858 master-0 kubenswrapper[4119]: I0216 20:56:58.023824 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv"]
Feb 16 20:56:58.024141 master-0 kubenswrapper[4119]: I0216 20:56:58.024113 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 16 20:56:58.024294 master-0 kubenswrapper[4119]: I0216 20:56:58.024256 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 16 20:56:58.024355 master-0 kubenswrapper[4119]: I0216 20:56:58.024327 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 16 20:56:58.024662 master-0 kubenswrapper[4119]: I0216 20:56:58.024609 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv"
Feb 16 20:56:58.029697 master-0 kubenswrapper[4119]: I0216 20:56:58.028935 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Feb 16 20:56:58.030167 master-0 kubenswrapper[4119]: I0216 20:56:58.029750 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Feb 16 20:56:58.036885 master-0 kubenswrapper[4119]: I0216 20:56:58.033640 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96"]
Feb 16 20:56:58.041679 master-0 kubenswrapper[4119]: I0216 20:56:58.040488 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96"
Feb 16 20:56:58.041679 master-0 kubenswrapper[4119]: I0216 20:56:58.040670 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-profile-collector-cert\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6"
Feb 16 20:56:58.041679 master-0 kubenswrapper[4119]: I0216 20:56:58.040745 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-service-ca-bundle\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5"
Feb 16 20:56:58.041679 master-0 kubenswrapper[4119]: I0216 20:56:58.040792 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-ca\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz"
Feb 16 20:56:58.041679 master-0 kubenswrapper[4119]: I0216 20:56:58.040831 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-client\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz"
Feb 16 20:56:58.041679 master-0 kubenswrapper[4119]: I0216 20:56:58.040870 4119
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb"
Feb 16 20:56:58.041679 master-0 kubenswrapper[4119]: I0216 20:56:58.040920 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-profile-collector-cert\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq"
Feb 16 20:56:58.041679 master-0 kubenswrapper[4119]: I0216 20:56:58.040952 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ec7dd4ea-a139-45d4-96a4-506da1567292-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn"
Feb 16 20:56:58.041679 master-0 kubenswrapper[4119]: I0216 20:56:58.041012 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qd6r\" (UniqueName: \"kubernetes.io/projected/2506c282-0b37-4ece-8a0c-885d0b7f7901-kube-api-access-6qd6r\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4"
Feb 16 20:56:58.041679 master-0 kubenswrapper[4119]: I0216 20:56:58.041052 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27c20f63-9bfb-4703-94d5-0c65475e08d1-serving-cert\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5"
Feb 16 20:56:58.041679 master-0 kubenswrapper[4119]: I0216 20:56:58.041097 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-config\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5"
Feb 16 20:56:58.041679 master-0 kubenswrapper[4119]: I0216 20:56:58.041175 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7333319-3fe6-4b3f-b600-6b6df49fcaff-config\") pod \"kube-storage-version-migrator-operator-cd5474998-56v4p\" (UID: \"c7333319-3fe6-4b3f-b600-6b6df49fcaff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p"
Feb 16 20:56:58.041679 master-0 kubenswrapper[4119]: I0216 20:56:58.041216 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls\") pod \"dns-operator-86b8869b79-cdltb\" (UID: \"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd\") " pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb"
Feb 16 20:56:58.041679 master-0 kubenswrapper[4119]: I0216 20:56:58.041259 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9vmp\" (UniqueName: \"kubernetes.io/projected/9e0227bc-63f5-48be-95dc-1323a2b2e327-kube-api-access-z9vmp\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb"
Feb 16 20:56:58.041679 master-0 kubenswrapper[4119]: I0216 20:56:58.041299 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e0227bc-63f5-48be-95dc-1323a2b2e327-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb"
Feb 16 20:56:58.042115 master-0 kubenswrapper[4119]: I0216 20:56:58.041332 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq"
Feb 16 20:56:58.042115 master-0 kubenswrapper[4119]: I0216 20:56:58.041371 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx4tz\" (UniqueName: \"kubernetes.io/projected/b27de289-c0f9-47ff-aac6-15b7bc1b178a-kube-api-access-fx4tz\") pod \"multus-admission-controller-7c64d55f8-z46jt\" (UID: \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt"
Feb 16 20:56:58.042115 master-0 kubenswrapper[4119]: I0216 20:56:58.041422 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-serving-cert\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz"
Feb 16
20:56:58.042115 master-0 kubenswrapper[4119]: I0216 20:56:58.041461 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn"
Feb 16 20:56:58.042115 master-0 kubenswrapper[4119]: I0216 20:56:58.041497 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jt7h\" (UniqueName: \"kubernetes.io/projected/ec7dd4ea-a139-45d4-96a4-506da1567292-kube-api-access-9jt7h\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn"
Feb 16 20:56:58.042115 master-0 kubenswrapper[4119]: I0216 20:56:58.041532 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e0227bc-63f5-48be-95dc-1323a2b2e327-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb"
Feb 16 20:56:58.042115 master-0 kubenswrapper[4119]: I0216 20:56:58.041785 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7333319-3fe6-4b3f-b600-6b6df49fcaff-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-56v4p\" (UID: \"c7333319-3fe6-4b3f-b600-6b6df49fcaff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p"
Feb 16 20:56:58.042115 master-0 kubenswrapper[4119]: I0216 20:56:58.041837 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrc7l\" (UniqueName: \"kubernetes.io/projected/2e618c5c-52be-4b52-b426-b92555dee9de-kube-api-access-nrc7l\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6"
Feb 16 20:56:58.042115 master-0 kubenswrapper[4119]: I0216 20:56:58.042088 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-7p9ft\" (UID: \"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft"
Feb 16 20:56:58.043617 master-0 kubenswrapper[4119]: I0216 20:56:58.042600 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-config\") pod \"kube-controller-manager-operator-78ff47c7c5-7p9ft\" (UID: \"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft"
Feb 16 20:56:58.043617 master-0 kubenswrapper[4119]: I0216 20:56:58.042761 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-7p9ft\" (UID: \"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft"
Feb 16 20:56:58.043617 master-0 kubenswrapper[4119]: I0216 20:56:58.043095 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-trusted-ca-bundle\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5"
Feb 16 20:56:58.043856 master-0 kubenswrapper[4119]: I0216 20:56:58.043823 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-tvzdw\" (UID: \"6b6be6de-6fcc-4f57-b163-fe8f970a01a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw"
Feb 16 20:56:58.043910 master-0 kubenswrapper[4119]: I0216 20:56:58.043876 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx2kd\" (UniqueName: \"kubernetes.io/projected/c7333319-3fe6-4b3f-b600-6b6df49fcaff-kube-api-access-qx2kd\") pod \"kube-storage-version-migrator-operator-cd5474998-56v4p\" (UID: \"c7333319-3fe6-4b3f-b600-6b6df49fcaff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p"
Feb 16 20:56:58.043940 master-0 kubenswrapper[4119]: I0216 20:56:58.043921 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll4rg\" (UniqueName: \"kubernetes.io/projected/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-kube-api-access-ll4rg\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz"
Feb 16 20:56:58.043975 master-0 kubenswrapper[4119]: I0216 20:56:58.043959 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7wrr\" (UniqueName: \"kubernetes.io/projected/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-kube-api-access-p7wrr\") pod \"dns-operator-86b8869b79-cdltb\" (UID: \"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd\") " pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb"
Feb 16 20:56:58.044018 master-0 kubenswrapper[4119]: I0216 20:56:58.043994 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz"
Feb 16 20:56:58.044057 master-0 kubenswrapper[4119]: I0216 20:56:58.044027 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4"
Feb 16 20:56:58.044085 master-0 kubenswrapper[4119]: I0216 20:56:58.044062 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-z46jt\" (UID: \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt"
Feb 16 20:56:58.044115 master-0 kubenswrapper[4119]: I0216 20:56:58.044099 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert\") pod \"catalog-operator-588944557d-h7xl6\" (UID:
\"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" Feb 16 20:56:58.044194 master-0 kubenswrapper[4119]: I0216 20:56:58.044135 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sd27\" (UniqueName: \"kubernetes.io/projected/a4c9b781-14c0-469c-bb9e-0c3982a04520-kube-api-access-8sd27\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" Feb 16 20:56:58.044194 master-0 kubenswrapper[4119]: I0216 20:56:58.044173 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjsnz\" (UniqueName: \"kubernetes.io/projected/27c20f63-9bfb-4703-94d5-0c65475e08d1-kube-api-access-hjsnz\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 20:56:58.044284 master-0 kubenswrapper[4119]: I0216 20:56:58.044197 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-config\") pod \"openshift-apiserver-operator-6d4655d9cf-tvzdw\" (UID: \"6b6be6de-6fcc-4f57-b163-fe8f970a01a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw" Feb 16 20:56:58.044284 master-0 kubenswrapper[4119]: I0216 20:56:58.044234 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkz65\" (UniqueName: \"kubernetes.io/projected/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-kube-api-access-mkz65\") pod \"openshift-apiserver-operator-6d4655d9cf-tvzdw\" (UID: \"6b6be6de-6fcc-4f57-b163-fe8f970a01a4\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw" Feb 16 20:56:58.044574 master-0 kubenswrapper[4119]: I0216 20:56:58.044555 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 20:56:58.044636 master-0 kubenswrapper[4119]: I0216 20:56:58.044590 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2506c282-0b37-4ece-8a0c-885d0b7f7901-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 20:56:58.044984 master-0 kubenswrapper[4119]: I0216 20:56:58.044957 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-config\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 20:56:58.045336 master-0 kubenswrapper[4119]: I0216 20:56:58.045292 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 20:56:58.045515 master-0 kubenswrapper[4119]: I0216 20:56:58.045475 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 20:56:58.045812 master-0 kubenswrapper[4119]: I0216 20:56:58.045780 4119 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d"] Feb 16 20:56:58.045969 master-0 kubenswrapper[4119]: I0216 20:56:58.045945 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 20:56:58.054928 master-0 kubenswrapper[4119]: I0216 20:56:58.048083 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4"] Feb 16 20:56:58.054928 master-0 kubenswrapper[4119]: I0216 20:56:58.048874 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 20:56:58.054928 master-0 kubenswrapper[4119]: I0216 20:56:58.049346 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" Feb 16 20:56:58.054928 master-0 kubenswrapper[4119]: I0216 20:56:58.049937 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl"] Feb 16 20:56:58.054928 master-0 kubenswrapper[4119]: I0216 20:56:58.051800 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" Feb 16 20:56:58.054928 master-0 kubenswrapper[4119]: I0216 20:56:58.052221 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g"] Feb 16 20:56:58.054928 master-0 kubenswrapper[4119]: I0216 20:56:58.052278 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 20:56:58.054928 master-0 kubenswrapper[4119]: I0216 20:56:58.053878 4119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 20:56:58.055227 master-0 kubenswrapper[4119]: I0216 20:56:58.055025 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq"] Feb 16 20:56:58.055227 master-0 kubenswrapper[4119]: I0216 20:56:58.055108 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 20:56:58.064069 master-0 kubenswrapper[4119]: I0216 20:56:58.064022 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 20:56:58.066424 master-0 kubenswrapper[4119]: I0216 20:56:58.066377 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld"] Feb 16 20:56:58.066484 master-0 kubenswrapper[4119]: I0216 20:56:58.066432 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz"] Feb 16 20:56:58.066484 master-0 kubenswrapper[4119]: I0216 20:56:58.066449 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-86b8869b79-cdltb"] Feb 16 20:56:58.069359 master-0 kubenswrapper[4119]: I0216 20:56:58.068955 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 20:56:58.069359 master-0 kubenswrapper[4119]: I0216 20:56:58.068959 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 20:56:58.069359 master-0 kubenswrapper[4119]: I0216 20:56:58.069135 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 20:56:58.069359 master-0 kubenswrapper[4119]: I0216 20:56:58.069145 4119 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 16 20:56:58.069359 master-0 kubenswrapper[4119]: I0216 20:56:58.069250 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 20:56:58.069359 master-0 kubenswrapper[4119]: I0216 20:56:58.069325 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 16 20:56:58.069359 master-0 kubenswrapper[4119]: I0216 20:56:58.069349 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 20:56:58.069359 master-0 kubenswrapper[4119]: I0216 20:56:58.069377 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 20:56:58.069611 master-0 kubenswrapper[4119]: I0216 20:56:58.069428 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 20:56:58.069611 master-0 kubenswrapper[4119]: I0216 20:56:58.069438 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 20:56:58.069611 master-0 kubenswrapper[4119]: I0216 20:56:58.069572 4119 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-b68cj"] Feb 16 20:56:58.071532 master-0 kubenswrapper[4119]: I0216 20:56:58.069677 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 16 20:56:58.071532 master-0 kubenswrapper[4119]: I0216 20:56:58.070023 4119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-b68cj" Feb 16 20:56:58.071592 master-0 kubenswrapper[4119]: I0216 20:56:58.071551 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 20:56:58.073080 master-0 kubenswrapper[4119]: I0216 20:56:58.072797 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn"] Feb 16 20:56:58.074345 master-0 kubenswrapper[4119]: I0216 20:56:58.074303 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8"] Feb 16 20:56:58.075575 master-0 kubenswrapper[4119]: I0216 20:56:58.075543 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p"] Feb 16 20:56:58.077247 master-0 kubenswrapper[4119]: I0216 20:56:58.077212 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4"] Feb 16 20:56:58.079036 master-0 kubenswrapper[4119]: I0216 20:56:58.079004 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft"] Feb 16 20:56:58.081039 master-0 kubenswrapper[4119]: I0216 20:56:58.080166 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq"] Feb 16 20:56:58.083192 master-0 kubenswrapper[4119]: I0216 20:56:58.082049 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7c64d55f8-z46jt"] Feb 16 20:56:58.083611 master-0 kubenswrapper[4119]: I0216 20:56:58.083590 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication-operator/authentication-operator-755d954778-8gnq5"] Feb 16 20:56:58.084516 master-0 kubenswrapper[4119]: I0216 20:56:58.084471 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl"] Feb 16 20:56:58.085800 master-0 kubenswrapper[4119]: I0216 20:56:58.085469 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb"] Feb 16 20:56:58.088448 master-0 kubenswrapper[4119]: I0216 20:56:58.088418 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96"] Feb 16 20:56:58.089592 master-0 kubenswrapper[4119]: I0216 20:56:58.089558 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g"] Feb 16 20:56:58.090929 master-0 kubenswrapper[4119]: I0216 20:56:58.090891 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6"] Feb 16 20:56:58.091523 master-0 kubenswrapper[4119]: I0216 20:56:58.091467 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw"] Feb 16 20:56:58.092190 master-0 kubenswrapper[4119]: I0216 20:56:58.092157 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9"] Feb 16 20:56:58.092905 master-0 kubenswrapper[4119]: I0216 20:56:58.092866 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv"] Feb 16 20:56:58.093832 master-0 kubenswrapper[4119]: I0216 20:56:58.093794 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d"] Feb 16 
20:56:58.096368 master-0 kubenswrapper[4119]: I0216 20:56:58.096239 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4"] Feb 16 20:56:58.145816 master-0 kubenswrapper[4119]: I0216 20:56:58.145769 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-profile-collector-cert\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" Feb 16 20:56:58.145816 master-0 kubenswrapper[4119]: I0216 20:56:58.145808 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" Feb 16 20:56:58.145970 master-0 kubenswrapper[4119]: I0216 20:56:58.145840 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bcmr\" (UniqueName: \"kubernetes.io/projected/695549c8-d1fc-429d-9c9f-0a5915dc6074-kube-api-access-7bcmr\") pod \"openshift-controller-manager-operator-5f5f84757d-k42w9\" (UID: \"695549c8-d1fc-429d-9c9f-0a5915dc6074\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" Feb 16 20:56:58.145970 master-0 kubenswrapper[4119]: I0216 20:56:58.145865 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cef33294-81fb-41a2-811d-2565f94514d1-trusted-ca\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: 
\"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 20:56:58.145970 master-0 kubenswrapper[4119]: I0216 20:56:58.145885 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e062e07-8076-444c-b476-4eb2848e9613-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-pdjn4\" (UID: \"5e062e07-8076-444c-b476-4eb2848e9613\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" Feb 16 20:56:58.145970 master-0 kubenswrapper[4119]: I0216 20:56:58.145905 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7adbe32-b8b9-438e-a2e3-f93146a97424-config\") pod \"openshift-kube-scheduler-operator-7485d55966-xzww8\" (UID: \"e7adbe32-b8b9-438e-a2e3-f93146a97424\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" Feb 16 20:56:58.145970 master-0 kubenswrapper[4119]: I0216 20:56:58.145926 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ec7dd4ea-a139-45d4-96a4-506da1567292-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn" Feb 16 20:56:58.146169 master-0 kubenswrapper[4119]: I0216 20:56:58.145948 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qd6r\" (UniqueName: \"kubernetes.io/projected/2506c282-0b37-4ece-8a0c-885d0b7f7901-kube-api-access-6qd6r\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" 
Feb 16 20:56:58.146350 master-0 kubenswrapper[4119]: I0216 20:56:58.146296 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27c20f63-9bfb-4703-94d5-0c65475e08d1-serving-cert\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 20:56:58.147097 master-0 kubenswrapper[4119]: I0216 20:56:58.146363 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 20:56:58.147097 master-0 kubenswrapper[4119]: I0216 20:56:58.146400 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b02b740-5698-4e9a-90fe-2873bd0b0958-config\") pod \"kube-apiserver-operator-54984b6678-cl5ld\" (UID: \"0b02b740-5698-4e9a-90fe-2873bd0b0958\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" Feb 16 20:56:58.147097 master-0 kubenswrapper[4119]: I0216 20:56:58.146457 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7333319-3fe6-4b3f-b600-6b6df49fcaff-config\") pod \"kube-storage-version-migrator-operator-cd5474998-56v4p\" (UID: \"c7333319-3fe6-4b3f-b600-6b6df49fcaff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" Feb 16 20:56:58.147097 master-0 kubenswrapper[4119]: I0216 20:56:58.146501 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-config\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 20:56:58.147097 master-0 kubenswrapper[4119]: I0216 20:56:58.146529 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 20:56:58.147097 master-0 kubenswrapper[4119]: I0216 20:56:58.146554 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cef33294-81fb-41a2-811d-2565f94514d1-bound-sa-token\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 20:56:58.147097 master-0 kubenswrapper[4119]: I0216 20:56:58.146593 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls\") pod \"dns-operator-86b8869b79-cdltb\" (UID: \"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd\") " pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb" Feb 16 20:56:58.147097 master-0 kubenswrapper[4119]: I0216 20:56:58.146618 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/5e062e07-8076-444c-b476-4eb2848e9613-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-pdjn4\" (UID: \"5e062e07-8076-444c-b476-4eb2848e9613\") " 
pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" Feb 16 20:56:58.147097 master-0 kubenswrapper[4119]: I0216 20:56:58.146666 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9vmp\" (UniqueName: \"kubernetes.io/projected/9e0227bc-63f5-48be-95dc-1323a2b2e327-kube-api-access-z9vmp\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" Feb 16 20:56:58.147097 master-0 kubenswrapper[4119]: I0216 20:56:58.146694 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfmv6\" (UniqueName: \"kubernetes.io/projected/5e062e07-8076-444c-b476-4eb2848e9613-kube-api-access-dfmv6\") pod \"cluster-olm-operator-55b69c6c48-pdjn4\" (UID: \"5e062e07-8076-444c-b476-4eb2848e9613\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" Feb 16 20:56:58.147097 master-0 kubenswrapper[4119]: I0216 20:56:58.146728 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pw88\" (UniqueName: \"kubernetes.io/projected/2ab0a907-7abe-4808-ba21-bdda1506eae2-kube-api-access-9pw88\") pod \"service-ca-operator-5dc4688546-q5vjl\" (UID: \"2ab0a907-7abe-4808-ba21-bdda1506eae2\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" Feb 16 20:56:58.147097 master-0 kubenswrapper[4119]: I0216 20:56:58.146757 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e0227bc-63f5-48be-95dc-1323a2b2e327-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" Feb 16 20:56:58.147097 master-0 kubenswrapper[4119]: I0216 
20:56:58.146787 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 20:56:58.147097 master-0 kubenswrapper[4119]: I0216 20:56:58.146817 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vklwz\" (UniqueName: \"kubernetes.io/projected/59237aa6-6250-4619-8ee5-abae59f04b57-kube-api-access-vklwz\") pod \"openshift-config-operator-7c6bdb986f-xbd96\" (UID: \"59237aa6-6250-4619-8ee5-abae59f04b57\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 20:56:58.147097 master-0 kubenswrapper[4119]: I0216 20:56:58.146842 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d9d71a7a-a751-4de4-9c76-9bac85fe0177-iptables-alerter-script\") pod \"iptables-alerter-b68cj\" (UID: \"d9d71a7a-a751-4de4-9c76-9bac85fe0177\") " pod="openshift-network-operator/iptables-alerter-b68cj" Feb 16 20:56:58.149486 master-0 kubenswrapper[4119]: I0216 20:56:58.146870 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-serving-cert\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 20:56:58.149486 master-0 kubenswrapper[4119]: I0216 20:56:58.146891 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert\") pod 
\"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" Feb 16 20:56:58.149486 master-0 kubenswrapper[4119]: I0216 20:56:58.146913 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx4tz\" (UniqueName: \"kubernetes.io/projected/b27de289-c0f9-47ff-aac6-15b7bc1b178a-kube-api-access-fx4tz\") pod \"multus-admission-controller-7c64d55f8-z46jt\" (UID: \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" Feb 16 20:56:58.149486 master-0 kubenswrapper[4119]: I0216 20:56:58.146935 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tklr\" (UniqueName: \"kubernetes.io/projected/cef33294-81fb-41a2-811d-2565f94514d1-kube-api-access-5tklr\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 20:56:58.149486 master-0 kubenswrapper[4119]: I0216 20:56:58.146962 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e0227bc-63f5-48be-95dc-1323a2b2e327-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" Feb 16 20:56:58.149486 master-0 kubenswrapper[4119]: I0216 20:56:58.146983 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn" 
Feb 16 20:56:58.149486 master-0 kubenswrapper[4119]: I0216 20:56:58.147004 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jt7h\" (UniqueName: \"kubernetes.io/projected/ec7dd4ea-a139-45d4-96a4-506da1567292-kube-api-access-9jt7h\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn" Feb 16 20:56:58.149486 master-0 kubenswrapper[4119]: I0216 20:56:58.147034 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b02b740-5698-4e9a-90fe-2873bd0b0958-serving-cert\") pod \"kube-apiserver-operator-54984b6678-cl5ld\" (UID: \"0b02b740-5698-4e9a-90fe-2873bd0b0958\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" Feb 16 20:56:58.149486 master-0 kubenswrapper[4119]: I0216 20:56:58.147066 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7333319-3fe6-4b3f-b600-6b6df49fcaff-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-56v4p\" (UID: \"c7333319-3fe6-4b3f-b600-6b6df49fcaff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" Feb 16 20:56:58.149486 master-0 kubenswrapper[4119]: E0216 20:56:58.146297 4119 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 16 20:56:58.149486 master-0 kubenswrapper[4119]: I0216 20:56:58.147091 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7adbe32-b8b9-438e-a2e3-f93146a97424-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-xzww8\" (UID: 
\"e7adbe32-b8b9-438e-a2e3-f93146a97424\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" Feb 16 20:56:58.149486 master-0 kubenswrapper[4119]: E0216 20:56:58.147121 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls podName:9e0227bc-63f5-48be-95dc-1323a2b2e327 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:58.647103052 +0000 UTC m=+114.577029070 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-4gczb" (UID: "9e0227bc-63f5-48be-95dc-1323a2b2e327") : secret "image-registry-operator-tls" not found Feb 16 20:56:58.149486 master-0 kubenswrapper[4119]: I0216 20:56:58.147142 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d9d71a7a-a751-4de4-9c76-9bac85fe0177-host-slash\") pod \"iptables-alerter-b68cj\" (UID: \"d9d71a7a-a751-4de4-9c76-9bac85fe0177\") " pod="openshift-network-operator/iptables-alerter-b68cj" Feb 16 20:56:58.149486 master-0 kubenswrapper[4119]: I0216 20:56:58.147169 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrc7l\" (UniqueName: \"kubernetes.io/projected/2e618c5c-52be-4b52-b426-b92555dee9de-kube-api-access-nrc7l\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" Feb 16 20:56:58.149486 master-0 kubenswrapper[4119]: I0216 20:56:58.147193 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-config\") pod 
\"kube-controller-manager-operator-78ff47c7c5-7p9ft\" (UID: \"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" Feb 16 20:56:58.154568 master-0 kubenswrapper[4119]: I0216 20:56:58.147209 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-7p9ft\" (UID: \"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" Feb 16 20:56:58.154568 master-0 kubenswrapper[4119]: I0216 20:56:58.147229 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7nmb\" (UniqueName: \"kubernetes.io/projected/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-kube-api-access-g7nmb\") pod \"package-server-manager-5c696dbdcd-9m94g\" (UID: \"4b035e85-b2b0-4dee-bb86-3465fc4b98a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 20:56:58.154568 master-0 kubenswrapper[4119]: I0216 20:56:58.147264 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-7p9ft\" (UID: \"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" Feb 16 20:56:58.154568 master-0 kubenswrapper[4119]: I0216 20:56:58.147286 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67qzh\" (UniqueName: \"kubernetes.io/projected/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-kube-api-access-67qzh\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: 
\"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 20:56:58.154568 master-0 kubenswrapper[4119]: I0216 20:56:58.147307 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-9m94g\" (UID: \"4b035e85-b2b0-4dee-bb86-3465fc4b98a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 20:56:58.154568 master-0 kubenswrapper[4119]: I0216 20:56:58.147338 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-trusted-ca-bundle\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 20:56:58.154568 master-0 kubenswrapper[4119]: I0216 20:56:58.147366 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ab0a907-7abe-4808-ba21-bdda1506eae2-serving-cert\") pod \"service-ca-operator-5dc4688546-q5vjl\" (UID: \"2ab0a907-7abe-4808-ba21-bdda1506eae2\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" Feb 16 20:56:58.154568 master-0 kubenswrapper[4119]: I0216 20:56:58.147391 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-tvzdw\" (UID: \"6b6be6de-6fcc-4f57-b163-fe8f970a01a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw" Feb 16 20:56:58.154568 
master-0 kubenswrapper[4119]: I0216 20:56:58.147418 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll4rg\" (UniqueName: \"kubernetes.io/projected/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-kube-api-access-ll4rg\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 20:56:58.154568 master-0 kubenswrapper[4119]: I0216 20:56:58.147443 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx2kd\" (UniqueName: \"kubernetes.io/projected/c7333319-3fe6-4b3f-b600-6b6df49fcaff-kube-api-access-qx2kd\") pod \"kube-storage-version-migrator-operator-cd5474998-56v4p\" (UID: \"c7333319-3fe6-4b3f-b600-6b6df49fcaff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" Feb 16 20:56:58.154568 master-0 kubenswrapper[4119]: I0216 20:56:58.147562 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 20:56:58.154568 master-0 kubenswrapper[4119]: I0216 20:56:58.147627 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 20:56:58.154568 master-0 kubenswrapper[4119]: I0216 20:56:58.147738 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7wrr\" (UniqueName: 
\"kubernetes.io/projected/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-kube-api-access-p7wrr\") pod \"dns-operator-86b8869b79-cdltb\" (UID: \"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd\") " pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb" Feb 16 20:56:58.154568 master-0 kubenswrapper[4119]: I0216 20:56:58.147819 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/695549c8-d1fc-429d-9c9f-0a5915dc6074-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-k42w9\" (UID: \"695549c8-d1fc-429d-9c9f-0a5915dc6074\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" Feb 16 20:56:58.155059 master-0 kubenswrapper[4119]: I0216 20:56:58.147878 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/695549c8-d1fc-429d-9c9f-0a5915dc6074-config\") pod \"openshift-controller-manager-operator-5f5f84757d-k42w9\" (UID: \"695549c8-d1fc-429d-9c9f-0a5915dc6074\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" Feb 16 20:56:58.155059 master-0 kubenswrapper[4119]: I0216 20:56:58.147913 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-z46jt\" (UID: \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" Feb 16 20:56:58.155059 master-0 kubenswrapper[4119]: I0216 20:56:58.147947 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" Feb 16 20:56:58.155059 master-0 kubenswrapper[4119]: I0216 20:56:58.147983 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b02b740-5698-4e9a-90fe-2873bd0b0958-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-cl5ld\" (UID: \"0b02b740-5698-4e9a-90fe-2873bd0b0958\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" Feb 16 20:56:58.155059 master-0 kubenswrapper[4119]: I0216 20:56:58.148024 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59237aa6-6250-4619-8ee5-abae59f04b57-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-xbd96\" (UID: \"59237aa6-6250-4619-8ee5-abae59f04b57\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 20:56:58.155059 master-0 kubenswrapper[4119]: I0216 20:56:58.148062 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sd27\" (UniqueName: \"kubernetes.io/projected/a4c9b781-14c0-469c-bb9e-0c3982a04520-kube-api-access-8sd27\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" Feb 16 20:56:58.155059 master-0 kubenswrapper[4119]: I0216 20:56:58.148119 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjsnz\" (UniqueName: \"kubernetes.io/projected/27c20f63-9bfb-4703-94d5-0c65475e08d1-kube-api-access-hjsnz\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 20:56:58.155059 master-0 kubenswrapper[4119]: I0216 20:56:58.148219 4119 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-config\") pod \"openshift-apiserver-operator-6d4655d9cf-tvzdw\" (UID: \"6b6be6de-6fcc-4f57-b163-fe8f970a01a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw" Feb 16 20:56:58.155059 master-0 kubenswrapper[4119]: I0216 20:56:58.148260 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkz65\" (UniqueName: \"kubernetes.io/projected/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-kube-api-access-mkz65\") pod \"openshift-apiserver-operator-6d4655d9cf-tvzdw\" (UID: \"6b6be6de-6fcc-4f57-b163-fe8f970a01a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw" Feb 16 20:56:58.155059 master-0 kubenswrapper[4119]: I0216 20:56:58.148295 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkdzb\" (UniqueName: \"kubernetes.io/projected/d9d71a7a-a751-4de4-9c76-9bac85fe0177-kube-api-access-jkdzb\") pod \"iptables-alerter-b68cj\" (UID: \"d9d71a7a-a751-4de4-9c76-9bac85fe0177\") " pod="openshift-network-operator/iptables-alerter-b68cj" Feb 16 20:56:58.155059 master-0 kubenswrapper[4119]: I0216 20:56:58.148325 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 20:56:58.155059 master-0 kubenswrapper[4119]: I0216 20:56:58.148348 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/2506c282-0b37-4ece-8a0c-885d0b7f7901-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 20:56:58.155059 master-0 kubenswrapper[4119]: I0216 20:56:58.148375 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-config\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 20:56:58.155059 master-0 kubenswrapper[4119]: I0216 20:56:58.148399 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/59237aa6-6250-4619-8ee5-abae59f04b57-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-xbd96\" (UID: \"59237aa6-6250-4619-8ee5-abae59f04b57\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 20:56:58.155059 master-0 kubenswrapper[4119]: I0216 20:56:58.148429 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab0a907-7abe-4808-ba21-bdda1506eae2-config\") pod \"service-ca-operator-5dc4688546-q5vjl\" (UID: \"2ab0a907-7abe-4808-ba21-bdda1506eae2\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" Feb 16 20:56:58.155587 master-0 kubenswrapper[4119]: I0216 20:56:58.148454 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw9lp\" (UniqueName: \"kubernetes.io/projected/4085413c-9af1-4d2a-ba0f-33b42025cb7f-kube-api-access-dw9lp\") pod \"csi-snapshot-controller-operator-7b87b97578-v7xdv\" (UID: \"4085413c-9af1-4d2a-ba0f-33b42025cb7f\") 
" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv" Feb 16 20:56:58.155587 master-0 kubenswrapper[4119]: I0216 20:56:58.148476 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-ca\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 20:56:58.155587 master-0 kubenswrapper[4119]: I0216 20:56:58.148498 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-client\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 20:56:58.155587 master-0 kubenswrapper[4119]: I0216 20:56:58.148523 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-profile-collector-cert\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" Feb 16 20:56:58.155587 master-0 kubenswrapper[4119]: I0216 20:56:58.148545 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-service-ca-bundle\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 20:56:58.155587 master-0 kubenswrapper[4119]: I0216 20:56:58.148568 4119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7adbe32-b8b9-438e-a2e3-f93146a97424-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-xzww8\" (UID: \"e7adbe32-b8b9-438e-a2e3-f93146a97424\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" Feb 16 20:56:58.155587 master-0 kubenswrapper[4119]: I0216 20:56:58.148580 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-config\") pod \"kube-controller-manager-operator-78ff47c7c5-7p9ft\" (UID: \"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" Feb 16 20:56:58.155587 master-0 kubenswrapper[4119]: I0216 20:56:58.150938 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7333319-3fe6-4b3f-b600-6b6df49fcaff-config\") pod \"kube-storage-version-migrator-operator-cd5474998-56v4p\" (UID: \"c7333319-3fe6-4b3f-b600-6b6df49fcaff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" Feb 16 20:56:58.155587 master-0 kubenswrapper[4119]: E0216 20:56:58.151677 4119 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 16 20:56:58.155587 master-0 kubenswrapper[4119]: E0216 20:56:58.151739 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert podName:a4c9b781-14c0-469c-bb9e-0c3982a04520 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:58.651715861 +0000 UTC m=+114.581641959 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert") pod "olm-operator-6b56bd877c-vlhvq" (UID: "a4c9b781-14c0-469c-bb9e-0c3982a04520") : secret "olm-operator-serving-cert" not found Feb 16 20:56:58.155587 master-0 kubenswrapper[4119]: E0216 20:56:58.151784 4119 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 16 20:56:58.155587 master-0 kubenswrapper[4119]: E0216 20:56:58.151926 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls podName:456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd nodeName:}" failed. No retries permitted until 2026-02-16 20:56:58.651846815 +0000 UTC m=+114.581772863 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls") pod "dns-operator-86b8869b79-cdltb" (UID: "456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd") : secret "metrics-tls" not found Feb 16 20:56:58.155587 master-0 kubenswrapper[4119]: I0216 20:56:58.154237 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-config\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 20:56:58.155587 master-0 kubenswrapper[4119]: I0216 20:56:58.155290 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-trusted-ca-bundle\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 20:56:58.155587 master-0 kubenswrapper[4119]: E0216 
20:56:58.155383 4119 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 20:56:58.155587 master-0 kubenswrapper[4119]: I0216 20:56:58.155394 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2506c282-0b37-4ece-8a0c-885d0b7f7901-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 20:56:58.157115 master-0 kubenswrapper[4119]: E0216 20:56:58.155437 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls podName:ec7dd4ea-a139-45d4-96a4-506da1567292 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:58.655422587 +0000 UTC m=+114.585348605 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-w57zn" (UID: "ec7dd4ea-a139-45d4-96a4-506da1567292") : secret "cluster-monitoring-operator-tls" not found Feb 16 20:56:58.157115 master-0 kubenswrapper[4119]: I0216 20:56:58.155890 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-config\") pod \"openshift-apiserver-operator-6d4655d9cf-tvzdw\" (UID: \"6b6be6de-6fcc-4f57-b163-fe8f970a01a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw" Feb 16 20:56:58.157115 master-0 kubenswrapper[4119]: I0216 20:56:58.156066 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27c20f63-9bfb-4703-94d5-0c65475e08d1-serving-cert\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 20:56:58.157115 master-0 kubenswrapper[4119]: I0216 20:56:58.156352 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e0227bc-63f5-48be-95dc-1323a2b2e327-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" Feb 16 20:56:58.157115 master-0 kubenswrapper[4119]: E0216 20:56:58.156397 4119 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 16 20:56:58.157115 master-0 kubenswrapper[4119]: E0216 20:56:58.156580 4119 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert podName:2e618c5c-52be-4b52-b426-b92555dee9de nodeName:}" failed. No retries permitted until 2026-02-16 20:56:58.656561686 +0000 UTC m=+114.586487704 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert") pod "catalog-operator-588944557d-h7xl6" (UID: "2e618c5c-52be-4b52-b426-b92555dee9de") : secret "catalog-operator-serving-cert" not found Feb 16 20:56:58.157329 master-0 kubenswrapper[4119]: E0216 20:56:58.157155 4119 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 16 20:56:58.157329 master-0 kubenswrapper[4119]: E0216 20:56:58.157251 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls podName:2506c282-0b37-4ece-8a0c-885d0b7f7901 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:58.657229993 +0000 UTC m=+114.587156001 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-kh4d4" (UID: "2506c282-0b37-4ece-8a0c-885d0b7f7901") : secret "node-tuning-operator-tls" not found Feb 16 20:56:58.157329 master-0 kubenswrapper[4119]: I0216 20:56:58.157284 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ec7dd4ea-a139-45d4-96a4-506da1567292-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn" Feb 16 20:56:58.157444 master-0 kubenswrapper[4119]: E0216 20:56:58.157356 4119 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 16 20:56:58.160619 master-0 kubenswrapper[4119]: I0216 20:56:58.157690 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-serving-cert\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 20:56:58.160619 master-0 kubenswrapper[4119]: I0216 20:56:58.157796 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7333319-3fe6-4b3f-b600-6b6df49fcaff-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-56v4p\" (UID: \"c7333319-3fe6-4b3f-b600-6b6df49fcaff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" Feb 16 20:56:58.160619 master-0 kubenswrapper[4119]: I0216 20:56:58.159452 4119 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-profile-collector-cert\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" Feb 16 20:56:58.160619 master-0 kubenswrapper[4119]: I0216 20:56:58.160626 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-tvzdw\" (UID: \"6b6be6de-6fcc-4f57-b163-fe8f970a01a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw" Feb 16 20:56:58.161799 master-0 kubenswrapper[4119]: I0216 20:56:58.161768 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 20:56:58.162034 master-0 kubenswrapper[4119]: E0216 20:56:58.161992 4119 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 16 20:56:58.162165 master-0 kubenswrapper[4119]: I0216 20:56:58.162101 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-7p9ft\" (UID: \"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" Feb 16 20:56:58.162320 master-0 kubenswrapper[4119]: E0216 20:56:58.162283 4119 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert podName:2506c282-0b37-4ece-8a0c-885d0b7f7901 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:58.662251933 +0000 UTC m=+114.592177981 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-kh4d4" (UID: "2506c282-0b37-4ece-8a0c-885d0b7f7901") : secret "performance-addon-operator-webhook-cert" not found Feb 16 20:56:58.162378 master-0 kubenswrapper[4119]: I0216 20:56:58.162333 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-config\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 20:56:58.162378 master-0 kubenswrapper[4119]: E0216 20:56:58.162359 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs podName:b27de289-c0f9-47ff-aac6-15b7bc1b178a nodeName:}" failed. No retries permitted until 2026-02-16 20:56:58.662346205 +0000 UTC m=+114.592272253 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs") pod "multus-admission-controller-7c64d55f8-z46jt" (UID: "b27de289-c0f9-47ff-aac6-15b7bc1b178a") : secret "multus-admission-controller-secret" not found Feb 16 20:56:58.162552 master-0 kubenswrapper[4119]: I0216 20:56:58.162523 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-ca\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 20:56:58.162799 master-0 kubenswrapper[4119]: I0216 20:56:58.162766 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-service-ca-bundle\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 20:56:58.164820 master-0 kubenswrapper[4119]: I0216 20:56:58.164764 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-profile-collector-cert\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" Feb 16 20:56:58.170072 master-0 kubenswrapper[4119]: I0216 20:56:58.170037 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-client\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 
20:56:58.207387 master-0 kubenswrapper[4119]: I0216 20:56:58.207335 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7wrr\" (UniqueName: \"kubernetes.io/projected/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-kube-api-access-p7wrr\") pod \"dns-operator-86b8869b79-cdltb\" (UID: \"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd\") " pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb" Feb 16 20:56:58.207715 master-0 kubenswrapper[4119]: I0216 20:56:58.207668 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sd27\" (UniqueName: \"kubernetes.io/projected/a4c9b781-14c0-469c-bb9e-0c3982a04520-kube-api-access-8sd27\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" Feb 16 20:56:58.207939 master-0 kubenswrapper[4119]: I0216 20:56:58.207898 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkz65\" (UniqueName: \"kubernetes.io/projected/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-kube-api-access-mkz65\") pod \"openshift-apiserver-operator-6d4655d9cf-tvzdw\" (UID: \"6b6be6de-6fcc-4f57-b163-fe8f970a01a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw" Feb 16 20:56:58.208364 master-0 kubenswrapper[4119]: I0216 20:56:58.208327 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll4rg\" (UniqueName: \"kubernetes.io/projected/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-kube-api-access-ll4rg\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 20:56:58.209026 master-0 kubenswrapper[4119]: I0216 20:56:58.208940 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-kube-api-access\") 
pod \"kube-controller-manager-operator-78ff47c7c5-7p9ft\" (UID: \"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" Feb 16 20:56:58.209315 master-0 kubenswrapper[4119]: I0216 20:56:58.209276 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qd6r\" (UniqueName: \"kubernetes.io/projected/2506c282-0b37-4ece-8a0c-885d0b7f7901-kube-api-access-6qd6r\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 20:56:58.210946 master-0 kubenswrapper[4119]: I0216 20:56:58.210894 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e0227bc-63f5-48be-95dc-1323a2b2e327-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" Feb 16 20:56:58.212279 master-0 kubenswrapper[4119]: I0216 20:56:58.212220 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx4tz\" (UniqueName: \"kubernetes.io/projected/b27de289-c0f9-47ff-aac6-15b7bc1b178a-kube-api-access-fx4tz\") pod \"multus-admission-controller-7c64d55f8-z46jt\" (UID: \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" Feb 16 20:56:58.212480 master-0 kubenswrapper[4119]: I0216 20:56:58.212440 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjsnz\" (UniqueName: \"kubernetes.io/projected/27c20f63-9bfb-4703-94d5-0c65475e08d1-kube-api-access-hjsnz\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " 
pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 20:56:58.212854 master-0 kubenswrapper[4119]: I0216 20:56:58.212796 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9vmp\" (UniqueName: \"kubernetes.io/projected/9e0227bc-63f5-48be-95dc-1323a2b2e327-kube-api-access-z9vmp\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" Feb 16 20:56:58.214744 master-0 kubenswrapper[4119]: I0216 20:56:58.214701 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrc7l\" (UniqueName: \"kubernetes.io/projected/2e618c5c-52be-4b52-b426-b92555dee9de-kube-api-access-nrc7l\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" Feb 16 20:56:58.215032 master-0 kubenswrapper[4119]: I0216 20:56:58.214981 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jt7h\" (UniqueName: \"kubernetes.io/projected/ec7dd4ea-a139-45d4-96a4-506da1567292-kube-api-access-9jt7h\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn" Feb 16 20:56:58.215586 master-0 kubenswrapper[4119]: I0216 20:56:58.215516 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx2kd\" (UniqueName: \"kubernetes.io/projected/c7333319-3fe6-4b3f-b600-6b6df49fcaff-kube-api-access-qx2kd\") pod \"kube-storage-version-migrator-operator-cd5474998-56v4p\" (UID: \"c7333319-3fe6-4b3f-b600-6b6df49fcaff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" Feb 16 20:56:58.250004 master-0 kubenswrapper[4119]: I0216 
20:56:58.249852 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7nmb\" (UniqueName: \"kubernetes.io/projected/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-kube-api-access-g7nmb\") pod \"package-server-manager-5c696dbdcd-9m94g\" (UID: \"4b035e85-b2b0-4dee-bb86-3465fc4b98a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 20:56:58.250004 master-0 kubenswrapper[4119]: I0216 20:56:58.249899 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-9m94g\" (UID: \"4b035e85-b2b0-4dee-bb86-3465fc4b98a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 20:56:58.250004 master-0 kubenswrapper[4119]: I0216 20:56:58.249923 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67qzh\" (UniqueName: \"kubernetes.io/projected/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-kube-api-access-67qzh\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 20:56:58.250294 master-0 kubenswrapper[4119]: E0216 20:56:58.250029 4119 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 20:56:58.250294 master-0 kubenswrapper[4119]: E0216 20:56:58.250085 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert podName:4b035e85-b2b0-4dee-bb86-3465fc4b98a8 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:58.750064205 +0000 UTC m=+114.679990223 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-9m94g" (UID: "4b035e85-b2b0-4dee-bb86-3465fc4b98a8") : secret "package-server-manager-serving-cert" not found Feb 16 20:56:58.250294 master-0 kubenswrapper[4119]: I0216 20:56:58.250100 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ab0a907-7abe-4808-ba21-bdda1506eae2-serving-cert\") pod \"service-ca-operator-5dc4688546-q5vjl\" (UID: \"2ab0a907-7abe-4808-ba21-bdda1506eae2\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" Feb 16 20:56:58.250294 master-0 kubenswrapper[4119]: I0216 20:56:58.250143 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/695549c8-d1fc-429d-9c9f-0a5915dc6074-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-k42w9\" (UID: \"695549c8-d1fc-429d-9c9f-0a5915dc6074\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" Feb 16 20:56:58.250294 master-0 kubenswrapper[4119]: I0216 20:56:58.250174 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/695549c8-d1fc-429d-9c9f-0a5915dc6074-config\") pod \"openshift-controller-manager-operator-5f5f84757d-k42w9\" (UID: \"695549c8-d1fc-429d-9c9f-0a5915dc6074\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" Feb 16 20:56:58.250294 master-0 kubenswrapper[4119]: I0216 20:56:58.250219 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b02b740-5698-4e9a-90fe-2873bd0b0958-kube-api-access\") pod 
\"kube-apiserver-operator-54984b6678-cl5ld\" (UID: \"0b02b740-5698-4e9a-90fe-2873bd0b0958\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" Feb 16 20:56:58.250614 master-0 kubenswrapper[4119]: I0216 20:56:58.250365 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59237aa6-6250-4619-8ee5-abae59f04b57-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-xbd96\" (UID: \"59237aa6-6250-4619-8ee5-abae59f04b57\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 20:56:58.250614 master-0 kubenswrapper[4119]: I0216 20:56:58.250428 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkdzb\" (UniqueName: \"kubernetes.io/projected/d9d71a7a-a751-4de4-9c76-9bac85fe0177-kube-api-access-jkdzb\") pod \"iptables-alerter-b68cj\" (UID: \"d9d71a7a-a751-4de4-9c76-9bac85fe0177\") " pod="openshift-network-operator/iptables-alerter-b68cj" Feb 16 20:56:58.250722 master-0 kubenswrapper[4119]: I0216 20:56:58.250678 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab0a907-7abe-4808-ba21-bdda1506eae2-config\") pod \"service-ca-operator-5dc4688546-q5vjl\" (UID: \"2ab0a907-7abe-4808-ba21-bdda1506eae2\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" Feb 16 20:56:58.250836 master-0 kubenswrapper[4119]: I0216 20:56:58.250780 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/59237aa6-6250-4619-8ee5-abae59f04b57-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-xbd96\" (UID: \"59237aa6-6250-4619-8ee5-abae59f04b57\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 20:56:58.250882 master-0 kubenswrapper[4119]: I0216 20:56:58.250834 4119 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw9lp\" (UniqueName: \"kubernetes.io/projected/4085413c-9af1-4d2a-ba0f-33b42025cb7f-kube-api-access-dw9lp\") pod \"csi-snapshot-controller-operator-7b87b97578-v7xdv\" (UID: \"4085413c-9af1-4d2a-ba0f-33b42025cb7f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv" Feb 16 20:56:58.250882 master-0 kubenswrapper[4119]: I0216 20:56:58.250859 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7adbe32-b8b9-438e-a2e3-f93146a97424-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-xzww8\" (UID: \"e7adbe32-b8b9-438e-a2e3-f93146a97424\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" Feb 16 20:56:58.250882 master-0 kubenswrapper[4119]: I0216 20:56:58.250881 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cef33294-81fb-41a2-811d-2565f94514d1-trusted-ca\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 20:56:58.251266 master-0 kubenswrapper[4119]: I0216 20:56:58.251211 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bcmr\" (UniqueName: \"kubernetes.io/projected/695549c8-d1fc-429d-9c9f-0a5915dc6074-kube-api-access-7bcmr\") pod \"openshift-controller-manager-operator-5f5f84757d-k42w9\" (UID: \"695549c8-d1fc-429d-9c9f-0a5915dc6074\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" Feb 16 20:56:58.251320 master-0 kubenswrapper[4119]: I0216 20:56:58.251267 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5e062e07-8076-444c-b476-4eb2848e9613-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-pdjn4\" (UID: \"5e062e07-8076-444c-b476-4eb2848e9613\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" Feb 16 20:56:58.251320 master-0 kubenswrapper[4119]: I0216 20:56:58.251300 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7adbe32-b8b9-438e-a2e3-f93146a97424-config\") pod \"openshift-kube-scheduler-operator-7485d55966-xzww8\" (UID: \"e7adbe32-b8b9-438e-a2e3-f93146a97424\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" Feb 16 20:56:58.251566 master-0 kubenswrapper[4119]: I0216 20:56:58.251514 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 20:56:58.251566 master-0 kubenswrapper[4119]: I0216 20:56:58.251564 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b02b740-5698-4e9a-90fe-2873bd0b0958-config\") pod \"kube-apiserver-operator-54984b6678-cl5ld\" (UID: \"0b02b740-5698-4e9a-90fe-2873bd0b0958\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" Feb 16 20:56:58.251701 master-0 kubenswrapper[4119]: I0216 20:56:58.251588 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cef33294-81fb-41a2-811d-2565f94514d1-bound-sa-token\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " 
pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 20:56:58.251701 master-0 kubenswrapper[4119]: I0216 20:56:58.251645 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 20:56:58.251795 master-0 kubenswrapper[4119]: I0216 20:56:58.251716 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/5e062e07-8076-444c-b476-4eb2848e9613-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-pdjn4\" (UID: \"5e062e07-8076-444c-b476-4eb2848e9613\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" Feb 16 20:56:58.251795 master-0 kubenswrapper[4119]: E0216 20:56:58.251747 4119 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 20:56:58.251795 master-0 kubenswrapper[4119]: E0216 20:56:58.251792 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics podName:b28234d1-1d9a-4d9f-9ad1-e3c682bed492 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:58.751778689 +0000 UTC m=+114.681704707 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-6rmhq" (UID: "b28234d1-1d9a-4d9f-9ad1-e3c682bed492") : secret "marketplace-operator-metrics" not found Feb 16 20:56:58.252917 master-0 kubenswrapper[4119]: I0216 20:56:58.251748 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfmv6\" (UniqueName: \"kubernetes.io/projected/5e062e07-8076-444c-b476-4eb2848e9613-kube-api-access-dfmv6\") pod \"cluster-olm-operator-55b69c6c48-pdjn4\" (UID: \"5e062e07-8076-444c-b476-4eb2848e9613\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" Feb 16 20:56:58.252917 master-0 kubenswrapper[4119]: I0216 20:56:58.252251 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pw88\" (UniqueName: \"kubernetes.io/projected/2ab0a907-7abe-4808-ba21-bdda1506eae2-kube-api-access-9pw88\") pod \"service-ca-operator-5dc4688546-q5vjl\" (UID: \"2ab0a907-7abe-4808-ba21-bdda1506eae2\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" Feb 16 20:56:58.252917 master-0 kubenswrapper[4119]: I0216 20:56:58.252283 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 20:56:58.252917 master-0 kubenswrapper[4119]: I0216 20:56:58.252304 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vklwz\" (UniqueName: \"kubernetes.io/projected/59237aa6-6250-4619-8ee5-abae59f04b57-kube-api-access-vklwz\") pod \"openshift-config-operator-7c6bdb986f-xbd96\" (UID: 
\"59237aa6-6250-4619-8ee5-abae59f04b57\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 20:56:58.252917 master-0 kubenswrapper[4119]: I0216 20:56:58.252351 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d9d71a7a-a751-4de4-9c76-9bac85fe0177-iptables-alerter-script\") pod \"iptables-alerter-b68cj\" (UID: \"d9d71a7a-a751-4de4-9c76-9bac85fe0177\") " pod="openshift-network-operator/iptables-alerter-b68cj" Feb 16 20:56:58.252917 master-0 kubenswrapper[4119]: I0216 20:56:58.252370 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tklr\" (UniqueName: \"kubernetes.io/projected/cef33294-81fb-41a2-811d-2565f94514d1-kube-api-access-5tklr\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 20:56:58.252917 master-0 kubenswrapper[4119]: I0216 20:56:58.252391 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7adbe32-b8b9-438e-a2e3-f93146a97424-config\") pod \"openshift-kube-scheduler-operator-7485d55966-xzww8\" (UID: \"e7adbe32-b8b9-438e-a2e3-f93146a97424\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" Feb 16 20:56:58.252917 master-0 kubenswrapper[4119]: I0216 20:56:58.252397 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/59237aa6-6250-4619-8ee5-abae59f04b57-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-xbd96\" (UID: \"59237aa6-6250-4619-8ee5-abae59f04b57\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 20:56:58.252917 master-0 kubenswrapper[4119]: E0216 20:56:58.252531 4119 secret.go:189] Couldn't get secret 
openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 16 20:56:58.252917 master-0 kubenswrapper[4119]: E0216 20:56:58.252581 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls podName:cef33294-81fb-41a2-811d-2565f94514d1 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:58.752568528 +0000 UTC m=+114.682494636 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls") pod "ingress-operator-c588d8cb4-6ps2d" (UID: "cef33294-81fb-41a2-811d-2565f94514d1") : secret "metrics-tls" not found Feb 16 20:56:58.253900 master-0 kubenswrapper[4119]: I0216 20:56:58.253044 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b02b740-5698-4e9a-90fe-2873bd0b0958-serving-cert\") pod \"kube-apiserver-operator-54984b6678-cl5ld\" (UID: \"0b02b740-5698-4e9a-90fe-2873bd0b0958\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" Feb 16 20:56:58.253900 master-0 kubenswrapper[4119]: I0216 20:56:58.253195 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7adbe32-b8b9-438e-a2e3-f93146a97424-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-xzww8\" (UID: \"e7adbe32-b8b9-438e-a2e3-f93146a97424\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" Feb 16 20:56:58.253900 master-0 kubenswrapper[4119]: I0216 20:56:58.253226 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d9d71a7a-a751-4de4-9c76-9bac85fe0177-host-slash\") pod \"iptables-alerter-b68cj\" (UID: \"d9d71a7a-a751-4de4-9c76-9bac85fe0177\") " 
pod="openshift-network-operator/iptables-alerter-b68cj" Feb 16 20:56:58.253900 master-0 kubenswrapper[4119]: I0216 20:56:58.253303 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d9d71a7a-a751-4de4-9c76-9bac85fe0177-host-slash\") pod \"iptables-alerter-b68cj\" (UID: \"d9d71a7a-a751-4de4-9c76-9bac85fe0177\") " pod="openshift-network-operator/iptables-alerter-b68cj" Feb 16 20:56:58.253900 master-0 kubenswrapper[4119]: I0216 20:56:58.253528 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/695549c8-d1fc-429d-9c9f-0a5915dc6074-config\") pod \"openshift-controller-manager-operator-5f5f84757d-k42w9\" (UID: \"695549c8-d1fc-429d-9c9f-0a5915dc6074\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" Feb 16 20:56:58.253900 master-0 kubenswrapper[4119]: I0216 20:56:58.253807 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b02b740-5698-4e9a-90fe-2873bd0b0958-config\") pod \"kube-apiserver-operator-54984b6678-cl5ld\" (UID: \"0b02b740-5698-4e9a-90fe-2873bd0b0958\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" Feb 16 20:56:58.254252 master-0 kubenswrapper[4119]: I0216 20:56:58.254118 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab0a907-7abe-4808-ba21-bdda1506eae2-config\") pod \"service-ca-operator-5dc4688546-q5vjl\" (UID: \"2ab0a907-7abe-4808-ba21-bdda1506eae2\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" Feb 16 20:56:58.254400 master-0 kubenswrapper[4119]: I0216 20:56:58.254342 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: 
\"kubernetes.io/empty-dir/5e062e07-8076-444c-b476-4eb2848e9613-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-pdjn4\" (UID: \"5e062e07-8076-444c-b476-4eb2848e9613\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" Feb 16 20:56:58.254400 master-0 kubenswrapper[4119]: I0216 20:56:58.254356 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 20:56:58.254698 master-0 kubenswrapper[4119]: I0216 20:56:58.254641 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d9d71a7a-a751-4de4-9c76-9bac85fe0177-iptables-alerter-script\") pod \"iptables-alerter-b68cj\" (UID: \"d9d71a7a-a751-4de4-9c76-9bac85fe0177\") " pod="openshift-network-operator/iptables-alerter-b68cj" Feb 16 20:56:58.254698 master-0 kubenswrapper[4119]: I0216 20:56:58.254691 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/695549c8-d1fc-429d-9c9f-0a5915dc6074-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-k42w9\" (UID: \"695549c8-d1fc-429d-9c9f-0a5915dc6074\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" Feb 16 20:56:58.255240 master-0 kubenswrapper[4119]: I0216 20:56:58.255169 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cef33294-81fb-41a2-811d-2565f94514d1-trusted-ca\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 
20:56:58.255753 master-0 kubenswrapper[4119]: I0216 20:56:58.255709 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59237aa6-6250-4619-8ee5-abae59f04b57-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-xbd96\" (UID: \"59237aa6-6250-4619-8ee5-abae59f04b57\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 20:56:58.256291 master-0 kubenswrapper[4119]: I0216 20:56:58.256247 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ab0a907-7abe-4808-ba21-bdda1506eae2-serving-cert\") pod \"service-ca-operator-5dc4688546-q5vjl\" (UID: \"2ab0a907-7abe-4808-ba21-bdda1506eae2\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" Feb 16 20:56:58.256370 master-0 kubenswrapper[4119]: I0216 20:56:58.256332 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e062e07-8076-444c-b476-4eb2848e9613-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-pdjn4\" (UID: \"5e062e07-8076-444c-b476-4eb2848e9613\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" Feb 16 20:56:58.256656 master-0 kubenswrapper[4119]: I0216 20:56:58.256596 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b02b740-5698-4e9a-90fe-2873bd0b0958-serving-cert\") pod \"kube-apiserver-operator-54984b6678-cl5ld\" (UID: \"0b02b740-5698-4e9a-90fe-2873bd0b0958\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" Feb 16 20:56:58.257104 master-0 kubenswrapper[4119]: I0216 20:56:58.257056 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7adbe32-b8b9-438e-a2e3-f93146a97424-serving-cert\") 
pod \"openshift-kube-scheduler-operator-7485d55966-xzww8\" (UID: \"e7adbe32-b8b9-438e-a2e3-f93146a97424\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" Feb 16 20:56:58.258610 master-0 kubenswrapper[4119]: I0216 20:56:58.258567 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 20:56:58.321500 master-0 kubenswrapper[4119]: I0216 20:56:58.321447 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkdzb\" (UniqueName: \"kubernetes.io/projected/d9d71a7a-a751-4de4-9c76-9bac85fe0177-kube-api-access-jkdzb\") pod \"iptables-alerter-b68cj\" (UID: \"d9d71a7a-a751-4de4-9c76-9bac85fe0177\") " pod="openshift-network-operator/iptables-alerter-b68cj" Feb 16 20:56:58.322274 master-0 kubenswrapper[4119]: I0216 20:56:58.322228 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bcmr\" (UniqueName: \"kubernetes.io/projected/695549c8-d1fc-429d-9c9f-0a5915dc6074-kube-api-access-7bcmr\") pod \"openshift-controller-manager-operator-5f5f84757d-k42w9\" (UID: \"695549c8-d1fc-429d-9c9f-0a5915dc6074\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" Feb 16 20:56:58.322750 master-0 kubenswrapper[4119]: I0216 20:56:58.322714 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b02b740-5698-4e9a-90fe-2873bd0b0958-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-cl5ld\" (UID: \"0b02b740-5698-4e9a-90fe-2873bd0b0958\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" Feb 16 20:56:58.326670 master-0 kubenswrapper[4119]: I0216 20:56:58.325881 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfmv6\" (UniqueName: 
\"kubernetes.io/projected/5e062e07-8076-444c-b476-4eb2848e9613-kube-api-access-dfmv6\") pod \"cluster-olm-operator-55b69c6c48-pdjn4\" (UID: \"5e062e07-8076-444c-b476-4eb2848e9613\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" Feb 16 20:56:58.329076 master-0 kubenswrapper[4119]: I0216 20:56:58.328692 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7adbe32-b8b9-438e-a2e3-f93146a97424-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-xzww8\" (UID: \"e7adbe32-b8b9-438e-a2e3-f93146a97424\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" Feb 16 20:56:58.331165 master-0 kubenswrapper[4119]: I0216 20:56:58.329784 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7nmb\" (UniqueName: \"kubernetes.io/projected/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-kube-api-access-g7nmb\") pod \"package-server-manager-5c696dbdcd-9m94g\" (UID: \"4b035e85-b2b0-4dee-bb86-3465fc4b98a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 20:56:58.331165 master-0 kubenswrapper[4119]: I0216 20:56:58.330865 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw9lp\" (UniqueName: \"kubernetes.io/projected/4085413c-9af1-4d2a-ba0f-33b42025cb7f-kube-api-access-dw9lp\") pod \"csi-snapshot-controller-operator-7b87b97578-v7xdv\" (UID: \"4085413c-9af1-4d2a-ba0f-33b42025cb7f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv" Feb 16 20:56:58.334018 master-0 kubenswrapper[4119]: I0216 20:56:58.332675 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pw88\" (UniqueName: \"kubernetes.io/projected/2ab0a907-7abe-4808-ba21-bdda1506eae2-kube-api-access-9pw88\") pod \"service-ca-operator-5dc4688546-q5vjl\" (UID: 
\"2ab0a907-7abe-4808-ba21-bdda1506eae2\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" Feb 16 20:56:58.334018 master-0 kubenswrapper[4119]: I0216 20:56:58.333976 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cef33294-81fb-41a2-811d-2565f94514d1-bound-sa-token\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 20:56:58.339534 master-0 kubenswrapper[4119]: I0216 20:56:58.339186 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-b68cj" Feb 16 20:56:58.348668 master-0 kubenswrapper[4119]: I0216 20:56:58.348376 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vklwz\" (UniqueName: \"kubernetes.io/projected/59237aa6-6250-4619-8ee5-abae59f04b57-kube-api-access-vklwz\") pod \"openshift-config-operator-7c6bdb986f-xbd96\" (UID: \"59237aa6-6250-4619-8ee5-abae59f04b57\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 20:56:58.367026 master-0 kubenswrapper[4119]: I0216 20:56:58.366980 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" Feb 16 20:56:58.368502 master-0 kubenswrapper[4119]: I0216 20:56:58.368320 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tklr\" (UniqueName: \"kubernetes.io/projected/cef33294-81fb-41a2-811d-2565f94514d1-kube-api-access-5tklr\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 20:56:58.382281 master-0 kubenswrapper[4119]: I0216 20:56:58.382208 4119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" Feb 16 20:56:58.391579 master-0 kubenswrapper[4119]: I0216 20:56:58.391337 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" Feb 16 20:56:58.394901 master-0 kubenswrapper[4119]: I0216 20:56:58.393953 4119 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67qzh\" (UniqueName: \"kubernetes.io/projected/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-kube-api-access-67qzh\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 20:56:58.400678 master-0 kubenswrapper[4119]: I0216 20:56:58.400345 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 20:56:58.430779 master-0 kubenswrapper[4119]: I0216 20:56:58.430699 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" Feb 16 20:56:58.434733 master-0 kubenswrapper[4119]: I0216 20:56:58.433681 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" Feb 16 20:56:58.457749 master-0 kubenswrapper[4119]: I0216 20:56:58.452194 4119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw" Feb 16 20:56:58.481122 master-0 kubenswrapper[4119]: I0216 20:56:58.466824 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-755d954778-8gnq5"] Feb 16 20:56:58.579787 master-0 kubenswrapper[4119]: I0216 20:56:58.579743 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv" Feb 16 20:56:58.587811 master-0 kubenswrapper[4119]: I0216 20:56:58.585109 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 20:56:58.607335 master-0 kubenswrapper[4119]: I0216 20:56:58.600487 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" Feb 16 20:56:58.607335 master-0 kubenswrapper[4119]: I0216 20:56:58.606583 4119 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" Feb 16 20:56:58.617269 master-0 kubenswrapper[4119]: I0216 20:56:58.616679 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8"] Feb 16 20:56:58.658902 master-0 kubenswrapper[4119]: I0216 20:56:58.658839 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p"] Feb 16 20:56:58.659399 master-0 kubenswrapper[4119]: I0216 20:56:58.659246 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" Feb 16 20:56:58.659399 master-0 kubenswrapper[4119]: I0216 20:56:58.659321 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 20:56:58.659494 master-0 kubenswrapper[4119]: I0216 20:56:58.659457 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" Feb 16 20:56:58.659693 master-0 kubenswrapper[4119]: I0216 
20:56:58.659536 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls\") pod \"dns-operator-86b8869b79-cdltb\" (UID: \"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd\") " pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb" Feb 16 20:56:58.659693 master-0 kubenswrapper[4119]: E0216 20:56:58.659565 4119 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 16 20:56:58.659693 master-0 kubenswrapper[4119]: I0216 20:56:58.659579 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" Feb 16 20:56:58.659693 master-0 kubenswrapper[4119]: I0216 20:56:58.659613 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn" Feb 16 20:56:58.659895 master-0 kubenswrapper[4119]: E0216 20:56:58.659859 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls podName:2506c282-0b37-4ece-8a0c-885d0b7f7901 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:59.659787227 +0000 UTC m=+115.589713265 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-kh4d4" (UID: "2506c282-0b37-4ece-8a0c-885d0b7f7901") : secret "node-tuning-operator-tls" not found Feb 16 20:56:58.661118 master-0 kubenswrapper[4119]: I0216 20:56:58.661078 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld"] Feb 16 20:56:58.662147 master-0 kubenswrapper[4119]: E0216 20:56:58.662103 4119 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 16 20:56:58.662434 master-0 kubenswrapper[4119]: E0216 20:56:58.662392 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert podName:2e618c5c-52be-4b52-b426-b92555dee9de nodeName:}" failed. No retries permitted until 2026-02-16 20:56:59.662362664 +0000 UTC m=+115.592288862 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert") pod "catalog-operator-588944557d-h7xl6" (UID: "2e618c5c-52be-4b52-b426-b92555dee9de") : secret "catalog-operator-serving-cert" not found Feb 16 20:56:58.662520 master-0 kubenswrapper[4119]: E0216 20:56:58.662476 4119 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 16 20:56:58.662520 master-0 kubenswrapper[4119]: E0216 20:56:58.662517 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls podName:456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd nodeName:}" failed. No retries permitted until 2026-02-16 20:56:59.662508157 +0000 UTC m=+115.592434375 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls") pod "dns-operator-86b8869b79-cdltb" (UID: "456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd") : secret "metrics-tls" not found Feb 16 20:56:58.662636 master-0 kubenswrapper[4119]: E0216 20:56:58.662575 4119 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 16 20:56:58.662636 master-0 kubenswrapper[4119]: E0216 20:56:58.662601 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls podName:9e0227bc-63f5-48be-95dc-1323a2b2e327 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:59.662592899 +0000 UTC m=+115.592519137 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-4gczb" (UID: "9e0227bc-63f5-48be-95dc-1323a2b2e327") : secret "image-registry-operator-tls" not found Feb 16 20:56:58.662769 master-0 kubenswrapper[4119]: E0216 20:56:58.662664 4119 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 16 20:56:58.662769 master-0 kubenswrapper[4119]: E0216 20:56:58.662688 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert podName:a4c9b781-14c0-469c-bb9e-0c3982a04520 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:59.662679381 +0000 UTC m=+115.592605399 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert") pod "olm-operator-6b56bd877c-vlhvq" (UID: "a4c9b781-14c0-469c-bb9e-0c3982a04520") : secret "olm-operator-serving-cert" not found Feb 16 20:56:58.662769 master-0 kubenswrapper[4119]: E0216 20:56:58.661977 4119 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 20:56:58.662769 master-0 kubenswrapper[4119]: E0216 20:56:58.662735 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls podName:ec7dd4ea-a139-45d4-96a4-506da1567292 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:59.662726183 +0000 UTC m=+115.592652401 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-w57zn" (UID: "ec7dd4ea-a139-45d4-96a4-506da1567292") : secret "cluster-monitoring-operator-tls" not found Feb 16 20:56:58.668702 master-0 kubenswrapper[4119]: W0216 20:56:58.668484 4119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7333319_3fe6_4b3f_b600_6b6df49fcaff.slice/crio-d84a6211eba3f66c2ce7e68ab1344f23f51a23b55442aa18fdabbc1b25bc9adb WatchSource:0}: Error finding container d84a6211eba3f66c2ce7e68ab1344f23f51a23b55442aa18fdabbc1b25bc9adb: Status 404 returned error can't find the container with id d84a6211eba3f66c2ce7e68ab1344f23f51a23b55442aa18fdabbc1b25bc9adb Feb 16 20:56:58.677558 master-0 kubenswrapper[4119]: W0216 20:56:58.677105 4119 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b02b740_5698_4e9a_90fe_2873bd0b0958.slice/crio-6d07de2e0be321a3aec4da12f4f04e483d7ebf0407264e8a59f6674bcacef82d WatchSource:0}: Error finding container 6d07de2e0be321a3aec4da12f4f04e483d7ebf0407264e8a59f6674bcacef82d: Status 404 returned error can't find the container with id 6d07de2e0be321a3aec4da12f4f04e483d7ebf0407264e8a59f6674bcacef82d Feb 16 20:56:58.686623 master-0 kubenswrapper[4119]: I0216 20:56:58.686489 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz"] Feb 16 20:56:58.723679 master-0 kubenswrapper[4119]: I0216 20:56:58.721134 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft"] Feb 16 20:56:58.767097 master-0 kubenswrapper[4119]: I0216 20:56:58.763549 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 20:56:58.767097 master-0 kubenswrapper[4119]: I0216 20:56:58.763639 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 20:56:58.767097 master-0 kubenswrapper[4119]: E0216 20:56:58.763813 4119 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 20:56:58.767097 master-0 kubenswrapper[4119]: E0216 20:56:58.763881 
4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics podName:b28234d1-1d9a-4d9f-9ad1-e3c682bed492 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:59.763859738 +0000 UTC m=+115.693785756 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-6rmhq" (UID: "b28234d1-1d9a-4d9f-9ad1-e3c682bed492") : secret "marketplace-operator-metrics" not found Feb 16 20:56:58.767097 master-0 kubenswrapper[4119]: E0216 20:56:58.764252 4119 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 16 20:56:58.767097 master-0 kubenswrapper[4119]: E0216 20:56:58.764280 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls podName:cef33294-81fb-41a2-811d-2565f94514d1 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:59.764269509 +0000 UTC m=+115.694195527 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls") pod "ingress-operator-c588d8cb4-6ps2d" (UID: "cef33294-81fb-41a2-811d-2565f94514d1") : secret "metrics-tls" not found Feb 16 20:56:58.767097 master-0 kubenswrapper[4119]: I0216 20:56:58.764354 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-9m94g\" (UID: \"4b035e85-b2b0-4dee-bb86-3465fc4b98a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 20:56:58.767097 master-0 kubenswrapper[4119]: I0216 20:56:58.764393 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 20:56:58.767097 master-0 kubenswrapper[4119]: I0216 20:56:58.764416 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-z46jt\" (UID: \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" Feb 16 20:56:58.767097 master-0 kubenswrapper[4119]: E0216 20:56:58.764533 4119 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 16 20:56:58.767097 master-0 kubenswrapper[4119]: E0216 20:56:58.764556 4119 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs podName:b27de289-c0f9-47ff-aac6-15b7bc1b178a nodeName:}" failed. No retries permitted until 2026-02-16 20:56:59.764548596 +0000 UTC m=+115.694474614 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs") pod "multus-admission-controller-7c64d55f8-z46jt" (UID: "b27de289-c0f9-47ff-aac6-15b7bc1b178a") : secret "multus-admission-controller-secret" not found Feb 16 20:56:58.767097 master-0 kubenswrapper[4119]: E0216 20:56:58.764597 4119 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 20:56:58.767097 master-0 kubenswrapper[4119]: E0216 20:56:58.764618 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert podName:4b035e85-b2b0-4dee-bb86-3465fc4b98a8 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:59.764611077 +0000 UTC m=+115.694537095 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-9m94g" (UID: "4b035e85-b2b0-4dee-bb86-3465fc4b98a8") : secret "package-server-manager-serving-cert" not found Feb 16 20:56:58.767097 master-0 kubenswrapper[4119]: E0216 20:56:58.764674 4119 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 16 20:56:58.767097 master-0 kubenswrapper[4119]: E0216 20:56:58.764695 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert podName:2506c282-0b37-4ece-8a0c-885d0b7f7901 nodeName:}" failed. No retries permitted until 2026-02-16 20:56:59.764688429 +0000 UTC m=+115.694614447 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-kh4d4" (UID: "2506c282-0b37-4ece-8a0c-885d0b7f7901") : secret "performance-addon-operator-webhook-cert" not found Feb 16 20:56:58.767804 master-0 kubenswrapper[4119]: I0216 20:56:58.765897 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9"] Feb 16 20:56:58.767804 master-0 kubenswrapper[4119]: I0216 20:56:58.765952 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw"] Feb 16 20:56:58.805680 master-0 kubenswrapper[4119]: W0216 20:56:58.805607 4119 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod695549c8_d1fc_429d_9c9f_0a5915dc6074.slice/crio-b3fc27d6f88f12abb0f4db12508672dcd9584ab10707e7cd6f06dcebac1bbaa8 WatchSource:0}: Error finding container b3fc27d6f88f12abb0f4db12508672dcd9584ab10707e7cd6f06dcebac1bbaa8: Status 404 returned error can't find the container with id b3fc27d6f88f12abb0f4db12508672dcd9584ab10707e7cd6f06dcebac1bbaa8 Feb 16 20:56:58.844261 master-0 kubenswrapper[4119]: I0216 20:56:58.844205 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96"] Feb 16 20:56:58.861703 master-0 kubenswrapper[4119]: I0216 20:56:58.861631 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv"] Feb 16 20:56:58.877846 master-0 kubenswrapper[4119]: I0216 20:56:58.877799 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:56:58.884078 master-0 kubenswrapper[4119]: I0216 20:56:58.884053 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl"] Feb 16 20:56:58.884142 master-0 kubenswrapper[4119]: I0216 20:56:58.884083 4119 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4"] Feb 16 20:56:58.887692 master-0 kubenswrapper[4119]: W0216 20:56:58.886189 4119 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ab0a907_7abe_4808_ba21_bdda1506eae2.slice/crio-2dfa08dcecf95c49e6db650a7dbdf117c27ed644f23ff4e264133dd36a509d3c WatchSource:0}: Error finding container 2dfa08dcecf95c49e6db650a7dbdf117c27ed644f23ff4e264133dd36a509d3c: Status 404 returned error can't find the container with id 
2dfa08dcecf95c49e6db650a7dbdf117c27ed644f23ff4e264133dd36a509d3c Feb 16 20:56:58.892891 master-0 kubenswrapper[4119]: I0216 20:56:58.892869 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 20:56:58.893573 master-0 kubenswrapper[4119]: I0216 20:56:58.893532 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" event={"ID":"70d217a9-86b7-47b9-a7da-9ac920b9c7c2","Type":"ContainerStarted","Data":"d1ce8d9ee7cab12610683fbe9731b9ea4f3d71878c552326acd5722dd5f1b61a"} Feb 16 20:56:58.894505 master-0 kubenswrapper[4119]: I0216 20:56:58.894469 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" event={"ID":"59237aa6-6250-4619-8ee5-abae59f04b57","Type":"ContainerStarted","Data":"4f2c49b4aa155e075775a0da6ce790eafb2a3d3e88c9dbca188493bbec98d810"} Feb 16 20:56:58.895697 master-0 kubenswrapper[4119]: I0216 20:56:58.895604 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv" event={"ID":"4085413c-9af1-4d2a-ba0f-33b42025cb7f","Type":"ContainerStarted","Data":"c073f224d2a8cc60c80044d595d19260d941f19b426f78dc52e84033ff1afedc"} Feb 16 20:56:58.898735 master-0 kubenswrapper[4119]: I0216 20:56:58.898685 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-b68cj" event={"ID":"d9d71a7a-a751-4de4-9c76-9bac85fe0177","Type":"ContainerStarted","Data":"abcd1a63f33b879c154e1f80fc5ea3f4b46d9d1e7d2159b6ce5ac662b670e5ff"} Feb 16 20:56:58.899815 master-0 kubenswrapper[4119]: I0216 20:56:58.899776 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" 
event={"ID":"695549c8-d1fc-429d-9c9f-0a5915dc6074","Type":"ContainerStarted","Data":"b3fc27d6f88f12abb0f4db12508672dcd9584ab10707e7cd6f06dcebac1bbaa8"} Feb 16 20:56:58.901154 master-0 kubenswrapper[4119]: I0216 20:56:58.901126 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" event={"ID":"0b02b740-5698-4e9a-90fe-2873bd0b0958","Type":"ContainerStarted","Data":"6d07de2e0be321a3aec4da12f4f04e483d7ebf0407264e8a59f6674bcacef82d"} Feb 16 20:56:58.902077 master-0 kubenswrapper[4119]: I0216 20:56:58.902032 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" event={"ID":"5e062e07-8076-444c-b476-4eb2848e9613","Type":"ContainerStarted","Data":"1e734464d78209c21a7a9eb2f6d22c8584997def010318f287f0cb7c28b7390b"} Feb 16 20:56:58.903512 master-0 kubenswrapper[4119]: I0216 20:56:58.903478 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" event={"ID":"c7333319-3fe6-4b3f-b600-6b6df49fcaff","Type":"ContainerStarted","Data":"d84a6211eba3f66c2ce7e68ab1344f23f51a23b55442aa18fdabbc1b25bc9adb"} Feb 16 20:56:58.904790 master-0 kubenswrapper[4119]: I0216 20:56:58.904637 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" event={"ID":"e7adbe32-b8b9-438e-a2e3-f93146a97424","Type":"ContainerStarted","Data":"105b1eab12eec1f672058dc0900e8488b8bcca272b3ac3b2441b242d73128d7a"} Feb 16 20:56:58.905455 master-0 kubenswrapper[4119]: I0216 20:56:58.905424 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" event={"ID":"27c20f63-9bfb-4703-94d5-0c65475e08d1","Type":"ContainerStarted","Data":"4ff1d9141076f81759691d94a098009541c5d2c236ef8864f1522766d2980580"} 
Feb 16 20:56:58.906277 master-0 kubenswrapper[4119]: I0216 20:56:58.906243 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw" event={"ID":"6b6be6de-6fcc-4f57-b163-fe8f970a01a4","Type":"ContainerStarted","Data":"75ca3e4fc5da353a0ea31c674632f3429b17eb41f067d771200d9b0aea75af5d"}
Feb 16 20:56:58.907826 master-0 kubenswrapper[4119]: I0216 20:56:58.907788 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" event={"ID":"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e","Type":"ContainerStarted","Data":"db18d33d279edf734f31d955c318fccdcbf15241593b0786bf92a199ab2a428f"}
Feb 16 20:56:58.913731 master-0 kubenswrapper[4119]: I0216 20:56:58.913697 4119 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 16 20:56:59.675093 master-0 kubenswrapper[4119]: I0216 20:56:59.675032 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls\") pod \"dns-operator-86b8869b79-cdltb\" (UID: \"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd\") " pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb"
Feb 16 20:56:59.675354 master-0 kubenswrapper[4119]: E0216 20:56:59.675220 4119 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 16 20:56:59.675354 master-0 kubenswrapper[4119]: I0216 20:56:59.675232 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq"
Feb 16 20:56:59.675451 master-0 kubenswrapper[4119]: E0216 20:56:59.675317 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls podName:456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd nodeName:}" failed. No retries permitted until 2026-02-16 20:57:01.675291453 +0000 UTC m=+117.605217541 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls") pod "dns-operator-86b8869b79-cdltb" (UID: "456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd") : secret "metrics-tls" not found
Feb 16 20:56:59.675451 master-0 kubenswrapper[4119]: I0216 20:56:59.675428 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn"
Feb 16 20:56:59.675536 master-0 kubenswrapper[4119]: E0216 20:56:59.675339 4119 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Feb 16 20:56:59.675536 master-0 kubenswrapper[4119]: E0216 20:56:59.675470 4119 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 16 20:56:59.675601 master-0 kubenswrapper[4119]: E0216 20:56:59.675539 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls podName:ec7dd4ea-a139-45d4-96a4-506da1567292 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:01.675520439 +0000 UTC m=+117.605446547 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-w57zn" (UID: "ec7dd4ea-a139-45d4-96a4-506da1567292") : secret "cluster-monitoring-operator-tls" not found
Feb 16 20:56:59.675601 master-0 kubenswrapper[4119]: E0216 20:56:59.675558 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert podName:a4c9b781-14c0-469c-bb9e-0c3982a04520 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:01.67555029 +0000 UTC m=+117.605476418 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert") pod "olm-operator-6b56bd877c-vlhvq" (UID: "a4c9b781-14c0-469c-bb9e-0c3982a04520") : secret "olm-operator-serving-cert" not found
Feb 16 20:56:59.675712 master-0 kubenswrapper[4119]: I0216 20:56:59.675600 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6"
Feb 16 20:56:59.675712 master-0 kubenswrapper[4119]: I0216 20:56:59.675634 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4"
Feb 16 20:56:59.675801 master-0 kubenswrapper[4119]: E0216 20:56:59.675780 4119 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Feb 16 20:56:59.675835 master-0 kubenswrapper[4119]: E0216 20:56:59.675827 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert podName:2e618c5c-52be-4b52-b426-b92555dee9de nodeName:}" failed. No retries permitted until 2026-02-16 20:57:01.675804426 +0000 UTC m=+117.605730444 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert") pod "catalog-operator-588944557d-h7xl6" (UID: "2e618c5c-52be-4b52-b426-b92555dee9de") : secret "catalog-operator-serving-cert" not found
Feb 16 20:56:59.675875 master-0 kubenswrapper[4119]: I0216 20:56:59.675856 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb"
Feb 16 20:56:59.675979 master-0 kubenswrapper[4119]: E0216 20:56:59.675870 4119 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 16 20:56:59.675979 master-0 kubenswrapper[4119]: E0216 20:56:59.675903 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls podName:2506c282-0b37-4ece-8a0c-885d0b7f7901 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:01.675894759 +0000 UTC m=+117.605820777 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-kh4d4" (UID: "2506c282-0b37-4ece-8a0c-885d0b7f7901") : secret "node-tuning-operator-tls" not found
Feb 16 20:56:59.675979 master-0 kubenswrapper[4119]: E0216 20:56:59.675944 4119 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 16 20:56:59.675979 master-0 kubenswrapper[4119]: E0216 20:56:59.675969 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls podName:9e0227bc-63f5-48be-95dc-1323a2b2e327 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:01.67596274 +0000 UTC m=+117.605888758 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-4gczb" (UID: "9e0227bc-63f5-48be-95dc-1323a2b2e327") : secret "image-registry-operator-tls" not found
Feb 16 20:56:59.777171 master-0 kubenswrapper[4119]: I0216 20:56:59.776925 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq"
Feb 16 20:56:59.777171 master-0 kubenswrapper[4119]: I0216 20:56:59.777016 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls\") pod
\"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d"
Feb 16 20:56:59.777171 master-0 kubenswrapper[4119]: I0216 20:56:59.777084 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-9m94g\" (UID: \"4b035e85-b2b0-4dee-bb86-3465fc4b98a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g"
Feb 16 20:56:59.777171 master-0 kubenswrapper[4119]: I0216 20:56:59.777125 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4"
Feb 16 20:56:59.777171 master-0 kubenswrapper[4119]: I0216 20:56:59.777154 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-z46jt\" (UID: \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt"
Feb 16 20:56:59.777475 master-0 kubenswrapper[4119]: E0216 20:56:59.777372 4119 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 16 20:56:59.777475 master-0 kubenswrapper[4119]: E0216 20:56:59.777395 4119 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 16 20:56:59.777475 master-0 kubenswrapper[4119]: E0216 20:56:59.777442 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics podName:b28234d1-1d9a-4d9f-9ad1-e3c682bed492 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:01.777416334 +0000 UTC m=+117.707342352 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-6rmhq" (UID: "b28234d1-1d9a-4d9f-9ad1-e3c682bed492") : secret "marketplace-operator-metrics" not found
Feb 16 20:56:59.777556 master-0 kubenswrapper[4119]: E0216 20:56:59.777499 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs podName:b27de289-c0f9-47ff-aac6-15b7bc1b178a nodeName:}" failed. No retries permitted until 2026-02-16 20:57:01.777471785 +0000 UTC m=+117.707397873 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs") pod "multus-admission-controller-7c64d55f8-z46jt" (UID: "b27de289-c0f9-47ff-aac6-15b7bc1b178a") : secret "multus-admission-controller-secret" not found
Feb 16 20:56:59.777586 master-0 kubenswrapper[4119]: E0216 20:56:59.777563 4119 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 16 20:56:59.777641 master-0 kubenswrapper[4119]: E0216 20:56:59.777588 4119 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 16 20:56:59.777697 master-0 kubenswrapper[4119]: E0216 20:56:59.777672 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert podName:4b035e85-b2b0-4dee-bb86-3465fc4b98a8 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:01.77764491 +0000 UTC m=+117.707570928 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-9m94g" (UID: "4b035e85-b2b0-4dee-bb86-3465fc4b98a8") : secret "package-server-manager-serving-cert" not found
Feb 16 20:56:59.777697 master-0 kubenswrapper[4119]: E0216 20:56:59.777690 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert podName:2506c282-0b37-4ece-8a0c-885d0b7f7901 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:01.777682431 +0000 UTC m=+117.707608559 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-kh4d4" (UID: "2506c282-0b37-4ece-8a0c-885d0b7f7901") : secret "performance-addon-operator-webhook-cert" not found
Feb 16 20:56:59.777754 master-0 kubenswrapper[4119]: E0216 20:56:59.777714 4119 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 16 20:56:59.777754 master-0 kubenswrapper[4119]: E0216 20:56:59.777751 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls podName:cef33294-81fb-41a2-811d-2565f94514d1 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:01.777730502 +0000 UTC m=+117.707656520 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls") pod "ingress-operator-c588d8cb4-6ps2d" (UID: "cef33294-81fb-41a2-811d-2565f94514d1") : secret "metrics-tls" not found
Feb 16 20:56:59.874743 master-0 kubenswrapper[4119]: I0216 20:56:59.874687 4119 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7"
Feb 16 20:56:59.877266 master-0 kubenswrapper[4119]: I0216 20:56:59.877212 4119 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 16 20:56:59.945396 master-0 kubenswrapper[4119]: I0216 20:56:59.945249 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" event={"ID":"2ab0a907-7abe-4808-ba21-bdda1506eae2","Type":"ContainerStarted","Data":"2dfa08dcecf95c49e6db650a7dbdf117c27ed644f23ff4e264133dd36a509d3c"}
Feb 16 20:56:59.949194 master-0 kubenswrapper[4119]: I0216 20:56:59.949158 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" event={"ID":"0b02b740-5698-4e9a-90fe-2873bd0b0958","Type":"ContainerStarted","Data":"6c789ad424d6da26da31c06317afc3ff04d13db41b3d9ada1b99dd43bd4685c9"}
Feb 16 20:57:00.007679 master-0 kubenswrapper[4119]: I0216 20:57:00.007321 4119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" podStartSLOduration=81.007294635 podStartE2EDuration="1m21.007294635s" podCreationTimestamp="2026-02-16 20:55:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:57:00.006137025 +0000 UTC m=+115.936063043" watchObservedRunningTime="2026-02-16 20:57:00.007294635 +0000 UTC m=+115.937220653"
Feb 16 20:57:00.954023 master-0 kubenswrapper[4119]: I0216 20:57:00.953953 4119 generic.go:334] "Generic (PLEG): container finished" podID="59237aa6-6250-4619-8ee5-abae59f04b57" containerID="61defc533791601dd8ff505e57b675aac367c1fe0144fefa77509ab84c3b3331" exitCode=0
Feb 16 20:57:00.954595 master-0 kubenswrapper[4119]: I0216 20:57:00.954037 4119 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" event={"ID":"59237aa6-6250-4619-8ee5-abae59f04b57","Type":"ContainerDied","Data":"61defc533791601dd8ff505e57b675aac367c1fe0144fefa77509ab84c3b3331"}
Feb 16 20:57:01.702065 master-0 kubenswrapper[4119]: I0216 20:57:01.701558 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq"
Feb 16 20:57:01.702065 master-0 kubenswrapper[4119]: I0216 20:57:01.702042 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn"
Feb 16 20:57:01.702632 master-0 kubenswrapper[4119]: E0216 20:57:01.701850 4119 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Feb 16 20:57:01.702632 master-0 kubenswrapper[4119]: I0216 20:57:01.702152 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6"
Feb 16 20:57:01.702632 master-0 kubenswrapper[4119]: I0216 20:57:01.702195 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4"
Feb 16 20:57:01.702632 master-0 kubenswrapper[4119]: E0216 20:57:01.702227 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert podName:a4c9b781-14c0-469c-bb9e-0c3982a04520 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:05.702191578 +0000 UTC m=+121.632117826 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert") pod "olm-operator-6b56bd877c-vlhvq" (UID: "a4c9b781-14c0-469c-bb9e-0c3982a04520") : secret "olm-operator-serving-cert" not found
Feb 16 20:57:01.702632 master-0 kubenswrapper[4119]: E0216 20:57:01.702255 4119 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 16 20:57:01.702632 master-0 kubenswrapper[4119]: I0216 20:57:01.702282 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb"
Feb 16 20:57:01.702632 master-0 kubenswrapper[4119]: E0216 20:57:01.702306 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls podName:ec7dd4ea-a139-45d4-96a4-506da1567292 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:05.702292851 +0000 UTC m=+121.632218869 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-w57zn" (UID: "ec7dd4ea-a139-45d4-96a4-506da1567292") : secret "cluster-monitoring-operator-tls" not found
Feb 16 20:57:01.702632 master-0 kubenswrapper[4119]: E0216 20:57:01.702370 4119 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 16 20:57:01.702632 master-0 kubenswrapper[4119]: E0216 20:57:01.702641 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls podName:2506c282-0b37-4ece-8a0c-885d0b7f7901 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:05.702422194 +0000 UTC m=+121.632348412 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-kh4d4" (UID: "2506c282-0b37-4ece-8a0c-885d0b7f7901") : secret "node-tuning-operator-tls" not found
Feb 16 20:57:01.703267 master-0 kubenswrapper[4119]: E0216 20:57:01.702703 4119 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Feb 16 20:57:01.703267 master-0 kubenswrapper[4119]: E0216 20:57:01.702756 4119 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 16 20:57:01.703267 master-0 kubenswrapper[4119]: I0216 20:57:01.702794 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls\") pod \"dns-operator-86b8869b79-cdltb\" (UID: \"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd\") " pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb"
Feb 16 20:57:01.703267 master-0 kubenswrapper[4119]: E0216 20:57:01.702836 4119 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 16 20:57:01.703267 master-0 kubenswrapper[4119]: E0216 20:57:01.702879 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls podName:9e0227bc-63f5-48be-95dc-1323a2b2e327 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:05.702850476 +0000 UTC m=+121.632776674 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-4gczb" (UID: "9e0227bc-63f5-48be-95dc-1323a2b2e327") : secret "image-registry-operator-tls" not found
Feb 16 20:57:01.703267 master-0 kubenswrapper[4119]: E0216 20:57:01.702996 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert podName:2e618c5c-52be-4b52-b426-b92555dee9de nodeName:}" failed. No retries permitted until 2026-02-16 20:57:05.702983119 +0000 UTC m=+121.632909327 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert") pod "catalog-operator-588944557d-h7xl6" (UID: "2e618c5c-52be-4b52-b426-b92555dee9de") : secret "catalog-operator-serving-cert" not found
Feb 16 20:57:01.703267 master-0 kubenswrapper[4119]: E0216 20:57:01.703021 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls podName:456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd nodeName:}" failed. No retries permitted until 2026-02-16 20:57:05.703013 +0000 UTC m=+121.632939258 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls") pod "dns-operator-86b8869b79-cdltb" (UID: "456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd") : secret "metrics-tls" not found
Feb 16 20:57:01.803795 master-0 kubenswrapper[4119]: I0216 20:57:01.803723 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq"
Feb 16 20:57:01.803795 master-0 kubenswrapper[4119]: I0216 20:57:01.803795 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d"
Feb 16 20:57:01.804044 master-0 kubenswrapper[4119]: I0216 20:57:01.803979 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-9m94g\" (UID: \"4b035e85-b2b0-4dee-bb86-3465fc4b98a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g"
Feb 16 20:57:01.804220 master-0 kubenswrapper[4119]: I0216 20:57:01.804188 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4"
Feb 16 20:57:01.804276 master-0 kubenswrapper[4119]: E0216 20:57:01.804187 4119 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 16 20:57:01.804276 master-0 kubenswrapper[4119]: I0216 20:57:01.804238 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-z46jt\" (UID: \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt"
Feb 16 20:57:01.804348 master-0 kubenswrapper[4119]: E0216 20:57:01.804291 4119 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 16 20:57:01.804348 master-0 kubenswrapper[4119]: E0216 20:57:01.804216 4119 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 16 20:57:01.804434 master-0 kubenswrapper[4119]: E0216 20:57:01.804363 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics podName:b28234d1-1d9a-4d9f-9ad1-e3c682bed492 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:05.804324179 +0000 UTC m=+121.734250357 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-6rmhq" (UID: "b28234d1-1d9a-4d9f-9ad1-e3c682bed492") : secret "marketplace-operator-metrics" not found
Feb 16 20:57:01.804434 master-0 kubenswrapper[4119]: E0216 20:57:01.804401 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert podName:4b035e85-b2b0-4dee-bb86-3465fc4b98a8 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:05.804386411 +0000 UTC m=+121.734312639 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-9m94g" (UID: "4b035e85-b2b0-4dee-bb86-3465fc4b98a8") : secret "package-server-manager-serving-cert" not found
Feb 16 20:57:01.804529 master-0 kubenswrapper[4119]: E0216 20:57:01.804429 4119 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 16 20:57:01.804567 master-0 kubenswrapper[4119]: E0216 20:57:01.804451 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls podName:cef33294-81fb-41a2-811d-2565f94514d1 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:05.804414741 +0000 UTC m=+121.734340759 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls") pod "ingress-operator-c588d8cb4-6ps2d" (UID: "cef33294-81fb-41a2-811d-2565f94514d1") : secret "metrics-tls" not found
Feb 16 20:57:01.804567 master-0 kubenswrapper[4119]: E0216 20:57:01.804548 4119 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 16 20:57:01.804685 master-0 kubenswrapper[4119]: E0216 20:57:01.804607 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs podName:b27de289-c0f9-47ff-aac6-15b7bc1b178a nodeName:}" failed. No retries permitted until 2026-02-16 20:57:05.804590566 +0000 UTC m=+121.734516764 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs") pod "multus-admission-controller-7c64d55f8-z46jt" (UID: "b27de289-c0f9-47ff-aac6-15b7bc1b178a") : secret "multus-admission-controller-secret" not found
Feb 16 20:57:01.804685 master-0 kubenswrapper[4119]: E0216 20:57:01.804675 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert podName:2506c282-0b37-4ece-8a0c-885d0b7f7901 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:05.804628497 +0000 UTC m=+121.734554645 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-kh4d4" (UID: "2506c282-0b37-4ece-8a0c-885d0b7f7901") : secret "performance-addon-operator-webhook-cert" not found
Feb 16 20:57:05.745116 master-0 kubenswrapper[4119]: I0216 20:57:05.745052 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq"
Feb 16 20:57:05.745116 master-0 kubenswrapper[4119]: I0216 20:57:05.745101 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn"
Feb 16 20:57:05.745985 master-0 kubenswrapper[4119]: I0216 20:57:05.745159 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6"
Feb 16 20:57:05.745985 master-0 kubenswrapper[4119]: E0216 20:57:05.745320 4119 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 16 20:57:05.745985 master-0 kubenswrapper[4119]: E0216 20:57:05.745401 4119 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls podName:ec7dd4ea-a139-45d4-96a4-506da1567292 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.745380677 +0000 UTC m=+129.675306695 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-w57zn" (UID: "ec7dd4ea-a139-45d4-96a4-506da1567292") : secret "cluster-monitoring-operator-tls" not found
Feb 16 20:57:05.745985 master-0 kubenswrapper[4119]: E0216 20:57:05.745413 4119 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Feb 16 20:57:05.745985 master-0 kubenswrapper[4119]: E0216 20:57:05.745506 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert podName:a4c9b781-14c0-469c-bb9e-0c3982a04520 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.745482499 +0000 UTC m=+129.675408577 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert") pod "olm-operator-6b56bd877c-vlhvq" (UID: "a4c9b781-14c0-469c-bb9e-0c3982a04520") : secret "olm-operator-serving-cert" not found
Feb 16 20:57:05.745985 master-0 kubenswrapper[4119]: E0216 20:57:05.745505 4119 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Feb 16 20:57:05.745985 master-0 kubenswrapper[4119]: I0216 20:57:05.745525 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4"
Feb 16 20:57:05.745985 master-0 kubenswrapper[4119]: I0216 20:57:05.745561 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb"
Feb 16 20:57:05.745985 master-0 kubenswrapper[4119]: E0216 20:57:05.745580 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert podName:2e618c5c-52be-4b52-b426-b92555dee9de nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.745560101 +0000 UTC m=+129.675486169 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert") pod "catalog-operator-588944557d-h7xl6" (UID: "2e618c5c-52be-4b52-b426-b92555dee9de") : secret "catalog-operator-serving-cert" not found
Feb 16 20:57:05.745985 master-0 kubenswrapper[4119]: E0216 20:57:05.745783 4119 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 16 20:57:05.745985 master-0 kubenswrapper[4119]: E0216 20:57:05.745871 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls podName:2506c282-0b37-4ece-8a0c-885d0b7f7901 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.745850818 +0000 UTC m=+129.675776836 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-kh4d4" (UID: "2506c282-0b37-4ece-8a0c-885d0b7f7901") : secret "node-tuning-operator-tls" not found
Feb 16 20:57:05.745985 master-0 kubenswrapper[4119]: E0216 20:57:05.745887 4119 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 16 20:57:05.745985 master-0 kubenswrapper[4119]: E0216 20:57:05.745920 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls podName:9e0227bc-63f5-48be-95dc-1323a2b2e327 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.74591125 +0000 UTC m=+129.675837268 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-4gczb" (UID: "9e0227bc-63f5-48be-95dc-1323a2b2e327") : secret "image-registry-operator-tls" not found Feb 16 20:57:05.745985 master-0 kubenswrapper[4119]: I0216 20:57:05.745952 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls\") pod \"dns-operator-86b8869b79-cdltb\" (UID: \"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd\") " pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb" Feb 16 20:57:05.746590 master-0 kubenswrapper[4119]: E0216 20:57:05.746064 4119 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 16 20:57:05.746590 master-0 kubenswrapper[4119]: E0216 20:57:05.746090 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls podName:456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.746082084 +0000 UTC m=+129.676008102 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls") pod "dns-operator-86b8869b79-cdltb" (UID: "456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd") : secret "metrics-tls" not found Feb 16 20:57:05.846876 master-0 kubenswrapper[4119]: I0216 20:57:05.846806 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 20:57:05.846965 master-0 kubenswrapper[4119]: I0216 20:57:05.846882 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 20:57:05.847046 master-0 kubenswrapper[4119]: E0216 20:57:05.847011 4119 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 20:57:05.847102 master-0 kubenswrapper[4119]: E0216 20:57:05.847095 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics podName:b28234d1-1d9a-4d9f-9ad1-e3c682bed492 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.847072996 +0000 UTC m=+129.776999014 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-6rmhq" (UID: "b28234d1-1d9a-4d9f-9ad1-e3c682bed492") : secret "marketplace-operator-metrics" not found Feb 16 20:57:05.847242 master-0 kubenswrapper[4119]: I0216 20:57:05.847213 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-9m94g\" (UID: \"4b035e85-b2b0-4dee-bb86-3465fc4b98a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 20:57:05.847295 master-0 kubenswrapper[4119]: E0216 20:57:05.847231 4119 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 16 20:57:05.847371 master-0 kubenswrapper[4119]: E0216 20:57:05.847346 4119 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 20:57:05.847371 master-0 kubenswrapper[4119]: I0216 20:57:05.847271 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 20:57:05.847506 master-0 kubenswrapper[4119]: E0216 20:57:05.847373 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls podName:cef33294-81fb-41a2-811d-2565f94514d1 nodeName:}" failed. 
No retries permitted until 2026-02-16 20:57:13.847336583 +0000 UTC m=+129.777262641 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls") pod "ingress-operator-c588d8cb4-6ps2d" (UID: "cef33294-81fb-41a2-811d-2565f94514d1") : secret "metrics-tls" not found Feb 16 20:57:05.847506 master-0 kubenswrapper[4119]: I0216 20:57:05.847402 4119 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-z46jt\" (UID: \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" Feb 16 20:57:05.847506 master-0 kubenswrapper[4119]: E0216 20:57:05.847412 4119 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 16 20:57:05.847506 master-0 kubenswrapper[4119]: E0216 20:57:05.847430 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert podName:4b035e85-b2b0-4dee-bb86-3465fc4b98a8 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.847422985 +0000 UTC m=+129.777349003 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-9m94g" (UID: "4b035e85-b2b0-4dee-bb86-3465fc4b98a8") : secret "package-server-manager-serving-cert" not found Feb 16 20:57:05.847506 master-0 kubenswrapper[4119]: E0216 20:57:05.847447 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert podName:2506c282-0b37-4ece-8a0c-885d0b7f7901 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.847435795 +0000 UTC m=+129.777361933 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-kh4d4" (UID: "2506c282-0b37-4ece-8a0c-885d0b7f7901") : secret "performance-addon-operator-webhook-cert" not found Feb 16 20:57:05.847506 master-0 kubenswrapper[4119]: E0216 20:57:05.847480 4119 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 16 20:57:05.847771 master-0 kubenswrapper[4119]: E0216 20:57:05.847519 4119 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs podName:b27de289-c0f9-47ff-aac6-15b7bc1b178a nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.847507187 +0000 UTC m=+129.777433305 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs") pod "multus-admission-controller-7c64d55f8-z46jt" (UID: "b27de289-c0f9-47ff-aac6-15b7bc1b178a") : secret "multus-admission-controller-secret" not found Feb 16 20:57:08.232710 master-0 systemd[1]: Stopping Kubernetes Kubelet... Feb 16 20:57:08.308687 master-0 systemd[1]: kubelet.service: Deactivated successfully. Feb 16 20:57:08.309010 master-0 systemd[1]: Stopped Kubernetes Kubelet. Feb 16 20:57:08.310400 master-0 systemd[1]: kubelet.service: Consumed 10.360s CPU time. Feb 16 20:57:08.314696 master-0 systemd[1]: Starting Kubernetes Kubelet... Feb 16 20:57:08.398684 master-0 kubenswrapper[7926]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 20:57:08.398684 master-0 kubenswrapper[7926]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 16 20:57:08.398684 master-0 kubenswrapper[7926]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 20:57:08.398684 master-0 kubenswrapper[7926]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 20:57:08.398684 master-0 kubenswrapper[7926]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Feb 16 20:57:08.398684 master-0 kubenswrapper[7926]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 20:57:08.400194 master-0 kubenswrapper[7926]: I0216 20:57:08.398763 7926 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 16 20:57:08.401098 master-0 kubenswrapper[7926]: W0216 20:57:08.401077 7926 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 20:57:08.401098 master-0 kubenswrapper[7926]: W0216 20:57:08.401092 7926 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 20:57:08.401098 master-0 kubenswrapper[7926]: W0216 20:57:08.401097 7926 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 20:57:08.401186 master-0 kubenswrapper[7926]: W0216 20:57:08.401102 7926 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 20:57:08.401186 master-0 kubenswrapper[7926]: W0216 20:57:08.401107 7926 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 20:57:08.401186 master-0 kubenswrapper[7926]: W0216 20:57:08.401111 7926 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 20:57:08.401186 master-0 kubenswrapper[7926]: W0216 20:57:08.401115 7926 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 20:57:08.401186 master-0 kubenswrapper[7926]: W0216 20:57:08.401119 7926 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 20:57:08.401186 master-0 kubenswrapper[7926]: W0216 20:57:08.401123 7926 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 20:57:08.401186 master-0 kubenswrapper[7926]: W0216 20:57:08.401127 7926 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 20:57:08.401186 master-0 kubenswrapper[7926]: W0216 20:57:08.401131 7926 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 20:57:08.401186 master-0 kubenswrapper[7926]: W0216 20:57:08.401135 7926 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 20:57:08.401186 master-0 kubenswrapper[7926]: W0216 20:57:08.401138 7926 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 20:57:08.401186 master-0 kubenswrapper[7926]: W0216 20:57:08.401141 7926 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 20:57:08.401186 master-0 kubenswrapper[7926]: W0216 20:57:08.401145 7926 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 20:57:08.401186 master-0 kubenswrapper[7926]: W0216 20:57:08.401148 7926 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 20:57:08.401186 master-0 kubenswrapper[7926]: W0216 20:57:08.401152 7926 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 20:57:08.401186 master-0 kubenswrapper[7926]: W0216 20:57:08.401156 7926 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 20:57:08.401186 master-0 kubenswrapper[7926]: W0216 20:57:08.401159 7926 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 20:57:08.401186 master-0 kubenswrapper[7926]: W0216 20:57:08.401163 7926 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 20:57:08.401186 master-0 kubenswrapper[7926]: W0216 20:57:08.401166 7926 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 20:57:08.401186 master-0 kubenswrapper[7926]: W0216 20:57:08.401170 7926 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 20:57:08.401186 master-0 kubenswrapper[7926]: W0216 20:57:08.401175 7926 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 20:57:08.401742 master-0 kubenswrapper[7926]: W0216 20:57:08.401180 7926 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 20:57:08.401742 master-0 kubenswrapper[7926]: W0216 20:57:08.401184 7926 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 20:57:08.401742 master-0 kubenswrapper[7926]: W0216 20:57:08.401188 7926 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 20:57:08.401742 master-0 kubenswrapper[7926]: W0216 20:57:08.401192 7926 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 20:57:08.401742 master-0 kubenswrapper[7926]: W0216 20:57:08.401195 7926 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 20:57:08.401742 master-0 kubenswrapper[7926]: W0216 20:57:08.401199 7926 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 20:57:08.401742 master-0 kubenswrapper[7926]: W0216 20:57:08.401202 7926 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 20:57:08.401742 master-0 kubenswrapper[7926]: W0216 20:57:08.401207 7926 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 20:57:08.401742 master-0 kubenswrapper[7926]: W0216 20:57:08.401211 7926 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 20:57:08.401742 master-0 kubenswrapper[7926]: W0216 20:57:08.401214 7926 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 20:57:08.401742 master-0 kubenswrapper[7926]: W0216 20:57:08.401218 7926 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 20:57:08.401742 master-0 kubenswrapper[7926]: W0216 20:57:08.401222 7926 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 20:57:08.401742 master-0 kubenswrapper[7926]: W0216 20:57:08.401226 7926 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 20:57:08.401742 master-0 kubenswrapper[7926]: W0216 20:57:08.401229 7926 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 20:57:08.401742 master-0 kubenswrapper[7926]: W0216 20:57:08.401233 7926 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 20:57:08.401742 master-0 kubenswrapper[7926]: W0216 20:57:08.401236 7926 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 20:57:08.401742 master-0 kubenswrapper[7926]: W0216 20:57:08.401239 7926 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 20:57:08.401742 master-0 kubenswrapper[7926]: W0216 20:57:08.401243 7926 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 20:57:08.401742 master-0 kubenswrapper[7926]: W0216 20:57:08.401246 7926 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 20:57:08.401742 master-0 kubenswrapper[7926]: W0216 20:57:08.401250 7926 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 20:57:08.402208 master-0 kubenswrapper[7926]: W0216 20:57:08.401253 7926 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 20:57:08.402208 master-0 kubenswrapper[7926]: W0216 20:57:08.401256 7926 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 20:57:08.402208 master-0 kubenswrapper[7926]: W0216 20:57:08.401260 7926 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 20:57:08.402208 master-0 kubenswrapper[7926]: W0216 20:57:08.401263 7926 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 20:57:08.402208 master-0 kubenswrapper[7926]: W0216 20:57:08.401267 7926 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 20:57:08.402208 master-0 kubenswrapper[7926]: W0216 20:57:08.401270 7926 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 20:57:08.402208 master-0 kubenswrapper[7926]: W0216 20:57:08.401273 7926 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 20:57:08.402208 master-0 kubenswrapper[7926]: W0216 20:57:08.401277 7926 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 20:57:08.402208 master-0 kubenswrapper[7926]: W0216 20:57:08.401281 7926 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 20:57:08.402208 master-0 kubenswrapper[7926]: W0216 20:57:08.401285 7926 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 20:57:08.402208 master-0 kubenswrapper[7926]: W0216 20:57:08.401289 7926 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 20:57:08.402208 master-0 kubenswrapper[7926]: W0216 20:57:08.401294 7926 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 20:57:08.402208 master-0 kubenswrapper[7926]: W0216 20:57:08.401299 7926 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 20:57:08.402208 master-0 kubenswrapper[7926]: W0216 20:57:08.401303 7926 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 20:57:08.402208 master-0 kubenswrapper[7926]: W0216 20:57:08.401307 7926 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 20:57:08.402208 master-0 kubenswrapper[7926]: W0216 20:57:08.401311 7926 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 20:57:08.402208 master-0 kubenswrapper[7926]: W0216 20:57:08.401314 7926 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 20:57:08.402208 master-0 kubenswrapper[7926]: W0216 20:57:08.401318 7926 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 20:57:08.402208 master-0 kubenswrapper[7926]: W0216 20:57:08.401321 7926 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 20:57:08.402208 master-0 kubenswrapper[7926]: W0216 20:57:08.401325 7926 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 20:57:08.402695 master-0 kubenswrapper[7926]: W0216 20:57:08.401330 7926 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 20:57:08.402695 master-0 kubenswrapper[7926]: W0216 20:57:08.401334 7926 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 20:57:08.402695 master-0 kubenswrapper[7926]: W0216 20:57:08.401340 7926 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 20:57:08.402695 master-0 kubenswrapper[7926]: W0216 20:57:08.401345 7926 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 20:57:08.402695 master-0 kubenswrapper[7926]: W0216 20:57:08.401350 7926 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 20:57:08.402695 master-0 kubenswrapper[7926]: W0216 20:57:08.401355 7926 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 20:57:08.402695 master-0 kubenswrapper[7926]: W0216 20:57:08.401359 7926 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 20:57:08.402695 master-0 kubenswrapper[7926]: W0216 20:57:08.401364 7926 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 20:57:08.402695 master-0 kubenswrapper[7926]: W0216 20:57:08.401369 7926 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 20:57:08.402695 master-0 kubenswrapper[7926]: I0216 20:57:08.401458 7926 flags.go:64] FLAG: --address="0.0.0.0"
Feb 16 20:57:08.402695 master-0 kubenswrapper[7926]: I0216 20:57:08.401466 7926 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 16 20:57:08.402695 master-0 kubenswrapper[7926]: I0216 20:57:08.401473 7926 flags.go:64] FLAG: --anonymous-auth="true"
Feb 16 20:57:08.402695 master-0 kubenswrapper[7926]: I0216 20:57:08.401478 7926 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 16 20:57:08.402695 master-0 kubenswrapper[7926]: I0216 20:57:08.401484 7926 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 16 20:57:08.402695 master-0 kubenswrapper[7926]: I0216 20:57:08.401489 7926 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 16 20:57:08.402695 master-0 kubenswrapper[7926]: I0216 20:57:08.401495 7926 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 16 20:57:08.402695 master-0 kubenswrapper[7926]: I0216 20:57:08.401500 7926 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 16 20:57:08.402695 master-0 kubenswrapper[7926]: I0216 20:57:08.401505 7926 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 16 20:57:08.402695 master-0 kubenswrapper[7926]: I0216 20:57:08.401509 7926 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 16 20:57:08.402695 master-0 kubenswrapper[7926]: I0216 20:57:08.401514 7926 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 16 20:57:08.402695 master-0 kubenswrapper[7926]: I0216 20:57:08.401518 7926 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 16 20:57:08.403237 master-0 kubenswrapper[7926]: I0216 20:57:08.401523 7926 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 16 20:57:08.403237 master-0 kubenswrapper[7926]: I0216 20:57:08.401527 7926 flags.go:64] FLAG: --cgroup-root=""
Feb 16 20:57:08.403237 master-0 kubenswrapper[7926]: I0216 20:57:08.401532 7926 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 16 20:57:08.403237 master-0 kubenswrapper[7926]: I0216 20:57:08.401536 7926 flags.go:64] FLAG: --client-ca-file=""
Feb 16 20:57:08.403237 master-0 kubenswrapper[7926]: I0216 20:57:08.401542 7926 flags.go:64] FLAG: --cloud-config=""
Feb 16 20:57:08.403237 master-0 kubenswrapper[7926]: I0216 20:57:08.401547 7926 flags.go:64] FLAG: --cloud-provider=""
Feb 16 20:57:08.403237 master-0 kubenswrapper[7926]: I0216 20:57:08.401551 7926 flags.go:64] FLAG: --cluster-dns="[]"
Feb 16 20:57:08.403237 master-0 kubenswrapper[7926]: I0216 20:57:08.401556 7926 flags.go:64] FLAG: --cluster-domain=""
Feb 16 20:57:08.403237 master-0 kubenswrapper[7926]: I0216 20:57:08.401560 7926 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 16 20:57:08.403237 master-0 kubenswrapper[7926]: I0216 20:57:08.401564 7926 flags.go:64] FLAG: --config-dir=""
Feb 16 20:57:08.403237 master-0 kubenswrapper[7926]: I0216 20:57:08.401568 7926 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 16 20:57:08.403237 master-0 kubenswrapper[7926]: I0216 20:57:08.401573 7926 flags.go:64] FLAG: --container-log-max-files="5"
Feb 16 20:57:08.403237 master-0 kubenswrapper[7926]: I0216 20:57:08.401578 7926 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 16 20:57:08.403237 master-0 kubenswrapper[7926]: I0216 20:57:08.401582 7926 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 16 20:57:08.403237 master-0 kubenswrapper[7926]: I0216 20:57:08.401586 7926 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 16 20:57:08.403237 master-0 kubenswrapper[7926]: I0216 20:57:08.401591 7926 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 16 20:57:08.403237 master-0 kubenswrapper[7926]: I0216 20:57:08.401595 7926 flags.go:64] FLAG: --contention-profiling="false"
Feb 16 20:57:08.403237 master-0 kubenswrapper[7926]: I0216 20:57:08.401599 7926 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 16 20:57:08.403237 master-0 kubenswrapper[7926]: I0216 20:57:08.401604 7926 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 16 20:57:08.403237 master-0 kubenswrapper[7926]: I0216 20:57:08.401608 7926 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 16 20:57:08.403237 master-0 kubenswrapper[7926]: I0216 20:57:08.401613 7926 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 16 20:57:08.403237 master-0 kubenswrapper[7926]: I0216 20:57:08.401617 7926 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 16 20:57:08.403237 master-0 kubenswrapper[7926]: I0216 20:57:08.401623 7926 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 16 20:57:08.403237 master-0 kubenswrapper[7926]: I0216 20:57:08.401627 7926 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 16 20:57:08.403237 master-0 kubenswrapper[7926]: I0216 20:57:08.401631 7926 flags.go:64] FLAG: --enable-load-reader="false"
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401635 7926 flags.go:64] FLAG: --enable-server="true"
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401640 7926 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401647 7926 flags.go:64] FLAG: --event-burst="100"
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401652 7926 flags.go:64] FLAG: --event-qps="50"
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401656 7926 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401676 7926 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401681 7926 flags.go:64] FLAG: --eviction-hard=""
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401687 7926 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401691 7926 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401696 7926 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401700 7926 flags.go:64] FLAG: --eviction-soft=""
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401705 7926 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401709 7926 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401713 7926 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401717 7926 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401721 7926 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401725 7926 flags.go:64] FLAG: --fail-swap-on="true"
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401729 7926 flags.go:64] FLAG: --feature-gates=""
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401734 7926 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401738 7926 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401742 7926 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401747 7926 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401751 7926 flags.go:64] FLAG: --healthz-port="10248"
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401755 7926 flags.go:64] FLAG: --help="false"
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401759 7926 flags.go:64] FLAG: --hostname-override=""
Feb 16 20:57:08.403809 master-0 kubenswrapper[7926]: I0216 20:57:08.401763 7926 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 16 20:57:08.404600 master-0 kubenswrapper[7926]: I0216 20:57:08.401767 7926 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 16 20:57:08.404600 master-0 kubenswrapper[7926]: I0216 20:57:08.401771 7926 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 16 20:57:08.404600 master-0 kubenswrapper[7926]: I0216 20:57:08.401775 7926 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 16 20:57:08.404600 master-0 kubenswrapper[7926]: I0216 20:57:08.401778 7926 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 16 20:57:08.404600 master-0 kubenswrapper[7926]: I0216 20:57:08.401782 7926 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 16 20:57:08.404600 master-0 kubenswrapper[7926]: I0216 20:57:08.401786 7926 flags.go:64] FLAG: --image-service-endpoint=""
Feb 16 20:57:08.404600 master-0 kubenswrapper[7926]: I0216 20:57:08.401791 7926 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 16 20:57:08.404600 master-0 kubenswrapper[7926]: I0216 20:57:08.401795 7926 flags.go:64] FLAG: --kube-api-burst="100"
Feb 16 20:57:08.404600 master-0 kubenswrapper[7926]: I0216 20:57:08.401800 7926 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 16 20:57:08.404600 master-0 kubenswrapper[7926]: I0216 20:57:08.401804 7926 flags.go:64] FLAG: --kube-api-qps="50"
Feb 16 20:57:08.404600 master-0 kubenswrapper[7926]: I0216 20:57:08.401808 7926 flags.go:64] FLAG: --kube-reserved=""
Feb 16 20:57:08.404600 master-0 kubenswrapper[7926]: I0216 20:57:08.401812 7926 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 16 20:57:08.404600 master-0 kubenswrapper[7926]: I0216 20:57:08.401816 7926 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 16 20:57:08.404600 master-0 kubenswrapper[7926]: I0216 20:57:08.401820 7926 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 16 20:57:08.404600 master-0 kubenswrapper[7926]: I0216 20:57:08.401824 7926 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 16 20:57:08.404600 master-0 kubenswrapper[7926]: I0216 20:57:08.401828 7926 flags.go:64] FLAG: --lock-file=""
Feb 16 20:57:08.404600 master-0 kubenswrapper[7926]: I0216 20:57:08.401833 7926 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 16 20:57:08.404600 master-0 kubenswrapper[7926]: I0216 20:57:08.401837 7926 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 16 20:57:08.404600 master-0 kubenswrapper[7926]: I0216 20:57:08.401841 7926 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 16 20:57:08.404600 master-0 kubenswrapper[7926]: I0216 20:57:08.401847 7926 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 16 20:57:08.404600 master-0 kubenswrapper[7926]: I0216 20:57:08.401851 7926 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 16 20:57:08.404600 master-0 kubenswrapper[7926]: I0216 20:57:08.401855 7926 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 16 20:57:08.404600 master-0 kubenswrapper[7926]: I0216 20:57:08.401859 7926 flags.go:64] FLAG: --logging-format="text"
Feb 16 20:57:08.404600 master-0 kubenswrapper[7926]: I0216 20:57:08.401863 7926 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 16 20:57:08.404600 master-0 kubenswrapper[7926]: I0216 20:57:08.401867 7926 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 16 20:57:08.405259 master-0 kubenswrapper[7926]: I0216 20:57:08.401871 7926 flags.go:64] FLAG: --manifest-url=""
Feb 16 20:57:08.405259 master-0 kubenswrapper[7926]: I0216 20:57:08.401875 7926 flags.go:64] FLAG: --manifest-url-header=""
Feb 16 20:57:08.405259 master-0 kubenswrapper[7926]: I0216 20:57:08.401881 7926 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 16 20:57:08.405259 master-0 kubenswrapper[7926]: I0216 20:57:08.401885 7926 flags.go:64] FLAG: --max-open-files="1000000"
Feb 16 20:57:08.405259 master-0 kubenswrapper[7926]: I0216 20:57:08.401890 7926 flags.go:64] FLAG: --max-pods="110"
Feb 16 20:57:08.405259 master-0 kubenswrapper[7926]: I0216 20:57:08.401894 7926 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 16 20:57:08.405259 master-0 kubenswrapper[7926]: I0216 20:57:08.401898 7926 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 16
20:57:08.405259 master-0 kubenswrapper[7926]: I0216 20:57:08.401902 7926 flags.go:64] FLAG: --memory-manager-policy="None" Feb 16 20:57:08.405259 master-0 kubenswrapper[7926]: I0216 20:57:08.401906 7926 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 16 20:57:08.405259 master-0 kubenswrapper[7926]: I0216 20:57:08.401910 7926 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 16 20:57:08.405259 master-0 kubenswrapper[7926]: I0216 20:57:08.401913 7926 flags.go:64] FLAG: --node-ip="192.168.32.10" Feb 16 20:57:08.405259 master-0 kubenswrapper[7926]: I0216 20:57:08.401917 7926 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 16 20:57:08.405259 master-0 kubenswrapper[7926]: I0216 20:57:08.401926 7926 flags.go:64] FLAG: --node-status-max-images="50" Feb 16 20:57:08.405259 master-0 kubenswrapper[7926]: I0216 20:57:08.401930 7926 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 16 20:57:08.405259 master-0 kubenswrapper[7926]: I0216 20:57:08.401934 7926 flags.go:64] FLAG: --oom-score-adj="-999" Feb 16 20:57:08.405259 master-0 kubenswrapper[7926]: I0216 20:57:08.401939 7926 flags.go:64] FLAG: --pod-cidr="" Feb 16 20:57:08.405259 master-0 kubenswrapper[7926]: I0216 20:57:08.401943 7926 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1593b6aac7bb18c1bbb5d41693e8b8c7f0c0410fcc09e15de52d8bd53e356541" Feb 16 20:57:08.405259 master-0 kubenswrapper[7926]: I0216 20:57:08.401949 7926 flags.go:64] FLAG: --pod-manifest-path="" Feb 16 20:57:08.405259 master-0 kubenswrapper[7926]: I0216 20:57:08.401956 7926 flags.go:64] FLAG: --pod-max-pids="-1" Feb 16 20:57:08.405259 master-0 kubenswrapper[7926]: I0216 20:57:08.401960 7926 flags.go:64] FLAG: --pods-per-core="0" Feb 16 20:57:08.405259 master-0 kubenswrapper[7926]: I0216 20:57:08.401964 7926 flags.go:64] FLAG: --port="10250" Feb 16 20:57:08.405259 master-0 
kubenswrapper[7926]: I0216 20:57:08.401969 7926 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 16 20:57:08.405259 master-0 kubenswrapper[7926]: I0216 20:57:08.401981 7926 flags.go:64] FLAG: --provider-id="" Feb 16 20:57:08.405259 master-0 kubenswrapper[7926]: I0216 20:57:08.401985 7926 flags.go:64] FLAG: --qos-reserved="" Feb 16 20:57:08.405877 master-0 kubenswrapper[7926]: I0216 20:57:08.401989 7926 flags.go:64] FLAG: --read-only-port="10255" Feb 16 20:57:08.405877 master-0 kubenswrapper[7926]: I0216 20:57:08.401993 7926 flags.go:64] FLAG: --register-node="true" Feb 16 20:57:08.405877 master-0 kubenswrapper[7926]: I0216 20:57:08.401997 7926 flags.go:64] FLAG: --register-schedulable="true" Feb 16 20:57:08.405877 master-0 kubenswrapper[7926]: I0216 20:57:08.402002 7926 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 16 20:57:08.405877 master-0 kubenswrapper[7926]: I0216 20:57:08.402009 7926 flags.go:64] FLAG: --registry-burst="10" Feb 16 20:57:08.405877 master-0 kubenswrapper[7926]: I0216 20:57:08.402013 7926 flags.go:64] FLAG: --registry-qps="5" Feb 16 20:57:08.405877 master-0 kubenswrapper[7926]: I0216 20:57:08.402017 7926 flags.go:64] FLAG: --reserved-cpus="" Feb 16 20:57:08.405877 master-0 kubenswrapper[7926]: I0216 20:57:08.402021 7926 flags.go:64] FLAG: --reserved-memory="" Feb 16 20:57:08.405877 master-0 kubenswrapper[7926]: I0216 20:57:08.402026 7926 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 16 20:57:08.405877 master-0 kubenswrapper[7926]: I0216 20:57:08.402030 7926 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 16 20:57:08.405877 master-0 kubenswrapper[7926]: I0216 20:57:08.402034 7926 flags.go:64] FLAG: --rotate-certificates="false" Feb 16 20:57:08.405877 master-0 kubenswrapper[7926]: I0216 20:57:08.402038 7926 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 16 20:57:08.405877 master-0 kubenswrapper[7926]: I0216 20:57:08.402042 7926 flags.go:64] FLAG: --runonce="false" Feb 
16 20:57:08.405877 master-0 kubenswrapper[7926]: I0216 20:57:08.402046 7926 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 16 20:57:08.405877 master-0 kubenswrapper[7926]: I0216 20:57:08.402050 7926 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 16 20:57:08.405877 master-0 kubenswrapper[7926]: I0216 20:57:08.402054 7926 flags.go:64] FLAG: --seccomp-default="false" Feb 16 20:57:08.405877 master-0 kubenswrapper[7926]: I0216 20:57:08.402058 7926 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 16 20:57:08.405877 master-0 kubenswrapper[7926]: I0216 20:57:08.402062 7926 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 16 20:57:08.405877 master-0 kubenswrapper[7926]: I0216 20:57:08.402066 7926 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 16 20:57:08.405877 master-0 kubenswrapper[7926]: I0216 20:57:08.402070 7926 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 16 20:57:08.405877 master-0 kubenswrapper[7926]: I0216 20:57:08.402074 7926 flags.go:64] FLAG: --storage-driver-password="root" Feb 16 20:57:08.405877 master-0 kubenswrapper[7926]: I0216 20:57:08.402078 7926 flags.go:64] FLAG: --storage-driver-secure="false" Feb 16 20:57:08.405877 master-0 kubenswrapper[7926]: I0216 20:57:08.402082 7926 flags.go:64] FLAG: --storage-driver-table="stats" Feb 16 20:57:08.405877 master-0 kubenswrapper[7926]: I0216 20:57:08.402086 7926 flags.go:64] FLAG: --storage-driver-user="root" Feb 16 20:57:08.405877 master-0 kubenswrapper[7926]: I0216 20:57:08.402090 7926 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 16 20:57:08.406943 master-0 kubenswrapper[7926]: I0216 20:57:08.402095 7926 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 16 20:57:08.406943 master-0 kubenswrapper[7926]: I0216 20:57:08.402099 7926 flags.go:64] FLAG: --system-cgroups="" Feb 16 20:57:08.406943 master-0 kubenswrapper[7926]: I0216 20:57:08.402103 7926 flags.go:64] FLAG: 
--system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Feb 16 20:57:08.406943 master-0 kubenswrapper[7926]: I0216 20:57:08.402110 7926 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 16 20:57:08.406943 master-0 kubenswrapper[7926]: I0216 20:57:08.402114 7926 flags.go:64] FLAG: --tls-cert-file="" Feb 16 20:57:08.406943 master-0 kubenswrapper[7926]: I0216 20:57:08.402119 7926 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 16 20:57:08.406943 master-0 kubenswrapper[7926]: I0216 20:57:08.402124 7926 flags.go:64] FLAG: --tls-min-version="" Feb 16 20:57:08.406943 master-0 kubenswrapper[7926]: I0216 20:57:08.402128 7926 flags.go:64] FLAG: --tls-private-key-file="" Feb 16 20:57:08.406943 master-0 kubenswrapper[7926]: I0216 20:57:08.402132 7926 flags.go:64] FLAG: --topology-manager-policy="none" Feb 16 20:57:08.406943 master-0 kubenswrapper[7926]: I0216 20:57:08.402135 7926 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 16 20:57:08.406943 master-0 kubenswrapper[7926]: I0216 20:57:08.402140 7926 flags.go:64] FLAG: --topology-manager-scope="container" Feb 16 20:57:08.406943 master-0 kubenswrapper[7926]: I0216 20:57:08.402144 7926 flags.go:64] FLAG: --v="2" Feb 16 20:57:08.406943 master-0 kubenswrapper[7926]: I0216 20:57:08.402149 7926 flags.go:64] FLAG: --version="false" Feb 16 20:57:08.406943 master-0 kubenswrapper[7926]: I0216 20:57:08.402154 7926 flags.go:64] FLAG: --vmodule="" Feb 16 20:57:08.406943 master-0 kubenswrapper[7926]: I0216 20:57:08.402159 7926 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 16 20:57:08.406943 master-0 kubenswrapper[7926]: I0216 20:57:08.402163 7926 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 16 20:57:08.406943 master-0 kubenswrapper[7926]: W0216 20:57:08.402269 7926 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 20:57:08.406943 master-0 kubenswrapper[7926]: W0216 20:57:08.402274 7926 feature_gate.go:330] unrecognized feature gate: 
InsightsConfigAPI Feb 16 20:57:08.406943 master-0 kubenswrapper[7926]: W0216 20:57:08.402278 7926 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 20:57:08.406943 master-0 kubenswrapper[7926]: W0216 20:57:08.402282 7926 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 20:57:08.406943 master-0 kubenswrapper[7926]: W0216 20:57:08.402286 7926 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 20:57:08.406943 master-0 kubenswrapper[7926]: W0216 20:57:08.402290 7926 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 20:57:08.406943 master-0 kubenswrapper[7926]: W0216 20:57:08.402294 7926 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 20:57:08.406943 master-0 kubenswrapper[7926]: W0216 20:57:08.402298 7926 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 20:57:08.407488 master-0 kubenswrapper[7926]: W0216 20:57:08.402302 7926 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 20:57:08.407488 master-0 kubenswrapper[7926]: W0216 20:57:08.402307 7926 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 20:57:08.407488 master-0 kubenswrapper[7926]: W0216 20:57:08.402310 7926 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 20:57:08.407488 master-0 kubenswrapper[7926]: W0216 20:57:08.402314 7926 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 20:57:08.407488 master-0 kubenswrapper[7926]: W0216 20:57:08.402317 7926 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 20:57:08.407488 master-0 kubenswrapper[7926]: W0216 20:57:08.402321 7926 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 20:57:08.407488 master-0 kubenswrapper[7926]: W0216 20:57:08.402325 7926 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 20:57:08.407488 master-0 kubenswrapper[7926]: W0216 20:57:08.402329 7926 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 20:57:08.407488 master-0 kubenswrapper[7926]: W0216 20:57:08.402332 7926 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 20:57:08.407488 master-0 kubenswrapper[7926]: W0216 20:57:08.402336 7926 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 20:57:08.407488 master-0 kubenswrapper[7926]: W0216 20:57:08.402339 7926 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 20:57:08.407488 master-0 kubenswrapper[7926]: W0216 20:57:08.402343 7926 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 20:57:08.407488 master-0 kubenswrapper[7926]: W0216 20:57:08.402347 7926 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 20:57:08.407488 master-0 kubenswrapper[7926]: W0216 20:57:08.402350 7926 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 20:57:08.407488 master-0 kubenswrapper[7926]: W0216 20:57:08.402355 7926 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 20:57:08.407488 master-0 kubenswrapper[7926]: W0216 20:57:08.402359 7926 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 20:57:08.407488 master-0 kubenswrapper[7926]: W0216 20:57:08.402362 7926 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 20:57:08.407488 master-0 kubenswrapper[7926]: W0216 20:57:08.402367 7926 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 20:57:08.407488 master-0 kubenswrapper[7926]: W0216 20:57:08.402371 7926 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 20:57:08.407936 master-0 kubenswrapper[7926]: W0216 20:57:08.402375 7926 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 20:57:08.407936 master-0 kubenswrapper[7926]: W0216 20:57:08.402380 7926 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 20:57:08.407936 master-0 kubenswrapper[7926]: W0216 20:57:08.402384 7926 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 20:57:08.407936 master-0 kubenswrapper[7926]: W0216 20:57:08.402388 7926 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 20:57:08.407936 master-0 kubenswrapper[7926]: W0216 20:57:08.402391 7926 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 20:57:08.407936 master-0 kubenswrapper[7926]: W0216 20:57:08.402395 7926 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 20:57:08.407936 master-0 kubenswrapper[7926]: W0216 20:57:08.402398 7926 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 20:57:08.407936 master-0 kubenswrapper[7926]: W0216 20:57:08.402401 7926 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 20:57:08.407936 master-0 kubenswrapper[7926]: W0216 20:57:08.402405 7926 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 20:57:08.407936 master-0 kubenswrapper[7926]: W0216 20:57:08.402409 7926 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 20:57:08.407936 master-0 kubenswrapper[7926]: W0216 20:57:08.402412 7926 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 20:57:08.407936 master-0 kubenswrapper[7926]: W0216 20:57:08.402416 7926 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 20:57:08.407936 master-0 kubenswrapper[7926]: W0216 20:57:08.402420 7926 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 20:57:08.407936 master-0 kubenswrapper[7926]: W0216 20:57:08.402424 7926 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 20:57:08.407936 master-0 kubenswrapper[7926]: W0216 20:57:08.402428 7926 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 20:57:08.407936 master-0 kubenswrapper[7926]: W0216 20:57:08.402432 7926 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 20:57:08.407936 master-0 kubenswrapper[7926]: W0216 20:57:08.402436 7926 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 20:57:08.407936 master-0 kubenswrapper[7926]: W0216 20:57:08.402439 7926 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 20:57:08.407936 master-0 kubenswrapper[7926]: W0216 20:57:08.402443 7926 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 20:57:08.408461 master-0 kubenswrapper[7926]: W0216 20:57:08.402446 7926 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 20:57:08.408461 master-0 kubenswrapper[7926]: W0216 20:57:08.402451 7926 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 20:57:08.408461 master-0 kubenswrapper[7926]: W0216 20:57:08.402454 7926 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 20:57:08.408461 master-0 kubenswrapper[7926]: W0216 20:57:08.402458 7926 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 20:57:08.408461 master-0 kubenswrapper[7926]: W0216 20:57:08.402462 7926 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 20:57:08.408461 master-0 kubenswrapper[7926]: W0216 20:57:08.402465 7926 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 20:57:08.408461 master-0 kubenswrapper[7926]: W0216 20:57:08.402469 7926 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 20:57:08.408461 master-0 kubenswrapper[7926]: W0216 20:57:08.402473 7926 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 20:57:08.408461 master-0 kubenswrapper[7926]: W0216 20:57:08.402476 7926 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 20:57:08.408461 master-0 kubenswrapper[7926]: W0216 20:57:08.402480 7926 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 20:57:08.408461 master-0 kubenswrapper[7926]: W0216 20:57:08.402483 7926 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 20:57:08.408461 master-0 kubenswrapper[7926]: W0216 20:57:08.402487 7926 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 20:57:08.408461 master-0 kubenswrapper[7926]: W0216 20:57:08.402490 7926 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 20:57:08.408461 master-0 kubenswrapper[7926]: W0216 20:57:08.402494 7926 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 20:57:08.408461 master-0 kubenswrapper[7926]: W0216 20:57:08.402497 7926 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 20:57:08.408461 master-0 kubenswrapper[7926]: W0216 20:57:08.402502 7926 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 20:57:08.408461 master-0 kubenswrapper[7926]: W0216 20:57:08.402506 7926 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 20:57:08.408461 master-0 kubenswrapper[7926]: W0216 20:57:08.402510 7926 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 20:57:08.408461 master-0 kubenswrapper[7926]: W0216 20:57:08.402514 7926 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 20:57:08.408461 master-0 kubenswrapper[7926]: W0216 20:57:08.402518 7926 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 20:57:08.408986 master-0 kubenswrapper[7926]: W0216 20:57:08.402521 7926 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 20:57:08.408986 master-0 kubenswrapper[7926]: W0216 20:57:08.402526 7926 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 20:57:08.408986 master-0 kubenswrapper[7926]: W0216 20:57:08.402530 7926 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 20:57:08.408986 master-0 kubenswrapper[7926]: W0216 20:57:08.402534 7926 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 20:57:08.408986 master-0 kubenswrapper[7926]: W0216 20:57:08.402538 7926 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 20:57:08.408986 master-0 kubenswrapper[7926]: W0216 20:57:08.402541 7926 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 20:57:08.408986 master-0 kubenswrapper[7926]: I0216 20:57:08.402553 7926 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 20:57:08.409280 master-0 kubenswrapper[7926]: I0216 20:57:08.409227 7926 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Feb 16 20:57:08.409280 master-0 kubenswrapper[7926]: I0216 20:57:08.409271 7926 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 16 20:57:08.409410 master-0 kubenswrapper[7926]: W0216 20:57:08.409387 7926 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 20:57:08.409410 master-0 kubenswrapper[7926]: W0216 20:57:08.409400 7926 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 20:57:08.409410 master-0 kubenswrapper[7926]: W0216 20:57:08.409405 7926 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 20:57:08.409410 master-0 kubenswrapper[7926]: W0216 20:57:08.409410 7926 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 20:57:08.409410 master-0 kubenswrapper[7926]: W0216 20:57:08.409415 7926 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 20:57:08.409528 master-0 kubenswrapper[7926]: W0216 20:57:08.409419 7926 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 20:57:08.409528 master-0 kubenswrapper[7926]: W0216 20:57:08.409424 7926 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 20:57:08.409528 master-0 kubenswrapper[7926]: W0216 20:57:08.409428 7926 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 20:57:08.409528 master-0 kubenswrapper[7926]: W0216 20:57:08.409433 7926 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 20:57:08.409528 master-0 kubenswrapper[7926]: W0216 20:57:08.409437 7926 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 20:57:08.409528 master-0 kubenswrapper[7926]: W0216 20:57:08.409440 7926 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 20:57:08.409528 master-0 kubenswrapper[7926]: W0216 20:57:08.409444 7926 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 20:57:08.409528 master-0 kubenswrapper[7926]: W0216 20:57:08.409448 7926 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 20:57:08.409528 master-0 kubenswrapper[7926]: W0216 20:57:08.409452 7926 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 20:57:08.409528 master-0 kubenswrapper[7926]: W0216 20:57:08.409455 7926 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 20:57:08.409528 master-0 kubenswrapper[7926]: W0216 20:57:08.409459 7926 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 20:57:08.409528 master-0 kubenswrapper[7926]: W0216 20:57:08.409464 7926 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 20:57:08.409528 master-0 kubenswrapper[7926]: W0216 20:57:08.409471 7926 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 20:57:08.409528 master-0 kubenswrapper[7926]: W0216 20:57:08.409475 7926 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 20:57:08.409528 master-0 kubenswrapper[7926]: W0216 20:57:08.409479 7926 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 20:57:08.409528 master-0 kubenswrapper[7926]: W0216 20:57:08.409483 7926 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 20:57:08.409528 master-0 kubenswrapper[7926]: W0216 20:57:08.409486 7926 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 20:57:08.409528 master-0 kubenswrapper[7926]: W0216 20:57:08.409491 7926 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 20:57:08.409528 master-0 kubenswrapper[7926]: W0216 20:57:08.409495 7926 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 20:57:08.409528 master-0 kubenswrapper[7926]: W0216 20:57:08.409499 7926 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 20:57:08.410007 master-0 kubenswrapper[7926]: W0216 20:57:08.409502 7926 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 20:57:08.410007 master-0 kubenswrapper[7926]: W0216 20:57:08.409506 7926 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 20:57:08.410007 master-0 kubenswrapper[7926]: W0216 20:57:08.409509 7926 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 20:57:08.410007 master-0 kubenswrapper[7926]: W0216 20:57:08.409514 7926 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 20:57:08.410007 master-0 kubenswrapper[7926]: W0216 20:57:08.409518 7926 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 20:57:08.410007 master-0 kubenswrapper[7926]: W0216 20:57:08.409522 7926 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 20:57:08.410007 master-0 kubenswrapper[7926]: W0216 20:57:08.409528 7926 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 20:57:08.410007 master-0 kubenswrapper[7926]: W0216 20:57:08.409533 7926 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 20:57:08.410007 master-0 kubenswrapper[7926]: W0216 20:57:08.409538 7926 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 20:57:08.410007 master-0 kubenswrapper[7926]: W0216 20:57:08.409544 7926 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 20:57:08.410007 master-0 kubenswrapper[7926]: W0216 20:57:08.409549 7926 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 20:57:08.410007 master-0 kubenswrapper[7926]: W0216 20:57:08.409555 7926 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 20:57:08.410007 master-0 kubenswrapper[7926]: W0216 20:57:08.409560 7926 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 20:57:08.410007 master-0 kubenswrapper[7926]: W0216 20:57:08.409565 7926 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 20:57:08.410007 master-0 kubenswrapper[7926]: W0216 20:57:08.409571 7926 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 20:57:08.410007 master-0 kubenswrapper[7926]: W0216 20:57:08.409577 7926 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 20:57:08.410007 master-0 kubenswrapper[7926]: W0216 20:57:08.409582 7926 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 20:57:08.410007 master-0 kubenswrapper[7926]: W0216 20:57:08.409586 7926 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 20:57:08.410007 master-0 kubenswrapper[7926]: W0216 20:57:08.409591 7926 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 20:57:08.410007 master-0 kubenswrapper[7926]: W0216 20:57:08.409596 7926 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 20:57:08.410464 master-0 kubenswrapper[7926]: W0216 20:57:08.409600 7926 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 20:57:08.410464 master-0 kubenswrapper[7926]: W0216 20:57:08.409606 7926 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 20:57:08.410464 master-0 kubenswrapper[7926]: W0216 20:57:08.409611 7926 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 20:57:08.410464 master-0 kubenswrapper[7926]: W0216 20:57:08.409616 7926 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 20:57:08.410464 master-0 kubenswrapper[7926]: W0216 20:57:08.409622 7926 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 20:57:08.410464 master-0 kubenswrapper[7926]: W0216 20:57:08.409627 7926 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 20:57:08.410464 master-0 kubenswrapper[7926]: W0216 20:57:08.409632 7926 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 20:57:08.410464 master-0 kubenswrapper[7926]: W0216 20:57:08.409637 7926 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 20:57:08.410464 master-0 kubenswrapper[7926]: W0216 20:57:08.409641 7926 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 20:57:08.410464 master-0 kubenswrapper[7926]: W0216 20:57:08.409652 7926 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 20:57:08.410464 master-0 kubenswrapper[7926]: W0216 20:57:08.409657 7926 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 20:57:08.410464 master-0 kubenswrapper[7926]: W0216 20:57:08.409685 7926 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 20:57:08.410464 master-0 kubenswrapper[7926]: W0216 20:57:08.409691 7926 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 20:57:08.410464 master-0 kubenswrapper[7926]: W0216 20:57:08.409696 7926 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 20:57:08.410464 master-0 kubenswrapper[7926]: W0216 20:57:08.409701 7926 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 20:57:08.410464 master-0 kubenswrapper[7926]: W0216 20:57:08.409705 7926 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 20:57:08.410464 master-0 kubenswrapper[7926]: W0216 20:57:08.409710 7926 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 20:57:08.410464 master-0 kubenswrapper[7926]: W0216 20:57:08.409716 7926 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 20:57:08.410464 master-0 kubenswrapper[7926]: W0216 20:57:08.409720 7926 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 20:57:08.411042 master-0 kubenswrapper[7926]: W0216 20:57:08.409724 7926 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 20:57:08.411042 master-0 kubenswrapper[7926]: W0216 20:57:08.409728 7926 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 20:57:08.411042 master-0 kubenswrapper[7926]: W0216 20:57:08.409731 7926 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 20:57:08.411042 master-0 kubenswrapper[7926]: W0216 20:57:08.409735 7926 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 20:57:08.411042 master-0 kubenswrapper[7926]: W0216 20:57:08.409740 7926 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 20:57:08.411042 master-0 kubenswrapper[7926]: W0216 20:57:08.409743 7926 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 20:57:08.411042 master-0 kubenswrapper[7926]: W0216 20:57:08.409749 7926 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 20:57:08.411042 master-0 kubenswrapper[7926]: W0216 20:57:08.409753 7926 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 20:57:08.411042 master-0 kubenswrapper[7926]: I0216 20:57:08.409759 7926 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 20:57:08.411042 master-0 kubenswrapper[7926]: W0216 20:57:08.409888 7926 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 20:57:08.411042 master-0 kubenswrapper[7926]: W0216 20:57:08.409896 7926 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 20:57:08.411042 master-0 kubenswrapper[7926]: W0216 20:57:08.409900 7926 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 20:57:08.411042 master-0 kubenswrapper[7926]: W0216 20:57:08.409904 7926 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 20:57:08.411042 master-0 kubenswrapper[7926]: W0216 20:57:08.409908 7926 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 20:57:08.411042 master-0 kubenswrapper[7926]: W0216 20:57:08.409912 7926 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 20:57:08.411413 master-0 kubenswrapper[7926]: W0216 20:57:08.409917 7926 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 20:57:08.411413 master-0 kubenswrapper[7926]: W0216 20:57:08.409924 7926 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 20:57:08.411413 master-0 kubenswrapper[7926]: W0216 20:57:08.409928 7926 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 20:57:08.411413 master-0 kubenswrapper[7926]: W0216 20:57:08.409933 7926 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 20:57:08.411413 master-0 kubenswrapper[7926]: W0216 20:57:08.409937 7926 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 20:57:08.411413 master-0 kubenswrapper[7926]: W0216 20:57:08.409941 7926 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 20:57:08.411413 master-0 kubenswrapper[7926]: W0216 20:57:08.409944 7926 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 20:57:08.411413 master-0 kubenswrapper[7926]: W0216 20:57:08.409948 7926 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 20:57:08.411413 master-0 kubenswrapper[7926]: W0216 20:57:08.409952 7926 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 20:57:08.411413 master-0 kubenswrapper[7926]: W0216 20:57:08.409956 7926 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 20:57:08.411413 master-0 kubenswrapper[7926]: W0216 20:57:08.409959 7926 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 20:57:08.411413 master-0 kubenswrapper[7926]: W0216 20:57:08.409963 7926 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 20:57:08.411413 master-0 kubenswrapper[7926]: W0216 20:57:08.409966 7926 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 20:57:08.411413 master-0 kubenswrapper[7926]: W0216 20:57:08.409969 7926 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 20:57:08.411413 
master-0 kubenswrapper[7926]: W0216 20:57:08.409973 7926 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 20:57:08.411413 master-0 kubenswrapper[7926]: W0216 20:57:08.409976 7926 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 20:57:08.411413 master-0 kubenswrapper[7926]: W0216 20:57:08.409981 7926 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 20:57:08.411413 master-0 kubenswrapper[7926]: W0216 20:57:08.409985 7926 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 20:57:08.411413 master-0 kubenswrapper[7926]: W0216 20:57:08.409990 7926 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 20:57:08.411869 master-0 kubenswrapper[7926]: W0216 20:57:08.409994 7926 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 20:57:08.411869 master-0 kubenswrapper[7926]: W0216 20:57:08.409998 7926 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 20:57:08.411869 master-0 kubenswrapper[7926]: W0216 20:57:08.410001 7926 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 20:57:08.411869 master-0 kubenswrapper[7926]: W0216 20:57:08.410005 7926 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 20:57:08.411869 master-0 kubenswrapper[7926]: W0216 20:57:08.410009 7926 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 20:57:08.411869 master-0 kubenswrapper[7926]: W0216 20:57:08.410014 7926 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 20:57:08.411869 master-0 kubenswrapper[7926]: W0216 20:57:08.410019 7926 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 20:57:08.411869 master-0 kubenswrapper[7926]: W0216 20:57:08.410023 7926 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 20:57:08.411869 master-0 
kubenswrapper[7926]: W0216 20:57:08.410027 7926 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 20:57:08.411869 master-0 kubenswrapper[7926]: W0216 20:57:08.410032 7926 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 20:57:08.411869 master-0 kubenswrapper[7926]: W0216 20:57:08.410037 7926 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 20:57:08.411869 master-0 kubenswrapper[7926]: W0216 20:57:08.410042 7926 feature_gate.go:330] unrecognized feature gate: Example Feb 16 20:57:08.411869 master-0 kubenswrapper[7926]: W0216 20:57:08.410047 7926 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 20:57:08.411869 master-0 kubenswrapper[7926]: W0216 20:57:08.410054 7926 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 20:57:08.411869 master-0 kubenswrapper[7926]: W0216 20:57:08.410060 7926 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 16 20:57:08.411869 master-0 kubenswrapper[7926]: W0216 20:57:08.410066 7926 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 20:57:08.411869 master-0 kubenswrapper[7926]: W0216 20:57:08.410071 7926 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 20:57:08.411869 master-0 kubenswrapper[7926]: W0216 20:57:08.410076 7926 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 20:57:08.411869 master-0 kubenswrapper[7926]: W0216 20:57:08.410081 7926 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 20:57:08.411869 master-0 kubenswrapper[7926]: W0216 20:57:08.410087 7926 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 20:57:08.412409 master-0 kubenswrapper[7926]: W0216 20:57:08.410091 7926 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 20:57:08.412409 master-0 kubenswrapper[7926]: W0216 20:57:08.410096 7926 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 20:57:08.412409 master-0 kubenswrapper[7926]: W0216 20:57:08.410100 7926 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 20:57:08.412409 master-0 kubenswrapper[7926]: W0216 20:57:08.410104 7926 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 20:57:08.412409 master-0 kubenswrapper[7926]: W0216 20:57:08.410109 7926 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 20:57:08.412409 master-0 kubenswrapper[7926]: W0216 20:57:08.410114 7926 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 20:57:08.412409 master-0 kubenswrapper[7926]: W0216 20:57:08.410119 7926 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 20:57:08.412409 master-0 kubenswrapper[7926]: W0216 20:57:08.410124 7926 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 20:57:08.412409 master-0 
kubenswrapper[7926]: W0216 20:57:08.410128 7926 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 20:57:08.412409 master-0 kubenswrapper[7926]: W0216 20:57:08.410133 7926 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 20:57:08.412409 master-0 kubenswrapper[7926]: W0216 20:57:08.410138 7926 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 20:57:08.412409 master-0 kubenswrapper[7926]: W0216 20:57:08.410143 7926 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 20:57:08.412409 master-0 kubenswrapper[7926]: W0216 20:57:08.410148 7926 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 20:57:08.412409 master-0 kubenswrapper[7926]: W0216 20:57:08.410153 7926 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 20:57:08.412409 master-0 kubenswrapper[7926]: W0216 20:57:08.410158 7926 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 20:57:08.412409 master-0 kubenswrapper[7926]: W0216 20:57:08.410163 7926 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 20:57:08.412409 master-0 kubenswrapper[7926]: W0216 20:57:08.410168 7926 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 20:57:08.412409 master-0 kubenswrapper[7926]: W0216 20:57:08.410172 7926 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 20:57:08.412409 master-0 kubenswrapper[7926]: W0216 20:57:08.410176 7926 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 20:57:08.412409 master-0 kubenswrapper[7926]: W0216 20:57:08.410180 7926 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 20:57:08.412888 master-0 kubenswrapper[7926]: W0216 20:57:08.410184 7926 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 20:57:08.412888 master-0 kubenswrapper[7926]: W0216 
20:57:08.410188 7926 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 20:57:08.412888 master-0 kubenswrapper[7926]: W0216 20:57:08.410192 7926 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 20:57:08.412888 master-0 kubenswrapper[7926]: W0216 20:57:08.410196 7926 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 20:57:08.412888 master-0 kubenswrapper[7926]: W0216 20:57:08.410200 7926 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 20:57:08.412888 master-0 kubenswrapper[7926]: W0216 20:57:08.410205 7926 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 20:57:08.412888 master-0 kubenswrapper[7926]: W0216 20:57:08.410209 7926 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 20:57:08.412888 master-0 kubenswrapper[7926]: I0216 20:57:08.410216 7926 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 20:57:08.412888 master-0 kubenswrapper[7926]: I0216 20:57:08.410457 7926 server.go:940] "Client rotation is on, will bootstrap in background" Feb 16 20:57:08.412888 master-0 kubenswrapper[7926]: I0216 20:57:08.412497 7926 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 16 20:57:08.412888 master-0 kubenswrapper[7926]: I0216 20:57:08.412592 7926 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 16 20:57:08.412888 master-0 kubenswrapper[7926]: I0216 20:57:08.412840 7926 server.go:997] "Starting client certificate rotation" Feb 16 20:57:08.412888 master-0 kubenswrapper[7926]: I0216 20:57:08.412848 7926 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 16 20:57:08.413206 master-0 kubenswrapper[7926]: I0216 20:57:08.413008 7926 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-17 20:47:22 +0000 UTC, rotation deadline is 2026-02-17 17:21:46.761615758 +0000 UTC Feb 16 20:57:08.413206 master-0 kubenswrapper[7926]: I0216 20:57:08.413063 7926 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 20h24m38.348554582s for next certificate rotation Feb 16 20:57:08.413527 master-0 kubenswrapper[7926]: I0216 20:57:08.413498 7926 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 20:57:08.414915 master-0 kubenswrapper[7926]: I0216 20:57:08.414890 7926 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 20:57:08.417885 master-0 kubenswrapper[7926]: I0216 20:57:08.417855 7926 log.go:25] "Validated CRI v1 runtime API" Feb 16 20:57:08.420694 master-0 kubenswrapper[7926]: I0216 20:57:08.420667 7926 log.go:25] "Validated CRI v1 image API" Feb 16 20:57:08.421905 master-0 kubenswrapper[7926]: I0216 20:57:08.421866 7926 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 16 20:57:08.425400 master-0 kubenswrapper[7926]: I0216 20:57:08.425360 7926 fs.go:135] Filesystem UUIDs: map[3d9a04b0-92fb-4350-a5ea-d38e1e45e06e:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Feb 16 20:57:08.426206 master-0 kubenswrapper[7926]: I0216 20:57:08.425391 7926 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 
minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0dfbee9f7528fe042540e180164336ecf2ece621fbebd18d9dde03c5a49a8d3a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0dfbee9f7528fe042540e180164336ecf2ece621fbebd18d9dde03c5a49a8d3a/userdata/shm major:0 minor:126 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/105b1eab12eec1f672058dc0900e8488b8bcca272b3ac3b2441b242d73128d7a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/105b1eab12eec1f672058dc0900e8488b8bcca272b3ac3b2441b242d73128d7a/userdata/shm major:0 minor:282 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1e734464d78209c21a7a9eb2f6d22c8584997def010318f287f0cb7c28b7390b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1e734464d78209c21a7a9eb2f6d22c8584997def010318f287f0cb7c28b7390b/userdata/shm major:0 minor:303 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2dfa08dcecf95c49e6db650a7dbdf117c27ed644f23ff4e264133dd36a509d3c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2dfa08dcecf95c49e6db650a7dbdf117c27ed644f23ff4e264133dd36a509d3c/userdata/shm major:0 minor:305 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/392d856d7fe28dd19573efbe9000d6ecfa05d7a1577bf8dec97ef5ca7366c7d8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/392d856d7fe28dd19573efbe9000d6ecfa05d7a1577bf8dec97ef5ca7366c7d8/userdata/shm major:0 minor:44 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4f2c49b4aa155e075775a0da6ce790eafb2a3d3e88c9dbca188493bbec98d810/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4f2c49b4aa155e075775a0da6ce790eafb2a3d3e88c9dbca188493bbec98d810/userdata/shm major:0 
minor:300 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4ff1d9141076f81759691d94a098009541c5d2c236ef8864f1522766d2980580/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4ff1d9141076f81759691d94a098009541c5d2c236ef8864f1522766d2980580/userdata/shm major:0 minor:265 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6d07de2e0be321a3aec4da12f4f04e483d7ebf0407264e8a59f6674bcacef82d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6d07de2e0be321a3aec4da12f4f04e483d7ebf0407264e8a59f6674bcacef82d/userdata/shm major:0 minor:284 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/75ca3e4fc5da353a0ea31c674632f3429b17eb41f067d771200d9b0aea75af5d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/75ca3e4fc5da353a0ea31c674632f3429b17eb41f067d771200d9b0aea75af5d/userdata/shm major:0 minor:295 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/76dbaddee4470107b39590128f61476392182af8f7359d5ef8d2efc6c99ae59e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/76dbaddee4470107b39590128f61476392182af8f7359d5ef8d2efc6c99ae59e/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/76e543cc5345eb5c53417c9f0b565400b03593c03aa3a1637483c029bb868ef3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/76e543cc5345eb5c53417c9f0b565400b03593c03aa3a1637483c029bb868ef3/userdata/shm major:0 minor:166 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/91a4c15bb67084035c73bb065892be1c9d73ba9204c94c99f7433a6c3008aaff/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/91a4c15bb67084035c73bb065892be1c9d73ba9204c94c99f7433a6c3008aaff/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/957c111d10e2d292281a50f8cc278f441c1f3165b491de07cd91b63ab4d96530/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/957c111d10e2d292281a50f8cc278f441c1f3165b491de07cd91b63ab4d96530/userdata/shm major:0 minor:112 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9e9fb9a8fc61dba0936cd38d7b843d3efbdecc6ba9ec73f7423569f6305a4740/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9e9fb9a8fc61dba0936cd38d7b843d3efbdecc6ba9ec73f7423569f6305a4740/userdata/shm major:0 minor:142 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/abcd1a63f33b879c154e1f80fc5ea3f4b46d9d1e7d2159b6ce5ac662b670e5ff/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/abcd1a63f33b879c154e1f80fc5ea3f4b46d9d1e7d2159b6ce5ac662b670e5ff/userdata/shm major:0 minor:277 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b3fc27d6f88f12abb0f4db12508672dcd9584ab10707e7cd6f06dcebac1bbaa8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b3fc27d6f88f12abb0f4db12508672dcd9584ab10707e7cd6f06dcebac1bbaa8/userdata/shm major:0 minor:293 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c073f224d2a8cc60c80044d595d19260d941f19b426f78dc52e84033ff1afedc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c073f224d2a8cc60c80044d595d19260d941f19b426f78dc52e84033ff1afedc/userdata/shm major:0 minor:299 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c8c3670530b0c671383aade45325850e12f9fcf9f76178c2929f043d5a9b72a3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c8c3670530b0c671383aade45325850e12f9fcf9f76178c2929f043d5a9b72a3/userdata/shm major:0 minor:108 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/d1ce8d9ee7cab12610683fbe9731b9ea4f3d71878c552326acd5722dd5f1b61a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d1ce8d9ee7cab12610683fbe9731b9ea4f3d71878c552326acd5722dd5f1b61a/userdata/shm major:0 minor:289 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d84a6211eba3f66c2ce7e68ab1344f23f51a23b55442aa18fdabbc1b25bc9adb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d84a6211eba3f66c2ce7e68ab1344f23f51a23b55442aa18fdabbc1b25bc9adb/userdata/shm major:0 minor:287 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/db18d33d279edf734f31d955c318fccdcbf15241593b0786bf92a199ab2a428f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/db18d33d279edf734f31d955c318fccdcbf15241593b0786bf92a199ab2a428f/userdata/shm major:0 minor:291 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ebc8d1a24100c636c9029b0eba8d5b6521b906cdbb84675057a80b42a0273bbc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ebc8d1a24100c636c9029b0eba8d5b6521b906cdbb84675057a80b42a0273bbc/userdata/shm major:0 minor:143 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ee74f85cd24cd54b2a4b43b0584cf795c92f05590ca9093c69737b765e2c01d8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ee74f85cd24cd54b2a4b43b0584cf795c92f05590ca9093c69737b765e2c01d8/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f04bc2a9a7b0a2ad7783338e4d002aabfd3d03dc3ab93d584acf59a1f159b65a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f04bc2a9a7b0a2ad7783338e4d002aabfd3d03dc3ab93d584acf59a1f159b65a/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/0b02b740-5698-4e9a-90fe-2873bd0b0958/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/0b02b740-5698-4e9a-90fe-2873bd0b0958/volumes/kubernetes.io~projected/kube-api-access major:0 minor:269 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0b02b740-5698-4e9a-90fe-2873bd0b0958/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0b02b740-5698-4e9a-90fe-2873bd0b0958/volumes/kubernetes.io~secret/serving-cert major:0 minor:263 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1b61063e-775e-421d-bf73-a6ef134293a0/volumes/kubernetes.io~projected/kube-api-access-x7pk6:{mountpoint:/var/lib/kubelet/pods/1b61063e-775e-421d-bf73-a6ef134293a0/volumes/kubernetes.io~projected/kube-api-access-x7pk6 major:0 minor:107 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1b61063e-775e-421d-bf73-a6ef134293a0/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/1b61063e-775e-421d-bf73-a6ef134293a0/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d453639-52ed-4a14-a2ee-02cf9acc2f7c/volumes/kubernetes.io~projected/kube-api-access-59kpw:{mountpoint:/var/lib/kubelet/pods/1d453639-52ed-4a14-a2ee-02cf9acc2f7c/volumes/kubernetes.io~projected/kube-api-access-59kpw major:0 minor:135 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2506c282-0b37-4ece-8a0c-885d0b7f7901/volumes/kubernetes.io~projected/kube-api-access-6qd6r:{mountpoint:/var/lib/kubelet/pods/2506c282-0b37-4ece-8a0c-885d0b7f7901/volumes/kubernetes.io~projected/kube-api-access-6qd6r major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/27c20f63-9bfb-4703-94d5-0c65475e08d1/volumes/kubernetes.io~projected/kube-api-access-hjsnz:{mountpoint:/var/lib/kubelet/pods/27c20f63-9bfb-4703-94d5-0c65475e08d1/volumes/kubernetes.io~projected/kube-api-access-hjsnz major:0 minor:255 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/27c20f63-9bfb-4703-94d5-0c65475e08d1/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/27c20f63-9bfb-4703-94d5-0c65475e08d1/volumes/kubernetes.io~secret/serving-cert major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2ab0a907-7abe-4808-ba21-bdda1506eae2/volumes/kubernetes.io~projected/kube-api-access-9pw88:{mountpoint:/var/lib/kubelet/pods/2ab0a907-7abe-4808-ba21-bdda1506eae2/volumes/kubernetes.io~projected/kube-api-access-9pw88 major:0 minor:274 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2ab0a907-7abe-4808-ba21-bdda1506eae2/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/2ab0a907-7abe-4808-ba21-bdda1506eae2/volumes/kubernetes.io~secret/serving-cert major:0 minor:262 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2e618c5c-52be-4b52-b426-b92555dee9de/volumes/kubernetes.io~projected/kube-api-access-nrc7l:{mountpoint:/var/lib/kubelet/pods/2e618c5c-52be-4b52-b426-b92555dee9de/volumes/kubernetes.io~projected/kube-api-access-nrc7l major:0 minor:257 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2e618c5c-52be-4b52-b426-b92555dee9de/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/2e618c5c-52be-4b52-b426-b92555dee9de/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3a012b98-9341-41a3-9321-0a099f8bb9da/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/3a012b98-9341-41a3-9321-0a099f8bb9da/volumes/kubernetes.io~projected/kube-api-access major:0 minor:74 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4085413c-9af1-4d2a-ba0f-33b42025cb7f/volumes/kubernetes.io~projected/kube-api-access-dw9lp:{mountpoint:/var/lib/kubelet/pods/4085413c-9af1-4d2a-ba0f-33b42025cb7f/volumes/kubernetes.io~projected/kube-api-access-dw9lp major:0 minor:273 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd/volumes/kubernetes.io~projected/kube-api-access-p7wrr:{mountpoint:/var/lib/kubelet/pods/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd/volumes/kubernetes.io~projected/kube-api-access-p7wrr major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/484154d0-66c8-4d0e-bf1b-f48d0abfe628/volumes/kubernetes.io~projected/kube-api-access-b6wng:{mountpoint:/var/lib/kubelet/pods/484154d0-66c8-4d0e-bf1b-f48d0abfe628/volumes/kubernetes.io~projected/kube-api-access-b6wng major:0 minor:139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/484154d0-66c8-4d0e-bf1b-f48d0abfe628/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/484154d0-66c8-4d0e-bf1b-f48d0abfe628/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4b035e85-b2b0-4dee-bb86-3465fc4b98a8/volumes/kubernetes.io~projected/kube-api-access-g7nmb:{mountpoint:/var/lib/kubelet/pods/4b035e85-b2b0-4dee-bb86-3465fc4b98a8/volumes/kubernetes.io~projected/kube-api-access-g7nmb major:0 minor:272 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/59237aa6-6250-4619-8ee5-abae59f04b57/volumes/kubernetes.io~projected/kube-api-access-vklwz:{mountpoint:/var/lib/kubelet/pods/59237aa6-6250-4619-8ee5-abae59f04b57/volumes/kubernetes.io~projected/kube-api-access-vklwz major:0 minor:276 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/59237aa6-6250-4619-8ee5-abae59f04b57/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/59237aa6-6250-4619-8ee5-abae59f04b57/volumes/kubernetes.io~secret/serving-cert major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5e062e07-8076-444c-b476-4eb2848e9613/volumes/kubernetes.io~projected/kube-api-access-dfmv6:{mountpoint:/var/lib/kubelet/pods/5e062e07-8076-444c-b476-4eb2848e9613/volumes/kubernetes.io~projected/kube-api-access-dfmv6 major:0 minor:270 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/5e062e07-8076-444c-b476-4eb2848e9613/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/5e062e07-8076-444c-b476-4eb2848e9613/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/62935559-041f-4694-9d36-adc809d079b4/volumes/kubernetes.io~projected/kube-api-access-6sq4t:{mountpoint:/var/lib/kubelet/pods/62935559-041f-4694-9d36-adc809d079b4/volumes/kubernetes.io~projected/kube-api-access-6sq4t major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/695549c8-d1fc-429d-9c9f-0a5915dc6074/volumes/kubernetes.io~projected/kube-api-access-7bcmr:{mountpoint:/var/lib/kubelet/pods/695549c8-d1fc-429d-9c9f-0a5915dc6074/volumes/kubernetes.io~projected/kube-api-access-7bcmr major:0 minor:268 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/695549c8-d1fc-429d-9c9f-0a5915dc6074/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/695549c8-d1fc-429d-9c9f-0a5915dc6074/volumes/kubernetes.io~secret/serving-cert major:0 minor:259 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/69785167-b4ae-415b-bdcb-029f62effe78/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/69785167-b4ae-415b-bdcb-029f62effe78/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/69785167-b4ae-415b-bdcb-029f62effe78/volumes/kubernetes.io~projected/kube-api-access-dqm46:{mountpoint:/var/lib/kubelet/pods/69785167-b4ae-415b-bdcb-029f62effe78/volumes/kubernetes.io~projected/kube-api-access-dqm46 major:0 minor:141 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/69785167-b4ae-415b-bdcb-029f62effe78/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/69785167-b4ae-415b-bdcb-029f62effe78/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:140 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6b6be6de-6fcc-4f57-b163-fe8f970a01a4/volumes/kubernetes.io~projected/kube-api-access-mkz65:{mountpoint:/var/lib/kubelet/pods/6b6be6de-6fcc-4f57-b163-fe8f970a01a4/volumes/kubernetes.io~projected/kube-api-access-mkz65 major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6b6be6de-6fcc-4f57-b163-fe8f970a01a4/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6b6be6de-6fcc-4f57-b163-fe8f970a01a4/volumes/kubernetes.io~secret/serving-cert major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/70d217a9-86b7-47b9-a7da-9ac920b9c7c2/volumes/kubernetes.io~projected/kube-api-access-ll4rg:{mountpoint:/var/lib/kubelet/pods/70d217a9-86b7-47b9-a7da-9ac920b9c7c2/volumes/kubernetes.io~projected/kube-api-access-ll4rg major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/70d217a9-86b7-47b9-a7da-9ac920b9c7c2/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/70d217a9-86b7-47b9-a7da-9ac920b9c7c2/volumes/kubernetes.io~secret/etcd-client major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/70d217a9-86b7-47b9-a7da-9ac920b9c7c2/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/70d217a9-86b7-47b9-a7da-9ac920b9c7c2/volumes/kubernetes.io~secret/serving-cert major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/volumes/kubernetes.io~projected/kube-api-access major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/volumes/kubernetes.io~secret/serving-cert major:0 minor:244 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/88f19cea-60ed-4977-a906-75deec51fc3d/volumes/kubernetes.io~projected/kube-api-access-x85fb:{mountpoint:/var/lib/kubelet/pods/88f19cea-60ed-4977-a906-75deec51fc3d/volumes/kubernetes.io~projected/kube-api-access-x85fb major:0 minor:161 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/88f19cea-60ed-4977-a906-75deec51fc3d/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/88f19cea-60ed-4977-a906-75deec51fc3d/volumes/kubernetes.io~secret/webhook-cert major:0 minor:165 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9e0227bc-63f5-48be-95dc-1323a2b2e327/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/9e0227bc-63f5-48be-95dc-1323a2b2e327/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9e0227bc-63f5-48be-95dc-1323a2b2e327/volumes/kubernetes.io~projected/kube-api-access-z9vmp:{mountpoint:/var/lib/kubelet/pods/9e0227bc-63f5-48be-95dc-1323a2b2e327/volumes/kubernetes.io~projected/kube-api-access-z9vmp major:0 minor:253 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a4c9b781-14c0-469c-bb9e-0c3982a04520/volumes/kubernetes.io~projected/kube-api-access-8sd27:{mountpoint:/var/lib/kubelet/pods/a4c9b781-14c0-469c-bb9e-0c3982a04520/volumes/kubernetes.io~projected/kube-api-access-8sd27 major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a4c9b781-14c0-469c-bb9e-0c3982a04520/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/a4c9b781-14c0-469c-bb9e-0c3982a04520/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b27de289-c0f9-47ff-aac6-15b7bc1b178a/volumes/kubernetes.io~projected/kube-api-access-fx4tz:{mountpoint:/var/lib/kubelet/pods/b27de289-c0f9-47ff-aac6-15b7bc1b178a/volumes/kubernetes.io~projected/kube-api-access-fx4tz major:0 minor:254 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b27e0202-8bdb-4a36-8c3e-0c203f7665b8/volumes/kubernetes.io~projected/kube-api-access-zmvtk:{mountpoint:/var/lib/kubelet/pods/b27e0202-8bdb-4a36-8c3e-0c203f7665b8/volumes/kubernetes.io~projected/kube-api-access-zmvtk major:0 minor:73 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b28234d1-1d9a-4d9f-9ad1-e3c682bed492/volumes/kubernetes.io~projected/kube-api-access-67qzh:{mountpoint:/var/lib/kubelet/pods/b28234d1-1d9a-4d9f-9ad1-e3c682bed492/volumes/kubernetes.io~projected/kube-api-access-67qzh major:0 minor:285 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c7333319-3fe6-4b3f-b600-6b6df49fcaff/volumes/kubernetes.io~projected/kube-api-access-qx2kd:{mountpoint:/var/lib/kubelet/pods/c7333319-3fe6-4b3f-b600-6b6df49fcaff/volumes/kubernetes.io~projected/kube-api-access-qx2kd major:0 minor:258 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c7333319-3fe6-4b3f-b600-6b6df49fcaff/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c7333319-3fe6-4b3f-b600-6b6df49fcaff/volumes/kubernetes.io~secret/serving-cert major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cef33294-81fb-41a2-811d-2565f94514d1/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/cef33294-81fb-41a2-811d-2565f94514d1/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:275 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cef33294-81fb-41a2-811d-2565f94514d1/volumes/kubernetes.io~projected/kube-api-access-5tklr:{mountpoint:/var/lib/kubelet/pods/cef33294-81fb-41a2-811d-2565f94514d1/volumes/kubernetes.io~projected/kube-api-access-5tklr major:0 minor:281 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9d71a7a-a751-4de4-9c76-9bac85fe0177/volumes/kubernetes.io~projected/kube-api-access-jkdzb:{mountpoint:/var/lib/kubelet/pods/d9d71a7a-a751-4de4-9c76-9bac85fe0177/volumes/kubernetes.io~projected/kube-api-access-jkdzb major:0 minor:267 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e7adbe32-b8b9-438e-a2e3-f93146a97424/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/e7adbe32-b8b9-438e-a2e3-f93146a97424/volumes/kubernetes.io~projected/kube-api-access major:0 minor:271 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e7adbe32-b8b9-438e-a2e3-f93146a97424/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e7adbe32-b8b9-438e-a2e3-f93146a97424/volumes/kubernetes.io~secret/serving-cert major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec7dd4ea-a139-45d4-96a4-506da1567292/volumes/kubernetes.io~projected/kube-api-access-9jt7h:{mountpoint:/var/lib/kubelet/pods/ec7dd4ea-a139-45d4-96a4-506da1567292/volumes/kubernetes.io~projected/kube-api-access-9jt7h major:0 minor:256 fsType:tmpfs blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/ef932d76d15b9fcafa4ecdada5146eb7a33114e691ac2b27f8675abf7e3a3bef/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-110:{mountpoint:/var/lib/containers/storage/overlay/a16af00a96418556609dc90ef526c14696c94f227e4e7265923a2b5373725194/merged major:0 minor:110 fsType:overlay blockSize:0} overlay_0-114:{mountpoint:/var/lib/containers/storage/overlay/fe2ed4ed1da7a817736ed722aa82e07f3c95247d9e29a3d25f74e7517682643e/merged major:0 minor:114 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/7c7796f13579aa0cadb08ce656dba232af6e4ff1f6ed4b730d899951ae3d6b87/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-118:{mountpoint:/var/lib/containers/storage/overlay/1bc0f8f47d390997afa9117011219bba5f291974eb499bcaab6a8e4e5fafa031/merged major:0 minor:118 fsType:overlay blockSize:0} overlay_0-123:{mountpoint:/var/lib/containers/storage/overlay/f5fd635d3f99615210d60854cf454ab9c030088d70c206be2f73898a4ac87eb1/merged major:0 minor:123 fsType:overlay blockSize:0} 
overlay_0-131:{mountpoint:/var/lib/containers/storage/overlay/0a33891fe6496ce32e0be8f2ee6718c990f683198d1541ce7e2ae08a4318f273/merged major:0 minor:131 fsType:overlay blockSize:0} overlay_0-133:{mountpoint:/var/lib/containers/storage/overlay/7b800ce6e2a9d015ac0e8219bebae4b3c46c29e0df346da49444ac9d82c02b05/merged major:0 minor:133 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/d3a64aa3e4789d041a748f6ef87f3e10bf9410bf07bc4b6703e7a24b552fe3b8/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-146:{mountpoint:/var/lib/containers/storage/overlay/1bc982c4179cdf65f88f07988c2735edad647e73373d6ce18f1c11186c312100/merged major:0 minor:146 fsType:overlay blockSize:0} overlay_0-148:{mountpoint:/var/lib/containers/storage/overlay/fba8ea7ba0a42f0a33ea2d49b3a5a725461541e4dd702e56691a1bf84e04e6e8/merged major:0 minor:148 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/04271dbe630a805da29384920d242ac67c7878c9daa9bd665ccb8fc68d8469b0/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/29b486b7a5f95df9f258990ca4a3fa47ee1d268d0d844950e4819a9751e9179c/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/6d1669f22f3208cde8f13a95834afc4d46e0afd5bf0cad3b37bd6211802052db/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/23113a94f03189db81c7b72c50bed722fc4b4759d055c8b9e6d2546dda17b33b/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-168:{mountpoint:/var/lib/containers/storage/overlay/13f5260209d18658f4775950eda3971d54191e3ff69449691cb12740a0ac951f/merged major:0 minor:168 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/ddabac877d0e80ab1e512757f8b4e698445b5ee65ec9300d3dcd9ce2bcb3a303/merged major:0 minor:170 fsType:overlay blockSize:0} 
overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/0159301ed4c5ccdccbf68bac5fd4d3104842477be093566eab08c5bcd149a491/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/97b341a413283c80289787de8addd8da7d5b590cf45085c668ab7898f86f5145/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-176:{mountpoint:/var/lib/containers/storage/overlay/c70588ed54a18b38458b3efe694efcc9ae485c78ffaee231714853e183cc64c7/merged major:0 minor:176 fsType:overlay blockSize:0} overlay_0-178:{mountpoint:/var/lib/containers/storage/overlay/c5f95c93d935521d9fc7d8d6bea57fa822e56d5b6f345ae11bf920a4e1aafaef/merged major:0 minor:178 fsType:overlay blockSize:0} overlay_0-183:{mountpoint:/var/lib/containers/storage/overlay/3b10712ffaa52e173ef0855679fe4bac23dd8d6961d46aea212eaadd2f4e7177/merged major:0 minor:183 fsType:overlay blockSize:0} overlay_0-185:{mountpoint:/var/lib/containers/storage/overlay/4a6b1d2e31be6b6e79068575dbbc43716c183e90511eb7320894712df1803469/merged major:0 minor:185 fsType:overlay blockSize:0} overlay_0-190:{mountpoint:/var/lib/containers/storage/overlay/08882a5d6969bb5f7b9bf4aed996f1f7d8d631db157a96d97f96e9501a522fc2/merged major:0 minor:190 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/2a12129fa3609d5841590e73fddef2f172dddb76f4a0fabc2705c7a4c9cbf6bc/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-200:{mountpoint:/var/lib/containers/storage/overlay/38a4638b500915016346e40400d2b26f267d53031e6c38c953353c5c5e2150b4/merged major:0 minor:200 fsType:overlay blockSize:0} overlay_0-205:{mountpoint:/var/lib/containers/storage/overlay/e8ef375f41c9fe2e9d4effd58f4820a921facf775706f0bd37bcfafe4b38ba00/merged major:0 minor:205 fsType:overlay blockSize:0} overlay_0-210:{mountpoint:/var/lib/containers/storage/overlay/502508f6d9b3022ffc202085e6527e330fd7fe33db2a8ba53908535715fc38ad/merged major:0 minor:210 fsType:overlay blockSize:0} 
overlay_0-211:{mountpoint:/var/lib/containers/storage/overlay/85f2a2cce62ac9c693c3b1e57d54c4f479c1855baf54e3275e7df7ba0317455d/merged major:0 minor:211 fsType:overlay blockSize:0} overlay_0-215:{mountpoint:/var/lib/containers/storage/overlay/12d5ccce23d6b1a5fa2e347d64e51e4f3a23d35bf4fe79fe66b6639b8c9ea59d/merged major:0 minor:215 fsType:overlay blockSize:0} overlay_0-219:{mountpoint:/var/lib/containers/storage/overlay/a0599cfd5987fbf706159f28f7c1244d43610f337088350bc7ada1382f950864/merged major:0 minor:219 fsType:overlay blockSize:0} overlay_0-230:{mountpoint:/var/lib/containers/storage/overlay/20499207665fb6ee6a74ab5d2ce3dcf3af33f855229eb43c869f78f1289f6c56/merged major:0 minor:230 fsType:overlay blockSize:0} overlay_0-279:{mountpoint:/var/lib/containers/storage/overlay/7ee438d4967843b0666b19743d3d377cd7b15b7a74c1676200df78c776d966a6/merged major:0 minor:279 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/a5c79a16dbba97e555d8ddb00efddb0b73301170f3091687ee2e6c66eed17b27/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-307:{mountpoint:/var/lib/containers/storage/overlay/088bd340f4af26da5560d78b1429512665392b67a379d694c0886e90b7cb9f59/merged major:0 minor:307 fsType:overlay blockSize:0} overlay_0-309:{mountpoint:/var/lib/containers/storage/overlay/e19b90efdb440cb68e60d5adcfb150ec625513134f15ec1743c281aa45244c95/merged major:0 minor:309 fsType:overlay blockSize:0} overlay_0-311:{mountpoint:/var/lib/containers/storage/overlay/875693b97ff970cb9637e06c564318400127efba8ea2a7d76fa8b6eff8c88b94/merged major:0 minor:311 fsType:overlay blockSize:0} overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/589cb25e5e8adf1b16fb76498e5fd1692dfa9965e2426d8dfd21c8ba49336ba2/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-315:{mountpoint:/var/lib/containers/storage/overlay/69ceae353ea73dcfd45b733cd8533a98cbd656d2495985ce5d58f811cbaf0eeb/merged major:0 minor:315 fsType:overlay blockSize:0} 
overlay_0-317:{mountpoint:/var/lib/containers/storage/overlay/e4e6fb84b5368715fbbb7051cf8faa2b873dfd763e7144b7a1fcf30cb88f8fc3/merged major:0 minor:317 fsType:overlay blockSize:0} overlay_0-319:{mountpoint:/var/lib/containers/storage/overlay/b7ba16ad4d6f3a22b1258f4543166039debd0aa6295e854732b1c8f05bc5cb13/merged major:0 minor:319 fsType:overlay blockSize:0} overlay_0-321:{mountpoint:/var/lib/containers/storage/overlay/498fa5967005e1f93e16c9e65e773b5937b3d1c738c74c3600643c6a21764f8d/merged major:0 minor:321 fsType:overlay blockSize:0} overlay_0-323:{mountpoint:/var/lib/containers/storage/overlay/88aeb26dfb57b3a33f7eec20be4c378e2c53279b7dc56c4d545ee42b1fe0a18a/merged major:0 minor:323 fsType:overlay blockSize:0} overlay_0-325:{mountpoint:/var/lib/containers/storage/overlay/25dfb27567ea86ed71e9e952b356f76b4132664598eebd0718ea42fd94156dc2/merged major:0 minor:325 fsType:overlay blockSize:0} overlay_0-327:{mountpoint:/var/lib/containers/storage/overlay/aad7a84d065d38c67b9518cbabe3aa341433477b7a4980eaf75436c7bd649e49/merged major:0 minor:327 fsType:overlay blockSize:0} overlay_0-329:{mountpoint:/var/lib/containers/storage/overlay/a76f6dc6c95ae3b93f05ba178facd50795e620ad8e9096705f734c271e54e939/merged major:0 minor:329 fsType:overlay blockSize:0} overlay_0-331:{mountpoint:/var/lib/containers/storage/overlay/7fe7d495779934cd716f67b6ff87425b7d2cfb2df5ec4c882c4fd3f624c964c0/merged major:0 minor:331 fsType:overlay blockSize:0} overlay_0-337:{mountpoint:/var/lib/containers/storage/overlay/013e90bb8eb7ee12af03c7e1c3c8dcdc6287f335bd600d900f9c7d96a81e8bfd/merged major:0 minor:337 fsType:overlay blockSize:0} overlay_0-339:{mountpoint:/var/lib/containers/storage/overlay/799b892e3e344819796f2e6fcb2e1c622304e48f9789f0fc80011ed2f1c6f3a7/merged major:0 minor:339 fsType:overlay blockSize:0} overlay_0-341:{mountpoint:/var/lib/containers/storage/overlay/4d0119a34a9fed8d5fa08a49a08326583022917c10143bc36d0757bf92e2900f/merged major:0 minor:341 fsType:overlay blockSize:0} 
overlay_0-343:{mountpoint:/var/lib/containers/storage/overlay/1a05bca83ab0e5f6d348424f38e3a29bbd6bda0c6da3ce724c258927c78b8ebb/merged major:0 minor:343 fsType:overlay blockSize:0} overlay_0-345:{mountpoint:/var/lib/containers/storage/overlay/a505f6b3c5da374d3e26ae2921e7951fc241986a697020242f25dd8558fcb871/merged major:0 minor:345 fsType:overlay blockSize:0} overlay_0-347:{mountpoint:/var/lib/containers/storage/overlay/bb8a2bc9e5e291883de9007a9ace5193d2f476ff556905534363fa69c1742fea/merged major:0 minor:347 fsType:overlay blockSize:0} overlay_0-349:{mountpoint:/var/lib/containers/storage/overlay/d82e6eae3e28d1087e8947eabe909ce5e39020b7a88efe509d3e80dab4ef0894/merged major:0 minor:349 fsType:overlay blockSize:0} overlay_0-351:{mountpoint:/var/lib/containers/storage/overlay/7c2a69a0fe54d803ca4719dabeae08d16d9691cbaefe827f387922cde4b179cd/merged major:0 minor:351 fsType:overlay blockSize:0} overlay_0-353:{mountpoint:/var/lib/containers/storage/overlay/8aa11a71b888914ea05f155f173af8f89abc5aa74ef4c49fa5bafc263065458d/merged major:0 minor:353 fsType:overlay blockSize:0} overlay_0-355:{mountpoint:/var/lib/containers/storage/overlay/fb3903a65efd880fbc50a4b715e4677b876b18445bb6bd99c308a51cfa4a3af2/merged major:0 minor:355 fsType:overlay blockSize:0} overlay_0-357:{mountpoint:/var/lib/containers/storage/overlay/ac904969dd9ccd805fa308f696edaffa5cff1cd6f9f7d5418d01e795a4dba36b/merged major:0 minor:357 fsType:overlay blockSize:0} overlay_0-359:{mountpoint:/var/lib/containers/storage/overlay/d5421a644718fdc39ca6d6955c7c0f6a102eedf10af6d911983cf2a86ec5ac1b/merged major:0 minor:359 fsType:overlay blockSize:0} overlay_0-45:{mountpoint:/var/lib/containers/storage/overlay/897590ee914ebf84471d418d51b3e457e76c360b7db128bdade13be1e08fff5e/merged major:0 minor:45 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/13968b94f0895830fab00c5301d15f4bfbc81f680d043a8b6a268f6dfcf79c30/merged major:0 minor:48 fsType:overlay blockSize:0} 
overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/6a4e742c651c7b7f37dfb59a0b9faf50d12e5525a652e1630ae4dd10275b4a1f/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/25c94b27fb20ad4342b226d3bffa85f09d7e4b714b540650d917486619fdca26/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/07e5ae6eb30bc72587015163cf6be035af8bc6397b5f57a9222c6d4c2d29c781/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/1fd0ce36fb5784312efa9ea367be24df641b985358e4a3a05fbb874c80e19002/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/b6e855ebf6004dcb272c8b369060217e225134c14c75245bbbd09aca82ad79ca/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/3839669163edaebb377271d7a110b37d2624a5b0d7e7e49b9e141715dfb990e9/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/cfa10339a81406f59114450938089ec43de367dae62b58beb10a808b7c0be5f4/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-75:{mountpoint:/var/lib/containers/storage/overlay/41fb9c4b3f6cfe6c40fc2ce8932bc7b89fa959666beac5df3472639be8aad120/merged major:0 minor:75 fsType:overlay blockSize:0} overlay_0-80:{mountpoint:/var/lib/containers/storage/overlay/c1befb5c38dcc0d38ad5ae9451b98d0c61ff97694dfcb51b7027c731ec60656d/merged major:0 minor:80 fsType:overlay blockSize:0} overlay_0-82:{mountpoint:/var/lib/containers/storage/overlay/c332e04752e6998911df53cbdff6042ef26e3fb67dc2ea88b4980ff574511e81/merged major:0 minor:82 fsType:overlay blockSize:0} overlay_0-84:{mountpoint:/var/lib/containers/storage/overlay/911d755a19a852f9d89696b395e655ffe6084128f268180a5a42e01e9661140a/merged major:0 minor:84 fsType:overlay blockSize:0} 
overlay_0-89:{mountpoint:/var/lib/containers/storage/overlay/d94ca233ec0a7bbb87334831bb8abcd9c51f8559d45676eab77b8e7e5da227ef/merged major:0 minor:89 fsType:overlay blockSize:0} overlay_0-97:{mountpoint:/var/lib/containers/storage/overlay/f3c3d570dbb87276c0a0b4a60d916ce5a66d62dcc93259de3484479af6b7cfb8/merged major:0 minor:97 fsType:overlay blockSize:0}] Feb 16 20:57:08.447944 master-0 kubenswrapper[7926]: I0216 20:57:08.447311 7926 manager.go:217] Machine: {Timestamp:2026-02-16 20:57:08.446354964 +0000 UTC m=+0.081255284 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2799998 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:5734463887a64126b7ce9bf415a88e99 SystemUUID:57344638-87a6-4126-b7ce-9bf415a88e99 BootID:547c3926-fc12-480e-89e3-8f59492f672a Filesystems:[{Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-131 DeviceMajor:0 DeviceMinor:131 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d84a6211eba3f66c2ce7e68ab1344f23f51a23b55442aa18fdabbc1b25bc9adb/userdata/shm DeviceMajor:0 DeviceMinor:287 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-118 DeviceMajor:0 DeviceMinor:118 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2506c282-0b37-4ece-8a0c-885d0b7f7901/volumes/kubernetes.io~projected/kube-api-access-6qd6r DeviceMajor:0 DeviceMinor:251 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/6d07de2e0be321a3aec4da12f4f04e483d7ebf0407264e8a59f6674bcacef82d/userdata/shm DeviceMajor:0 DeviceMinor:284 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b27e0202-8bdb-4a36-8c3e-0c203f7665b8/volumes/kubernetes.io~projected/kube-api-access-zmvtk DeviceMajor:0 DeviceMinor:73 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-219 DeviceMajor:0 DeviceMinor:219 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-183 DeviceMajor:0 DeviceMinor:183 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-148 DeviceMajor:0 DeviceMinor:148 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/abcd1a63f33b879c154e1f80fc5ea3f4b46d9d1e7d2159b6ce5ac662b670e5ff/userdata/shm DeviceMajor:0 DeviceMinor:277 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-80 DeviceMajor:0 DeviceMinor:80 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c8c3670530b0c671383aade45325850e12f9fcf9f76178c2929f043d5a9b72a3/userdata/shm DeviceMajor:0 DeviceMinor:108 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/2e618c5c-52be-4b52-b426-b92555dee9de/volumes/kubernetes.io~projected/kube-api-access-nrc7l DeviceMajor:0 DeviceMinor:257 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e7adbe32-b8b9-438e-a2e3-f93146a97424/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:264 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-152 
DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-178 DeviceMajor:0 DeviceMinor:178 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/88f19cea-60ed-4977-a906-75deec51fc3d/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:165 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-327 DeviceMajor:0 DeviceMinor:327 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f04bc2a9a7b0a2ad7783338e4d002aabfd3d03dc3ab93d584acf59a1f159b65a/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-133 DeviceMajor:0 DeviceMinor:133 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d9d71a7a-a751-4de4-9c76-9bac85fe0177/volumes/kubernetes.io~projected/kube-api-access-jkdzb DeviceMajor:0 DeviceMinor:267 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/db18d33d279edf734f31d955c318fccdcbf15241593b0786bf92a199ab2a428f/userdata/shm DeviceMajor:0 DeviceMinor:291 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-353 DeviceMajor:0 DeviceMinor:353 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/392d856d7fe28dd19573efbe9000d6ecfa05d7a1577bf8dec97ef5ca7366c7d8/userdata/shm DeviceMajor:0 DeviceMinor:44 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-97 DeviceMajor:0 DeviceMinor:97 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/69785167-b4ae-415b-bdcb-029f62effe78/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:140 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:244 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5e062e07-8076-444c-b476-4eb2848e9613/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:261 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/0b02b740-5698-4e9a-90fe-2873bd0b0958/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:263 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-307 DeviceMajor:0 DeviceMinor:307 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a4c9b781-14c0-469c-bb9e-0c3982a04520/volumes/kubernetes.io~projected/kube-api-access-8sd27 DeviceMajor:0 DeviceMinor:247 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5e062e07-8076-444c-b476-4eb2848e9613/volumes/kubernetes.io~projected/kube-api-access-dfmv6 DeviceMajor:0 DeviceMinor:270 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4f2c49b4aa155e075775a0da6ce790eafb2a3d3e88c9dbca188493bbec98d810/userdata/shm DeviceMajor:0 DeviceMinor:300 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2dfa08dcecf95c49e6db650a7dbdf117c27ed644f23ff4e264133dd36a509d3c/userdata/shm DeviceMajor:0 DeviceMinor:305 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-82 DeviceMajor:0 DeviceMinor:82 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ebc8d1a24100c636c9029b0eba8d5b6521b906cdbb84675057a80b42a0273bbc/userdata/shm DeviceMajor:0 DeviceMinor:143 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/27c20f63-9bfb-4703-94d5-0c65475e08d1/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:235 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-315 DeviceMajor:0 DeviceMinor:315 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-345 DeviceMajor:0 DeviceMinor:345 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-325 DeviceMajor:0 DeviceMinor:325 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-331 DeviceMajor:0 DeviceMinor:331 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/76dbaddee4470107b39590128f61476392182af8f7359d5ef8d2efc6c99ae59e/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:248 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9e0227bc-63f5-48be-95dc-1323a2b2e327/volumes/kubernetes.io~projected/kube-api-access-z9vmp DeviceMajor:0 DeviceMinor:253 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-215 DeviceMajor:0 DeviceMinor:215 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e7adbe32-b8b9-438e-a2e3-f93146a97424/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 
DeviceMinor:271 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-359 DeviceMajor:0 DeviceMinor:359 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cef33294-81fb-41a2-811d-2565f94514d1/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:275 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/105b1eab12eec1f672058dc0900e8488b8bcca272b3ac3b2441b242d73128d7a/userdata/shm DeviceMajor:0 DeviceMinor:282 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-341 DeviceMajor:0 DeviceMinor:341 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/76e543cc5345eb5c53417c9f0b565400b03593c03aa3a1637483c029bb868ef3/userdata/shm DeviceMajor:0 DeviceMinor:166 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/2ab0a907-7abe-4808-ba21-bdda1506eae2/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:262 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/91a4c15bb67084035c73bb065892be1c9d73ba9204c94c99f7433a6c3008aaff/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/69785167-b4ae-415b-bdcb-029f62effe78/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/containers/storage/overlay-containers/75ca3e4fc5da353a0ea31c674632f3429b17eb41f067d771200d9b0aea75af5d/userdata/shm DeviceMajor:0 DeviceMinor:295 Capacity:67108864 Type:vfs Inodes:6166278 
HasInodes:true} {Device:overlay_0-323 DeviceMajor:0 DeviceMinor:323 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-337 DeviceMajor:0 DeviceMinor:337 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0dfbee9f7528fe042540e180164336ecf2ece621fbebd18d9dde03c5a49a8d3a/userdata/shm DeviceMajor:0 DeviceMinor:126 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a4c9b781-14c0-469c-bb9e-0c3982a04520/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:243 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-279 DeviceMajor:0 DeviceMinor:279 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-351 DeviceMajor:0 DeviceMinor:351 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1b61063e-775e-421d-bf73-a6ef134293a0/volumes/kubernetes.io~projected/kube-api-access-x7pk6 DeviceMajor:0 DeviceMinor:107 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-205 DeviceMajor:0 DeviceMinor:205 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ec7dd4ea-a139-45d4-96a4-506da1567292/volumes/kubernetes.io~projected/kube-api-access-9jt7h DeviceMajor:0 DeviceMinor:256 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-321 DeviceMajor:0 DeviceMinor:321 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-347 DeviceMajor:0 DeviceMinor:347 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-211 DeviceMajor:0 DeviceMinor:211 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/d1ce8d9ee7cab12610683fbe9731b9ea4f3d71878c552326acd5722dd5f1b61a/userdata/shm DeviceMajor:0 DeviceMinor:289 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-317 DeviceMajor:0 DeviceMinor:317 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/695549c8-d1fc-429d-9c9f-0a5915dc6074/volumes/kubernetes.io~projected/kube-api-access-7bcmr DeviceMajor:0 DeviceMinor:268 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4b035e85-b2b0-4dee-bb86-3465fc4b98a8/volumes/kubernetes.io~projected/kube-api-access-g7nmb DeviceMajor:0 DeviceMinor:272 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/cef33294-81fb-41a2-811d-2565f94514d1/volumes/kubernetes.io~projected/kube-api-access-5tklr DeviceMajor:0 DeviceMinor:281 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-339 DeviceMajor:0 DeviceMinor:339 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9e9fb9a8fc61dba0936cd38d7b843d3efbdecc6ba9ec73f7423569f6305a4740/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/27c20f63-9bfb-4703-94d5-0c65475e08d1/volumes/kubernetes.io~projected/kube-api-access-hjsnz DeviceMajor:0 DeviceMinor:255 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9e0227bc-63f5-48be-95dc-1323a2b2e327/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 
DeviceMinor:252 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/62935559-041f-4694-9d36-adc809d079b4/volumes/kubernetes.io~projected/kube-api-access-6sq4t DeviceMajor:0 DeviceMinor:125 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-349 DeviceMajor:0 DeviceMinor:349 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ee74f85cd24cd54b2a4b43b0584cf795c92f05590ca9093c69737b765e2c01d8/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-75 DeviceMajor:0 DeviceMinor:75 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1e734464d78209c21a7a9eb2f6d22c8584997def010318f287f0cb7c28b7390b/userdata/shm DeviceMajor:0 DeviceMinor:303 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-319 DeviceMajor:0 DeviceMinor:319 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b27de289-c0f9-47ff-aac6-15b7bc1b178a/volumes/kubernetes.io~projected/kube-api-access-fx4tz DeviceMajor:0 DeviceMinor:254 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/59237aa6-6250-4619-8ee5-abae59f04b57/volumes/kubernetes.io~projected/kube-api-access-vklwz DeviceMajor:0 DeviceMinor:276 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-355 DeviceMajor:0 DeviceMinor:355 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1d453639-52ed-4a14-a2ee-02cf9acc2f7c/volumes/kubernetes.io~projected/kube-api-access-59kpw DeviceMajor:0 DeviceMinor:135 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-185 DeviceMajor:0 DeviceMinor:185 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/70d217a9-86b7-47b9-a7da-9ac920b9c7c2/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:239 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c7333319-3fe6-4b3f-b600-6b6df49fcaff/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:240 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6b6be6de-6fcc-4f57-b163-fe8f970a01a4/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:242 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c073f224d2a8cc60c80044d595d19260d941f19b426f78dc52e84033ff1afedc/userdata/shm DeviceMajor:0 DeviceMinor:299 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-45 DeviceMajor:0 DeviceMinor:45 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-190 DeviceMajor:0 DeviceMinor:190 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-230 DeviceMajor:0 DeviceMinor:230 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1b61063e-775e-421d-bf73-a6ef134293a0/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-176 DeviceMajor:0 DeviceMinor:176 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/2ab0a907-7abe-4808-ba21-bdda1506eae2/volumes/kubernetes.io~projected/kube-api-access-9pw88 DeviceMajor:0 DeviceMinor:274 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b28234d1-1d9a-4d9f-9ad1-e3c682bed492/volumes/kubernetes.io~projected/kube-api-access-67qzh DeviceMajor:0 DeviceMinor:285 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-210 DeviceMajor:0 DeviceMinor:210 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/69785167-b4ae-415b-bdcb-029f62effe78/volumes/kubernetes.io~projected/kube-api-access-dqm46 DeviceMajor:0 DeviceMinor:141 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4085413c-9af1-4d2a-ba0f-33b42025cb7f/volumes/kubernetes.io~projected/kube-api-access-dw9lp DeviceMajor:0 DeviceMinor:273 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-357 DeviceMajor:0 DeviceMinor:357 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-343 DeviceMajor:0 DeviceMinor:343 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/957c111d10e2d292281a50f8cc278f441c1f3165b491de07cd91b63ab4d96530/userdata/shm DeviceMajor:0 DeviceMinor:112 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-114 DeviceMajor:0 DeviceMinor:114 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-110 DeviceMajor:0 DeviceMinor:110 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/484154d0-66c8-4d0e-bf1b-f48d0abfe628/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:138 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/0b02b740-5698-4e9a-90fe-2873bd0b0958/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:269 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/3a012b98-9341-41a3-9321-0a099f8bb9da/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:74 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/695549c8-d1fc-429d-9c9f-0a5915dc6074/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:259 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/59237aa6-6250-4619-8ee5-abae59f04b57/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:260 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4ff1d9141076f81759691d94a098009541c5d2c236ef8864f1522766d2980580/userdata/shm DeviceMajor:0 DeviceMinor:265 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c7333319-3fe6-4b3f-b600-6b6df49fcaff/volumes/kubernetes.io~projected/kube-api-access-qx2kd DeviceMajor:0 DeviceMinor:258 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b3fc27d6f88f12abb0f4db12508672dcd9584ab10707e7cd6f06dcebac1bbaa8/userdata/shm DeviceMajor:0 DeviceMinor:293 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-89 DeviceMajor:0 DeviceMinor:89 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/484154d0-66c8-4d0e-bf1b-f48d0abfe628/volumes/kubernetes.io~projected/kube-api-access-b6wng DeviceMajor:0 DeviceMinor:139 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/88f19cea-60ed-4977-a906-75deec51fc3d/volumes/kubernetes.io~projected/kube-api-access-x85fb DeviceMajor:0 DeviceMinor:161 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/70d217a9-86b7-47b9-a7da-9ac920b9c7c2/volumes/kubernetes.io~projected/kube-api-access-ll4rg DeviceMajor:0 DeviceMinor:250 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-84 DeviceMajor:0 DeviceMinor:84 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-200 DeviceMajor:0 DeviceMinor:200 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd/volumes/kubernetes.io~projected/kube-api-access-p7wrr DeviceMajor:0 DeviceMinor:246 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-146 DeviceMajor:0 DeviceMinor:146 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-168 DeviceMajor:0 DeviceMinor:168 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/70d217a9-86b7-47b9-a7da-9ac920b9c7c2/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:245 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-309 DeviceMajor:0 DeviceMinor:309 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-311 DeviceMajor:0 DeviceMinor:311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-329 DeviceMajor:0 DeviceMinor:329 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-123 DeviceMajor:0 DeviceMinor:123 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/2e618c5c-52be-4b52-b426-b92555dee9de/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:241 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6b6be6de-6fcc-4f57-b163-fe8f970a01a4/volumes/kubernetes.io~projected/kube-api-access-mkz65 DeviceMajor:0 DeviceMinor:249 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:105b1eab12eec1f MacAddress:c6:fc:bc:f8:09:22 Speed:10000 Mtu:8900} {Name:1e734464d78209c MacAddress:c2:2e:77:bc:ce:42 Speed:10000 Mtu:8900} {Name:2dfa08dcecf95c4 MacAddress:ce:f5:60:e3:ab:ac Speed:10000 Mtu:8900} {Name:4f2c49b4aa155e0 MacAddress:8e:a0:32:da:3d:ac Speed:10000 Mtu:8900} {Name:4ff1d9141076f81 MacAddress:6a:53:76:05:32:44 Speed:10000 Mtu:8900} {Name:6d07de2e0be321a MacAddress:fa:31:87:79:9f:6d Speed:10000 Mtu:8900} {Name:75ca3e4fc5da353 MacAddress:0a:a0:ee:c1:05:ba Speed:10000 Mtu:8900} {Name:b3fc27d6f88f12a MacAddress:6a:6b:5d:93:b8:91 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:3e:37:3f:36:71:15 Speed:0 Mtu:8900} {Name:c073f224d2a8cc6 MacAddress:22:92:49:d7:5c:05 Speed:10000 Mtu:8900} {Name:d1ce8d9ee7cab12 MacAddress:9a:af:a7:de:17:51 Speed:10000 Mtu:8900} {Name:d84a6211eba3f66 MacAddress:82:4e:cf:cb:b4:80 Speed:10000 Mtu:8900} {Name:db18d33d279edf7 MacAddress:4e:a8:6c:14:7c:42 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:ff:a7:37 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:31:16:05 Speed:-1 Mtu:9000} {Name:ovn-k8s-mp0 
MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:d2:11:f7:5e:7f:07 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 
Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] 
Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 16 20:57:08.447944 master-0 kubenswrapper[7926]: I0216 20:57:08.447851 7926 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 16 20:57:08.447944 master-0 kubenswrapper[7926]: I0216 20:57:08.447907 7926 manager.go:233] Version: {KernelVersion:5.14.0-427.107.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202601202224-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 16 20:57:08.448304 master-0 kubenswrapper[7926]: I0216 20:57:08.448273 7926 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 16 20:57:08.448462 master-0 kubenswrapper[7926]: I0216 20:57:08.448416 7926 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 16 20:57:08.448653 master-0 kubenswrapper[7926]: I0216 20:57:08.448458 7926 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 16 20:57:08.448737 master-0 kubenswrapper[7926]: I0216 20:57:08.448677 7926 topology_manager.go:138] "Creating topology manager with none policy" Feb 16 20:57:08.448737 master-0 kubenswrapper[7926]: I0216 20:57:08.448688 7926 container_manager_linux.go:303] "Creating device plugin manager" Feb 16 20:57:08.448737 master-0 kubenswrapper[7926]: I0216 20:57:08.448696 7926 manager.go:142] 
"Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 20:57:08.448737 master-0 kubenswrapper[7926]: I0216 20:57:08.448718 7926 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 20:57:08.448900 master-0 kubenswrapper[7926]: I0216 20:57:08.448884 7926 state_mem.go:36] "Initialized new in-memory state store" Feb 16 20:57:08.448997 master-0 kubenswrapper[7926]: I0216 20:57:08.448972 7926 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 16 20:57:08.449045 master-0 kubenswrapper[7926]: I0216 20:57:08.449032 7926 kubelet.go:418] "Attempting to sync node with API server" Feb 16 20:57:08.449074 master-0 kubenswrapper[7926]: I0216 20:57:08.449047 7926 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 16 20:57:08.449074 master-0 kubenswrapper[7926]: I0216 20:57:08.449064 7926 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 16 20:57:08.449126 master-0 kubenswrapper[7926]: I0216 20:57:08.449076 7926 kubelet.go:324] "Adding apiserver pod source" Feb 16 20:57:08.449126 master-0 kubenswrapper[7926]: I0216 20:57:08.449093 7926 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 16 20:57:08.451237 master-0 kubenswrapper[7926]: I0216 20:57:08.451208 7926 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-3.rhaos4.18.gite0b87e5.el9" apiVersion="v1" Feb 16 20:57:08.451443 master-0 kubenswrapper[7926]: I0216 20:57:08.451421 7926 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 16 20:57:08.451832 master-0 kubenswrapper[7926]: I0216 20:57:08.451661 7926 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 16 20:57:08.451944 master-0 kubenswrapper[7926]: I0216 20:57:08.451920 7926 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 16 20:57:08.451944 master-0 kubenswrapper[7926]: I0216 20:57:08.451942 7926 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 16 20:57:08.452001 master-0 kubenswrapper[7926]: I0216 20:57:08.451950 7926 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 16 20:57:08.452001 master-0 kubenswrapper[7926]: I0216 20:57:08.451958 7926 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 16 20:57:08.452001 master-0 kubenswrapper[7926]: I0216 20:57:08.451965 7926 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 16 20:57:08.452001 master-0 kubenswrapper[7926]: I0216 20:57:08.451973 7926 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 16 20:57:08.452001 master-0 kubenswrapper[7926]: I0216 20:57:08.451980 7926 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 16 20:57:08.452001 master-0 kubenswrapper[7926]: I0216 20:57:08.451986 7926 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 16 20:57:08.452001 master-0 kubenswrapper[7926]: I0216 20:57:08.451994 7926 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 16 20:57:08.452001 master-0 kubenswrapper[7926]: I0216 20:57:08.452001 7926 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 16 20:57:08.452254 master-0 kubenswrapper[7926]: I0216 20:57:08.452011 7926 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 16 20:57:08.452254 master-0 kubenswrapper[7926]: I0216 20:57:08.452024 7926 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/local-volume" Feb 16 20:57:08.452254 master-0 kubenswrapper[7926]: I0216 20:57:08.452045 7926 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 16 20:57:08.452349 master-0 kubenswrapper[7926]: I0216 20:57:08.452334 7926 server.go:1280] "Started kubelet" Feb 16 20:57:08.453799 master-0 systemd[1]: Started Kubernetes Kubelet. Feb 16 20:57:08.456390 master-0 kubenswrapper[7926]: I0216 20:57:08.456348 7926 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 16 20:57:08.456439 master-0 kubenswrapper[7926]: I0216 20:57:08.456393 7926 server_v1.go:47] "podresources" method="list" useActivePods=true Feb 16 20:57:08.457034 master-0 kubenswrapper[7926]: I0216 20:57:08.457008 7926 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 16 20:57:08.457104 master-0 kubenswrapper[7926]: I0216 20:57:08.457076 7926 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 16 20:57:08.713817 master-0 kubenswrapper[7926]: I0216 20:57:08.713629 7926 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 16 20:57:08.714113 master-0 kubenswrapper[7926]: I0216 20:57:08.713925 7926 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 16 20:57:08.714113 master-0 kubenswrapper[7926]: I0216 20:57:08.713900 7926 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-17 20:47:22 +0000 UTC, rotation deadline is 2026-02-17 17:56:30.085937045 +0000 UTC Feb 16 20:57:08.714113 master-0 kubenswrapper[7926]: I0216 20:57:08.713971 7926 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h59m21.371969409s for next certificate rotation Feb 16 20:57:08.714567 master-0 kubenswrapper[7926]: I0216 20:57:08.714513 7926 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 16 20:57:08.714567 master-0 
kubenswrapper[7926]: I0216 20:57:08.714551 7926 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 16 20:57:08.715109 master-0 kubenswrapper[7926]: E0216 20:57:08.714876 7926 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 20:57:08.715177 master-0 kubenswrapper[7926]: I0216 20:57:08.715125 7926 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Feb 16 20:57:08.719539 master-0 kubenswrapper[7926]: I0216 20:57:08.719484 7926 factory.go:55] Registering systemd factory Feb 16 20:57:08.719539 master-0 kubenswrapper[7926]: I0216 20:57:08.719505 7926 server.go:449] "Adding debug handlers to kubelet server" Feb 16 20:57:08.719539 master-0 kubenswrapper[7926]: I0216 20:57:08.719516 7926 factory.go:221] Registration of the systemd container factory successfully Feb 16 20:57:08.722176 master-0 kubenswrapper[7926]: I0216 20:57:08.722101 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="27c20f63-9bfb-4703-94d5-0c65475e08d1" volumeName="kubernetes.io/secret/27c20f63-9bfb-4703-94d5-0c65475e08d1-serving-cert" seLinuxMountContext="" Feb 16 20:57:08.722176 master-0 kubenswrapper[7926]: I0216 20:57:08.722162 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4b035e85-b2b0-4dee-bb86-3465fc4b98a8" volumeName="kubernetes.io/projected/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-kube-api-access-g7nmb" seLinuxMountContext="" Feb 16 20:57:08.722176 master-0 kubenswrapper[7926]: I0216 20:57:08.722178 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e" volumeName="kubernetes.io/configmap/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-config" seLinuxMountContext="" Feb 16 20:57:08.722348 master-0 kubenswrapper[7926]: I0216 20:57:08.722192 7926 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="b27e0202-8bdb-4a36-8c3e-0c203f7665b8" volumeName="kubernetes.io/configmap/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-cni-binary-copy" seLinuxMountContext="" Feb 16 20:57:08.722348 master-0 kubenswrapper[7926]: I0216 20:57:08.722204 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b27e0202-8bdb-4a36-8c3e-0c203f7665b8" volumeName="kubernetes.io/projected/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-kube-api-access-zmvtk" seLinuxMountContext="" Feb 16 20:57:08.722348 master-0 kubenswrapper[7926]: I0216 20:57:08.722219 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b28234d1-1d9a-4d9f-9ad1-e3c682bed492" volumeName="kubernetes.io/configmap/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-trusted-ca" seLinuxMountContext="" Feb 16 20:57:08.722348 master-0 kubenswrapper[7926]: I0216 20:57:08.722232 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59237aa6-6250-4619-8ee5-abae59f04b57" volumeName="kubernetes.io/projected/59237aa6-6250-4619-8ee5-abae59f04b57-kube-api-access-vklwz" seLinuxMountContext="" Feb 16 20:57:08.722348 master-0 kubenswrapper[7926]: I0216 20:57:08.722245 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="695549c8-d1fc-429d-9c9f-0a5915dc6074" volumeName="kubernetes.io/secret/695549c8-d1fc-429d-9c9f-0a5915dc6074-serving-cert" seLinuxMountContext="" Feb 16 20:57:08.722348 master-0 kubenswrapper[7926]: I0216 20:57:08.722259 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69785167-b4ae-415b-bdcb-029f62effe78" volumeName="kubernetes.io/projected/69785167-b4ae-415b-bdcb-029f62effe78-kube-api-access-dqm46" seLinuxMountContext="" Feb 16 20:57:08.722348 master-0 kubenswrapper[7926]: I0216 20:57:08.722279 7926 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="88f19cea-60ed-4977-a906-75deec51fc3d" volumeName="kubernetes.io/configmap/88f19cea-60ed-4977-a906-75deec51fc3d-env-overrides" seLinuxMountContext="" Feb 16 20:57:08.722348 master-0 kubenswrapper[7926]: I0216 20:57:08.722290 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="27c20f63-9bfb-4703-94d5-0c65475e08d1" volumeName="kubernetes.io/projected/27c20f63-9bfb-4703-94d5-0c65475e08d1-kube-api-access-hjsnz" seLinuxMountContext="" Feb 16 20:57:08.722348 master-0 kubenswrapper[7926]: I0216 20:57:08.722304 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a012b98-9341-41a3-9321-0a099f8bb9da" volumeName="kubernetes.io/projected/3a012b98-9341-41a3-9321-0a099f8bb9da-kube-api-access" seLinuxMountContext="" Feb 16 20:57:08.722348 master-0 kubenswrapper[7926]: I0216 20:57:08.722316 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="484154d0-66c8-4d0e-bf1b-f48d0abfe628" volumeName="kubernetes.io/configmap/484154d0-66c8-4d0e-bf1b-f48d0abfe628-ovnkube-config" seLinuxMountContext="" Feb 16 20:57:08.722348 master-0 kubenswrapper[7926]: I0216 20:57:08.722334 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="484154d0-66c8-4d0e-bf1b-f48d0abfe628" volumeName="kubernetes.io/secret/484154d0-66c8-4d0e-bf1b-f48d0abfe628-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 16 20:57:08.722348 master-0 kubenswrapper[7926]: I0216 20:57:08.722349 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69785167-b4ae-415b-bdcb-029f62effe78" volumeName="kubernetes.io/secret/69785167-b4ae-415b-bdcb-029f62effe78-ovn-node-metrics-cert" seLinuxMountContext="" Feb 16 20:57:08.722348 master-0 kubenswrapper[7926]: I0216 20:57:08.722363 7926 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="c7333319-3fe6-4b3f-b600-6b6df49fcaff" volumeName="kubernetes.io/projected/c7333319-3fe6-4b3f-b600-6b6df49fcaff-kube-api-access-qx2kd" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722377 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cef33294-81fb-41a2-811d-2565f94514d1" volumeName="kubernetes.io/projected/cef33294-81fb-41a2-811d-2565f94514d1-bound-sa-token" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722390 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c7333319-3fe6-4b3f-b600-6b6df49fcaff" volumeName="kubernetes.io/configmap/c7333319-3fe6-4b3f-b600-6b6df49fcaff-config" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722403 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4085413c-9af1-4d2a-ba0f-33b42025cb7f" volumeName="kubernetes.io/projected/4085413c-9af1-4d2a-ba0f-33b42025cb7f-kube-api-access-dw9lp" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722416 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd" volumeName="kubernetes.io/projected/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-kube-api-access-p7wrr" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722429 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59237aa6-6250-4619-8ee5-abae59f04b57" volumeName="kubernetes.io/secret/59237aa6-6250-4619-8ee5-abae59f04b57-serving-cert" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722441 7926 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="5e062e07-8076-444c-b476-4eb2848e9613" volumeName="kubernetes.io/secret/5e062e07-8076-444c-b476-4eb2848e9613-cluster-olm-operator-serving-cert" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722456 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70d217a9-86b7-47b9-a7da-9ac920b9c7c2" volumeName="kubernetes.io/secret/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-client" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722471 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b27de289-c0f9-47ff-aac6-15b7bc1b178a" volumeName="kubernetes.io/projected/b27de289-c0f9-47ff-aac6-15b7bc1b178a-kube-api-access-fx4tz" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722483 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b28234d1-1d9a-4d9f-9ad1-e3c682bed492" volumeName="kubernetes.io/projected/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-kube-api-access-67qzh" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722495 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9d71a7a-a751-4de4-9c76-9bac85fe0177" volumeName="kubernetes.io/projected/d9d71a7a-a751-4de4-9c76-9bac85fe0177-kube-api-access-jkdzb" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722508 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec7dd4ea-a139-45d4-96a4-506da1567292" volumeName="kubernetes.io/projected/ec7dd4ea-a139-45d4-96a4-506da1567292-kube-api-access-9jt7h" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722522 7926 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="0b02b740-5698-4e9a-90fe-2873bd0b0958" volumeName="kubernetes.io/secret/0b02b740-5698-4e9a-90fe-2873bd0b0958-serving-cert" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722553 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="27c20f63-9bfb-4703-94d5-0c65475e08d1" volumeName="kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-config" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722568 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="27c20f63-9bfb-4703-94d5-0c65475e08d1" volumeName="kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-service-ca-bundle" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722584 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e618c5c-52be-4b52-b426-b92555dee9de" volumeName="kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-profile-collector-cert" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722595 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70d217a9-86b7-47b9-a7da-9ac920b9c7c2" volumeName="kubernetes.io/secret/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-serving-cert" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722607 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2ab0a907-7abe-4808-ba21-bdda1506eae2" volumeName="kubernetes.io/projected/2ab0a907-7abe-4808-ba21-bdda1506eae2-kube-api-access-9pw88" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722628 7926 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="2e618c5c-52be-4b52-b426-b92555dee9de" volumeName="kubernetes.io/projected/2e618c5c-52be-4b52-b426-b92555dee9de-kube-api-access-nrc7l" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722640 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62935559-041f-4694-9d36-adc809d079b4" volumeName="kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-cni-sysctl-allowlist" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722657 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62935559-041f-4694-9d36-adc809d079b4" volumeName="kubernetes.io/projected/62935559-041f-4694-9d36-adc809d079b4-kube-api-access-6sq4t" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722686 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7adbe32-b8b9-438e-a2e3-f93146a97424" volumeName="kubernetes.io/secret/e7adbe32-b8b9-438e-a2e3-f93146a97424-serving-cert" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722697 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b02b740-5698-4e9a-90fe-2873bd0b0958" volumeName="kubernetes.io/configmap/0b02b740-5698-4e9a-90fe-2873bd0b0958-config" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722708 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a012b98-9341-41a3-9321-0a099f8bb9da" volumeName="kubernetes.io/configmap/3a012b98-9341-41a3-9321-0a099f8bb9da-service-ca" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722720 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="484154d0-66c8-4d0e-bf1b-f48d0abfe628" volumeName="kubernetes.io/projected/484154d0-66c8-4d0e-bf1b-f48d0abfe628-kube-api-access-b6wng" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722733 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a4c9b781-14c0-469c-bb9e-0c3982a04520" volumeName="kubernetes.io/projected/a4c9b781-14c0-469c-bb9e-0c3982a04520-kube-api-access-8sd27" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722744 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cef33294-81fb-41a2-811d-2565f94514d1" volumeName="kubernetes.io/projected/cef33294-81fb-41a2-811d-2565f94514d1-kube-api-access-5tklr" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722756 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7adbe32-b8b9-438e-a2e3-f93146a97424" volumeName="kubernetes.io/configmap/e7adbe32-b8b9-438e-a2e3-f93146a97424-config" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722767 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2506c282-0b37-4ece-8a0c-885d0b7f7901" volumeName="kubernetes.io/projected/2506c282-0b37-4ece-8a0c-885d0b7f7901-kube-api-access-6qd6r" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722778 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9d71a7a-a751-4de4-9c76-9bac85fe0177" volumeName="kubernetes.io/configmap/d9d71a7a-a751-4de4-9c76-9bac85fe0177-iptables-alerter-script" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722788 7926 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="2ab0a907-7abe-4808-ba21-bdda1506eae2" volumeName="kubernetes.io/configmap/2ab0a907-7abe-4808-ba21-bdda1506eae2-config" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722804 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59237aa6-6250-4619-8ee5-abae59f04b57" volumeName="kubernetes.io/empty-dir/59237aa6-6250-4619-8ee5-abae59f04b57-available-featuregates" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722814 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69785167-b4ae-415b-bdcb-029f62effe78" volumeName="kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-ovnkube-script-lib" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722826 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b6be6de-6fcc-4f57-b163-fe8f970a01a4" volumeName="kubernetes.io/secret/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-serving-cert" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722838 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="88f19cea-60ed-4977-a906-75deec51fc3d" volumeName="kubernetes.io/secret/88f19cea-60ed-4977-a906-75deec51fc3d-webhook-cert" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722854 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69785167-b4ae-415b-bdcb-029f62effe78" volumeName="kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-ovnkube-config" seLinuxMountContext="" Feb 16 20:57:08.722817 master-0 kubenswrapper[7926]: I0216 20:57:08.722866 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="70d217a9-86b7-47b9-a7da-9ac920b9c7c2" volumeName="kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-config" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.722888 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e0227bc-63f5-48be-95dc-1323a2b2e327" volumeName="kubernetes.io/projected/9e0227bc-63f5-48be-95dc-1323a2b2e327-kube-api-access-z9vmp" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.722903 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7adbe32-b8b9-438e-a2e3-f93146a97424" volumeName="kubernetes.io/projected/e7adbe32-b8b9-438e-a2e3-f93146a97424-kube-api-access" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.722915 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="27c20f63-9bfb-4703-94d5-0c65475e08d1" volumeName="kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-trusted-ca-bundle" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.722928 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="484154d0-66c8-4d0e-bf1b-f48d0abfe628" volumeName="kubernetes.io/configmap/484154d0-66c8-4d0e-bf1b-f48d0abfe628-env-overrides" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.722940 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62935559-041f-4694-9d36-adc809d079b4" volumeName="kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-cni-binary-copy" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.722952 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6b6be6de-6fcc-4f57-b163-fe8f970a01a4" volumeName="kubernetes.io/projected/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-kube-api-access-mkz65" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.722964 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e" volumeName="kubernetes.io/projected/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-kube-api-access" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.722978 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="88f19cea-60ed-4977-a906-75deec51fc3d" volumeName="kubernetes.io/projected/88f19cea-60ed-4977-a906-75deec51fc3d-kube-api-access-x85fb" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.722991 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b27e0202-8bdb-4a36-8c3e-0c203f7665b8" volumeName="kubernetes.io/configmap/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-daemon-config" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723003 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1b61063e-775e-421d-bf73-a6ef134293a0" volumeName="kubernetes.io/secret/1b61063e-775e-421d-bf73-a6ef134293a0-metrics-tls" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723016 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d453639-52ed-4a14-a2ee-02cf9acc2f7c" volumeName="kubernetes.io/projected/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-kube-api-access-59kpw" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723028 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="2506c282-0b37-4ece-8a0c-885d0b7f7901" volumeName="kubernetes.io/configmap/2506c282-0b37-4ece-8a0c-885d0b7f7901-trusted-ca" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723039 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b6be6de-6fcc-4f57-b163-fe8f970a01a4" volumeName="kubernetes.io/configmap/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-config" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723050 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="88f19cea-60ed-4977-a906-75deec51fc3d" volumeName="kubernetes.io/configmap/88f19cea-60ed-4977-a906-75deec51fc3d-ovnkube-identity-cm" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723061 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cef33294-81fb-41a2-811d-2565f94514d1" volumeName="kubernetes.io/configmap/cef33294-81fb-41a2-811d-2565f94514d1-trusted-ca" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723072 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec7dd4ea-a139-45d4-96a4-506da1567292" volumeName="kubernetes.io/configmap/ec7dd4ea-a139-45d4-96a4-506da1567292-telemetry-config" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723084 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c7333319-3fe6-4b3f-b600-6b6df49fcaff" volumeName="kubernetes.io/secret/c7333319-3fe6-4b3f-b600-6b6df49fcaff-serving-cert" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723105 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b02b740-5698-4e9a-90fe-2873bd0b0958" volumeName="kubernetes.io/projected/0b02b740-5698-4e9a-90fe-2873bd0b0958-kube-api-access" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723116 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5e062e07-8076-444c-b476-4eb2848e9613" volumeName="kubernetes.io/projected/5e062e07-8076-444c-b476-4eb2848e9613-kube-api-access-dfmv6" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723132 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62935559-041f-4694-9d36-adc809d079b4" volumeName="kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-whereabouts-configmap" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723145 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="695549c8-d1fc-429d-9c9f-0a5915dc6074" volumeName="kubernetes.io/projected/695549c8-d1fc-429d-9c9f-0a5915dc6074-kube-api-access-7bcmr" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723156 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70d217a9-86b7-47b9-a7da-9ac920b9c7c2" volumeName="kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-service-ca" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723171 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70d217a9-86b7-47b9-a7da-9ac920b9c7c2" volumeName="kubernetes.io/projected/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-kube-api-access-ll4rg" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723184 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="9e0227bc-63f5-48be-95dc-1323a2b2e327" volumeName="kubernetes.io/configmap/9e0227bc-63f5-48be-95dc-1323a2b2e327-trusted-ca" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723197 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1b61063e-775e-421d-bf73-a6ef134293a0" volumeName="kubernetes.io/projected/1b61063e-775e-421d-bf73-a6ef134293a0-kube-api-access-x7pk6" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723210 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5e062e07-8076-444c-b476-4eb2848e9613" volumeName="kubernetes.io/empty-dir/5e062e07-8076-444c-b476-4eb2848e9613-operand-assets" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723229 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="695549c8-d1fc-429d-9c9f-0a5915dc6074" volumeName="kubernetes.io/configmap/695549c8-d1fc-429d-9c9f-0a5915dc6074-config" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723241 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69785167-b4ae-415b-bdcb-029f62effe78" volumeName="kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-env-overrides" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723253 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2ab0a907-7abe-4808-ba21-bdda1506eae2" volumeName="kubernetes.io/secret/2ab0a907-7abe-4808-ba21-bdda1506eae2-serving-cert" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723265 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="70d217a9-86b7-47b9-a7da-9ac920b9c7c2" volumeName="kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-ca" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723278 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e" volumeName="kubernetes.io/secret/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-serving-cert" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723291 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e0227bc-63f5-48be-95dc-1323a2b2e327" volumeName="kubernetes.io/projected/9e0227bc-63f5-48be-95dc-1323a2b2e327-bound-sa-token" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723303 7926 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a4c9b781-14c0-469c-bb9e-0c3982a04520" volumeName="kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-profile-collector-cert" seLinuxMountContext="" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723314 7926 reconstruct.go:97] "Volume reconstruction finished" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.723320 7926 reconciler.go:26] "Reconciler: start to sync state" Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.724048 7926 factory.go:153] Registering CRI-O factory Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.724070 7926 factory.go:221] Registration of the crio container factory successfully Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.724146 7926 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory 
Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.724170 7926 factory.go:103] Registering Raw factory Feb 16 20:57:08.724659 master-0 kubenswrapper[7926]: I0216 20:57:08.724187 7926 manager.go:1196] Started watching for new ooms in manager Feb 16 20:57:08.726107 master-0 kubenswrapper[7926]: I0216 20:57:08.724753 7926 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 20:57:08.726107 master-0 kubenswrapper[7926]: I0216 20:57:08.724829 7926 manager.go:319] Starting recovery of all containers Feb 16 20:57:08.728761 master-0 kubenswrapper[7926]: E0216 20:57:08.728708 7926 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Feb 16 20:57:08.729255 master-0 kubenswrapper[7926]: I0216 20:57:08.729226 7926 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 20:57:08.730253 master-0 kubenswrapper[7926]: I0216 20:57:08.730220 7926 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 20:57:08.730514 master-0 kubenswrapper[7926]: I0216 20:57:08.730484 7926 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 16 20:57:08.735573 master-0 kubenswrapper[7926]: I0216 20:57:08.735480 7926 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 16 20:57:08.737195 master-0 kubenswrapper[7926]: I0216 20:57:08.737160 7926 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 16 20:57:08.737255 master-0 kubenswrapper[7926]: I0216 20:57:08.737200 7926 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 16 20:57:08.737255 master-0 kubenswrapper[7926]: I0216 20:57:08.737222 7926 kubelet.go:2335] "Starting kubelet main sync loop" Feb 16 20:57:08.737423 master-0 kubenswrapper[7926]: E0216 20:57:08.737386 7926 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 16 20:57:08.738580 master-0 kubenswrapper[7926]: I0216 20:57:08.738526 7926 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 20:57:08.750183 master-0 kubenswrapper[7926]: I0216 20:57:08.750112 7926 generic.go:334] "Generic (PLEG): container finished" podID="69785167-b4ae-415b-bdcb-029f62effe78" containerID="d7022d510b5111f523030386d2b2e3f81b8551ed9e8be0ecf6a80ac34378ca5e" exitCode=0 Feb 16 20:57:08.756706 master-0 kubenswrapper[7926]: I0216 20:57:08.756632 7926 generic.go:334] "Generic (PLEG): container finished" podID="59237aa6-6250-4619-8ee5-abae59f04b57" containerID="61defc533791601dd8ff505e57b675aac367c1fe0144fefa77509ab84c3b3331" exitCode=0 Feb 16 20:57:08.760808 master-0 kubenswrapper[7926]: I0216 20:57:08.760764 7926 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="3bfeaa29dd18a9c052679918402bc8ad83eaec394fa47c6b58ac63f5cfd4bce4" exitCode=1 Feb 16 20:57:08.765144 master-0 kubenswrapper[7926]: I0216 20:57:08.765102 7926 generic.go:334] "Generic (PLEG): container finished" podID="5d1e91e5a1fed5cf7076a92d2830d36f" containerID="2dca4633ccf4f45bb4ab9181df018e7f5607187bc3ce7c60613bb7c75dbb3049" exitCode=0 Feb 16 20:57:08.774457 master-0 kubenswrapper[7926]: I0216 20:57:08.774404 7926 generic.go:334] "Generic (PLEG): container finished" podID="cc1d7efb-93cd-4f49-ace0-2144532cae9e" 
containerID="ffb676f67b4284795ed9016656d43ca3b8d0c5d83ea808c4b84c0f1bccf3bdd0" exitCode=0 Feb 16 20:57:08.776419 master-0 kubenswrapper[7926]: I0216 20:57:08.776386 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/2.log" Feb 16 20:57:08.776931 master-0 kubenswrapper[7926]: I0216 20:57:08.776901 7926 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="34cedb032f29de87a57c244cfdac89c6368a83bd489ea19dfd7e57624682d8a7" exitCode=1 Feb 16 20:57:08.777010 master-0 kubenswrapper[7926]: I0216 20:57:08.776998 7926 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="1c9bfe3aaee57fe250198f3484327052043637146bacc2e7c8dfb22afd3d4c6c" exitCode=0 Feb 16 20:57:08.782782 master-0 kubenswrapper[7926]: I0216 20:57:08.782719 7926 generic.go:334] "Generic (PLEG): container finished" podID="62935559-041f-4694-9d36-adc809d079b4" containerID="0213e2c5badfad1c445275191896cc5e9028427f3090c086deb48f44170a8559" exitCode=0 Feb 16 20:57:08.782782 master-0 kubenswrapper[7926]: I0216 20:57:08.782759 7926 generic.go:334] "Generic (PLEG): container finished" podID="62935559-041f-4694-9d36-adc809d079b4" containerID="c4606e99d38ef423f540d128546208027e050c83b7e8385117d1ac9efe8a49dd" exitCode=0 Feb 16 20:57:08.782782 master-0 kubenswrapper[7926]: I0216 20:57:08.782767 7926 generic.go:334] "Generic (PLEG): container finished" podID="62935559-041f-4694-9d36-adc809d079b4" containerID="4c7a7e08f576cfd5e11632a9ba0076da03fa44265bff3bddab5c897154cfdd10" exitCode=0 Feb 16 20:57:08.782782 master-0 kubenswrapper[7926]: I0216 20:57:08.782776 7926 generic.go:334] "Generic (PLEG): container finished" podID="62935559-041f-4694-9d36-adc809d079b4" containerID="181fe628d311f1cd1061bd5a4ed240a9f0bc9297d01fb093f8d0beb40911a4e0" exitCode=0 Feb 16 20:57:08.782782 master-0 
kubenswrapper[7926]: I0216 20:57:08.782782 7926 generic.go:334] "Generic (PLEG): container finished" podID="62935559-041f-4694-9d36-adc809d079b4" containerID="764147f0ae46dce8cfdba6d43c9720c0e223cc03d6732303325fb33cc0d7abd0" exitCode=0 Feb 16 20:57:08.782782 master-0 kubenswrapper[7926]: I0216 20:57:08.782789 7926 generic.go:334] "Generic (PLEG): container finished" podID="62935559-041f-4694-9d36-adc809d079b4" containerID="2485cbe452aed6f7043c33dccc17caa48675a3e464f4b79370075f51c4973793" exitCode=0 Feb 16 20:57:08.788872 master-0 kubenswrapper[7926]: I0216 20:57:08.788834 7926 generic.go:334] "Generic (PLEG): container finished" podID="700bc24c-4b00-44f0-90b0-aa555fe5c7a8" containerID="fa302e5e493b2dfa58bae20f0ca7e4cc187d6d95bf769b99faf796dd889e114f" exitCode=0 Feb 16 20:57:08.837707 master-0 kubenswrapper[7926]: E0216 20:57:08.837559 7926 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 16 20:57:08.857777 master-0 kubenswrapper[7926]: I0216 20:57:08.857477 7926 manager.go:324] Recovery completed Feb 16 20:57:09.037925 master-0 kubenswrapper[7926]: E0216 20:57:09.037889 7926 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 16 20:57:09.159408 master-0 kubenswrapper[7926]: I0216 20:57:09.159143 7926 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 16 20:57:09.159408 master-0 kubenswrapper[7926]: I0216 20:57:09.159171 7926 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 16 20:57:09.159408 master-0 kubenswrapper[7926]: I0216 20:57:09.159213 7926 state_mem.go:36] "Initialized new in-memory state store" Feb 16 20:57:09.160053 master-0 kubenswrapper[7926]: I0216 20:57:09.159463 7926 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 16 20:57:09.160053 master-0 kubenswrapper[7926]: I0216 20:57:09.159484 7926 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 16 20:57:09.160053 master-0 
kubenswrapper[7926]: I0216 20:57:09.159520 7926 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Feb 16 20:57:09.160053 master-0 kubenswrapper[7926]: I0216 20:57:09.159529 7926 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Feb 16 20:57:09.160053 master-0 kubenswrapper[7926]: I0216 20:57:09.159537 7926 policy_none.go:49] "None policy: Start"
Feb 16 20:57:09.162618 master-0 kubenswrapper[7926]: I0216 20:57:09.162574 7926 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 16 20:57:09.162764 master-0 kubenswrapper[7926]: I0216 20:57:09.162629 7926 state_mem.go:35] "Initializing new in-memory state store"
Feb 16 20:57:09.163376 master-0 kubenswrapper[7926]: I0216 20:57:09.163163 7926 state_mem.go:75] "Updated machine memory state"
Feb 16 20:57:09.163376 master-0 kubenswrapper[7926]: I0216 20:57:09.163184 7926 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint"
Feb 16 20:57:09.174338 master-0 kubenswrapper[7926]: I0216 20:57:09.174265 7926 manager.go:334] "Starting Device Plugin manager"
Feb 16 20:57:09.174578 master-0 kubenswrapper[7926]: I0216 20:57:09.174405 7926 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 16 20:57:09.174578 master-0 kubenswrapper[7926]: I0216 20:57:09.174424 7926 server.go:79] "Starting device plugin registration server"
Feb 16 20:57:09.174982 master-0 kubenswrapper[7926]: I0216 20:57:09.174949 7926 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 16 20:57:09.175037 master-0 kubenswrapper[7926]: I0216 20:57:09.174973 7926 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 16 20:57:09.175238 master-0 kubenswrapper[7926]: I0216 20:57:09.175206 7926 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 16 20:57:09.175344 master-0 kubenswrapper[7926]: I0216 20:57:09.175320 7926 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 16 20:57:09.175344 master-0 kubenswrapper[7926]: I0216 20:57:09.175335 7926 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 16 20:57:09.276297 master-0 kubenswrapper[7926]: I0216 20:57:09.276205 7926 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 20:57:09.278289 master-0 kubenswrapper[7926]: I0216 20:57:09.278250 7926 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 20:57:09.278367 master-0 kubenswrapper[7926]: I0216 20:57:09.278296 7926 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 20:57:09.278367 master-0 kubenswrapper[7926]: I0216 20:57:09.278310 7926 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 20:57:09.278367 master-0 kubenswrapper[7926]: I0216 20:57:09.278359 7926 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 16 20:57:09.291181 master-0 kubenswrapper[7926]: I0216 20:57:09.291055 7926 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Feb 16 20:57:09.291181 master-0 kubenswrapper[7926]: I0216 20:57:09.291184 7926 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Feb 16 20:57:09.542598 master-0 kubenswrapper[7926]: I0216 20:57:09.542340 7926 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0"]
Feb 16 20:57:09.546081 master-0 kubenswrapper[7926]: I0216 20:57:09.542812 7926 apiserver.go:52] "Watching apiserver"
Feb 16 20:57:09.547554 master-0 kubenswrapper[7926]: I0216 20:57:09.545835 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"fc88dd28d8567cb614f787ef77e43ceb61a79e3dffda24d95403e277882bb247"}
Feb 16 20:57:09.547554 master-0 kubenswrapper[7926]: I0216 20:57:09.546401 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"7e0471aa80085ed85cb40c9b3c8ab6f80ea1655f1734a052a840a434c72c54f4"}
Feb 16 20:57:09.547554 master-0 kubenswrapper[7926]: I0216 20:57:09.546419 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"3bfeaa29dd18a9c052679918402bc8ad83eaec394fa47c6b58ac63f5cfd4bce4"}
Feb 16 20:57:09.547554 master-0 kubenswrapper[7926]: I0216 20:57:09.546435 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"76dbaddee4470107b39590128f61476392182af8f7359d5ef8d2efc6c99ae59e"}
Feb 16 20:57:09.547554 master-0 kubenswrapper[7926]: I0216 20:57:09.546455 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerStarted","Data":"5e7c38ffeebe9ecd58ceaa66f0e5d878c7328cfe4f821ef677aab62956457cf2"}
Feb 16 20:57:09.547554 master-0 kubenswrapper[7926]: I0216 20:57:09.546466 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerStarted","Data":"917b8b89b52fc1ea526b8dd828bd51e4ae2f231263633fb2c2bfa2d5e4419132"}
Feb 16 20:57:09.547554 master-0 kubenswrapper[7926]: I0216 20:57:09.546478 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerDied","Data":"2dca4633ccf4f45bb4ab9181df018e7f5607187bc3ce7c60613bb7c75dbb3049"}
Feb 16 20:57:09.547554 master-0 kubenswrapper[7926]: I0216 20:57:09.546501 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerStarted","Data":"91a4c15bb67084035c73bb065892be1c9d73ba9204c94c99f7433a6c3008aaff"}
Feb 16 20:57:09.547554 master-0 kubenswrapper[7926]: I0216 20:57:09.546513 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"a3ef8c2f17e0843dbc7265db7f67c564c2c97d41bf1c253c3466338241e2b204"}
Feb 16 20:57:09.547554 master-0 kubenswrapper[7926]: I0216 20:57:09.546525 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"fea56a548bb1b40870646931b3ee24bfa53d974b5b14be8ecc57115395d0831e"}
Feb 16 20:57:09.547554 master-0 kubenswrapper[7926]: I0216 20:57:09.546537 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"ee74f85cd24cd54b2a4b43b0584cf795c92f05590ca9093c69737b765e2c01d8"}
Feb 16 20:57:09.547554 master-0 kubenswrapper[7926]: I0216 20:57:09.546557 7926 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27cab70a8212b927f3a896bd289c92658d551d4d6062085d094a691e761d0282"
Feb 16 20:57:09.547554 master-0 kubenswrapper[7926]: I0216 20:57:09.546569 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"9ea7eb4c5b7177a7e2ac3c5dca26fbf5f811d30a8d29e8b826572146fe10d264"}
Feb 16 20:57:09.547554 master-0 kubenswrapper[7926]: I0216 20:57:09.546585 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"34cedb032f29de87a57c244cfdac89c6368a83bd489ea19dfd7e57624682d8a7"}
Feb 16 20:57:09.547554 master-0 kubenswrapper[7926]: I0216 20:57:09.546596 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"1c9bfe3aaee57fe250198f3484327052043637146bacc2e7c8dfb22afd3d4c6c"}
Feb 16 20:57:09.547554 master-0 kubenswrapper[7926]: I0216 20:57:09.546608 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"f04bc2a9a7b0a2ad7783338e4d002aabfd3d03dc3ab93d584acf59a1f159b65a"}
Feb 16 20:57:09.547554 master-0 kubenswrapper[7926]: I0216 20:57:09.546623 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerStarted","Data":"f06b93dc1f7853f1547eea454f40e687d56a498fbbe7a281e785547401b0538b"}
Feb 16 20:57:09.547554 master-0 kubenswrapper[7926]: I0216 20:57:09.546635 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerStarted","Data":"392d856d7fe28dd19573efbe9000d6ecfa05d7a1577bf8dec97ef5ca7366c7d8"}
Feb 16 20:57:09.547554 master-0 kubenswrapper[7926]: I0216 20:57:09.546693 7926 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e5b179a0033062cd2b178034bcb5784ab1edcaef771f5cac5fd7b9ba67359d1"
Feb 16 20:57:09.552354 master-0 kubenswrapper[7926]: I0216 20:57:09.552307 7926 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 16 20:57:09.553039 master-0 kubenswrapper[7926]: I0216 20:57:09.552978 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p","openshift-multus/multus-65zz6","openshift-multus/multus-admission-controller-7c64d55f8-z46jt","openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6","openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g","kube-system/bootstrap-kube-controller-manager-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq","openshift-multus/network-metrics-daemon-42bw7","openshift-network-diagnostics/network-check-target-68c25","openshift-network-operator/network-operator-6fcf4c966-n4hfs","openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd","assisted-installer/assisted-installer-controller-6llwf","kube-system/bootstrap-kube-scheduler-master-0","openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw","openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn","openshift-multus/multus-additional-cni-plugins-8zsx4","openshift-ovn-kubernetes/ovnkube-node-z8h4n","openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d","openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4","openshift-dns-operator/dns-operator-86b8869b79-cdltb","openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb","openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl","openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw","openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8","openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-network-operator/iptables-alerter-b68cj","openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4","openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv","openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96","openshift-network-node-identity/network-node-identity-tpj6f","openshift-authentication-operator/authentication-operator-755d954778-8gnq5","openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9","openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft"]
Feb 16 20:57:09.553246 master-0 kubenswrapper[7926]: I0216 20:57:09.553211 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-6llwf"
Feb 16 20:57:09.553437 master-0 kubenswrapper[7926]: I0216 20:57:09.553404 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d"
Feb 16 20:57:09.553769 master-0 kubenswrapper[7926]: I0216 20:57:09.553739 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb"
Feb 16 20:57:09.555695 master-0 kubenswrapper[7926]: I0216 20:57:09.555618 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 16 20:57:09.556134 master-0 kubenswrapper[7926]: I0216 20:57:09.555996 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 16 20:57:09.556827 master-0 kubenswrapper[7926]: I0216 20:57:09.556188 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 20:57:09.556827 master-0 kubenswrapper[7926]: I0216 20:57:09.556239 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 16 20:57:09.556827 master-0 kubenswrapper[7926]: I0216 20:57:09.556378 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 16 20:57:09.556827 master-0 kubenswrapper[7926]: I0216 20:57:09.556447 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 16 20:57:09.556827 master-0 kubenswrapper[7926]: I0216 20:57:09.556574 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 16 20:57:09.556827 master-0 kubenswrapper[7926]: I0216 20:57:09.556691 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 16 20:57:09.557052 master-0 kubenswrapper[7926]: I0216 20:57:09.556923 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4"
Feb 16 20:57:09.557052 master-0 kubenswrapper[7926]: I0216 20:57:09.556959 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g"
Feb 16 20:57:09.557114 master-0 kubenswrapper[7926]: I0216 20:57:09.557098 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw"
Feb 16 20:57:09.557146 master-0 kubenswrapper[7926]: I0216 20:57:09.557129 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6"
Feb 16 20:57:09.557177 master-0 kubenswrapper[7926]: I0216 20:57:09.557147 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb"
Feb 16 20:57:09.557210 master-0 kubenswrapper[7926]: I0216 20:57:09.557159 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq"
Feb 16 20:57:09.557281 master-0 kubenswrapper[7926]: I0216 20:57:09.557260 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn"
Feb 16 20:57:09.557361 master-0 kubenswrapper[7926]: I0216 20:57:09.557341 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq"
Feb 16 20:57:09.558808 master-0 kubenswrapper[7926]: I0216 20:57:09.558101 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 16 20:57:09.558808 master-0 kubenswrapper[7926]: I0216 20:57:09.558575 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 16 20:57:09.558808 master-0 kubenswrapper[7926]: I0216 20:57:09.558595 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 16 20:57:09.558808 master-0 kubenswrapper[7926]: I0216 20:57:09.558627 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 16 20:57:09.558808 master-0 kubenswrapper[7926]: I0216 20:57:09.558666 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 16 20:57:09.558808 master-0 kubenswrapper[7926]: I0216 20:57:09.558674 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 16 20:57:09.558808 master-0 kubenswrapper[7926]: I0216 20:57:09.558694 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 16 20:57:09.560978 master-0 kubenswrapper[7926]: I0216 20:57:09.558596 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 16 20:57:09.560978 master-0 kubenswrapper[7926]: I0216 20:57:09.558730 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 16 20:57:09.560978 master-0 kubenswrapper[7926]: I0216 20:57:09.558758 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 16 20:57:09.561479 master-0 kubenswrapper[7926]: I0216 20:57:09.558753 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 16 20:57:09.561479 master-0 kubenswrapper[7926]: I0216 20:57:09.558952 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 16 20:57:09.561479 master-0 kubenswrapper[7926]: I0216 20:57:09.559100 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 16 20:57:09.563925 master-0 kubenswrapper[7926]: I0216 20:57:09.563514 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7"
Feb 16 20:57:09.563925 master-0 kubenswrapper[7926]: I0216 20:57:09.563674 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 16 20:57:09.564407 master-0 kubenswrapper[7926]: I0216 20:57:09.563999 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 16 20:57:09.564407 master-0 kubenswrapper[7926]: I0216 20:57:09.564261 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 16 20:57:09.564869 master-0 kubenswrapper[7926]: I0216 20:57:09.564824 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Feb 16 20:57:09.564964 master-0 kubenswrapper[7926]: I0216 20:57:09.564940 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 16 20:57:09.565123 master-0 kubenswrapper[7926]: I0216 20:57:09.565095 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 16 20:57:09.565306 master-0 kubenswrapper[7926]: I0216 20:57:09.565273 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 16 20:57:09.565346 master-0 kubenswrapper[7926]: I0216 20:57:09.565324 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt"
Feb 16 20:57:09.566889 master-0 kubenswrapper[7926]: I0216 20:57:09.566860 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-68c25"
Feb 16 20:57:09.567980 master-0 kubenswrapper[7926]: I0216 20:57:09.567948 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 16 20:57:09.569268 master-0 kubenswrapper[7926]: I0216 20:57:09.569221 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 16 20:57:09.569903 master-0 kubenswrapper[7926]: I0216 20:57:09.569866 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 16 20:57:09.570085 master-0 kubenswrapper[7926]: I0216 20:57:09.570062 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 16 20:57:09.570192 master-0 kubenswrapper[7926]: I0216 20:57:09.570146 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 16 20:57:09.570260 master-0 kubenswrapper[7926]: I0216 20:57:09.570239 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Feb 16 20:57:09.570292 master-0 kubenswrapper[7926]: I0216 20:57:09.570257 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 16 20:57:09.570529 master-0 kubenswrapper[7926]: I0216 20:57:09.570492 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Feb 16 20:57:09.570957 master-0 kubenswrapper[7926]: I0216 20:57:09.570924 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 16 20:57:09.571265 master-0 kubenswrapper[7926]: I0216 20:57:09.571239 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 16 20:57:09.571301 master-0 kubenswrapper[7926]: I0216 20:57:09.571284 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Feb 16 20:57:09.571414 master-0 kubenswrapper[7926]: I0216 20:57:09.571389 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 16 20:57:09.571517 master-0 kubenswrapper[7926]: I0216 20:57:09.571477 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 16 20:57:09.571556 master-0 kubenswrapper[7926]: I0216 20:57:09.571522 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Feb 16 20:57:09.571556 master-0 kubenswrapper[7926]: I0216 20:57:09.571530 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 16 20:57:09.571556 master-0 kubenswrapper[7926]: I0216 20:57:09.571555 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 16 20:57:09.571648 master-0 kubenswrapper[7926]: I0216 20:57:09.571574 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 16 20:57:09.571871 master-0 kubenswrapper[7926]: I0216 20:57:09.571842 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 16 20:57:09.571915 master-0 kubenswrapper[7926]: I0216 20:57:09.571882 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Feb 16 20:57:09.571949 master-0 kubenswrapper[7926]: I0216 20:57:09.571844 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Feb 16 20:57:09.572016 master-0 kubenswrapper[7926]: I0216 20:57:09.571997 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 16 20:57:09.572063 master-0 kubenswrapper[7926]: I0216 20:57:09.572043 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 16 20:57:09.572063 master-0 kubenswrapper[7926]: I0216 20:57:09.572058 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 16 20:57:09.572193 master-0 kubenswrapper[7926]: I0216 20:57:09.572167 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 16 20:57:09.572238 master-0 kubenswrapper[7926]: I0216 20:57:09.572198 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 16 20:57:09.572272 master-0 kubenswrapper[7926]: I0216 20:57:09.572237 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 16 20:57:09.572348 master-0 kubenswrapper[7926]: I0216 20:57:09.572327 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 16 20:57:09.572524 master-0 kubenswrapper[7926]: I0216 20:57:09.572474 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 16 20:57:09.574725 master-0 kubenswrapper[7926]: I0216 20:57:09.574695 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 16 20:57:09.574949 master-0 kubenswrapper[7926]: I0216 20:57:09.574925 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 16 20:57:09.575148 master-0 kubenswrapper[7926]: I0216 20:57:09.575122 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 16 20:57:09.575516 master-0 kubenswrapper[7926]: I0216 20:57:09.575490 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 16 20:57:09.576813 master-0 kubenswrapper[7926]: I0216 20:57:09.576776 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 16 20:57:09.577992 master-0 kubenswrapper[7926]: I0216 20:57:09.577959 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 16 20:57:09.581931 master-0 kubenswrapper[7926]: I0216 20:57:09.581885 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 16 20:57:09.582410 master-0 kubenswrapper[7926]: I0216 20:57:09.582129 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 16 20:57:09.585236 master-0 kubenswrapper[7926]: I0216 20:57:09.585214 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 16 20:57:09.585338 master-0 kubenswrapper[7926]: I0216 20:57:09.585316 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 16 20:57:09.585338 master-0 kubenswrapper[7926]: I0216 20:57:09.585322 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 16 20:57:09.585410 master-0 kubenswrapper[7926]: I0216 20:57:09.585363 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 16 20:57:09.585410 master-0 kubenswrapper[7926]: I0216 20:57:09.585387 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 16 20:57:09.585966 master-0 kubenswrapper[7926]: I0216 20:57:09.585950 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Feb 16 20:57:09.586348 master-0 kubenswrapper[7926]: I0216 20:57:09.586321 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 16 20:57:09.586348 master-0 kubenswrapper[7926]: I0216 20:57:09.586338 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Feb 16 20:57:09.586433 master-0 kubenswrapper[7926]: I0216 20:57:09.586402 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 16 20:57:09.586484 master-0 kubenswrapper[7926]: I0216 20:57:09.586464 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 16 20:57:09.586520 master-0 kubenswrapper[7926]: I0216 20:57:09.586493 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 16 20:57:09.586520 master-0 kubenswrapper[7926]: I0216 20:57:09.586501 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 16 20:57:09.586581 master-0 kubenswrapper[7926]: I0216 20:57:09.586573 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 16 20:57:09.586706 master-0 kubenswrapper[7926]: I0216 20:57:09.586686 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 20:57:09.586755 master-0 kubenswrapper[7926]: I0216 20:57:09.586724 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 16 20:57:09.587997 master-0 kubenswrapper[7926]: I0216 20:57:09.587968 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 16 20:57:09.588635 master-0 kubenswrapper[7926]: I0216 20:57:09.588606 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 16 20:57:09.589438 master-0 kubenswrapper[7926]: I0216 20:57:09.589408 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 16 20:57:09.589477 master-0 kubenswrapper[7926]: I0216 20:57:09.589448 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 16 20:57:09.589685 master-0 kubenswrapper[7926]: I0216 20:57:09.589666 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 16 20:57:09.589824 master-0 kubenswrapper[7926]: I0216 20:57:09.589796 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 16 20:57:09.589972 master-0 kubenswrapper[7926]: I0216 20:57:09.589945 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 16 20:57:09.590198 master-0 kubenswrapper[7926]: I0216 20:57:09.590173 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 16 20:57:09.590395 master-0 kubenswrapper[7926]: I0216 20:57:09.590370 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 16 20:57:09.590445 master-0 kubenswrapper[7926]: I0216 20:57:09.590424 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 16 20:57:09.590478 master-0 kubenswrapper[7926]: I0216 20:57:09.590465 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Feb 16 20:57:09.591490 master-0 kubenswrapper[7926]: I0216 20:57:09.591462 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 16 20:57:09.594578 master-0 kubenswrapper[7926]: I0216 20:57:09.594549 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 16 20:57:09.598410 master-0 kubenswrapper[7926]: I0216 20:57:09.598382 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 16 20:57:09.601994 master-0 kubenswrapper[7926]: I0216 20:57:09.601961 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 16 20:57:09.604509 master-0 kubenswrapper[7926]: I0216 20:57:09.604478 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 16 20:57:09.605975 master-0 kubenswrapper[7926]: I0216 20:57:09.605952 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 16 20:57:09.618394 master-0 kubenswrapper[7926]: I0216 20:57:09.618357 7926 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Feb 16 20:57:09.625617 master-0 kubenswrapper[7926]: I0216 20:57:09.625592 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 16 20:57:09.643479 master-0 kubenswrapper[7926]: I0216 20:57:09.643447 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-daemon-config\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 20:57:09.643597 master-0 kubenswrapper[7926]: I0216 20:57:09.643487 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqm46\" (UniqueName: \"kubernetes.io/projected/69785167-b4ae-415b-bdcb-029f62effe78-kube-api-access-dqm46\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:57:09.643597 master-0 kubenswrapper[7926]: I0216 20:57:09.643516 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-config\") pod \"openshift-apiserver-operator-6d4655d9cf-tvzdw\" (UID: \"6b6be6de-6fcc-4f57-b163-fe8f970a01a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw"
Feb 16 20:57:09.643762 master-0 kubenswrapper[7926]: I0216 20:57:09.643729 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-cni-multus\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 20:57:09.643814 master-0 kubenswrapper[7926]: I0216 20:57:09.643788 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:57:09.643877 master-0 kubenswrapper[7926]: I0216 20:57:09.643820 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 16 20:57:09.643930 master-0 kubenswrapper[7926]: I0216 20:57:09.643858 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27c20f63-9bfb-4703-94d5-0c65475e08d1-serving-cert\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5"
Feb 16 20:57:09.643930 master-0 kubenswrapper[7926]: I0216 20:57:09.643920 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjsnz\" (UniqueName: \"kubernetes.io/projected/27c20f63-9bfb-4703-94d5-0c65475e08d1-kube-api-access-hjsnz\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5"
Feb 16 20:57:09.644023 master-0 kubenswrapper[7926]: I0216 20:57:09.643968 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq"
Feb 16 20:57:09.644023 master-0 kubenswrapper[7926]: I0216 20:57:09.643994 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-system-cni-dir\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4"
Feb 16 20:57:09.644112 master-0 kubenswrapper[7926]: I0216 20:57:09.644021 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jt7h\" (UniqueName: \"kubernetes.io/projected/ec7dd4ea-a139-45d4-96a4-506da1567292-kube-api-access-9jt7h\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn"
Feb 16 20:57:09.644112 master-0 kubenswrapper[7926]: I0216 20:57:09.644049 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vklwz\" (UniqueName: \"kubernetes.io/projected/59237aa6-6250-4619-8ee5-abae59f04b57-kube-api-access-vklwz\") pod \"openshift-config-operator-7c6bdb986f-xbd96\" (UID: \"59237aa6-6250-4619-8ee5-abae59f04b57\") " 
pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 20:57:09.644196 master-0 kubenswrapper[7926]: I0216 20:57:09.644125 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7nmb\" (UniqueName: \"kubernetes.io/projected/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-kube-api-access-g7nmb\") pod \"package-server-manager-5c696dbdcd-9m94g\" (UID: \"4b035e85-b2b0-4dee-bb86-3465fc4b98a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 20:57:09.644196 master-0 kubenswrapper[7926]: I0216 20:57:09.644157 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b61063e-775e-421d-bf73-a6ef134293a0-metrics-tls\") pod \"network-operator-6fcf4c966-n4hfs\" (UID: \"1b61063e-775e-421d-bf73-a6ef134293a0\") " pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" Feb 16 20:57:09.644196 master-0 kubenswrapper[7926]: I0216 20:57:09.644183 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-slash\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.644381 master-0 kubenswrapper[7926]: I0216 20:57:09.644203 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-log-socket\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.644381 master-0 kubenswrapper[7926]: I0216 20:57:09.644205 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-config\") pod \"openshift-apiserver-operator-6d4655d9cf-tvzdw\" (UID: \"6b6be6de-6fcc-4f57-b163-fe8f970a01a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw" Feb 16 20:57:09.644381 master-0 kubenswrapper[7926]: I0216 20:57:09.644248 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs\") pod \"network-metrics-daemon-42bw7\" (UID: \"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:57:09.644381 master-0 kubenswrapper[7926]: I0216 20:57:09.644270 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a012b98-9341-41a3-9321-0a099f8bb9da-kube-api-access\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:57:09.644381 master-0 kubenswrapper[7926]: I0216 20:57:09.644291 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:57:09.644381 master-0 kubenswrapper[7926]: I0216 20:57:09.644311 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-profile-collector-cert\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" 
Feb 16 20:57:09.644381 master-0 kubenswrapper[7926]: I0216 20:57:09.644332 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-config\") pod \"kube-controller-manager-operator-78ff47c7c5-7p9ft\" (UID: \"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft"
Feb 16 20:57:09.644381 master-0 kubenswrapper[7926]: I0216 20:57:09.644338 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27c20f63-9bfb-4703-94d5-0c65475e08d1-serving-cert\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5"
Feb 16 20:57:09.644724 master-0 kubenswrapper[7926]: I0216 20:57:09.644354 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ec7dd4ea-a139-45d4-96a4-506da1567292-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn"
Feb 16 20:57:09.644724 master-0 kubenswrapper[7926]: I0216 20:57:09.644599 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-config\") pod \"kube-controller-manager-operator-78ff47c7c5-7p9ft\" (UID: \"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft"
Feb 16 20:57:09.644724 master-0 kubenswrapper[7926]: I0216 20:57:09.644617 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b61063e-775e-421d-bf73-a6ef134293a0-metrics-tls\") pod \"network-operator-6fcf4c966-n4hfs\" (UID: \"1b61063e-775e-421d-bf73-a6ef134293a0\") " pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs"
Feb 16 20:57:09.644724 master-0 kubenswrapper[7926]: I0216 20:57:09.644654 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab0a907-7abe-4808-ba21-bdda1506eae2-config\") pod \"service-ca-operator-5dc4688546-q5vjl\" (UID: \"2ab0a907-7abe-4808-ba21-bdda1506eae2\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl"
Feb 16 20:57:09.644724 master-0 kubenswrapper[7926]: I0216 20:57:09.644722 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-profile-collector-cert\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6"
Feb 16 20:57:09.644724 master-0 kubenswrapper[7926]: I0216 20:57:09.644720 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkdzb\" (UniqueName: \"kubernetes.io/projected/d9d71a7a-a751-4de4-9c76-9bac85fe0177-kube-api-access-jkdzb\") pod \"iptables-alerter-b68cj\" (UID: \"d9d71a7a-a751-4de4-9c76-9bac85fe0177\") " pod="openshift-network-operator/iptables-alerter-b68cj"
Feb 16 20:57:09.645127 master-0 kubenswrapper[7926]: I0216 20:57:09.644742 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ec7dd4ea-a139-45d4-96a4-506da1567292-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn"
Feb 16 20:57:09.645254 master-0 kubenswrapper[7926]: I0216 20:57:09.644772 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2506c282-0b37-4ece-8a0c-885d0b7f7901-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4"
Feb 16 20:57:09.645322 master-0 kubenswrapper[7926]: I0216 20:57:09.645290 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-system-cni-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 20:57:09.645368 master-0 kubenswrapper[7926]: I0216 20:57:09.645343 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 20:57:09.645421 master-0 kubenswrapper[7926]: I0216 20:57:09.645375 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-etc-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:57:09.645457 master-0 kubenswrapper[7926]: I0216 20:57:09.645431 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7adbe32-b8b9-438e-a2e3-f93146a97424-config\") pod \"openshift-kube-scheduler-operator-7485d55966-xzww8\" (UID: \"e7adbe32-b8b9-438e-a2e3-f93146a97424\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8"
Feb 16 20:57:09.645950 master-0 kubenswrapper[7926]: I0216 20:57:09.645931 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7adbe32-b8b9-438e-a2e3-f93146a97424-config\") pod \"openshift-kube-scheduler-operator-7485d55966-xzww8\" (UID: \"e7adbe32-b8b9-438e-a2e3-f93146a97424\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8"
Feb 16 20:57:09.646204 master-0 kubenswrapper[7926]: I0216 20:57:09.646188 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 16 20:57:09.646375 master-0 kubenswrapper[7926]: I0216 20:57:09.646305 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2506c282-0b37-4ece-8a0c-885d0b7f7901-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4"
Feb 16 20:57:09.646442 master-0 kubenswrapper[7926]: I0216 20:57:09.646412 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-daemon-config\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 20:57:09.646573 master-0 kubenswrapper[7926]: I0216 20:57:09.645481 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 20:57:09.646619 master-0 kubenswrapper[7926]: I0216 20:57:09.646590 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b02b740-5698-4e9a-90fe-2873bd0b0958-config\") pod \"kube-apiserver-operator-54984b6678-cl5ld\" (UID: \"0b02b740-5698-4e9a-90fe-2873bd0b0958\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld"
Feb 16 20:57:09.646716 master-0 kubenswrapper[7926]: I0216 20:57:09.646684 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-conf-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 20:57:09.646787 master-0 kubenswrapper[7926]: I0216 20:57:09.646773 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-ca\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz"
Feb 16 20:57:09.646993 master-0 kubenswrapper[7926]: I0216 20:57:09.646933 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-systemd\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:57:09.647087 master-0 kubenswrapper[7926]: I0216 20:57:09.647042 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-run-ovn-kubernetes\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:57:09.647190 master-0 kubenswrapper[7926]: I0216 20:57:09.647156 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-ovnkube-script-lib\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:57:09.647250 master-0 kubenswrapper[7926]: I0216 20:57:09.647202 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-ca\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz"
Feb 16 20:57:09.647285 master-0 kubenswrapper[7926]: I0216 20:57:09.647245 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7adbe32-b8b9-438e-a2e3-f93146a97424-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-xzww8\" (UID: \"e7adbe32-b8b9-438e-a2e3-f93146a97424\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8"
Feb 16 20:57:09.647342 master-0 kubenswrapper[7926]: I0216 20:57:09.647323 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cef33294-81fb-41a2-811d-2565f94514d1-bound-sa-token\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d"
Feb 16 20:57:09.647434 master-0 kubenswrapper[7926]: I0216 20:57:09.647396 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx2kd\" (UniqueName: \"kubernetes.io/projected/c7333319-3fe6-4b3f-b600-6b6df49fcaff-kube-api-access-qx2kd\") pod \"kube-storage-version-migrator-operator-cd5474998-56v4p\" (UID: \"c7333319-3fe6-4b3f-b600-6b6df49fcaff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p"
Feb 16 20:57:09.647543 master-0 kubenswrapper[7926]: I0216 20:57:09.647494 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7333319-3fe6-4b3f-b600-6b6df49fcaff-config\") pod \"kube-storage-version-migrator-operator-cd5474998-56v4p\" (UID: \"c7333319-3fe6-4b3f-b600-6b6df49fcaff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p"
Feb 16 20:57:09.647543 master-0 kubenswrapper[7926]: I0216 20:57:09.647524 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b02b740-5698-4e9a-90fe-2873bd0b0958-config\") pod \"kube-apiserver-operator-54984b6678-cl5ld\" (UID: \"0b02b740-5698-4e9a-90fe-2873bd0b0958\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld"
Feb 16 20:57:09.647686 master-0 kubenswrapper[7926]: I0216 20:57:09.647584 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 16 20:57:09.647732 master-0 kubenswrapper[7926]: I0216 20:57:09.647702 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-cni-bin\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 20:57:09.647839 master-0 kubenswrapper[7926]: I0216 20:57:09.647785 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-z46jt\" (UID: \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt"
Feb 16 20:57:09.647926 master-0 kubenswrapper[7926]: I0216 20:57:09.647868 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bcmr\" (UniqueName: \"kubernetes.io/projected/695549c8-d1fc-429d-9c9f-0a5915dc6074-kube-api-access-7bcmr\") pod \"openshift-controller-manager-operator-5f5f84757d-k42w9\" (UID: \"695549c8-d1fc-429d-9c9f-0a5915dc6074\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9"
Feb 16 20:57:09.647970 master-0 kubenswrapper[7926]: I0216 20:57:09.647953 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59237aa6-6250-4619-8ee5-abae59f04b57-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-xbd96\" (UID: \"59237aa6-6250-4619-8ee5-abae59f04b57\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96"
Feb 16 20:57:09.648058 master-0 kubenswrapper[7926]: I0216 20:57:09.648027 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tklr\" (UniqueName: \"kubernetes.io/projected/cef33294-81fb-41a2-811d-2565f94514d1-kube-api-access-5tklr\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d"
Feb 16 20:57:09.648165 master-0 kubenswrapper[7926]: I0216 20:57:09.648114 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-service-ca-bundle\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5"
Feb 16 20:57:09.648252 master-0 kubenswrapper[7926]: I0216 20:57:09.648204 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-config\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5"
Feb 16 20:57:09.648331 master-0 kubenswrapper[7926]: I0216 20:57:09.648282 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/59237aa6-6250-4619-8ee5-abae59f04b57-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-xbd96\" (UID: \"59237aa6-6250-4619-8ee5-abae59f04b57\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96"
Feb 16 20:57:09.648447 master-0 kubenswrapper[7926]: I0216 20:57:09.648377 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/5e062e07-8076-444c-b476-4eb2848e9613-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-pdjn4\" (UID: \"5e062e07-8076-444c-b476-4eb2848e9613\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4"
Feb 16 20:57:09.648495 master-0 kubenswrapper[7926]: I0216 20:57:09.648292 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7333319-3fe6-4b3f-b600-6b6df49fcaff-config\") pod \"kube-storage-version-migrator-operator-cd5474998-56v4p\" (UID: \"c7333319-3fe6-4b3f-b600-6b6df49fcaff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p"
Feb 16 20:57:09.648794 master-0 kubenswrapper[7926]: I0216 20:57:09.648777 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-config\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5"
Feb 16 20:57:09.648992 master-0 kubenswrapper[7926]: I0216 20:57:09.648947 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-service-ca-bundle\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5"
Feb 16 20:57:09.649101 master-0 kubenswrapper[7926]: I0216 20:57:09.649072 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-7p9ft\" (UID: \"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft"
Feb 16 20:57:09.649171 master-0 kubenswrapper[7926]: I0216 20:57:09.649142 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/695549c8-d1fc-429d-9c9f-0a5915dc6074-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-k42w9\" (UID: \"695549c8-d1fc-429d-9c9f-0a5915dc6074\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9"
Feb 16 20:57:09.649226 master-0 kubenswrapper[7926]: I0216 20:57:09.649174 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/5e062e07-8076-444c-b476-4eb2848e9613-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-pdjn4\" (UID: \"5e062e07-8076-444c-b476-4eb2848e9613\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4"
Feb 16 20:57:09.649262 master-0 kubenswrapper[7926]: I0216 20:57:09.649213 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pw88\" (UniqueName: \"kubernetes.io/projected/2ab0a907-7abe-4808-ba21-bdda1506eae2-kube-api-access-9pw88\") pod \"service-ca-operator-5dc4688546-q5vjl\" (UID: \"2ab0a907-7abe-4808-ba21-bdda1506eae2\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl"
Feb 16 20:57:09.649294 master-0 kubenswrapper[7926]: I0216 20:57:09.649276 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b02b740-5698-4e9a-90fe-2873bd0b0958-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-cl5ld\" (UID: \"0b02b740-5698-4e9a-90fe-2873bd0b0958\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld"
Feb 16 20:57:09.649360 master-0 kubenswrapper[7926]: I0216 20:57:09.649329 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/484154d0-66c8-4d0e-bf1b-f48d0abfe628-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd"
Feb 16 20:57:09.649410 master-0 kubenswrapper[7926]: I0216 20:57:09.649392 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/695549c8-d1fc-429d-9c9f-0a5915dc6074-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-k42w9\" (UID: \"695549c8-d1fc-429d-9c9f-0a5915dc6074\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9"
Feb 16 20:57:09.649490 master-0 kubenswrapper[7926]: I0216 20:57:09.649474 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/484154d0-66c8-4d0e-bf1b-f48d0abfe628-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd"
Feb 16 20:57:09.649555 master-0 kubenswrapper[7926]: I0216 20:57:09.649537 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb"
Feb 16 20:57:09.649604 master-0 kubenswrapper[7926]: I0216 20:57:09.649590 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-serving-cert\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz"
Feb 16 20:57:09.649639 master-0 kubenswrapper[7926]: I0216 20:57:09.649623 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 20:57:09.649735 master-0 kubenswrapper[7926]: I0216 20:57:09.649722 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-cni-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 20:57:09.649792 master-0 kubenswrapper[7926]: I0216 20:57:09.649780 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-hostroot\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 20:57:09.649847 master-0 kubenswrapper[7926]: I0216 20:57:09.649807 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 20:57:09.649885 master-0 kubenswrapper[7926]: I0216 20:57:09.649852 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-config\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz"
Feb 16 20:57:09.649885 master-0 kubenswrapper[7926]: I0216 20:57:09.649880 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4"
Feb 16 20:57:09.649944 master-0 kubenswrapper[7926]: I0216 20:57:09.649911 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/59237aa6-6250-4619-8ee5-abae59f04b57-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-xbd96\" (UID: \"59237aa6-6250-4619-8ee5-abae59f04b57\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96"
Feb 16 20:57:09.650134 master-0 kubenswrapper[7926]: I0216 20:57:09.650117 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-serving-cert\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz"
Feb 16 20:57:09.650337 master-0 kubenswrapper[7926]: I0216 20:57:09.650321 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-config\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz"
Feb 16 20:57:09.650565 master-0 kubenswrapper[7926]: I0216 20:57:09.650526 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/484154d0-66c8-4d0e-bf1b-f48d0abfe628-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd"
Feb 16 20:57:09.650606 master-0 kubenswrapper[7926]: I0216 20:57:09.649924 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-var-lib-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:57:09.650785 master-0 kubenswrapper[7926]: I0216 20:57:09.650649 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 20:57:09.650785 master-0 kubenswrapper[7926]: I0216 20:57:09.650548 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4"
Feb 16 20:57:09.650785 master-0 kubenswrapper[7926]: I0216 20:57:09.650712 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn"
Feb 16 20:57:09.650909 master-0 kubenswrapper[7926]: I0216 20:57:09.650797 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName:
\"kubernetes.io/configmap/2ab0a907-7abe-4808-ba21-bdda1506eae2-config\") pod \"service-ca-operator-5dc4688546-q5vjl\" (UID: \"2ab0a907-7abe-4808-ba21-bdda1506eae2\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" Feb 16 20:57:09.650909 master-0 kubenswrapper[7926]: I0216 20:57:09.650828 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6wng\" (UniqueName: \"kubernetes.io/projected/484154d0-66c8-4d0e-bf1b-f48d0abfe628-kube-api-access-b6wng\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd" Feb 16 20:57:09.650909 master-0 kubenswrapper[7926]: I0216 20:57:09.650870 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/88f19cea-60ed-4977-a906-75deec51fc3d-webhook-cert\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f" Feb 16 20:57:09.650909 master-0 kubenswrapper[7926]: I0216 20:57:09.650900 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-node-log\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.651026 master-0 kubenswrapper[7926]: I0216 20:57:09.650921 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-client\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 20:57:09.651026 master-0 kubenswrapper[7926]: I0216 20:57:09.650948 7926 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-profile-collector-cert\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" Feb 16 20:57:09.651026 master-0 kubenswrapper[7926]: I0216 20:57:09.650972 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-cni-bin\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.651026 master-0 kubenswrapper[7926]: I0216 20:57:09.650995 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-cni-netd\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.651026 master-0 kubenswrapper[7926]: I0216 20:57:09.651013 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 20:57:09.651170 master-0 kubenswrapper[7926]: I0216 20:57:09.651037 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7333319-3fe6-4b3f-b600-6b6df49fcaff-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-56v4p\" (UID: \"c7333319-3fe6-4b3f-b600-6b6df49fcaff\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" Feb 16 20:57:09.651170 master-0 kubenswrapper[7926]: I0216 20:57:09.651060 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1b61063e-775e-421d-bf73-a6ef134293a0-host-etc-kube\") pod \"network-operator-6fcf4c966-n4hfs\" (UID: \"1b61063e-775e-421d-bf73-a6ef134293a0\") " pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" Feb 16 20:57:09.651170 master-0 kubenswrapper[7926]: I0216 20:57:09.651078 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/484154d0-66c8-4d0e-bf1b-f48d0abfe628-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd" Feb 16 20:57:09.651170 master-0 kubenswrapper[7926]: I0216 20:57:09.651127 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-ovnkube-config\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.651596 master-0 kubenswrapper[7926]: I0216 20:57:09.651400 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ab0a907-7abe-4808-ba21-bdda1506eae2-serving-cert\") pod \"service-ca-operator-5dc4688546-q5vjl\" (UID: \"2ab0a907-7abe-4808-ba21-bdda1506eae2\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" Feb 16 20:57:09.651596 master-0 kubenswrapper[7926]: I0216 20:57:09.651475 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-ovnkube-config\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.651596 master-0 kubenswrapper[7926]: I0216 20:57:09.651507 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-profile-collector-cert\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" Feb 16 20:57:09.651596 master-0 kubenswrapper[7926]: I0216 20:57:09.651527 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 20:57:09.651596 master-0 kubenswrapper[7926]: I0216 20:57:09.651564 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-netns\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:57:09.651888 master-0 kubenswrapper[7926]: I0216 20:57:09.651595 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 
20:57:09.651888 master-0 kubenswrapper[7926]: I0216 20:57:09.651863 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-client\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 20:57:09.651972 master-0 kubenswrapper[7926]: I0216 20:57:09.651845 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ab0a907-7abe-4808-ba21-bdda1506eae2-serving-cert\") pod \"service-ca-operator-5dc4688546-q5vjl\" (UID: \"2ab0a907-7abe-4808-ba21-bdda1506eae2\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" Feb 16 20:57:09.651972 master-0 kubenswrapper[7926]: I0216 20:57:09.651601 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59237aa6-6250-4619-8ee5-abae59f04b57-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-xbd96\" (UID: \"59237aa6-6250-4619-8ee5-abae59f04b57\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 20:57:09.651972 master-0 kubenswrapper[7926]: I0216 20:57:09.651951 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7333319-3fe6-4b3f-b600-6b6df49fcaff-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-56v4p\" (UID: \"c7333319-3fe6-4b3f-b600-6b6df49fcaff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" Feb 16 20:57:09.651972 master-0 kubenswrapper[7926]: I0216 20:57:09.651953 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/484154d0-66c8-4d0e-bf1b-f48d0abfe628-ovnkube-config\") pod 
\"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd" Feb 16 20:57:09.653408 master-0 kubenswrapper[7926]: I0216 20:57:09.652863 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-trusted-ca-bundle\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 20:57:09.653408 master-0 kubenswrapper[7926]: I0216 20:57:09.652938 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e062e07-8076-444c-b476-4eb2848e9613-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-pdjn4\" (UID: \"5e062e07-8076-444c-b476-4eb2848e9613\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" Feb 16 20:57:09.653408 master-0 kubenswrapper[7926]: I0216 20:57:09.652967 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sd27\" (UniqueName: \"kubernetes.io/projected/a4c9b781-14c0-469c-bb9e-0c3982a04520-kube-api-access-8sd27\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" Feb 16 20:57:09.653408 master-0 kubenswrapper[7926]: I0216 20:57:09.652990 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-whereabouts-configmap\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:57:09.653408 
master-0 kubenswrapper[7926]: I0216 20:57:09.653016 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls\") pod \"dns-operator-86b8869b79-cdltb\" (UID: \"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd\") " pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb" Feb 16 20:57:09.653408 master-0 kubenswrapper[7926]: I0216 20:57:09.653056 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d9d71a7a-a751-4de4-9c76-9bac85fe0177-host-slash\") pod \"iptables-alerter-b68cj\" (UID: \"d9d71a7a-a751-4de4-9c76-9bac85fe0177\") " pod="openshift-network-operator/iptables-alerter-b68cj" Feb 16 20:57:09.653408 master-0 kubenswrapper[7926]: I0216 20:57:09.653062 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/484154d0-66c8-4d0e-bf1b-f48d0abfe628-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd" Feb 16 20:57:09.653408 master-0 kubenswrapper[7926]: I0216 20:57:09.653165 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3a012b98-9341-41a3-9321-0a099f8bb9da-etc-cvo-updatepayloads\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:57:09.653408 master-0 kubenswrapper[7926]: I0216 20:57:09.653167 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-trusted-ca-bundle\") pod 
\"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 20:57:09.653408 master-0 kubenswrapper[7926]: I0216 20:57:09.653193 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/69785167-b4ae-415b-bdcb-029f62effe78-ovn-node-metrics-cert\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.653408 master-0 kubenswrapper[7926]: I0216 20:57:09.653215 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e0227bc-63f5-48be-95dc-1323a2b2e327-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" Feb 16 20:57:09.653408 master-0 kubenswrapper[7926]: I0216 20:57:09.653238 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d9d71a7a-a751-4de4-9c76-9bac85fe0177-iptables-alerter-script\") pod \"iptables-alerter-b68cj\" (UID: \"d9d71a7a-a751-4de4-9c76-9bac85fe0177\") " pod="openshift-network-operator/iptables-alerter-b68cj" Feb 16 20:57:09.653408 master-0 kubenswrapper[7926]: I0216 20:57:09.653237 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e062e07-8076-444c-b476-4eb2848e9613-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-pdjn4\" (UID: \"5e062e07-8076-444c-b476-4eb2848e9613\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" Feb 16 20:57:09.653408 master-0 kubenswrapper[7926]: I0216 
20:57:09.653257 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-whereabouts-configmap\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:57:09.653408 master-0 kubenswrapper[7926]: I0216 20:57:09.653259 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 20:57:09.653408 master-0 kubenswrapper[7926]: I0216 20:57:09.653302 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-os-release\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:57:09.653408 master-0 kubenswrapper[7926]: I0216 20:57:09.653337 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx4tz\" (UniqueName: \"kubernetes.io/projected/b27de289-c0f9-47ff-aac6-15b7bc1b178a-kube-api-access-fx4tz\") pod \"multus-admission-controller-7c64d55f8-z46jt\" (UID: \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" Feb 16 20:57:09.653408 master-0 kubenswrapper[7926]: I0216 20:57:09.653359 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: 
\"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:57:09.653408 master-0 kubenswrapper[7926]: I0216 20:57:09.653382 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3a012b98-9341-41a3-9321-0a099f8bb9da-etc-ssl-certs\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:57:09.653408 master-0 kubenswrapper[7926]: I0216 20:57:09.653412 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a012b98-9341-41a3-9321-0a099f8bb9da-service-ca\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:57:09.653408 master-0 kubenswrapper[7926]: I0216 20:57:09.653424 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.653451 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qd6r\" (UniqueName: \"kubernetes.io/projected/2506c282-0b37-4ece-8a0c-885d0b7f7901-kube-api-access-6qd6r\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.653481 7926 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-socket-dir-parent\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.653502 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll4rg\" (UniqueName: \"kubernetes.io/projected/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-kube-api-access-ll4rg\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.653524 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9vmp\" (UniqueName: \"kubernetes.io/projected/9e0227bc-63f5-48be-95dc-1323a2b2e327-kube-api-access-z9vmp\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.653583 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a012b98-9341-41a3-9321-0a099f8bb9da-service-ca\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.653590 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: 
\"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.653622 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-cni-binary-copy\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.653666 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-run-netns\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.653691 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7adbe32-b8b9-438e-a2e3-f93146a97424-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-xzww8\" (UID: \"e7adbe32-b8b9-438e-a2e3-f93146a97424\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.653712 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw9lp\" (UniqueName: \"kubernetes.io/projected/4085413c-9af1-4d2a-ba0f-33b42025cb7f-kube-api-access-dw9lp\") pod \"csi-snapshot-controller-operator-7b87b97578-v7xdv\" (UID: \"4085413c-9af1-4d2a-ba0f-33b42025cb7f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv" Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.653753 7926 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dfmv6\" (UniqueName: \"kubernetes.io/projected/5e062e07-8076-444c-b476-4eb2848e9613-kube-api-access-dfmv6\") pod \"cluster-olm-operator-55b69c6c48-pdjn4\" (UID: \"5e062e07-8076-444c-b476-4eb2848e9613\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.653782 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmvtk\" (UniqueName: \"kubernetes.io/projected/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-kube-api-access-zmvtk\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.653863 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-cni-binary-copy\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.653874 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.653903 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7adbe32-b8b9-438e-a2e3-f93146a97424-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-xzww8\" (UID: \"e7adbe32-b8b9-438e-a2e3-f93146a97424\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8"
Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.653935 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-os-release\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.653967 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sq4t\" (UniqueName: \"kubernetes.io/projected/62935559-041f-4694-9d36-adc809d079b4-kube-api-access-6sq4t\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4"
Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.653994 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-systemd-units\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.654021 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-cnibin\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.654060 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6"
Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.654099 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/695549c8-d1fc-429d-9c9f-0a5915dc6074-config\") pod \"openshift-controller-manager-operator-5f5f84757d-k42w9\" (UID: \"695549c8-d1fc-429d-9c9f-0a5915dc6074\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9"
Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.654129 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-multus-certs\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.654166 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-7p9ft\" (UID: \"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft"
Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.654193 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq"
Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.654223 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/88f19cea-60ed-4977-a906-75deec51fc3d-env-overrides\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f"
Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.654252 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/88f19cea-60ed-4977-a906-75deec51fc3d-ovnkube-identity-cm\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f"
Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.654265 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/695549c8-d1fc-429d-9c9f-0a5915dc6074-config\") pod \"openshift-controller-manager-operator-5f5f84757d-k42w9\" (UID: \"695549c8-d1fc-429d-9c9f-0a5915dc6074\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9"
Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.654278 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7pk6\" (UniqueName: \"kubernetes.io/projected/1b61063e-775e-421d-bf73-a6ef134293a0-kube-api-access-x7pk6\") pod \"network-operator-6fcf4c966-n4hfs\" (UID: \"1b61063e-775e-421d-bf73-a6ef134293a0\") " pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs"
Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.654311 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-cni-binary-copy\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 20:57:09.654308 master-0 kubenswrapper[7926]: I0216 20:57:09.654338 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcp5t\" (UniqueName: \"kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t\") pod \"network-check-target-68c25\" (UID: \"0d903d23-8e0b-424b-bcd0-e0a00f306e49\") " pod="openshift-network-diagnostics/network-check-target-68c25"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.654371 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-9m94g\" (UID: \"4b035e85-b2b0-4dee-bb86-3465fc4b98a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.654389 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-7p9ft\" (UID: \"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.654398 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.654470 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-kubelet\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.654503 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrc7l\" (UniqueName: \"kubernetes.io/projected/2e618c5c-52be-4b52-b426-b92555dee9de-kube-api-access-nrc7l\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.654530 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.654559 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-ovn\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.654611 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName:
\"kubernetes.io/configmap/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-cni-binary-copy\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.654656 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.654678 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.654720 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67qzh\" (UniqueName: \"kubernetes.io/projected/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-kube-api-access-67qzh\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.654746 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkz65\" (UniqueName: \"kubernetes.io/projected/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-kube-api-access-mkz65\") pod \"openshift-apiserver-operator-6d4655d9cf-tvzdw\" (UID: \"6b6be6de-6fcc-4f57-b163-fe8f970a01a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.654767 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.654790 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59kpw\" (UniqueName: \"kubernetes.io/projected/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-kube-api-access-59kpw\") pod \"network-metrics-daemon-42bw7\" (UID: \"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.654885 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-kubelet\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.654911 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-tvzdw\" (UID: \"6b6be6de-6fcc-4f57-b163-fe8f970a01a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.654933 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e0227bc-63f5-48be-95dc-1323a2b2e327-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.654955 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/88f19cea-60ed-4977-a906-75deec51fc3d-env-overrides\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.655029 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7wrr\" (UniqueName: \"kubernetes.io/projected/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-kube-api-access-p7wrr\") pod \"dns-operator-86b8869b79-cdltb\" (UID: \"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd\") " pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.655059 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-tvzdw\" (UID: \"6b6be6de-6fcc-4f57-b163-fe8f970a01a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.655108 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.655165 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-etc-kubernetes\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.655205 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.655245 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-cnibin\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.655281 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.655309 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e0227bc-63f5-48be-95dc-1323a2b2e327-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.655318 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.655362 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.655405 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-env-overrides\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.655448 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x85fb\" (UniqueName: \"kubernetes.io/projected/88f19cea-60ed-4977-a906-75deec51fc3d-kube-api-access-x85fb\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.655486 7926 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.655524 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b02b740-5698-4e9a-90fe-2873bd0b0958-serving-cert\") pod \"kube-apiserver-operator-54984b6678-cl5ld\" (UID: \"0b02b740-5698-4e9a-90fe-2873bd0b0958\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.655582 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-env-overrides\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.655671 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-k8s-cni-cncf-io\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.655710 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cef33294-81fb-41a2-811d-2565f94514d1-trusted-ca\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d"
Feb 16 20:57:09.655862 master-0 kubenswrapper[7926]: I0216 20:57:09.655849 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b02b740-5698-4e9a-90fe-2873bd0b0958-serving-cert\") pod \"kube-apiserver-operator-54984b6678-cl5ld\" (UID: \"0b02b740-5698-4e9a-90fe-2873bd0b0958\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld"
Feb 16 20:57:09.656970 master-0 kubenswrapper[7926]: I0216 20:57:09.656013 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cef33294-81fb-41a2-811d-2565f94514d1-trusted-ca\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d"
Feb 16 20:57:09.665884 master-0 kubenswrapper[7926]: I0216 20:57:09.665861 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 16 20:57:09.671328 master-0 kubenswrapper[7926]: I0216 20:57:09.671296 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/88f19cea-60ed-4977-a906-75deec51fc3d-webhook-cert\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f"
Feb 16 20:57:09.685729 master-0 kubenswrapper[7926]: I0216 20:57:09.685692 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 16 20:57:09.694975 master-0 kubenswrapper[7926]: I0216 20:57:09.694947 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/88f19cea-60ed-4977-a906-75deec51fc3d-ovnkube-identity-cm\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f"
Feb 16 20:57:09.705355 master-0 kubenswrapper[7926]: I0216 20:57:09.705309 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 16 20:57:09.725971 master-0 kubenswrapper[7926]: I0216 20:57:09.725930 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 16 20:57:09.745769 master-0 kubenswrapper[7926]: I0216 20:57:09.745721 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 16 20:57:09.754800 master-0 kubenswrapper[7926]: I0216 20:57:09.754705 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d9d71a7a-a751-4de4-9c76-9bac85fe0177-iptables-alerter-script\") pod \"iptables-alerter-b68cj\" (UID: \"d9d71a7a-a751-4de4-9c76-9bac85fe0177\") " pod="openshift-network-operator/iptables-alerter-b68cj"
Feb 16 20:57:09.756820 master-0 kubenswrapper[7926]: I0216 20:57:09.756779 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 20:57:09.756875 master-0 kubenswrapper[7926]: I0216 20:57:09.756827 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 20:57:09.756875 master-0 kubenswrapper[7926]: I0216 20:57:09.756842 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 20:57:09.756875 master-0 kubenswrapper[7926]: I0216 20:57:09.756849 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-etc-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:57:09.757008 master-0 kubenswrapper[7926]: I0216 20:57:09.756878 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-etc-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:57:09.757008 master-0 kubenswrapper[7926]: I0216 20:57:09.756894 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 20:57:09.757008 master-0 kubenswrapper[7926]: I0216 20:57:09.756923 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 20:57:09.757008 master-0 kubenswrapper[7926]: I0216 20:57:09.756939 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 20:57:09.757008 master-0 kubenswrapper[7926]: I0216 20:57:09.756973 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-system-cni-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 20:57:09.757008 master-0 kubenswrapper[7926]: I0216 20:57:09.757001 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-systemd\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:57:09.757213 master-0 kubenswrapper[7926]: I0216 20:57:09.757035 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-run-ovn-kubernetes\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:57:09.757213 master-0 kubenswrapper[7926]: I0216 20:57:09.757082 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-conf-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 20:57:09.757213 master-0
kubenswrapper[7926]: I0216 20:57:09.757108 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-z46jt\" (UID: \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt"
Feb 16 20:57:09.757213 master-0 kubenswrapper[7926]: I0216 20:57:09.757139 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-system-cni-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 20:57:09.757213 master-0 kubenswrapper[7926]: I0216 20:57:09.757140 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 16 20:57:09.757213 master-0 kubenswrapper[7926]: I0216 20:57:09.757166 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 16 20:57:09.757213 master-0 kubenswrapper[7926]: I0216 20:57:09.757167 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-cni-bin\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 20:57:09.757213 master-0 kubenswrapper[7926]: I0216 20:57:09.757201 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-systemd\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:57:09.757213 master-0 kubenswrapper[7926]: I0216 20:57:09.757217 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-run-ovn-kubernetes\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:57:09.757513 master-0 kubenswrapper[7926]: I0216 20:57:09.757228 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-cni-bin\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 20:57:09.757513 master-0 kubenswrapper[7926]: I0216 20:57:09.757249 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-conf-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 20:57:09.757513 master-0 kubenswrapper[7926]: I0216 20:57:09.757299 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb"
Feb 16 20:57:09.757513 master-0 kubenswrapper[7926]: I0216 20:57:09.757323 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-hostroot\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 20:57:09.757513 master-0 kubenswrapper[7926]: E0216 20:57:09.757330 7926 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 16 20:57:09.757513 master-0 kubenswrapper[7926]: I0216 20:57:09.757361 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 20:57:09.757513 master-0 kubenswrapper[7926]: E0216 20:57:09.757402 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs podName:b27de289-c0f9-47ff-aac6-15b7bc1b178a nodeName:}" failed. No retries permitted until 2026-02-16 20:57:10.257380721 +0000 UTC m=+1.892281151 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs") pod "multus-admission-controller-7c64d55f8-z46jt" (UID: "b27de289-c0f9-47ff-aac6-15b7bc1b178a") : secret "multus-admission-controller-secret" not found
Feb 16 20:57:09.757513 master-0 kubenswrapper[7926]: I0216 20:57:09.757344 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 20:57:09.757513 master-0 kubenswrapper[7926]: E0216 20:57:09.757432 7926 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 16 20:57:09.757513 master-0 kubenswrapper[7926]: I0216 20:57:09.757455 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-var-lib-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:57:09.757513 master-0 kubenswrapper[7926]: E0216 20:57:09.757476 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls podName:9e0227bc-63f5-48be-95dc-1323a2b2e327 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:10.257459133 +0000 UTC m=+1.892359433 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-4gczb" (UID: "9e0227bc-63f5-48be-95dc-1323a2b2e327") : secret "image-registry-operator-tls" not found
Feb 16 20:57:09.757513 master-0 kubenswrapper[7926]: I0216 20:57:09.757497 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-hostroot\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 20:57:09.757513 master-0 kubenswrapper[7926]: I0216 20:57:09.757434 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-var-lib-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:57:09.757513 master-0 kubenswrapper[7926]: I0216 20:57:09.757524 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 20:57:09.757987 master-0 kubenswrapper[7926]: I0216 20:57:09.757542 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 20:57:09.757987 master-0 kubenswrapper[7926]: I0216 20:57:09.757558
7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-cni-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:57:09.757987 master-0 kubenswrapper[7926]: I0216 20:57:09.757575 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-node-log\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.757987 master-0 kubenswrapper[7926]: I0216 20:57:09.757591 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn" Feb 16 20:57:09.757987 master-0 kubenswrapper[7926]: I0216 20:57:09.757614 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-cni-bin\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.757987 master-0 kubenswrapper[7926]: I0216 20:57:09.757629 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-cni-netd\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.757987 master-0 kubenswrapper[7926]: 
I0216 20:57:09.757653 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 20:57:09.757987 master-0 kubenswrapper[7926]: I0216 20:57:09.757691 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-cni-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:57:09.757987 master-0 kubenswrapper[7926]: I0216 20:57:09.757695 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1b61063e-775e-421d-bf73-a6ef134293a0-host-etc-kube\") pod \"network-operator-6fcf4c966-n4hfs\" (UID: \"1b61063e-775e-421d-bf73-a6ef134293a0\") " pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" Feb 16 20:57:09.757987 master-0 kubenswrapper[7926]: I0216 20:57:09.757720 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1b61063e-775e-421d-bf73-a6ef134293a0-host-etc-kube\") pod \"network-operator-6fcf4c966-n4hfs\" (UID: \"1b61063e-775e-421d-bf73-a6ef134293a0\") " pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" Feb 16 20:57:09.757987 master-0 kubenswrapper[7926]: I0216 20:57:09.757727 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 20:57:09.757987 master-0 kubenswrapper[7926]: I0216 20:57:09.757751 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-netns\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:57:09.757987 master-0 kubenswrapper[7926]: E0216 20:57:09.757772 7926 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 20:57:09.757987 master-0 kubenswrapper[7926]: I0216 20:57:09.757775 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls\") pod \"dns-operator-86b8869b79-cdltb\" (UID: \"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd\") " pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb" Feb 16 20:57:09.757987 master-0 kubenswrapper[7926]: I0216 20:57:09.757798 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d9d71a7a-a751-4de4-9c76-9bac85fe0177-host-slash\") pod \"iptables-alerter-b68cj\" (UID: \"d9d71a7a-a751-4de4-9c76-9bac85fe0177\") " pod="openshift-network-operator/iptables-alerter-b68cj" Feb 16 20:57:09.757987 master-0 kubenswrapper[7926]: I0216 20:57:09.757807 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-cni-bin\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.757987 master-0 kubenswrapper[7926]: I0216 20:57:09.757833 7926 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3a012b98-9341-41a3-9321-0a099f8bb9da-etc-cvo-updatepayloads\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:57:09.757987 master-0 kubenswrapper[7926]: I0216 20:57:09.757836 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 20:57:09.757987 master-0 kubenswrapper[7926]: I0216 20:57:09.757872 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-os-release\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:57:09.757987 master-0 kubenswrapper[7926]: I0216 20:57:09.757901 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-socket-dir-parent\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:57:09.757987 master-0 kubenswrapper[7926]: I0216 20:57:09.757907 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-netns\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:57:09.757987 master-0 kubenswrapper[7926]: I0216 20:57:09.757934 7926 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:57:09.757987 master-0 kubenswrapper[7926]: I0216 20:57:09.757954 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-socket-dir-parent\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:57:09.757987 master-0 kubenswrapper[7926]: E0216 20:57:09.758012 7926 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 16 20:57:09.758861 master-0 kubenswrapper[7926]: E0216 20:57:09.758041 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls podName:456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd nodeName:}" failed. No retries permitted until 2026-02-16 20:57:10.258030028 +0000 UTC m=+1.892930438 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls") pod "dns-operator-86b8869b79-cdltb" (UID: "456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd") : secret "metrics-tls" not found Feb 16 20:57:09.758861 master-0 kubenswrapper[7926]: I0216 20:57:09.758068 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d9d71a7a-a751-4de4-9c76-9bac85fe0177-host-slash\") pod \"iptables-alerter-b68cj\" (UID: \"d9d71a7a-a751-4de4-9c76-9bac85fe0177\") " pod="openshift-network-operator/iptables-alerter-b68cj" Feb 16 20:57:09.758861 master-0 kubenswrapper[7926]: E0216 20:57:09.758089 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls podName:ec7dd4ea-a139-45d4-96a4-506da1567292 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:10.258081349 +0000 UTC m=+1.892981649 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-w57zn" (UID: "ec7dd4ea-a139-45d4-96a4-506da1567292") : secret "cluster-monitoring-operator-tls" not found Feb 16 20:57:09.758861 master-0 kubenswrapper[7926]: I0216 20:57:09.758103 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-cni-netd\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.758861 master-0 kubenswrapper[7926]: I0216 20:57:09.758133 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3a012b98-9341-41a3-9321-0a099f8bb9da-etc-cvo-updatepayloads\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:57:09.758861 master-0 kubenswrapper[7926]: I0216 20:57:09.758311 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-os-release\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:57:09.758861 master-0 kubenswrapper[7926]: E0216 20:57:09.758357 7926 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 16 20:57:09.758861 master-0 kubenswrapper[7926]: E0216 20:57:09.758379 7926 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls podName:2506c282-0b37-4ece-8a0c-885d0b7f7901 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:10.258372667 +0000 UTC m=+1.893273097 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-kh4d4" (UID: "2506c282-0b37-4ece-8a0c-885d0b7f7901") : secret "node-tuning-operator-tls" not found Feb 16 20:57:09.758861 master-0 kubenswrapper[7926]: I0216 20:57:09.758403 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:57:09.758861 master-0 kubenswrapper[7926]: I0216 20:57:09.758430 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-node-log\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.758861 master-0 kubenswrapper[7926]: I0216 20:57:09.758561 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:57:09.759494 master-0 kubenswrapper[7926]: I0216 20:57:09.759461 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: 
\"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:57:09.759539 master-0 kubenswrapper[7926]: I0216 20:57:09.759501 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:57:09.759539 master-0 kubenswrapper[7926]: I0216 20:57:09.759519 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3a012b98-9341-41a3-9321-0a099f8bb9da-etc-ssl-certs\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:57:09.759616 master-0 kubenswrapper[7926]: I0216 20:57:09.759562 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-run-netns\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.759616 master-0 kubenswrapper[7926]: I0216 20:57:09.759596 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-os-release\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:57:09.759697 master-0 kubenswrapper[7926]: I0216 20:57:09.759625 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3a012b98-9341-41a3-9321-0a099f8bb9da-etc-ssl-certs\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:57:09.759697 master-0 kubenswrapper[7926]: I0216 20:57:09.759628 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-run-netns\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.759697 master-0 kubenswrapper[7926]: I0216 20:57:09.759648 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:57:09.759697 master-0 kubenswrapper[7926]: I0216 20:57:09.759683 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:57:09.759697 master-0 kubenswrapper[7926]: I0216 20:57:09.759694 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-cnibin\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:57:09.759864 master-0 kubenswrapper[7926]: I0216 20:57:09.759720 7926 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-systemd-units\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.759864 master-0 kubenswrapper[7926]: I0216 20:57:09.759734 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-os-release\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:57:09.759864 master-0 kubenswrapper[7926]: I0216 20:57:09.759740 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-multus-certs\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:57:09.759864 master-0 kubenswrapper[7926]: I0216 20:57:09.759764 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" Feb 16 20:57:09.759864 master-0 kubenswrapper[7926]: I0216 20:57:09.759765 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-systemd-units\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.759864 master-0 kubenswrapper[7926]: I0216 20:57:09.759574 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:57:09.760065 master-0 kubenswrapper[7926]: I0216 20:57:09.759963 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-cnibin\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:57:09.760065 master-0 kubenswrapper[7926]: I0216 20:57:09.760013 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 20:57:09.760142 master-0 kubenswrapper[7926]: I0216 20:57:09.760078 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-multus-certs\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:57:09.760142 master-0 kubenswrapper[7926]: I0216 20:57:09.760123 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-9m94g\" (UID: \"4b035e85-b2b0-4dee-bb86-3465fc4b98a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 20:57:09.760212 master-0 kubenswrapper[7926]: I0216 
20:57:09.760149 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 20:57:09.760212 master-0 kubenswrapper[7926]: I0216 20:57:09.760194 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcp5t\" (UniqueName: \"kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t\") pod \"network-check-target-68c25\" (UID: \"0d903d23-8e0b-424b-bcd0-e0a00f306e49\") " pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:57:09.760292 master-0 kubenswrapper[7926]: E0216 20:57:09.760213 7926 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 20:57:09.760292 master-0 kubenswrapper[7926]: E0216 20:57:09.760234 7926 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 20:57:09.760292 master-0 kubenswrapper[7926]: I0216 20:57:09.760257 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 20:57:09.760292 master-0 kubenswrapper[7926]: I0216 20:57:09.760218 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " 
pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:57:09.760744 master-0 kubenswrapper[7926]: E0216 20:57:09.760313 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics podName:b28234d1-1d9a-4d9f-9ad1-e3c682bed492 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:10.260276995 +0000 UTC m=+1.895177315 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-6rmhq" (UID: "b28234d1-1d9a-4d9f-9ad1-e3c682bed492") : secret "marketplace-operator-metrics" not found Feb 16 20:57:09.760744 master-0 kubenswrapper[7926]: I0216 20:57:09.760329 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:57:09.760744 master-0 kubenswrapper[7926]: E0216 20:57:09.760355 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert podName:4b035e85-b2b0-4dee-bb86-3465fc4b98a8 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:10.260330827 +0000 UTC m=+1.895231147 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-9m94g" (UID: "4b035e85-b2b0-4dee-bb86-3465fc4b98a8") : secret "package-server-manager-serving-cert" not found Feb 16 20:57:09.760744 master-0 kubenswrapper[7926]: I0216 20:57:09.760390 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-ovn\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.760744 master-0 kubenswrapper[7926]: I0216 20:57:09.760418 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.760744 master-0 kubenswrapper[7926]: I0216 20:57:09.760457 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 20:57:09.760744 master-0 kubenswrapper[7926]: I0216 20:57:09.760478 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-kubelet\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:57:09.760744 
master-0 kubenswrapper[7926]: I0216 20:57:09.760517 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-ovn\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.760744 master-0 kubenswrapper[7926]: E0216 20:57:09.760542 7926 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 16 20:57:09.760744 master-0 kubenswrapper[7926]: E0216 20:57:09.760573 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls podName:cef33294-81fb-41a2-811d-2565f94514d1 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:10.260563633 +0000 UTC m=+1.895464063 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls") pod "ingress-operator-c588d8cb4-6ps2d" (UID: "cef33294-81fb-41a2-811d-2565f94514d1") : secret "metrics-tls" not found Feb 16 20:57:09.760744 master-0 kubenswrapper[7926]: I0216 20:57:09.760585 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-kubelet\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:57:09.760744 master-0 kubenswrapper[7926]: I0216 20:57:09.760595 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:57:09.760744 
master-0 kubenswrapper[7926]: I0216 20:57:09.760616 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.760744 master-0 kubenswrapper[7926]: I0216 20:57:09.760618 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-kubelet\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.760744 master-0 kubenswrapper[7926]: I0216 20:57:09.760639 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-kubelet\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.760744 master-0 kubenswrapper[7926]: I0216 20:57:09.760668 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-cnibin\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:57:09.760744 master-0 kubenswrapper[7926]: I0216 20:57:09.760688 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 20:57:09.760744 master-0 kubenswrapper[7926]: I0216 20:57:09.760699 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:57:09.760744 master-0 kubenswrapper[7926]: I0216 20:57:09.760706 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-etc-kubernetes\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:57:09.760744 master-0 kubenswrapper[7926]: I0216 20:57:09.760733 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-cnibin\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:57:09.760744 master-0 kubenswrapper[7926]: I0216 20:57:09.760736 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:57:09.760744 master-0 kubenswrapper[7926]: I0216 20:57:09.760766 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 20:57:09.760744 master-0 kubenswrapper[7926]: I0216 20:57:09.760812 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: I0216 20:57:09.760866 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: E0216 20:57:09.760874 7926 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: I0216 20:57:09.760903 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: I0216 20:57:09.760866 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: I0216 
20:57:09.760884 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-etc-kubernetes\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: E0216 20:57:09.760933 7926 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: E0216 20:57:09.760942 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert podName:2e618c5c-52be-4b52-b426-b92555dee9de nodeName:}" failed. No retries permitted until 2026-02-16 20:57:10.260919442 +0000 UTC m=+1.895819752 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert") pod "catalog-operator-588944557d-h7xl6" (UID: "2e618c5c-52be-4b52-b426-b92555dee9de") : secret "catalog-operator-serving-cert" not found Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: I0216 20:57:09.761013 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: I0216 20:57:09.761036 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-k8s-cni-cncf-io\") pod \"multus-65zz6\" (UID: 
\"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: I0216 20:57:09.761057 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-cni-multus\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: E0216 20:57:09.761097 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert podName:3a012b98-9341-41a3-9321-0a099f8bb9da nodeName:}" failed. No retries permitted until 2026-02-16 20:57:10.261079196 +0000 UTC m=+1.895979496 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert") pod "cluster-version-operator-76959b6567-7jlsw" (UID: "3a012b98-9341-41a3-9321-0a099f8bb9da") : secret "cluster-version-operator-serving-cert" not found Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: I0216 20:57:09.761095 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-cni-multus\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: I0216 20:57:09.761098 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-k8s-cni-cncf-io\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: E0216 
20:57:09.761141 7926 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: E0216 20:57:09.761168 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert podName:2506c282-0b37-4ece-8a0c-885d0b7f7901 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:10.261159608 +0000 UTC m=+1.896059908 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-kh4d4" (UID: "2506c282-0b37-4ece-8a0c-885d0b7f7901") : secret "performance-addon-operator-webhook-cert" not found Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: I0216 20:57:09.761165 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: I0216 20:57:09.761196 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: I0216 20:57:09.761194 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: I0216 20:57:09.761218 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-system-cni-dir\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: I0216 20:57:09.761265 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: I0216 20:57:09.761299 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: I0216 20:57:09.761326 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-system-cni-dir\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: I0216 20:57:09.761329 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs\") pod \"network-metrics-daemon-42bw7\" (UID: 
\"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: E0216 20:57:09.761397 7926 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: I0216 20:57:09.761402 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-slash\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: I0216 20:57:09.761431 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-log-socket\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: I0216 20:57:09.761464 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-log-socket\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: I0216 20:57:09.761489 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-slash\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: E0216 20:57:09.761520 7926 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs podName:1d453639-52ed-4a14-a2ee-02cf9acc2f7c nodeName:}" failed. No retries permitted until 2026-02-16 20:57:10.261510167 +0000 UTC m=+1.896410697 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs") pod "network-metrics-daemon-42bw7" (UID: "1d453639-52ed-4a14-a2ee-02cf9acc2f7c") : secret "metrics-daemon-secret" not found Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: E0216 20:57:09.761529 7926 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 16 20:57:09.761650 master-0 kubenswrapper[7926]: E0216 20:57:09.761562 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert podName:a4c9b781-14c0-469c-bb9e-0c3982a04520 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:10.261549578 +0000 UTC m=+1.896450128 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert") pod "olm-operator-6b56bd877c-vlhvq" (UID: "a4c9b781-14c0-469c-bb9e-0c3982a04520") : secret "olm-operator-serving-cert" not found Feb 16 20:57:09.765595 master-0 kubenswrapper[7926]: I0216 20:57:09.765564 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 20:57:09.774210 master-0 kubenswrapper[7926]: I0216 20:57:09.774169 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/69785167-b4ae-415b-bdcb-029f62effe78-ovn-node-metrics-cert\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.784791 master-0 kubenswrapper[7926]: I0216 20:57:09.784725 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 20:57:09.790078 master-0 kubenswrapper[7926]: I0216 20:57:09.790036 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-ovnkube-script-lib\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.853296 master-0 kubenswrapper[7926]: W0216 20:57:09.853122 7926 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), 
restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Feb 16 20:57:09.854075 master-0 kubenswrapper[7926]: I0216 20:57:09.854040 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjsnz\" (UniqueName: \"kubernetes.io/projected/27c20f63-9bfb-4703-94d5-0c65475e08d1-kube-api-access-hjsnz\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 20:57:09.854219 master-0 kubenswrapper[7926]: E0216 20:57:09.854191 7926 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 20:57:09.856233 master-0 kubenswrapper[7926]: I0216 20:57:09.856201 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqm46\" (UniqueName: \"kubernetes.io/projected/69785167-b4ae-415b-bdcb-029f62effe78-kube-api-access-dqm46\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:09.858905 master-0 kubenswrapper[7926]: E0216 20:57:09.858851 7926 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 20:57:09.858905 master-0 kubenswrapper[7926]: E0216 20:57:09.858856 7926 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:57:09.858986 master-0 kubenswrapper[7926]: E0216 
20:57:09.858944 7926 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:57:09.859227 master-0 kubenswrapper[7926]: E0216 20:57:09.859195 7926 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 20:57:09.873017 master-0 kubenswrapper[7926]: I0216 20:57:09.872915 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jt7h\" (UniqueName: \"kubernetes.io/projected/ec7dd4ea-a139-45d4-96a4-506da1567292-kube-api-access-9jt7h\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn" Feb 16 20:57:09.882234 master-0 kubenswrapper[7926]: I0216 20:57:09.882184 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vklwz\" (UniqueName: \"kubernetes.io/projected/59237aa6-6250-4619-8ee5-abae59f04b57-kube-api-access-vklwz\") pod \"openshift-config-operator-7c6bdb986f-xbd96\" (UID: \"59237aa6-6250-4619-8ee5-abae59f04b57\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 20:57:09.900236 master-0 kubenswrapper[7926]: I0216 20:57:09.900188 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7nmb\" (UniqueName: \"kubernetes.io/projected/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-kube-api-access-g7nmb\") pod \"package-server-manager-5c696dbdcd-9m94g\" (UID: \"4b035e85-b2b0-4dee-bb86-3465fc4b98a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 20:57:09.918815 master-0 kubenswrapper[7926]: I0216 20:57:09.917879 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/3a012b98-9341-41a3-9321-0a099f8bb9da-kube-api-access\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:57:09.936067 master-0 kubenswrapper[7926]: I0216 20:57:09.936010 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkdzb\" (UniqueName: \"kubernetes.io/projected/d9d71a7a-a751-4de4-9c76-9bac85fe0177-kube-api-access-jkdzb\") pod \"iptables-alerter-b68cj\" (UID: \"d9d71a7a-a751-4de4-9c76-9bac85fe0177\") " pod="openshift-network-operator/iptables-alerter-b68cj" Feb 16 20:57:09.956412 master-0 kubenswrapper[7926]: I0216 20:57:09.956375 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx2kd\" (UniqueName: \"kubernetes.io/projected/c7333319-3fe6-4b3f-b600-6b6df49fcaff-kube-api-access-qx2kd\") pod \"kube-storage-version-migrator-operator-cd5474998-56v4p\" (UID: \"c7333319-3fe6-4b3f-b600-6b6df49fcaff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" Feb 16 20:57:09.981676 master-0 kubenswrapper[7926]: I0216 20:57:09.981596 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7adbe32-b8b9-438e-a2e3-f93146a97424-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-xzww8\" (UID: \"e7adbe32-b8b9-438e-a2e3-f93146a97424\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" Feb 16 20:57:09.995419 master-0 kubenswrapper[7926]: I0216 20:57:09.995377 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cef33294-81fb-41a2-811d-2565f94514d1-bound-sa-token\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " 
pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 20:57:10.015732 master-0 kubenswrapper[7926]: I0216 20:57:10.015683 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bcmr\" (UniqueName: \"kubernetes.io/projected/695549c8-d1fc-429d-9c9f-0a5915dc6074-kube-api-access-7bcmr\") pod \"openshift-controller-manager-operator-5f5f84757d-k42w9\" (UID: \"695549c8-d1fc-429d-9c9f-0a5915dc6074\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" Feb 16 20:57:10.043548 master-0 kubenswrapper[7926]: I0216 20:57:10.043480 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tklr\" (UniqueName: \"kubernetes.io/projected/cef33294-81fb-41a2-811d-2565f94514d1-kube-api-access-5tklr\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 20:57:10.064524 master-0 kubenswrapper[7926]: I0216 20:57:10.064457 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pw88\" (UniqueName: \"kubernetes.io/projected/2ab0a907-7abe-4808-ba21-bdda1506eae2-kube-api-access-9pw88\") pod \"service-ca-operator-5dc4688546-q5vjl\" (UID: \"2ab0a907-7abe-4808-ba21-bdda1506eae2\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" Feb 16 20:57:10.089949 master-0 kubenswrapper[7926]: I0216 20:57:10.089904 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b02b740-5698-4e9a-90fe-2873bd0b0958-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-cl5ld\" (UID: \"0b02b740-5698-4e9a-90fe-2873bd0b0958\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" Feb 16 20:57:10.094791 master-0 kubenswrapper[7926]: I0216 20:57:10.094762 7926 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-7p9ft\" (UID: \"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" Feb 16 20:57:10.115456 master-0 kubenswrapper[7926]: I0216 20:57:10.115359 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6wng\" (UniqueName: \"kubernetes.io/projected/484154d0-66c8-4d0e-bf1b-f48d0abfe628-kube-api-access-b6wng\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd" Feb 16 20:57:10.139549 master-0 kubenswrapper[7926]: I0216 20:57:10.139505 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sd27\" (UniqueName: \"kubernetes.io/projected/a4c9b781-14c0-469c-bb9e-0c3982a04520-kube-api-access-8sd27\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" Feb 16 20:57:10.268207 master-0 kubenswrapper[7926]: I0216 20:57:10.267363 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" Feb 16 20:57:10.268207 master-0 kubenswrapper[7926]: I0216 20:57:10.267804 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: 
\"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 20:57:10.268207 master-0 kubenswrapper[7926]: I0216 20:57:10.267860 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-9m94g\" (UID: \"4b035e85-b2b0-4dee-bb86-3465fc4b98a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 20:57:10.268207 master-0 kubenswrapper[7926]: E0216 20:57:10.267597 7926 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 16 20:57:10.268207 master-0 kubenswrapper[7926]: I0216 20:57:10.267967 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 20:57:10.268207 master-0 kubenswrapper[7926]: E0216 20:57:10.267972 7926 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 20:57:10.268207 master-0 kubenswrapper[7926]: I0216 20:57:10.268016 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 20:57:10.268207 master-0 kubenswrapper[7926]: E0216 20:57:10.268053 
7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics podName:b28234d1-1d9a-4d9f-9ad1-e3c682bed492 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:11.268029474 +0000 UTC m=+2.902929774 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-6rmhq" (UID: "b28234d1-1d9a-4d9f-9ad1-e3c682bed492") : secret "marketplace-operator-metrics" not found
Feb 16 20:57:10.268207 master-0 kubenswrapper[7926]: E0216 20:57:10.268085 7926 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 16 20:57:10.268207 master-0 kubenswrapper[7926]: I0216 20:57:10.268090 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw"
Feb 16 20:57:10.268207 master-0 kubenswrapper[7926]: E0216 20:57:10.268108 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert podName:2e618c5c-52be-4b52-b426-b92555dee9de nodeName:}" failed. No retries permitted until 2026-02-16 20:57:11.268083295 +0000 UTC m=+2.902983635 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert") pod "catalog-operator-588944557d-h7xl6" (UID: "2e618c5c-52be-4b52-b426-b92555dee9de") : secret "catalog-operator-serving-cert" not found
Feb 16 20:57:10.268207 master-0 kubenswrapper[7926]: E0216 20:57:10.268153 7926 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 16 20:57:10.268207 master-0 kubenswrapper[7926]: E0216 20:57:10.268163 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert podName:2506c282-0b37-4ece-8a0c-885d0b7f7901 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:11.268150197 +0000 UTC m=+2.903050547 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-kh4d4" (UID: "2506c282-0b37-4ece-8a0c-885d0b7f7901") : secret "performance-addon-operator-webhook-cert" not found
Feb 16 20:57:10.268207 master-0 kubenswrapper[7926]: E0216 20:57:10.268180 7926 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 16 20:57:10.268207 master-0 kubenswrapper[7926]: E0216 20:57:10.268185 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert podName:4b035e85-b2b0-4dee-bb86-3465fc4b98a8 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:11.268175087 +0000 UTC m=+2.903075457 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-9m94g" (UID: "4b035e85-b2b0-4dee-bb86-3465fc4b98a8") : secret "package-server-manager-serving-cert" not found
Feb 16 20:57:10.268207 master-0 kubenswrapper[7926]: I0216 20:57:10.268224 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq"
Feb 16 20:57:10.268860 master-0 kubenswrapper[7926]: E0216 20:57:10.268237 7926 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 16 20:57:10.268860 master-0 kubenswrapper[7926]: I0216 20:57:10.268264 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs\") pod \"network-metrics-daemon-42bw7\" (UID: \"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7"
Feb 16 20:57:10.268860 master-0 kubenswrapper[7926]: E0216 20:57:10.268273 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert podName:3a012b98-9341-41a3-9321-0a099f8bb9da nodeName:}" failed. No retries permitted until 2026-02-16 20:57:11.268263319 +0000 UTC m=+2.903163679 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert") pod "cluster-version-operator-76959b6567-7jlsw" (UID: "3a012b98-9341-41a3-9321-0a099f8bb9da") : secret "cluster-version-operator-serving-cert" not found
Feb 16 20:57:10.268860 master-0 kubenswrapper[7926]: E0216 20:57:10.268304 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls podName:cef33294-81fb-41a2-811d-2565f94514d1 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:11.2682936 +0000 UTC m=+2.903193990 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls") pod "ingress-operator-c588d8cb4-6ps2d" (UID: "cef33294-81fb-41a2-811d-2565f94514d1") : secret "metrics-tls" not found
Feb 16 20:57:10.268860 master-0 kubenswrapper[7926]: I0216 20:57:10.268420 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-z46jt\" (UID: \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt"
Feb 16 20:57:10.268860 master-0 kubenswrapper[7926]: E0216 20:57:10.268428 7926 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Feb 16 20:57:10.268860 master-0 kubenswrapper[7926]: E0216 20:57:10.268518 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert podName:a4c9b781-14c0-469c-bb9e-0c3982a04520 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:11.268497795 +0000 UTC m=+2.903398095 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert") pod "olm-operator-6b56bd877c-vlhvq" (UID: "a4c9b781-14c0-469c-bb9e-0c3982a04520") : secret "olm-operator-serving-cert" not found
Feb 16 20:57:10.268860 master-0 kubenswrapper[7926]: E0216 20:57:10.268592 7926 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Feb 16 20:57:10.268860 master-0 kubenswrapper[7926]: E0216 20:57:10.268657 7926 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 16 20:57:10.268860 master-0 kubenswrapper[7926]: I0216 20:57:10.268675 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb"
Feb 16 20:57:10.268860 master-0 kubenswrapper[7926]: E0216 20:57:10.268687 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs podName:b27de289-c0f9-47ff-aac6-15b7bc1b178a nodeName:}" failed. No retries permitted until 2026-02-16 20:57:11.268679711 +0000 UTC m=+2.903580011 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs") pod "multus-admission-controller-7c64d55f8-z46jt" (UID: "b27de289-c0f9-47ff-aac6-15b7bc1b178a") : secret "multus-admission-controller-secret" not found
Feb 16 20:57:10.268860 master-0 kubenswrapper[7926]: E0216 20:57:10.268746 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs podName:1d453639-52ed-4a14-a2ee-02cf9acc2f7c nodeName:}" failed. No retries permitted until 2026-02-16 20:57:11.268735212 +0000 UTC m=+2.903635592 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs") pod "network-metrics-daemon-42bw7" (UID: "1d453639-52ed-4a14-a2ee-02cf9acc2f7c") : secret "metrics-daemon-secret" not found
Feb 16 20:57:10.268860 master-0 kubenswrapper[7926]: I0216 20:57:10.268765 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn"
Feb 16 20:57:10.268860 master-0 kubenswrapper[7926]: E0216 20:57:10.268793 7926 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 16 20:57:10.268860 master-0 kubenswrapper[7926]: I0216 20:57:10.268802 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4"
Feb 16 20:57:10.269339 master-0 kubenswrapper[7926]: E0216 20:57:10.268878 7926 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 16 20:57:10.269339 master-0 kubenswrapper[7926]: E0216 20:57:10.268937 7926 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 16 20:57:10.269339 master-0 kubenswrapper[7926]: E0216 20:57:10.268990 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls podName:9e0227bc-63f5-48be-95dc-1323a2b2e327 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:11.268826674 +0000 UTC m=+2.903727004 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-4gczb" (UID: "9e0227bc-63f5-48be-95dc-1323a2b2e327") : secret "image-registry-operator-tls" not found
Feb 16 20:57:10.269339 master-0 kubenswrapper[7926]: I0216 20:57:10.269040 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls\") pod \"dns-operator-86b8869b79-cdltb\" (UID: \"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd\") " pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb"
Feb 16 20:57:10.269339 master-0 kubenswrapper[7926]: E0216 20:57:10.269146 7926 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 16 20:57:10.269339 master-0 kubenswrapper[7926]: E0216 20:57:10.269182 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls podName:456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd nodeName:}" failed. No retries permitted until 2026-02-16 20:57:11.269171803 +0000 UTC m=+2.904072103 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls") pod "dns-operator-86b8869b79-cdltb" (UID: "456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd") : secret "metrics-tls" not found
Feb 16 20:57:10.269339 master-0 kubenswrapper[7926]: E0216 20:57:10.269255 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls podName:ec7dd4ea-a139-45d4-96a4-506da1567292 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:11.269245775 +0000 UTC m=+2.904146205 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-w57zn" (UID: "ec7dd4ea-a139-45d4-96a4-506da1567292") : secret "cluster-monitoring-operator-tls" not found
Feb 16 20:57:10.269339 master-0 kubenswrapper[7926]: E0216 20:57:10.269273 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls podName:2506c282-0b37-4ece-8a0c-885d0b7f7901 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:11.269266275 +0000 UTC m=+2.904166675 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-kh4d4" (UID: "2506c282-0b37-4ece-8a0c-885d0b7f7901") : secret "node-tuning-operator-tls" not found
Feb 16 20:57:10.370430 master-0 kubenswrapper[7926]: I0216 20:57:10.370322 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:57:10.451052 master-0 kubenswrapper[7926]: I0216 20:57:10.451008 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:57:10.508322 master-0 kubenswrapper[7926]: I0216 20:57:10.508269 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:57:10.530860 master-0 kubenswrapper[7926]: I0216 20:57:10.530815 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 20:57:10.667002 master-0 kubenswrapper[7926]: I0216 20:57:10.666893 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx4tz\" (UniqueName: \"kubernetes.io/projected/b27de289-c0f9-47ff-aac6-15b7bc1b178a-kube-api-access-fx4tz\") pod \"multus-admission-controller-7c64d55f8-z46jt\" (UID: \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt"
Feb 16 20:57:10.672781 master-0 kubenswrapper[7926]: I0216 20:57:10.672753 7926 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 16 20:57:10.674305 master-0 kubenswrapper[7926]: I0216 20:57:10.674272 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9vmp\" (UniqueName: \"kubernetes.io/projected/9e0227bc-63f5-48be-95dc-1323a2b2e327-kube-api-access-z9vmp\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb"
Feb 16 20:57:10.675894 master-0 kubenswrapper[7926]: I0216 20:57:10.675791 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmvtk\" (UniqueName: \"kubernetes.io/projected/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-kube-api-access-zmvtk\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 20:57:10.680815 master-0 kubenswrapper[7926]: I0216 20:57:10.680367 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcp5t\" (UniqueName: \"kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t\") pod \"network-check-target-68c25\" (UID: \"0d903d23-8e0b-424b-bcd0-e0a00f306e49\") " pod="openshift-network-diagnostics/network-check-target-68c25"
Feb 16 20:57:10.680815 master-0 kubenswrapper[7926]: I0216 20:57:10.680463 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59kpw\" (UniqueName: \"kubernetes.io/projected/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-kube-api-access-59kpw\") pod \"network-metrics-daemon-42bw7\" (UID: \"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7"
Feb 16 20:57:10.680815 master-0 kubenswrapper[7926]: I0216 20:57:10.680534 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sq4t\" (UniqueName: \"kubernetes.io/projected/62935559-041f-4694-9d36-adc809d079b4-kube-api-access-6sq4t\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4"
Feb 16 20:57:10.680815 master-0 kubenswrapper[7926]: I0216 20:57:10.680785 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e0227bc-63f5-48be-95dc-1323a2b2e327-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb"
Feb 16 20:57:10.681584 master-0 kubenswrapper[7926]: I0216 20:57:10.681553 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw9lp\" (UniqueName: \"kubernetes.io/projected/4085413c-9af1-4d2a-ba0f-33b42025cb7f-kube-api-access-dw9lp\") pod \"csi-snapshot-controller-operator-7b87b97578-v7xdv\" (UID: \"4085413c-9af1-4d2a-ba0f-33b42025cb7f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv"
Feb 16 20:57:10.682024 master-0 kubenswrapper[7926]: I0216 20:57:10.681992 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkz65\" (UniqueName: \"kubernetes.io/projected/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-kube-api-access-mkz65\") pod \"openshift-apiserver-operator-6d4655d9cf-tvzdw\" (UID: \"6b6be6de-6fcc-4f57-b163-fe8f970a01a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw"
Feb 16 20:57:10.682504 master-0 kubenswrapper[7926]: I0216 20:57:10.682480 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7wrr\" (UniqueName: \"kubernetes.io/projected/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-kube-api-access-p7wrr\") pod \"dns-operator-86b8869b79-cdltb\" (UID: \"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd\") " pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb"
Feb 16 20:57:10.683297 master-0 kubenswrapper[7926]: I0216 20:57:10.683270 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrc7l\" (UniqueName: \"kubernetes.io/projected/2e618c5c-52be-4b52-b426-b92555dee9de-kube-api-access-nrc7l\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6"
Feb 16 20:57:10.772715 master-0 kubenswrapper[7926]: I0216 20:57:10.767724 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-68c25"
Feb 16 20:57:10.797705 master-0 kubenswrapper[7926]: I0216 20:57:10.792341 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7pk6\" (UniqueName: \"kubernetes.io/projected/1b61063e-775e-421d-bf73-a6ef134293a0-kube-api-access-x7pk6\") pod \"network-operator-6fcf4c966-n4hfs\" (UID: \"1b61063e-775e-421d-bf73-a6ef134293a0\") " pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs"
Feb 16 20:57:10.804350 master-0 kubenswrapper[7926]: I0216 20:57:10.804299 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" event={"ID":"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e","Type":"ContainerStarted","Data":"8f381e0ba80bb61f122cb6f8dc6fbf0f4de7cc56a19bdf606299e77668a6c669"}
Feb 16 20:57:10.805932 master-0 kubenswrapper[7926]: I0216 20:57:10.805901 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" event={"ID":"695549c8-d1fc-429d-9c9f-0a5915dc6074","Type":"ContainerStarted","Data":"df4705117bc30301536972bb1ddb323a9cf1860379e92028207e9c158a991276"}
Feb 16 20:57:10.814027 master-0 kubenswrapper[7926]: I0216 20:57:10.813949 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" event={"ID":"e7adbe32-b8b9-438e-a2e3-f93146a97424","Type":"ContainerStarted","Data":"34f0b2189e90cc7801c4026c4ab900cc1fc9f5ac2f006e83f5fec81671df191f"}
Feb 16 20:57:10.818882 master-0 kubenswrapper[7926]: I0216 20:57:10.818822 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" event={"ID":"59237aa6-6250-4619-8ee5-abae59f04b57","Type":"ContainerStarted","Data":"f3d4628d5b5ba7e58abaf9e10ff02fc0ec3dcdc6373a3be533d5aa05366f0112"}
Feb 16 20:57:10.827984 master-0 kubenswrapper[7926]: I0216 20:57:10.826144 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" event={"ID":"27c20f63-9bfb-4703-94d5-0c65475e08d1","Type":"ContainerStarted","Data":"58d545a4271a615d484834ce5f2e4aae18f89163dd820abd13282ebc492d6372"}
Feb 16 20:57:10.829335 master-0 kubenswrapper[7926]: I0216 20:57:10.829174 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" event={"ID":"c7333319-3fe6-4b3f-b600-6b6df49fcaff","Type":"ContainerStarted","Data":"a773bd017f0bba4a3a74bfe52982d094692dcc11d0231ea1c51b561373a69c1c"}
Feb 16 20:57:10.832088 master-0 kubenswrapper[7926]: I0216 20:57:10.832042 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" event={"ID":"2ab0a907-7abe-4808-ba21-bdda1506eae2","Type":"ContainerStarted","Data":"0e76905998b63e1ca06bb636f257a337f36ba01b7d03a406ab7d6fa3bdb3b545"}
Feb 16 20:57:11.095378 master-0 kubenswrapper[7926]: I0216 20:57:11.095065 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll4rg\" (UniqueName: \"kubernetes.io/projected/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-kube-api-access-ll4rg\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz"
Feb 16 20:57:11.108503 master-0 kubenswrapper[7926]: I0216 20:57:11.101695 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67qzh\" (UniqueName: \"kubernetes.io/projected/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-kube-api-access-67qzh\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq"
Feb 16 20:57:11.109272 master-0 kubenswrapper[7926]: I0216 20:57:11.109199 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x85fb\" (UniqueName: \"kubernetes.io/projected/88f19cea-60ed-4977-a906-75deec51fc3d-kube-api-access-x85fb\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f"
Feb 16 20:57:11.111243 master-0 kubenswrapper[7926]: I0216 20:57:11.111208 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qd6r\" (UniqueName: \"kubernetes.io/projected/2506c282-0b37-4ece-8a0c-885d0b7f7901-kube-api-access-6qd6r\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4"
Feb 16 20:57:11.116900 master-0 kubenswrapper[7926]: I0216 20:57:11.112610 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfmv6\" (UniqueName: \"kubernetes.io/projected/5e062e07-8076-444c-b476-4eb2848e9613-kube-api-access-dfmv6\") pod \"cluster-olm-operator-55b69c6c48-pdjn4\" (UID: \"5e062e07-8076-444c-b476-4eb2848e9613\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4"
Feb 16 20:57:11.116900 master-0 kubenswrapper[7926]: I0216 20:57:11.116271 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-68c25"]
Feb 16 20:57:11.285109 master-0 kubenswrapper[7926]: I0216 20:57:11.285039 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq"
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: I0216 20:57:11.285128 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs\") pod \"network-metrics-daemon-42bw7\" (UID: \"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7"
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.285238 7926 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: I0216 20:57:11.285277 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-z46jt\" (UID: \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt"
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: I0216 20:57:11.285310 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb"
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.285347 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert podName:a4c9b781-14c0-469c-bb9e-0c3982a04520 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.285322056 +0000 UTC m=+4.920222446 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert") pod "olm-operator-6b56bd877c-vlhvq" (UID: "a4c9b781-14c0-469c-bb9e-0c3982a04520") : secret "olm-operator-serving-cert" not found
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.285374 7926 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: I0216 20:57:11.285409 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn"
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.285433 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls podName:9e0227bc-63f5-48be-95dc-1323a2b2e327 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.285415188 +0000 UTC m=+4.920315578 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-4gczb" (UID: "9e0227bc-63f5-48be-95dc-1323a2b2e327") : secret "image-registry-operator-tls" not found
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.285482 7926 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: I0216 20:57:11.285488 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4"
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.285504 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs podName:b27de289-c0f9-47ff-aac6-15b7bc1b178a nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.28549634 +0000 UTC m=+4.920396770 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs") pod "multus-admission-controller-7c64d55f8-z46jt" (UID: "b27de289-c0f9-47ff-aac6-15b7bc1b178a") : secret "multus-admission-controller-secret" not found
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.285246 7926 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.285543 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs podName:1d453639-52ed-4a14-a2ee-02cf9acc2f7c nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.285536201 +0000 UTC m=+4.920436651 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs") pod "network-metrics-daemon-42bw7" (UID: "1d453639-52ed-4a14-a2ee-02cf9acc2f7c") : secret "metrics-daemon-secret" not found
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: I0216 20:57:11.285565 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls\") pod \"dns-operator-86b8869b79-cdltb\" (UID: \"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd\") " pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb"
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.285590 7926 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.285665 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls podName:2506c282-0b37-4ece-8a0c-885d0b7f7901 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.285633223 +0000 UTC m=+4.920533613 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-kh4d4" (UID: "2506c282-0b37-4ece-8a0c-885d0b7f7901") : secret "node-tuning-operator-tls" not found
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: I0216 20:57:11.285596 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6"
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.285676 7926 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: I0216 20:57:11.285710 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq"
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.285736 7926 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.285765 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls podName:456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.285756137 +0000 UTC m=+4.920656527 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls") pod "dns-operator-86b8869b79-cdltb" (UID: "456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd") : secret "metrics-tls" not found
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.285776 7926 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.285804 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert podName:2e618c5c-52be-4b52-b426-b92555dee9de nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.285793927 +0000 UTC m=+4.920694327 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert") pod "catalog-operator-588944557d-h7xl6" (UID: "2e618c5c-52be-4b52-b426-b92555dee9de") : secret "catalog-operator-serving-cert" not found
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.285823 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls podName:ec7dd4ea-a139-45d4-96a4-506da1567292 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.285814128 +0000 UTC m=+4.920714568 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-w57zn" (UID: "ec7dd4ea-a139-45d4-96a4-506da1567292") : secret "cluster-monitoring-operator-tls" not found
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: I0216 20:57:11.285785 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-9m94g\" (UID: \"4b035e85-b2b0-4dee-bb86-3465fc4b98a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g"
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: I0216 20:57:11.285857 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d"
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.285826 7926 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.285890 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics podName:b28234d1-1d9a-4d9f-9ad1-e3c682bed492 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.28588089 +0000 UTC m=+4.920781190 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-6rmhq" (UID: "b28234d1-1d9a-4d9f-9ad1-e3c682bed492") : secret "marketplace-operator-metrics" not found
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: I0216 20:57:11.285885 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw"
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.285857 7926 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.285924 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert podName:4b035e85-b2b0-4dee-bb86-3465fc4b98a8 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.285918771 +0000 UTC m=+4.920819071 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-9m94g" (UID: "4b035e85-b2b0-4dee-bb86-3465fc4b98a8") : secret "package-server-manager-serving-cert" not found Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.285932 7926 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.285961 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert podName:3a012b98-9341-41a3-9321-0a099f8bb9da nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.285952721 +0000 UTC m=+4.920853131 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert") pod "cluster-version-operator-76959b6567-7jlsw" (UID: "3a012b98-9341-41a3-9321-0a099f8bb9da") : secret "cluster-version-operator-serving-cert" not found Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.285965 7926 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.285985 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls podName:cef33294-81fb-41a2-811d-2565f94514d1 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.285979942 +0000 UTC m=+4.920880242 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls") pod "ingress-operator-c588d8cb4-6ps2d" (UID: "cef33294-81fb-41a2-811d-2565f94514d1") : secret "metrics-tls" not found Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: I0216 20:57:11.285984 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.286018 7926 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 16 20:57:11.286222 master-0 kubenswrapper[7926]: E0216 20:57:11.286039 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert podName:2506c282-0b37-4ece-8a0c-885d0b7f7901 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:13.286033944 +0000 UTC m=+4.920934244 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-kh4d4" (UID: "2506c282-0b37-4ece-8a0c-885d0b7f7901") : secret "performance-addon-operator-webhook-cert" not found Feb 16 20:57:11.297221 master-0 kubenswrapper[7926]: I0216 20:57:11.297112 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:57:11.819293 master-0 kubenswrapper[7926]: I0216 20:57:11.818880 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5bd989df77-kdb9d"] Feb 16 20:57:11.820600 master-0 kubenswrapper[7926]: E0216 20:57:11.819421 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc1d7efb-93cd-4f49-ace0-2144532cae9e" containerName="prober" Feb 16 20:57:11.820600 master-0 kubenswrapper[7926]: I0216 20:57:11.819435 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc1d7efb-93cd-4f49-ace0-2144532cae9e" containerName="prober" Feb 16 20:57:11.820600 master-0 kubenswrapper[7926]: E0216 20:57:11.819447 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="700bc24c-4b00-44f0-90b0-aa555fe5c7a8" containerName="assisted-installer-controller" Feb 16 20:57:11.820600 master-0 kubenswrapper[7926]: I0216 20:57:11.819455 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="700bc24c-4b00-44f0-90b0-aa555fe5c7a8" containerName="assisted-installer-controller" Feb 16 20:57:11.820600 master-0 kubenswrapper[7926]: I0216 20:57:11.819530 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="700bc24c-4b00-44f0-90b0-aa555fe5c7a8" containerName="assisted-installer-controller" Feb 16 20:57:11.820600 master-0 kubenswrapper[7926]: I0216 20:57:11.819541 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc1d7efb-93cd-4f49-ace0-2144532cae9e" 
containerName="prober" Feb 16 20:57:11.820600 master-0 kubenswrapper[7926]: I0216 20:57:11.819998 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-kdb9d" Feb 16 20:57:11.823559 master-0 kubenswrapper[7926]: I0216 20:57:11.823526 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 20:57:11.825956 master-0 kubenswrapper[7926]: I0216 20:57:11.825919 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 20:57:11.832233 master-0 kubenswrapper[7926]: I0216 20:57:11.832193 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5bd989df77-kdb9d"] Feb 16 20:57:11.837435 master-0 kubenswrapper[7926]: I0216 20:57:11.837377 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" event={"ID":"70d217a9-86b7-47b9-a7da-9ac920b9c7c2","Type":"ContainerStarted","Data":"e960726eec7f4c030bcd77b5c00f9a27240da71756776e4b20d66b6c394494f7"} Feb 16 20:57:11.839572 master-0 kubenswrapper[7926]: I0216 20:57:11.839127 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-68c25" event={"ID":"0d903d23-8e0b-424b-bcd0-e0a00f306e49","Type":"ContainerStarted","Data":"63057ac92dec2fa9c7d10c67e0bccd3d3eb946a1626faaf9fb6f3de715241845"} Feb 16 20:57:11.839572 master-0 kubenswrapper[7926]: I0216 20:57:11.839157 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-68c25" event={"ID":"0d903d23-8e0b-424b-bcd0-e0a00f306e49","Type":"ContainerStarted","Data":"dbf32b84ea4131f980c7517f9adf09ab0debbea21b7d7312f8107de5103e23bd"} Feb 16 20:57:11.841137 master-0 kubenswrapper[7926]: I0216 20:57:11.841087 7926 generic.go:334] 
"Generic (PLEG): container finished" podID="5e062e07-8076-444c-b476-4eb2848e9613" containerID="30eb3e8a1a561e4df2b728e0e98a6145e2dd7a64784f0071e688e9e9f5cc6bbc" exitCode=0 Feb 16 20:57:11.841214 master-0 kubenswrapper[7926]: I0216 20:57:11.841196 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" event={"ID":"5e062e07-8076-444c-b476-4eb2848e9613","Type":"ContainerDied","Data":"30eb3e8a1a561e4df2b728e0e98a6145e2dd7a64784f0071e688e9e9f5cc6bbc"} Feb 16 20:57:11.843537 master-0 kubenswrapper[7926]: I0216 20:57:11.843142 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw" event={"ID":"6b6be6de-6fcc-4f57-b163-fe8f970a01a4","Type":"ContainerStarted","Data":"75d7b146641140c312956826b413c80f7862cac93292ebbdd2b6b13f8e1b06a3"} Feb 16 20:57:11.843537 master-0 kubenswrapper[7926]: I0216 20:57:11.843265 7926 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 20:57:11.848559 master-0 kubenswrapper[7926]: I0216 20:57:11.848516 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv" event={"ID":"4085413c-9af1-4d2a-ba0f-33b42025cb7f","Type":"ContainerStarted","Data":"ada24a94e3cdaddc38a62024529752b29e1359c42e86c75ebaa514d784cc3fe9"} Feb 16 20:57:11.849451 master-0 kubenswrapper[7926]: I0216 20:57:11.849429 7926 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 20:57:11.900850 master-0 kubenswrapper[7926]: I0216 20:57:11.897300 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9w8k\" (UniqueName: \"kubernetes.io/projected/684a8167-6c5b-430f-979e-307e58487611-kube-api-access-s9w8k\") pod \"migrator-5bd989df77-kdb9d\" (UID: \"684a8167-6c5b-430f-979e-307e58487611\") " 
pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-kdb9d" Feb 16 20:57:11.999548 master-0 kubenswrapper[7926]: I0216 20:57:11.999044 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9w8k\" (UniqueName: \"kubernetes.io/projected/684a8167-6c5b-430f-979e-307e58487611-kube-api-access-s9w8k\") pod \"migrator-5bd989df77-kdb9d\" (UID: \"684a8167-6c5b-430f-979e-307e58487611\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-kdb9d" Feb 16 20:57:12.025734 master-0 kubenswrapper[7926]: I0216 20:57:12.022255 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9w8k\" (UniqueName: \"kubernetes.io/projected/684a8167-6c5b-430f-979e-307e58487611-kube-api-access-s9w8k\") pod \"migrator-5bd989df77-kdb9d\" (UID: \"684a8167-6c5b-430f-979e-307e58487611\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-kdb9d" Feb 16 20:57:12.104739 master-0 kubenswrapper[7926]: I0216 20:57:12.097794 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 20:57:12.138231 master-0 kubenswrapper[7926]: I0216 20:57:12.137387 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-kdb9d" Feb 16 20:57:12.370627 master-0 kubenswrapper[7926]: I0216 20:57:12.370182 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5bd989df77-kdb9d"] Feb 16 20:57:12.379088 master-0 kubenswrapper[7926]: W0216 20:57:12.379027 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod684a8167_6c5b_430f_979e_307e58487611.slice/crio-f94d68e1b5a31fd6ac38d04b76b6e3ee908e79aa67afc23e7d2bf54001deb6f0 WatchSource:0}: Error finding container f94d68e1b5a31fd6ac38d04b76b6e3ee908e79aa67afc23e7d2bf54001deb6f0: Status 404 returned error can't find the container with id f94d68e1b5a31fd6ac38d04b76b6e3ee908e79aa67afc23e7d2bf54001deb6f0 Feb 16 20:57:12.617173 master-0 kubenswrapper[7926]: I0216 20:57:12.617071 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9"] Feb 16 20:57:12.618175 master-0 kubenswrapper[7926]: I0216 20:57:12.617841 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" Feb 16 20:57:12.631726 master-0 kubenswrapper[7926]: I0216 20:57:12.628101 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9"] Feb 16 20:57:12.706119 master-0 kubenswrapper[7926]: I0216 20:57:12.706062 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzx4s\" (UniqueName: \"kubernetes.io/projected/b1ac9776-54c4-46ce-b898-01c8cf35e593-kube-api-access-vzx4s\") pod \"csi-snapshot-controller-74b6595c6d-pc6x9\" (UID: \"b1ac9776-54c4-46ce-b898-01c8cf35e593\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" Feb 16 20:57:12.807926 master-0 kubenswrapper[7926]: I0216 20:57:12.807879 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzx4s\" (UniqueName: \"kubernetes.io/projected/b1ac9776-54c4-46ce-b898-01c8cf35e593-kube-api-access-vzx4s\") pod \"csi-snapshot-controller-74b6595c6d-pc6x9\" (UID: \"b1ac9776-54c4-46ce-b898-01c8cf35e593\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" Feb 16 20:57:12.836547 master-0 kubenswrapper[7926]: I0216 20:57:12.836501 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzx4s\" (UniqueName: \"kubernetes.io/projected/b1ac9776-54c4-46ce-b898-01c8cf35e593-kube-api-access-vzx4s\") pod \"csi-snapshot-controller-74b6595c6d-pc6x9\" (UID: \"b1ac9776-54c4-46ce-b898-01c8cf35e593\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" Feb 16 20:57:12.860003 master-0 kubenswrapper[7926]: I0216 20:57:12.859950 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-kdb9d" 
event={"ID":"684a8167-6c5b-430f-979e-307e58487611","Type":"ContainerStarted","Data":"f94d68e1b5a31fd6ac38d04b76b6e3ee908e79aa67afc23e7d2bf54001deb6f0"} Feb 16 20:57:12.860293 master-0 kubenswrapper[7926]: I0216 20:57:12.860274 7926 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 20:57:13.005293 master-0 kubenswrapper[7926]: I0216 20:57:13.005235 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" Feb 16 20:57:13.178266 master-0 kubenswrapper[7926]: I0216 20:57:13.177935 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9"] Feb 16 20:57:13.193089 master-0 kubenswrapper[7926]: W0216 20:57:13.192859 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1ac9776_54c4_46ce_b898_01c8cf35e593.slice/crio-d3647391d6c6aea748cff19ab3829b4c4308cc4ee2ef9a5eb37149acfef03e2f WatchSource:0}: Error finding container d3647391d6c6aea748cff19ab3829b4c4308cc4ee2ef9a5eb37149acfef03e2f: Status 404 returned error can't find the container with id d3647391d6c6aea748cff19ab3829b4c4308cc4ee2ef9a5eb37149acfef03e2f Feb 16 20:57:13.318189 master-0 kubenswrapper[7926]: I0216 20:57:13.318035 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" Feb 16 20:57:13.318189 master-0 kubenswrapper[7926]: I0216 20:57:13.318093 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs\") pod 
\"network-metrics-daemon-42bw7\" (UID: \"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:57:13.318189 master-0 kubenswrapper[7926]: I0216 20:57:13.318131 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-z46jt\" (UID: \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" Feb 16 20:57:13.318548 master-0 kubenswrapper[7926]: I0216 20:57:13.318250 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" Feb 16 20:57:13.318548 master-0 kubenswrapper[7926]: E0216 20:57:13.318337 7926 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 16 20:57:13.318548 master-0 kubenswrapper[7926]: E0216 20:57:13.318426 7926 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 16 20:57:13.318548 master-0 kubenswrapper[7926]: E0216 20:57:13.318466 7926 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 16 20:57:13.318827 master-0 kubenswrapper[7926]: E0216 20:57:13.318571 7926 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 16 20:57:13.318827 master-0 kubenswrapper[7926]: E0216 20:57:13.318433 7926 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs podName:1d453639-52ed-4a14-a2ee-02cf9acc2f7c nodeName:}" failed. No retries permitted until 2026-02-16 20:57:17.318409141 +0000 UTC m=+8.953309441 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs") pod "network-metrics-daemon-42bw7" (UID: "1d453639-52ed-4a14-a2ee-02cf9acc2f7c") : secret "metrics-daemon-secret" not found Feb 16 20:57:13.318827 master-0 kubenswrapper[7926]: E0216 20:57:13.318619 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs podName:b27de289-c0f9-47ff-aac6-15b7bc1b178a nodeName:}" failed. No retries permitted until 2026-02-16 20:57:17.318596845 +0000 UTC m=+8.953497145 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs") pod "multus-admission-controller-7c64d55f8-z46jt" (UID: "b27de289-c0f9-47ff-aac6-15b7bc1b178a") : secret "multus-admission-controller-secret" not found Feb 16 20:57:13.318827 master-0 kubenswrapper[7926]: I0216 20:57:13.318731 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn" Feb 16 20:57:13.318827 master-0 kubenswrapper[7926]: E0216 20:57:13.318818 7926 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 20:57:13.320232 master-0 
kubenswrapper[7926]: E0216 20:57:13.318846 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert podName:a4c9b781-14c0-469c-bb9e-0c3982a04520 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:17.318693358 +0000 UTC m=+8.953593658 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert") pod "olm-operator-6b56bd877c-vlhvq" (UID: "a4c9b781-14c0-469c-bb9e-0c3982a04520") : secret "olm-operator-serving-cert" not found Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: I0216 20:57:13.319081 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: E0216 20:57:13.319121 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls podName:9e0227bc-63f5-48be-95dc-1323a2b2e327 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:17.319109658 +0000 UTC m=+8.954009958 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-4gczb" (UID: "9e0227bc-63f5-48be-95dc-1323a2b2e327") : secret "image-registry-operator-tls" not found Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: E0216 20:57:13.319148 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls podName:ec7dd4ea-a139-45d4-96a4-506da1567292 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:17.319139509 +0000 UTC m=+8.954039809 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-w57zn" (UID: "ec7dd4ea-a139-45d4-96a4-506da1567292") : secret "cluster-monitoring-operator-tls" not found Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: I0216 20:57:13.319181 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls\") pod \"dns-operator-86b8869b79-cdltb\" (UID: \"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd\") " pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb" Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: E0216 20:57:13.319238 7926 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: E0216 20:57:13.319292 7926 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: E0216 20:57:13.319325 
7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls podName:2506c282-0b37-4ece-8a0c-885d0b7f7901 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:17.319306333 +0000 UTC m=+8.954206633 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-kh4d4" (UID: "2506c282-0b37-4ece-8a0c-885d0b7f7901") : secret "node-tuning-operator-tls" not found Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: E0216 20:57:13.319344 7926 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: I0216 20:57:13.319251 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: E0216 20:57:13.319344 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert podName:2e618c5c-52be-4b52-b426-b92555dee9de nodeName:}" failed. No retries permitted until 2026-02-16 20:57:17.319337664 +0000 UTC m=+8.954237964 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert") pod "catalog-operator-588944557d-h7xl6" (UID: "2e618c5c-52be-4b52-b426-b92555dee9de") : secret "catalog-operator-serving-cert" not found Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: E0216 20:57:13.319389 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls podName:456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd nodeName:}" failed. No retries permitted until 2026-02-16 20:57:17.319382035 +0000 UTC m=+8.954282335 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls") pod "dns-operator-86b8869b79-cdltb" (UID: "456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd") : secret "metrics-tls" not found Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: I0216 20:57:13.319404 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: I0216 20:57:13.319432 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-9m94g\" (UID: \"4b035e85-b2b0-4dee-bb86-3465fc4b98a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: I0216 20:57:13.319459 7926 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: I0216 20:57:13.319503 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: I0216 20:57:13.319521 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: E0216 20:57:13.319627 7926 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: E0216 20:57:13.319667 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert podName:2506c282-0b37-4ece-8a0c-885d0b7f7901 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:17.319641492 +0000 UTC m=+8.954541792 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-kh4d4" (UID: "2506c282-0b37-4ece-8a0c-885d0b7f7901") : secret "performance-addon-operator-webhook-cert" not found Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: E0216 20:57:13.319702 7926 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: E0216 20:57:13.319722 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics podName:b28234d1-1d9a-4d9f-9ad1-e3c682bed492 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:17.319714104 +0000 UTC m=+8.954614404 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-6rmhq" (UID: "b28234d1-1d9a-4d9f-9ad1-e3c682bed492") : secret "marketplace-operator-metrics" not found Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: E0216 20:57:13.319750 7926 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: E0216 20:57:13.319769 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert podName:4b035e85-b2b0-4dee-bb86-3465fc4b98a8 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:17.319763786 +0000 UTC m=+8.954664086 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-9m94g" (UID: "4b035e85-b2b0-4dee-bb86-3465fc4b98a8") : secret "package-server-manager-serving-cert" not found Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: E0216 20:57:13.319801 7926 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: E0216 20:57:13.319816 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls podName:cef33294-81fb-41a2-811d-2565f94514d1 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:17.319811667 +0000 UTC m=+8.954711967 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls") pod "ingress-operator-c588d8cb4-6ps2d" (UID: "cef33294-81fb-41a2-811d-2565f94514d1") : secret "metrics-tls" not found Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: E0216 20:57:13.319846 7926 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 20:57:13.320232 master-0 kubenswrapper[7926]: E0216 20:57:13.319861 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert podName:3a012b98-9341-41a3-9321-0a099f8bb9da nodeName:}" failed. No retries permitted until 2026-02-16 20:57:17.319856108 +0000 UTC m=+8.954756408 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert") pod "cluster-version-operator-76959b6567-7jlsw" (UID: "3a012b98-9341-41a3-9321-0a099f8bb9da") : secret "cluster-version-operator-serving-cert" not found Feb 16 20:57:13.325398 master-0 kubenswrapper[7926]: I0216 20:57:13.325345 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-dc99ff586-xhmfs"] Feb 16 20:57:13.326856 master-0 kubenswrapper[7926]: I0216 20:57:13.326810 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" Feb 16 20:57:13.328868 master-0 kubenswrapper[7926]: I0216 20:57:13.328695 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 20:57:13.329460 master-0 kubenswrapper[7926]: I0216 20:57:13.329336 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 20:57:13.329804 master-0 kubenswrapper[7926]: I0216 20:57:13.329727 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 20:57:13.330042 master-0 kubenswrapper[7926]: I0216 20:57:13.330010 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 20:57:13.330736 master-0 kubenswrapper[7926]: I0216 20:57:13.330349 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 20:57:13.336197 master-0 kubenswrapper[7926]: I0216 20:57:13.334617 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 20:57:13.337718 master-0 kubenswrapper[7926]: I0216 20:57:13.337643 7926 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-controller-manager/controller-manager-dc99ff586-xhmfs"] Feb 16 20:57:13.408208 master-0 kubenswrapper[7926]: I0216 20:57:13.408113 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:57:13.420663 master-0 kubenswrapper[7926]: I0216 20:57:13.420617 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-client-ca\") pod \"controller-manager-dc99ff586-xhmfs\" (UID: \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" Feb 16 20:57:13.420800 master-0 kubenswrapper[7926]: I0216 20:57:13.420760 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7xr6\" (UniqueName: \"kubernetes.io/projected/05f16ec9-09e3-404c-9b9a-19ca97cd534b-kube-api-access-c7xr6\") pod \"controller-manager-dc99ff586-xhmfs\" (UID: \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" Feb 16 20:57:13.420948 master-0 kubenswrapper[7926]: I0216 20:57:13.420895 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-proxy-ca-bundles\") pod \"controller-manager-dc99ff586-xhmfs\" (UID: \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" Feb 16 20:57:13.421103 master-0 kubenswrapper[7926]: I0216 20:57:13.421050 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-config\") pod \"controller-manager-dc99ff586-xhmfs\" (UID: 
\"05f16ec9-09e3-404c-9b9a-19ca97cd534b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" Feb 16 20:57:13.421141 master-0 kubenswrapper[7926]: I0216 20:57:13.421126 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05f16ec9-09e3-404c-9b9a-19ca97cd534b-serving-cert\") pod \"controller-manager-dc99ff586-xhmfs\" (UID: \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" Feb 16 20:57:13.444523 master-0 kubenswrapper[7926]: I0216 20:57:13.444468 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:13.444711 master-0 kubenswrapper[7926]: I0216 20:57:13.444688 7926 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 20:57:13.464574 master-0 kubenswrapper[7926]: I0216 20:57:13.464484 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:57:13.477583 master-0 kubenswrapper[7926]: I0216 20:57:13.477364 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg"] Feb 16 20:57:13.477975 master-0 kubenswrapper[7926]: I0216 20:57:13.477937 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg" Feb 16 20:57:13.480643 master-0 kubenswrapper[7926]: I0216 20:57:13.480011 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 20:57:13.480643 master-0 kubenswrapper[7926]: I0216 20:57:13.480086 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 20:57:13.480643 master-0 kubenswrapper[7926]: I0216 20:57:13.480291 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 20:57:13.480643 master-0 kubenswrapper[7926]: I0216 20:57:13.480326 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 20:57:13.482402 master-0 kubenswrapper[7926]: I0216 20:57:13.482336 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 20:57:13.496030 master-0 kubenswrapper[7926]: I0216 20:57:13.495963 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg"] Feb 16 20:57:13.504505 master-0 kubenswrapper[7926]: I0216 20:57:13.504467 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 20:57:13.666369 master-0 kubenswrapper[7926]: I0216 20:57:13.522081 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-config\") pod \"controller-manager-dc99ff586-xhmfs\" (UID: \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" Feb 16 20:57:13.666369 master-0 kubenswrapper[7926]: I0216 20:57:13.522126 
7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/417544a2-2e10-46cf-b842-4cc8682d3e68-serving-cert\") pod \"route-controller-manager-b4db4d545-857jg\" (UID: \"417544a2-2e10-46cf-b842-4cc8682d3e68\") " pod="openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg" Feb 16 20:57:13.666369 master-0 kubenswrapper[7926]: I0216 20:57:13.522619 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05f16ec9-09e3-404c-9b9a-19ca97cd534b-serving-cert\") pod \"controller-manager-dc99ff586-xhmfs\" (UID: \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" Feb 16 20:57:13.666369 master-0 kubenswrapper[7926]: E0216 20:57:13.522962 7926 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Feb 16 20:57:13.666369 master-0 kubenswrapper[7926]: I0216 20:57:13.522968 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/417544a2-2e10-46cf-b842-4cc8682d3e68-config\") pod \"route-controller-manager-b4db4d545-857jg\" (UID: \"417544a2-2e10-46cf-b842-4cc8682d3e68\") " pod="openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg" Feb 16 20:57:13.666369 master-0 kubenswrapper[7926]: E0216 20:57:13.523024 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-config podName:05f16ec9-09e3-404c-9b9a-19ca97cd534b nodeName:}" failed. No retries permitted until 2026-02-16 20:57:14.02300709 +0000 UTC m=+5.657907600 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-config") pod "controller-manager-dc99ff586-xhmfs" (UID: "05f16ec9-09e3-404c-9b9a-19ca97cd534b") : configmap "config" not found Feb 16 20:57:13.666369 master-0 kubenswrapper[7926]: I0216 20:57:13.523046 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/417544a2-2e10-46cf-b842-4cc8682d3e68-client-ca\") pod \"route-controller-manager-b4db4d545-857jg\" (UID: \"417544a2-2e10-46cf-b842-4cc8682d3e68\") " pod="openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg" Feb 16 20:57:13.666369 master-0 kubenswrapper[7926]: E0216 20:57:13.523107 7926 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 20:57:13.666369 master-0 kubenswrapper[7926]: E0216 20:57:13.523719 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05f16ec9-09e3-404c-9b9a-19ca97cd534b-serving-cert podName:05f16ec9-09e3-404c-9b9a-19ca97cd534b nodeName:}" failed. No retries permitted until 2026-02-16 20:57:14.023710228 +0000 UTC m=+5.658610748 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/05f16ec9-09e3-404c-9b9a-19ca97cd534b-serving-cert") pod "controller-manager-dc99ff586-xhmfs" (UID: "05f16ec9-09e3-404c-9b9a-19ca97cd534b") : secret "serving-cert" not found Feb 16 20:57:13.666369 master-0 kubenswrapper[7926]: I0216 20:57:13.523766 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-client-ca\") pod \"controller-manager-dc99ff586-xhmfs\" (UID: \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" Feb 16 20:57:13.666369 master-0 kubenswrapper[7926]: I0216 20:57:13.523813 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7xr6\" (UniqueName: \"kubernetes.io/projected/05f16ec9-09e3-404c-9b9a-19ca97cd534b-kube-api-access-c7xr6\") pod \"controller-manager-dc99ff586-xhmfs\" (UID: \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" Feb 16 20:57:13.666369 master-0 kubenswrapper[7926]: I0216 20:57:13.523834 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cddl\" (UniqueName: \"kubernetes.io/projected/417544a2-2e10-46cf-b842-4cc8682d3e68-kube-api-access-9cddl\") pod \"route-controller-manager-b4db4d545-857jg\" (UID: \"417544a2-2e10-46cf-b842-4cc8682d3e68\") " pod="openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg" Feb 16 20:57:13.666369 master-0 kubenswrapper[7926]: E0216 20:57:13.523888 7926 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 20:57:13.666369 master-0 kubenswrapper[7926]: E0216 20:57:13.523948 7926 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-client-ca podName:05f16ec9-09e3-404c-9b9a-19ca97cd534b nodeName:}" failed. No retries permitted until 2026-02-16 20:57:14.023927043 +0000 UTC m=+5.658827343 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-client-ca") pod "controller-manager-dc99ff586-xhmfs" (UID: "05f16ec9-09e3-404c-9b9a-19ca97cd534b") : configmap "client-ca" not found Feb 16 20:57:13.666369 master-0 kubenswrapper[7926]: I0216 20:57:13.524014 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-proxy-ca-bundles\") pod \"controller-manager-dc99ff586-xhmfs\" (UID: \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" Feb 16 20:57:13.666369 master-0 kubenswrapper[7926]: E0216 20:57:13.524119 7926 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Feb 16 20:57:13.666369 master-0 kubenswrapper[7926]: E0216 20:57:13.524150 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-proxy-ca-bundles podName:05f16ec9-09e3-404c-9b9a-19ca97cd534b nodeName:}" failed. No retries permitted until 2026-02-16 20:57:14.02414067 +0000 UTC m=+5.659041070 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-proxy-ca-bundles") pod "controller-manager-dc99ff586-xhmfs" (UID: "05f16ec9-09e3-404c-9b9a-19ca97cd534b") : configmap "openshift-global-ca" not found Feb 16 20:57:13.667510 master-0 kubenswrapper[7926]: I0216 20:57:13.667004 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/417544a2-2e10-46cf-b842-4cc8682d3e68-config\") pod \"route-controller-manager-b4db4d545-857jg\" (UID: \"417544a2-2e10-46cf-b842-4cc8682d3e68\") " pod="openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg" Feb 16 20:57:13.667510 master-0 kubenswrapper[7926]: I0216 20:57:13.667063 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/417544a2-2e10-46cf-b842-4cc8682d3e68-client-ca\") pod \"route-controller-manager-b4db4d545-857jg\" (UID: \"417544a2-2e10-46cf-b842-4cc8682d3e68\") " pod="openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg" Feb 16 20:57:13.667510 master-0 kubenswrapper[7926]: E0216 20:57:13.667217 7926 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 16 20:57:13.667510 master-0 kubenswrapper[7926]: E0216 20:57:13.667294 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/417544a2-2e10-46cf-b842-4cc8682d3e68-client-ca podName:417544a2-2e10-46cf-b842-4cc8682d3e68 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:14.167272206 +0000 UTC m=+5.802172506 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/417544a2-2e10-46cf-b842-4cc8682d3e68-client-ca") pod "route-controller-manager-b4db4d545-857jg" (UID: "417544a2-2e10-46cf-b842-4cc8682d3e68") : configmap "client-ca" not found Feb 16 20:57:13.667510 master-0 kubenswrapper[7926]: E0216 20:57:13.667388 7926 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: configmap "config" not found Feb 16 20:57:13.667510 master-0 kubenswrapper[7926]: E0216 20:57:13.667420 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/417544a2-2e10-46cf-b842-4cc8682d3e68-config podName:417544a2-2e10-46cf-b842-4cc8682d3e68 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:14.167410909 +0000 UTC m=+5.802311309 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/417544a2-2e10-46cf-b842-4cc8682d3e68-config") pod "route-controller-manager-b4db4d545-857jg" (UID: "417544a2-2e10-46cf-b842-4cc8682d3e68") : configmap "config" not found Feb 16 20:57:13.667510 master-0 kubenswrapper[7926]: I0216 20:57:13.667450 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cddl\" (UniqueName: \"kubernetes.io/projected/417544a2-2e10-46cf-b842-4cc8682d3e68-kube-api-access-9cddl\") pod \"route-controller-manager-b4db4d545-857jg\" (UID: \"417544a2-2e10-46cf-b842-4cc8682d3e68\") " pod="openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg" Feb 16 20:57:13.667843 master-0 kubenswrapper[7926]: I0216 20:57:13.667621 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/417544a2-2e10-46cf-b842-4cc8682d3e68-serving-cert\") pod \"route-controller-manager-b4db4d545-857jg\" (UID: \"417544a2-2e10-46cf-b842-4cc8682d3e68\") " 
pod="openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg" Feb 16 20:57:13.667843 master-0 kubenswrapper[7926]: E0216 20:57:13.667803 7926 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 20:57:13.667843 master-0 kubenswrapper[7926]: E0216 20:57:13.667837 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/417544a2-2e10-46cf-b842-4cc8682d3e68-serving-cert podName:417544a2-2e10-46cf-b842-4cc8682d3e68 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:14.16782761 +0000 UTC m=+5.802727910 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/417544a2-2e10-46cf-b842-4cc8682d3e68-serving-cert") pod "route-controller-manager-b4db4d545-857jg" (UID: "417544a2-2e10-46cf-b842-4cc8682d3e68") : secret "serving-cert" not found Feb 16 20:57:13.700509 master-0 kubenswrapper[7926]: I0216 20:57:13.700450 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7xr6\" (UniqueName: \"kubernetes.io/projected/05f16ec9-09e3-404c-9b9a-19ca97cd534b-kube-api-access-c7xr6\") pod \"controller-manager-dc99ff586-xhmfs\" (UID: \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" Feb 16 20:57:13.713486 master-0 kubenswrapper[7926]: I0216 20:57:13.713433 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cddl\" (UniqueName: \"kubernetes.io/projected/417544a2-2e10-46cf-b842-4cc8682d3e68-kube-api-access-9cddl\") pod \"route-controller-manager-b4db4d545-857jg\" (UID: \"417544a2-2e10-46cf-b842-4cc8682d3e68\") " pod="openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg" Feb 16 20:57:13.865497 master-0 kubenswrapper[7926]: I0216 20:57:13.865445 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" event={"ID":"b1ac9776-54c4-46ce-b898-01c8cf35e593","Type":"ContainerStarted","Data":"d3647391d6c6aea748cff19ab3829b4c4308cc4ee2ef9a5eb37149acfef03e2f"} Feb 16 20:57:13.870620 master-0 kubenswrapper[7926]: I0216 20:57:13.870589 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 20:57:13.907265 master-0 kubenswrapper[7926]: I0216 20:57:13.906899 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:57:13.976700 master-0 kubenswrapper[7926]: I0216 20:57:13.976556 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-676cd8b9b5-cbj2r"] Feb 16 20:57:13.977248 master-0 kubenswrapper[7926]: I0216 20:57:13.977225 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" Feb 16 20:57:13.978957 master-0 kubenswrapper[7926]: I0216 20:57:13.978926 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 20:57:13.979239 master-0 kubenswrapper[7926]: I0216 20:57:13.979210 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 20:57:13.980378 master-0 kubenswrapper[7926]: I0216 20:57:13.980360 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 20:57:13.981269 master-0 kubenswrapper[7926]: I0216 20:57:13.981223 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 20:57:13.984696 master-0 kubenswrapper[7926]: I0216 20:57:13.984611 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-676cd8b9b5-cbj2r"] Feb 16 20:57:14.071365 
master-0 kubenswrapper[7926]: I0216 20:57:14.071283 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-config\") pod \"controller-manager-dc99ff586-xhmfs\" (UID: \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" Feb 16 20:57:14.071542 master-0 kubenswrapper[7926]: E0216 20:57:14.071457 7926 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Feb 16 20:57:14.071619 master-0 kubenswrapper[7926]: I0216 20:57:14.071569 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv45g\" (UniqueName: \"kubernetes.io/projected/99ab949e-bd0d-45a7-95d1-8381d9f1f5f3-kube-api-access-hv45g\") pod \"service-ca-676cd8b9b5-cbj2r\" (UID: \"99ab949e-bd0d-45a7-95d1-8381d9f1f5f3\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" Feb 16 20:57:14.071710 master-0 kubenswrapper[7926]: E0216 20:57:14.071582 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-config podName:05f16ec9-09e3-404c-9b9a-19ca97cd534b nodeName:}" failed. No retries permitted until 2026-02-16 20:57:15.071551269 +0000 UTC m=+6.706451779 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-config") pod "controller-manager-dc99ff586-xhmfs" (UID: "05f16ec9-09e3-404c-9b9a-19ca97cd534b") : configmap "config" not found Feb 16 20:57:14.071754 master-0 kubenswrapper[7926]: I0216 20:57:14.071705 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05f16ec9-09e3-404c-9b9a-19ca97cd534b-serving-cert\") pod \"controller-manager-dc99ff586-xhmfs\" (UID: \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" Feb 16 20:57:14.071873 master-0 kubenswrapper[7926]: I0216 20:57:14.071828 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/99ab949e-bd0d-45a7-95d1-8381d9f1f5f3-signing-key\") pod \"service-ca-676cd8b9b5-cbj2r\" (UID: \"99ab949e-bd0d-45a7-95d1-8381d9f1f5f3\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" Feb 16 20:57:14.072094 master-0 kubenswrapper[7926]: E0216 20:57:14.071917 7926 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 20:57:14.072094 master-0 kubenswrapper[7926]: I0216 20:57:14.071939 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-client-ca\") pod \"controller-manager-dc99ff586-xhmfs\" (UID: \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" Feb 16 20:57:14.072094 master-0 kubenswrapper[7926]: E0216 20:57:14.071970 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05f16ec9-09e3-404c-9b9a-19ca97cd534b-serving-cert podName:05f16ec9-09e3-404c-9b9a-19ca97cd534b nodeName:}" failed. 
No retries permitted until 2026-02-16 20:57:15.071956749 +0000 UTC m=+6.706857249 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/05f16ec9-09e3-404c-9b9a-19ca97cd534b-serving-cert") pod "controller-manager-dc99ff586-xhmfs" (UID: "05f16ec9-09e3-404c-9b9a-19ca97cd534b") : secret "serving-cert" not found Feb 16 20:57:14.072094 master-0 kubenswrapper[7926]: I0216 20:57:14.072032 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/99ab949e-bd0d-45a7-95d1-8381d9f1f5f3-signing-cabundle\") pod \"service-ca-676cd8b9b5-cbj2r\" (UID: \"99ab949e-bd0d-45a7-95d1-8381d9f1f5f3\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" Feb 16 20:57:14.072094 master-0 kubenswrapper[7926]: E0216 20:57:14.072070 7926 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 20:57:14.072279 master-0 kubenswrapper[7926]: I0216 20:57:14.072122 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-proxy-ca-bundles\") pod \"controller-manager-dc99ff586-xhmfs\" (UID: \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" Feb 16 20:57:14.072279 master-0 kubenswrapper[7926]: E0216 20:57:14.072148 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-client-ca podName:05f16ec9-09e3-404c-9b9a-19ca97cd534b nodeName:}" failed. No retries permitted until 2026-02-16 20:57:15.072119203 +0000 UTC m=+6.707019543 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-client-ca") pod "controller-manager-dc99ff586-xhmfs" (UID: "05f16ec9-09e3-404c-9b9a-19ca97cd534b") : configmap "client-ca" not found Feb 16 20:57:14.072279 master-0 kubenswrapper[7926]: E0216 20:57:14.072184 7926 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Feb 16 20:57:14.072279 master-0 kubenswrapper[7926]: E0216 20:57:14.072219 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-proxy-ca-bundles podName:05f16ec9-09e3-404c-9b9a-19ca97cd534b nodeName:}" failed. No retries permitted until 2026-02-16 20:57:15.072209225 +0000 UTC m=+6.707109815 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-proxy-ca-bundles") pod "controller-manager-dc99ff586-xhmfs" (UID: "05f16ec9-09e3-404c-9b9a-19ca97cd534b") : configmap "openshift-global-ca" not found Feb 16 20:57:14.173084 master-0 kubenswrapper[7926]: I0216 20:57:14.173031 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/417544a2-2e10-46cf-b842-4cc8682d3e68-serving-cert\") pod \"route-controller-manager-b4db4d545-857jg\" (UID: \"417544a2-2e10-46cf-b842-4cc8682d3e68\") " pod="openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg" Feb 16 20:57:14.173385 master-0 kubenswrapper[7926]: I0216 20:57:14.173098 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv45g\" (UniqueName: \"kubernetes.io/projected/99ab949e-bd0d-45a7-95d1-8381d9f1f5f3-kube-api-access-hv45g\") pod \"service-ca-676cd8b9b5-cbj2r\" (UID: \"99ab949e-bd0d-45a7-95d1-8381d9f1f5f3\") " 
pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" Feb 16 20:57:14.173385 master-0 kubenswrapper[7926]: I0216 20:57:14.173201 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/99ab949e-bd0d-45a7-95d1-8381d9f1f5f3-signing-key\") pod \"service-ca-676cd8b9b5-cbj2r\" (UID: \"99ab949e-bd0d-45a7-95d1-8381d9f1f5f3\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" Feb 16 20:57:14.173385 master-0 kubenswrapper[7926]: I0216 20:57:14.173226 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/417544a2-2e10-46cf-b842-4cc8682d3e68-config\") pod \"route-controller-manager-b4db4d545-857jg\" (UID: \"417544a2-2e10-46cf-b842-4cc8682d3e68\") " pod="openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg" Feb 16 20:57:14.173385 master-0 kubenswrapper[7926]: I0216 20:57:14.173253 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/417544a2-2e10-46cf-b842-4cc8682d3e68-client-ca\") pod \"route-controller-manager-b4db4d545-857jg\" (UID: \"417544a2-2e10-46cf-b842-4cc8682d3e68\") " pod="openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg" Feb 16 20:57:14.174802 master-0 kubenswrapper[7926]: I0216 20:57:14.173289 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/99ab949e-bd0d-45a7-95d1-8381d9f1f5f3-signing-cabundle\") pod \"service-ca-676cd8b9b5-cbj2r\" (UID: \"99ab949e-bd0d-45a7-95d1-8381d9f1f5f3\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" Feb 16 20:57:14.176390 master-0 kubenswrapper[7926]: E0216 20:57:14.173369 7926 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: configmap "config" not found Feb 16 20:57:14.176390 master-0 kubenswrapper[7926]: I0216 
20:57:14.175132 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/99ab949e-bd0d-45a7-95d1-8381d9f1f5f3-signing-cabundle\") pod \"service-ca-676cd8b9b5-cbj2r\" (UID: \"99ab949e-bd0d-45a7-95d1-8381d9f1f5f3\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" Feb 16 20:57:14.176390 master-0 kubenswrapper[7926]: E0216 20:57:14.173581 7926 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 16 20:57:14.176390 master-0 kubenswrapper[7926]: E0216 20:57:14.173611 7926 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 20:57:14.176390 master-0 kubenswrapper[7926]: E0216 20:57:14.176019 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/417544a2-2e10-46cf-b842-4cc8682d3e68-config podName:417544a2-2e10-46cf-b842-4cc8682d3e68 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:15.175997739 +0000 UTC m=+6.810898039 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/417544a2-2e10-46cf-b842-4cc8682d3e68-config") pod "route-controller-manager-b4db4d545-857jg" (UID: "417544a2-2e10-46cf-b842-4cc8682d3e68") : configmap "config" not found Feb 16 20:57:14.176390 master-0 kubenswrapper[7926]: E0216 20:57:14.176361 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/417544a2-2e10-46cf-b842-4cc8682d3e68-client-ca podName:417544a2-2e10-46cf-b842-4cc8682d3e68 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:15.176349657 +0000 UTC m=+6.811249957 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/417544a2-2e10-46cf-b842-4cc8682d3e68-client-ca") pod "route-controller-manager-b4db4d545-857jg" (UID: "417544a2-2e10-46cf-b842-4cc8682d3e68") : configmap "client-ca" not found Feb 16 20:57:14.176390 master-0 kubenswrapper[7926]: E0216 20:57:14.176377 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/417544a2-2e10-46cf-b842-4cc8682d3e68-serving-cert podName:417544a2-2e10-46cf-b842-4cc8682d3e68 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:15.176369048 +0000 UTC m=+6.811269348 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/417544a2-2e10-46cf-b842-4cc8682d3e68-serving-cert") pod "route-controller-manager-b4db4d545-857jg" (UID: "417544a2-2e10-46cf-b842-4cc8682d3e68") : secret "serving-cert" not found Feb 16 20:57:14.177429 master-0 kubenswrapper[7926]: I0216 20:57:14.177384 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/99ab949e-bd0d-45a7-95d1-8381d9f1f5f3-signing-key\") pod \"service-ca-676cd8b9b5-cbj2r\" (UID: \"99ab949e-bd0d-45a7-95d1-8381d9f1f5f3\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" Feb 16 20:57:14.191200 master-0 kubenswrapper[7926]: I0216 20:57:14.191157 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv45g\" (UniqueName: \"kubernetes.io/projected/99ab949e-bd0d-45a7-95d1-8381d9f1f5f3-kube-api-access-hv45g\") pod \"service-ca-676cd8b9b5-cbj2r\" (UID: \"99ab949e-bd0d-45a7-95d1-8381d9f1f5f3\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" Feb 16 20:57:14.313867 master-0 kubenswrapper[7926]: I0216 20:57:14.313803 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:57:14.320607 master-0 kubenswrapper[7926]: 
I0216 20:57:14.320554 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:57:14.333615 master-0 kubenswrapper[7926]: I0216 20:57:14.333556 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" Feb 16 20:57:14.781140 master-0 kubenswrapper[7926]: I0216 20:57:14.781000 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dc99ff586-xhmfs"] Feb 16 20:57:14.781362 master-0 kubenswrapper[7926]: E0216 20:57:14.781195 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" podUID="05f16ec9-09e3-404c-9b9a-19ca97cd534b" Feb 16 20:57:14.799476 master-0 kubenswrapper[7926]: I0216 20:57:14.799422 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg"] Feb 16 20:57:14.800297 master-0 kubenswrapper[7926]: E0216 20:57:14.800072 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg" podUID="417544a2-2e10-46cf-b842-4cc8682d3e68" Feb 16 20:57:14.874692 master-0 kubenswrapper[7926]: I0216 20:57:14.873252 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg" Feb 16 20:57:14.874692 master-0 kubenswrapper[7926]: I0216 20:57:14.873598 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-b68cj" event={"ID":"d9d71a7a-a751-4de4-9c76-9bac85fe0177","Type":"ContainerStarted","Data":"905fc5a621203c91395d6216f060ca53794b0ecb7785c24aec6c41ecccc20912"} Feb 16 20:57:14.874692 master-0 kubenswrapper[7926]: I0216 20:57:14.874207 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" Feb 16 20:57:14.881448 master-0 kubenswrapper[7926]: I0216 20:57:14.881406 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:57:14.883637 master-0 kubenswrapper[7926]: I0216 20:57:14.883263 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg" Feb 16 20:57:14.888807 master-0 kubenswrapper[7926]: I0216 20:57:14.888774 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" Feb 16 20:57:14.898777 master-0 kubenswrapper[7926]: I0216 20:57:14.898697 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cddl\" (UniqueName: \"kubernetes.io/projected/417544a2-2e10-46cf-b842-4cc8682d3e68-kube-api-access-9cddl\") pod \"417544a2-2e10-46cf-b842-4cc8682d3e68\" (UID: \"417544a2-2e10-46cf-b842-4cc8682d3e68\") " Feb 16 20:57:14.904861 master-0 kubenswrapper[7926]: I0216 20:57:14.904818 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/417544a2-2e10-46cf-b842-4cc8682d3e68-kube-api-access-9cddl" (OuterVolumeSpecName: "kube-api-access-9cddl") pod "417544a2-2e10-46cf-b842-4cc8682d3e68" (UID: "417544a2-2e10-46cf-b842-4cc8682d3e68"). InnerVolumeSpecName "kube-api-access-9cddl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:57:15.000936 master-0 kubenswrapper[7926]: I0216 20:57:15.000887 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7xr6\" (UniqueName: \"kubernetes.io/projected/05f16ec9-09e3-404c-9b9a-19ca97cd534b-kube-api-access-c7xr6\") pod \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\" (UID: \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\") " Feb 16 20:57:15.001747 master-0 kubenswrapper[7926]: I0216 20:57:15.001731 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cddl\" (UniqueName: \"kubernetes.io/projected/417544a2-2e10-46cf-b842-4cc8682d3e68-kube-api-access-9cddl\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:15.003494 master-0 kubenswrapper[7926]: I0216 20:57:15.003454 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05f16ec9-09e3-404c-9b9a-19ca97cd534b-kube-api-access-c7xr6" (OuterVolumeSpecName: "kube-api-access-c7xr6") pod "05f16ec9-09e3-404c-9b9a-19ca97cd534b" (UID: 
"05f16ec9-09e3-404c-9b9a-19ca97cd534b"). InnerVolumeSpecName "kube-api-access-c7xr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:57:15.101856 master-0 kubenswrapper[7926]: I0216 20:57:15.101708 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 20:57:15.103222 master-0 kubenswrapper[7926]: I0216 20:57:15.102517 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05f16ec9-09e3-404c-9b9a-19ca97cd534b-serving-cert\") pod \"controller-manager-dc99ff586-xhmfs\" (UID: \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" Feb 16 20:57:15.103222 master-0 kubenswrapper[7926]: E0216 20:57:15.102602 7926 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 20:57:15.103222 master-0 kubenswrapper[7926]: I0216 20:57:15.102641 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-client-ca\") pod \"controller-manager-dc99ff586-xhmfs\" (UID: \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" Feb 16 20:57:15.103222 master-0 kubenswrapper[7926]: E0216 20:57:15.102666 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/05f16ec9-09e3-404c-9b9a-19ca97cd534b-serving-cert podName:05f16ec9-09e3-404c-9b9a-19ca97cd534b nodeName:}" failed. No retries permitted until 2026-02-16 20:57:17.102638546 +0000 UTC m=+8.737538846 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/05f16ec9-09e3-404c-9b9a-19ca97cd534b-serving-cert") pod "controller-manager-dc99ff586-xhmfs" (UID: "05f16ec9-09e3-404c-9b9a-19ca97cd534b") : secret "serving-cert" not found Feb 16 20:57:15.103222 master-0 kubenswrapper[7926]: E0216 20:57:15.102842 7926 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 20:57:15.103222 master-0 kubenswrapper[7926]: E0216 20:57:15.103154 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-client-ca podName:05f16ec9-09e3-404c-9b9a-19ca97cd534b nodeName:}" failed. No retries permitted until 2026-02-16 20:57:17.103133178 +0000 UTC m=+8.738033478 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-client-ca") pod "controller-manager-dc99ff586-xhmfs" (UID: "05f16ec9-09e3-404c-9b9a-19ca97cd534b") : configmap "client-ca" not found Feb 16 20:57:15.103439 master-0 kubenswrapper[7926]: I0216 20:57:15.103231 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-proxy-ca-bundles\") pod \"controller-manager-dc99ff586-xhmfs\" (UID: \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" Feb 16 20:57:15.103439 master-0 kubenswrapper[7926]: I0216 20:57:15.103421 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-config\") pod \"controller-manager-dc99ff586-xhmfs\" (UID: \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" Feb 16 20:57:15.103613 master-0 
kubenswrapper[7926]: I0216 20:57:15.103520 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7xr6\" (UniqueName: \"kubernetes.io/projected/05f16ec9-09e3-404c-9b9a-19ca97cd534b-kube-api-access-c7xr6\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:15.104531 master-0 kubenswrapper[7926]: I0216 20:57:15.104509 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-proxy-ca-bundles\") pod \"controller-manager-dc99ff586-xhmfs\" (UID: \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" Feb 16 20:57:15.105121 master-0 kubenswrapper[7926]: I0216 20:57:15.104888 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-config\") pod \"controller-manager-dc99ff586-xhmfs\" (UID: \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\") " pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" Feb 16 20:57:15.204149 master-0 kubenswrapper[7926]: I0216 20:57:15.204030 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-proxy-ca-bundles\") pod \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\" (UID: \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\") " Feb 16 20:57:15.204149 master-0 kubenswrapper[7926]: I0216 20:57:15.204146 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-config\") pod \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\" (UID: \"05f16ec9-09e3-404c-9b9a-19ca97cd534b\") " Feb 16 20:57:15.204460 master-0 kubenswrapper[7926]: I0216 20:57:15.204391 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/417544a2-2e10-46cf-b842-4cc8682d3e68-serving-cert\") pod \"route-controller-manager-b4db4d545-857jg\" (UID: \"417544a2-2e10-46cf-b842-4cc8682d3e68\") " pod="openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg" Feb 16 20:57:15.204508 master-0 kubenswrapper[7926]: I0216 20:57:15.204463 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/417544a2-2e10-46cf-b842-4cc8682d3e68-config\") pod \"route-controller-manager-b4db4d545-857jg\" (UID: \"417544a2-2e10-46cf-b842-4cc8682d3e68\") " pod="openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg" Feb 16 20:57:15.204508 master-0 kubenswrapper[7926]: I0216 20:57:15.204490 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/417544a2-2e10-46cf-b842-4cc8682d3e68-client-ca\") pod \"route-controller-manager-b4db4d545-857jg\" (UID: \"417544a2-2e10-46cf-b842-4cc8682d3e68\") " pod="openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg" Feb 16 20:57:15.205182 master-0 kubenswrapper[7926]: I0216 20:57:15.204583 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "05f16ec9-09e3-404c-9b9a-19ca97cd534b" (UID: "05f16ec9-09e3-404c-9b9a-19ca97cd534b"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:57:15.205182 master-0 kubenswrapper[7926]: E0216 20:57:15.204628 7926 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 16 20:57:15.205182 master-0 kubenswrapper[7926]: E0216 20:57:15.204700 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/417544a2-2e10-46cf-b842-4cc8682d3e68-client-ca podName:417544a2-2e10-46cf-b842-4cc8682d3e68 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:17.204683424 +0000 UTC m=+8.839583724 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/417544a2-2e10-46cf-b842-4cc8682d3e68-client-ca") pod "route-controller-manager-b4db4d545-857jg" (UID: "417544a2-2e10-46cf-b842-4cc8682d3e68") : configmap "client-ca" not found Feb 16 20:57:15.205182 master-0 kubenswrapper[7926]: E0216 20:57:15.204775 7926 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 20:57:15.205182 master-0 kubenswrapper[7926]: E0216 20:57:15.204796 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/417544a2-2e10-46cf-b842-4cc8682d3e68-serving-cert podName:417544a2-2e10-46cf-b842-4cc8682d3e68 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:17.204790467 +0000 UTC m=+8.839690767 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/417544a2-2e10-46cf-b842-4cc8682d3e68-serving-cert") pod "route-controller-manager-b4db4d545-857jg" (UID: "417544a2-2e10-46cf-b842-4cc8682d3e68") : secret "serving-cert" not found Feb 16 20:57:15.205182 master-0 kubenswrapper[7926]: I0216 20:57:15.204966 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-config" (OuterVolumeSpecName: "config") pod "05f16ec9-09e3-404c-9b9a-19ca97cd534b" (UID: "05f16ec9-09e3-404c-9b9a-19ca97cd534b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:57:15.206044 master-0 kubenswrapper[7926]: I0216 20:57:15.205972 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/417544a2-2e10-46cf-b842-4cc8682d3e68-config\") pod \"route-controller-manager-b4db4d545-857jg\" (UID: \"417544a2-2e10-46cf-b842-4cc8682d3e68\") " pod="openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg" Feb 16 20:57:15.305823 master-0 kubenswrapper[7926]: I0216 20:57:15.305768 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/417544a2-2e10-46cf-b842-4cc8682d3e68-config\") pod \"417544a2-2e10-46cf-b842-4cc8682d3e68\" (UID: \"417544a2-2e10-46cf-b842-4cc8682d3e68\") " Feb 16 20:57:15.306207 master-0 kubenswrapper[7926]: I0216 20:57:15.306052 7926 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-config\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:15.306207 master-0 kubenswrapper[7926]: I0216 20:57:15.306065 7926 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-proxy-ca-bundles\") 
on node \"master-0\" DevicePath \"\"" Feb 16 20:57:15.306379 master-0 kubenswrapper[7926]: I0216 20:57:15.306348 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/417544a2-2e10-46cf-b842-4cc8682d3e68-config" (OuterVolumeSpecName: "config") pod "417544a2-2e10-46cf-b842-4cc8682d3e68" (UID: "417544a2-2e10-46cf-b842-4cc8682d3e68"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:57:15.394012 master-0 kubenswrapper[7926]: I0216 20:57:15.393571 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-676cd8b9b5-cbj2r"] Feb 16 20:57:15.408427 master-0 kubenswrapper[7926]: I0216 20:57:15.407158 7926 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/417544a2-2e10-46cf-b842-4cc8682d3e68-config\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:15.763630 master-0 kubenswrapper[7926]: W0216 20:57:15.763571 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99ab949e_bd0d_45a7_95d1_8381d9f1f5f3.slice/crio-c4765e33cdc956d84e8349da9b28a001d07fad6c39b6a113416bb9d1d1ae88dd WatchSource:0}: Error finding container c4765e33cdc956d84e8349da9b28a001d07fad6c39b6a113416bb9d1d1ae88dd: Status 404 returned error can't find the container with id c4765e33cdc956d84e8349da9b28a001d07fad6c39b6a113416bb9d1d1ae88dd Feb 16 20:57:15.881009 master-0 kubenswrapper[7926]: I0216 20:57:15.880971 7926 generic.go:334] "Generic (PLEG): container finished" podID="5e062e07-8076-444c-b476-4eb2848e9613" containerID="9949cb3f0ffb40ac03674e827a655fd8962fd631e7432c2ead34043e0e4d8864" exitCode=0 Feb 16 20:57:15.882313 master-0 kubenswrapper[7926]: I0216 20:57:15.881049 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" 
event={"ID":"5e062e07-8076-444c-b476-4eb2848e9613","Type":"ContainerDied","Data":"9949cb3f0ffb40ac03674e827a655fd8962fd631e7432c2ead34043e0e4d8864"} Feb 16 20:57:15.883352 master-0 kubenswrapper[7926]: I0216 20:57:15.883296 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" event={"ID":"99ab949e-bd0d-45a7-95d1-8381d9f1f5f3","Type":"ContainerStarted","Data":"c4765e33cdc956d84e8349da9b28a001d07fad6c39b6a113416bb9d1d1ae88dd"} Feb 16 20:57:15.887907 master-0 kubenswrapper[7926]: I0216 20:57:15.887872 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dc99ff586-xhmfs" Feb 16 20:57:15.888502 master-0 kubenswrapper[7926]: I0216 20:57:15.888455 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-kdb9d" event={"ID":"684a8167-6c5b-430f-979e-307e58487611","Type":"ContainerStarted","Data":"f4d30cfe8bb36366ad4695d85f303021c475d8a0ec5ee46e2609d8eb9859e8ea"} Feb 16 20:57:15.888609 master-0 kubenswrapper[7926]: I0216 20:57:15.888565 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg" Feb 16 20:57:15.940274 master-0 kubenswrapper[7926]: I0216 20:57:15.940221 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs"] Feb 16 20:57:15.946409 master-0 kubenswrapper[7926]: I0216 20:57:15.946252 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dc99ff586-xhmfs"] Feb 16 20:57:15.946409 master-0 kubenswrapper[7926]: I0216 20:57:15.946362 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs" Feb 16 20:57:15.947156 master-0 kubenswrapper[7926]: I0216 20:57:15.947111 7926 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-dc99ff586-xhmfs"] Feb 16 20:57:15.950155 master-0 kubenswrapper[7926]: I0216 20:57:15.949259 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 20:57:15.950155 master-0 kubenswrapper[7926]: I0216 20:57:15.949508 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 20:57:15.950155 master-0 kubenswrapper[7926]: I0216 20:57:15.949741 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 20:57:15.952342 master-0 kubenswrapper[7926]: I0216 20:57:15.952297 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 20:57:15.952884 master-0 kubenswrapper[7926]: I0216 20:57:15.952744 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 20:57:15.955604 master-0 kubenswrapper[7926]: I0216 20:57:15.955560 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs"] Feb 16 20:57:15.958686 master-0 kubenswrapper[7926]: I0216 20:57:15.958623 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 20:57:15.975463 master-0 kubenswrapper[7926]: I0216 20:57:15.975381 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg"] Feb 16 20:57:15.978252 master-0 kubenswrapper[7926]: I0216 20:57:15.978203 7926 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg"] Feb 16 20:57:15.979782 master-0 kubenswrapper[7926]: I0216 20:57:15.979738 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:57:16.022790 master-0 kubenswrapper[7926]: I0216 20:57:16.019210 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-client-ca\") pod \"controller-manager-6bb489d9cc-dfbcs\" (UID: \"28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd\") " pod="openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs" Feb 16 20:57:16.022790 master-0 kubenswrapper[7926]: I0216 20:57:16.019253 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-proxy-ca-bundles\") pod \"controller-manager-6bb489d9cc-dfbcs\" (UID: \"28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd\") " pod="openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs" Feb 16 20:57:16.022790 master-0 kubenswrapper[7926]: I0216 20:57:16.019287 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lw5v\" (UniqueName: \"kubernetes.io/projected/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-kube-api-access-7lw5v\") pod \"controller-manager-6bb489d9cc-dfbcs\" (UID: \"28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd\") " pod="openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs" Feb 16 20:57:16.022790 master-0 kubenswrapper[7926]: I0216 20:57:16.019350 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-config\") pod \"controller-manager-6bb489d9cc-dfbcs\" (UID: 
\"28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd\") " pod="openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs" Feb 16 20:57:16.022790 master-0 kubenswrapper[7926]: I0216 20:57:16.019366 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-serving-cert\") pod \"controller-manager-6bb489d9cc-dfbcs\" (UID: \"28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd\") " pod="openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs" Feb 16 20:57:16.022790 master-0 kubenswrapper[7926]: I0216 20:57:16.019402 7926 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05f16ec9-09e3-404c-9b9a-19ca97cd534b-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:16.022790 master-0 kubenswrapper[7926]: I0216 20:57:16.019415 7926 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05f16ec9-09e3-404c-9b9a-19ca97cd534b-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:16.022790 master-0 kubenswrapper[7926]: I0216 20:57:16.019423 7926 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/417544a2-2e10-46cf-b842-4cc8682d3e68-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:16.022790 master-0 kubenswrapper[7926]: I0216 20:57:16.019434 7926 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/417544a2-2e10-46cf-b842-4cc8682d3e68-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:16.120755 master-0 kubenswrapper[7926]: I0216 20:57:16.120686 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-client-ca\") pod \"controller-manager-6bb489d9cc-dfbcs\" (UID: 
\"28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd\") " pod="openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs" Feb 16 20:57:16.121010 master-0 kubenswrapper[7926]: I0216 20:57:16.120757 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-proxy-ca-bundles\") pod \"controller-manager-6bb489d9cc-dfbcs\" (UID: \"28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd\") " pod="openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs" Feb 16 20:57:16.121010 master-0 kubenswrapper[7926]: I0216 20:57:16.120855 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lw5v\" (UniqueName: \"kubernetes.io/projected/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-kube-api-access-7lw5v\") pod \"controller-manager-6bb489d9cc-dfbcs\" (UID: \"28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd\") " pod="openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs" Feb 16 20:57:16.121010 master-0 kubenswrapper[7926]: I0216 20:57:16.120966 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-config\") pod \"controller-manager-6bb489d9cc-dfbcs\" (UID: \"28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd\") " pod="openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs" Feb 16 20:57:16.121107 master-0 kubenswrapper[7926]: I0216 20:57:16.121009 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-serving-cert\") pod \"controller-manager-6bb489d9cc-dfbcs\" (UID: \"28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd\") " pod="openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs" Feb 16 20:57:16.121323 master-0 kubenswrapper[7926]: E0216 20:57:16.121246 7926 secret.go:189] Couldn't get secret 
openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 20:57:16.121556 master-0 kubenswrapper[7926]: E0216 20:57:16.121338 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-serving-cert podName:28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd nodeName:}" failed. No retries permitted until 2026-02-16 20:57:16.621312393 +0000 UTC m=+8.256212703 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-serving-cert") pod "controller-manager-6bb489d9cc-dfbcs" (UID: "28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd") : secret "serving-cert" not found Feb 16 20:57:16.121556 master-0 kubenswrapper[7926]: E0216 20:57:16.121399 7926 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 20:57:16.121556 master-0 kubenswrapper[7926]: E0216 20:57:16.121433 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-client-ca podName:28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd nodeName:}" failed. No retries permitted until 2026-02-16 20:57:16.621421876 +0000 UTC m=+8.256322196 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-client-ca") pod "controller-manager-6bb489d9cc-dfbcs" (UID: "28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd") : configmap "client-ca" not found Feb 16 20:57:16.123251 master-0 kubenswrapper[7926]: I0216 20:57:16.123219 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-proxy-ca-bundles\") pod \"controller-manager-6bb489d9cc-dfbcs\" (UID: \"28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd\") " pod="openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs" Feb 16 20:57:16.123883 master-0 kubenswrapper[7926]: I0216 20:57:16.123835 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-config\") pod \"controller-manager-6bb489d9cc-dfbcs\" (UID: \"28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd\") " pod="openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs" Feb 16 20:57:16.142962 master-0 kubenswrapper[7926]: I0216 20:57:16.142742 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lw5v\" (UniqueName: \"kubernetes.io/projected/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-kube-api-access-7lw5v\") pod \"controller-manager-6bb489d9cc-dfbcs\" (UID: \"28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd\") " pod="openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs" Feb 16 20:57:16.506259 master-0 kubenswrapper[7926]: I0216 20:57:16.506215 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs"] Feb 16 20:57:16.506528 master-0 kubenswrapper[7926]: E0216 20:57:16.506388 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" 
pod="openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs" podUID="28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd" Feb 16 20:57:16.626210 master-0 kubenswrapper[7926]: I0216 20:57:16.626143 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-serving-cert\") pod \"controller-manager-6bb489d9cc-dfbcs\" (UID: \"28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd\") " pod="openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs" Feb 16 20:57:16.626456 master-0 kubenswrapper[7926]: E0216 20:57:16.626348 7926 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 20:57:16.626456 master-0 kubenswrapper[7926]: E0216 20:57:16.626434 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-serving-cert podName:28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd nodeName:}" failed. No retries permitted until 2026-02-16 20:57:17.626417762 +0000 UTC m=+9.261318062 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-serving-cert") pod "controller-manager-6bb489d9cc-dfbcs" (UID: "28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd") : secret "serving-cert" not found Feb 16 20:57:16.626602 master-0 kubenswrapper[7926]: I0216 20:57:16.626553 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-client-ca\") pod \"controller-manager-6bb489d9cc-dfbcs\" (UID: \"28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd\") " pod="openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs" Feb 16 20:57:16.626875 master-0 kubenswrapper[7926]: E0216 20:57:16.626837 7926 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 20:57:16.626940 master-0 kubenswrapper[7926]: E0216 20:57:16.626886 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-client-ca podName:28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd nodeName:}" failed. No retries permitted until 2026-02-16 20:57:17.626874935 +0000 UTC m=+9.261775245 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-client-ca") pod "controller-manager-6bb489d9cc-dfbcs" (UID: "28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd") : configmap "client-ca" not found Feb 16 20:57:16.710489 master-0 kubenswrapper[7926]: I0216 20:57:16.710371 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:57:16.716008 master-0 kubenswrapper[7926]: I0216 20:57:16.715954 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:57:16.749505 master-0 kubenswrapper[7926]: I0216 20:57:16.749454 7926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05f16ec9-09e3-404c-9b9a-19ca97cd534b" path="/var/lib/kubelet/pods/05f16ec9-09e3-404c-9b9a-19ca97cd534b/volumes" Feb 16 20:57:16.749825 master-0 kubenswrapper[7926]: I0216 20:57:16.749805 7926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="417544a2-2e10-46cf-b842-4cc8682d3e68" path="/var/lib/kubelet/pods/417544a2-2e10-46cf-b842-4cc8682d3e68/volumes" Feb 16 20:57:16.898763 master-0 kubenswrapper[7926]: I0216 20:57:16.898428 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-kdb9d" event={"ID":"684a8167-6c5b-430f-979e-307e58487611","Type":"ContainerStarted","Data":"05dd664dbe24b23e49df336a132aa75287844fdfc867ac2f9b9486c0cca53e74"} Feb 16 20:57:16.904677 master-0 kubenswrapper[7926]: I0216 20:57:16.904403 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" event={"ID":"b1ac9776-54c4-46ce-b898-01c8cf35e593","Type":"ContainerStarted","Data":"6604687382d89a09dac220e4bde6c4ee9334bbf7429cff3764175c9050a1853c"} Feb 16 20:57:16.915686 master-0 kubenswrapper[7926]: I0216 
20:57:16.914771 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" event={"ID":"99ab949e-bd0d-45a7-95d1-8381d9f1f5f3","Type":"ContainerStarted","Data":"0c4056212013eaff1f5d405532bbe8e1791cff62d95615157652d9167450664a"} Feb 16 20:57:16.915686 master-0 kubenswrapper[7926]: I0216 20:57:16.914911 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs" Feb 16 20:57:16.929612 master-0 kubenswrapper[7926]: I0216 20:57:16.928365 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-kdb9d" podStartSLOduration=3.081180767 podStartE2EDuration="5.928343229s" podCreationTimestamp="2026-02-16 20:57:11 +0000 UTC" firstStartedPulling="2026-02-16 20:57:12.381629953 +0000 UTC m=+4.016530253" lastFinishedPulling="2026-02-16 20:57:15.228792415 +0000 UTC m=+6.863692715" observedRunningTime="2026-02-16 20:57:16.9229364 +0000 UTC m=+8.557836700" watchObservedRunningTime="2026-02-16 20:57:16.928343229 +0000 UTC m=+8.563243529" Feb 16 20:57:16.938684 master-0 kubenswrapper[7926]: I0216 20:57:16.936827 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:57:16.945756 master-0 kubenswrapper[7926]: I0216 20:57:16.945152 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs" Feb 16 20:57:16.964764 master-0 kubenswrapper[7926]: I0216 20:57:16.962523 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" podStartSLOduration=2.343874462 podStartE2EDuration="4.962500609s" podCreationTimestamp="2026-02-16 20:57:12 +0000 UTC" firstStartedPulling="2026-02-16 20:57:13.194764795 +0000 UTC m=+4.829665095" lastFinishedPulling="2026-02-16 20:57:15.813390942 +0000 UTC m=+7.448291242" observedRunningTime="2026-02-16 20:57:16.961466933 +0000 UTC m=+8.596367233" watchObservedRunningTime="2026-02-16 20:57:16.962500609 +0000 UTC m=+8.597400909" Feb 16 20:57:17.011035 master-0 kubenswrapper[7926]: I0216 20:57:17.010626 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" podStartSLOduration=4.010604488 podStartE2EDuration="4.010604488s" podCreationTimestamp="2026-02-16 20:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:57:16.9870136 +0000 UTC m=+8.621913900" watchObservedRunningTime="2026-02-16 20:57:17.010604488 +0000 UTC m=+8.645504788" Feb 16 20:57:17.031490 master-0 kubenswrapper[7926]: I0216 20:57:17.031399 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-config\") pod \"28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd\" (UID: \"28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd\") " Feb 16 20:57:17.031490 master-0 kubenswrapper[7926]: I0216 20:57:17.031481 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-proxy-ca-bundles\") pod 
\"28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd\" (UID: \"28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd\") " Feb 16 20:57:17.031821 master-0 kubenswrapper[7926]: I0216 20:57:17.031526 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lw5v\" (UniqueName: \"kubernetes.io/projected/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-kube-api-access-7lw5v\") pod \"28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd\" (UID: \"28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd\") " Feb 16 20:57:17.031998 master-0 kubenswrapper[7926]: I0216 20:57:17.031899 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-config" (OuterVolumeSpecName: "config") pod "28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd" (UID: "28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:57:17.033396 master-0 kubenswrapper[7926]: I0216 20:57:17.033170 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd" (UID: "28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:57:17.035818 master-0 kubenswrapper[7926]: I0216 20:57:17.035751 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-kube-api-access-7lw5v" (OuterVolumeSpecName: "kube-api-access-7lw5v") pod "28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd" (UID: "28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd"). InnerVolumeSpecName "kube-api-access-7lw5v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:57:17.132785 master-0 kubenswrapper[7926]: I0216 20:57:17.132720 7926 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:17.132785 master-0 kubenswrapper[7926]: I0216 20:57:17.132753 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lw5v\" (UniqueName: \"kubernetes.io/projected/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-kube-api-access-7lw5v\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:17.132785 master-0 kubenswrapper[7926]: I0216 20:57:17.132763 7926 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-config\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:17.336234 master-0 kubenswrapper[7926]: I0216 20:57:17.336145 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:57:17.336855 master-0 kubenswrapper[7926]: I0216 20:57:17.336803 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 20:57:17.336921 master-0 kubenswrapper[7926]: I0216 20:57:17.336879 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" Feb 16 20:57:17.337562 master-0 kubenswrapper[7926]: I0216 20:57:17.337519 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs\") pod \"network-metrics-daemon-42bw7\" (UID: \"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:57:17.337650 master-0 kubenswrapper[7926]: I0216 20:57:17.337573 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-z46jt\" (UID: \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" Feb 16 20:57:17.337650 master-0 kubenswrapper[7926]: E0216 20:57:17.337576 7926 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 16 20:57:17.337729 master-0 kubenswrapper[7926]: E0216 20:57:17.337692 7926 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 16 20:57:17.337729 master-0 kubenswrapper[7926]: E0216 20:57:17.337713 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert podName:a4c9b781-14c0-469c-bb9e-0c3982a04520 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:25.337691662 +0000 UTC m=+16.972592052 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert") pod "olm-operator-6b56bd877c-vlhvq" (UID: "a4c9b781-14c0-469c-bb9e-0c3982a04520") : secret "olm-operator-serving-cert" not found Feb 16 20:57:17.337811 master-0 kubenswrapper[7926]: I0216 20:57:17.337625 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" Feb 16 20:57:17.337811 master-0 kubenswrapper[7926]: E0216 20:57:17.337751 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls podName:9e0227bc-63f5-48be-95dc-1323a2b2e327 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:25.337733893 +0000 UTC m=+16.972634283 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-4gczb" (UID: "9e0227bc-63f5-48be-95dc-1323a2b2e327") : secret "image-registry-operator-tls" not found Feb 16 20:57:17.337811 master-0 kubenswrapper[7926]: I0216 20:57:17.337794 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn" Feb 16 20:57:17.337905 master-0 kubenswrapper[7926]: I0216 20:57:17.337835 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 20:57:17.337905 master-0 kubenswrapper[7926]: I0216 20:57:17.337863 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls\") pod \"dns-operator-86b8869b79-cdltb\" (UID: \"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd\") " pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb" Feb 16 20:57:17.337965 master-0 kubenswrapper[7926]: I0216 20:57:17.337926 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert\") pod \"catalog-operator-588944557d-h7xl6\" (UID: 
\"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" Feb 16 20:57:17.337965 master-0 kubenswrapper[7926]: I0216 20:57:17.337954 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 20:57:17.338016 master-0 kubenswrapper[7926]: I0216 20:57:17.337974 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-9m94g\" (UID: \"4b035e85-b2b0-4dee-bb86-3465fc4b98a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 20:57:17.338016 master-0 kubenswrapper[7926]: I0216 20:57:17.337998 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 20:57:17.338185 master-0 kubenswrapper[7926]: E0216 20:57:17.337800 7926 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 16 20:57:17.338235 master-0 kubenswrapper[7926]: E0216 20:57:17.338192 7926 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 20:57:17.338295 master-0 kubenswrapper[7926]: E0216 20:57:17.338245 7926 
secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 20:57:17.338295 master-0 kubenswrapper[7926]: E0216 20:57:17.338193 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs podName:b27de289-c0f9-47ff-aac6-15b7bc1b178a nodeName:}" failed. No retries permitted until 2026-02-16 20:57:25.338185686 +0000 UTC m=+16.973085986 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs") pod "multus-admission-controller-7c64d55f8-z46jt" (UID: "b27de289-c0f9-47ff-aac6-15b7bc1b178a") : secret "multus-admission-controller-secret" not found Feb 16 20:57:17.338295 master-0 kubenswrapper[7926]: E0216 20:57:17.338278 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics podName:b28234d1-1d9a-4d9f-9ad1-e3c682bed492 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:25.338270258 +0000 UTC m=+16.973170558 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-6rmhq" (UID: "b28234d1-1d9a-4d9f-9ad1-e3c682bed492") : secret "marketplace-operator-metrics" not found Feb 16 20:57:17.338295 master-0 kubenswrapper[7926]: E0216 20:57:17.338287 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert podName:4b035e85-b2b0-4dee-bb86-3465fc4b98a8 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:25.338282668 +0000 UTC m=+16.973183098 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-9m94g" (UID: "4b035e85-b2b0-4dee-bb86-3465fc4b98a8") : secret "package-server-manager-serving-cert" not found Feb 16 20:57:17.338295 master-0 kubenswrapper[7926]: E0216 20:57:17.338108 7926 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 16 20:57:17.338537 master-0 kubenswrapper[7926]: E0216 20:57:17.338309 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs podName:1d453639-52ed-4a14-a2ee-02cf9acc2f7c nodeName:}" failed. No retries permitted until 2026-02-16 20:57:25.338304709 +0000 UTC m=+16.973205009 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs") pod "network-metrics-daemon-42bw7" (UID: "1d453639-52ed-4a14-a2ee-02cf9acc2f7c") : secret "metrics-daemon-secret" not found Feb 16 20:57:17.338537 master-0 kubenswrapper[7926]: E0216 20:57:17.338136 7926 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 20:57:17.338537 master-0 kubenswrapper[7926]: E0216 20:57:17.338332 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls podName:ec7dd4ea-a139-45d4-96a4-506da1567292 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:25.338327329 +0000 UTC m=+16.973227629 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-w57zn" (UID: "ec7dd4ea-a139-45d4-96a4-506da1567292") : secret "cluster-monitoring-operator-tls" not found Feb 16 20:57:17.338537 master-0 kubenswrapper[7926]: E0216 20:57:17.338155 7926 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 16 20:57:17.338537 master-0 kubenswrapper[7926]: E0216 20:57:17.338354 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls podName:456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd nodeName:}" failed. No retries permitted until 2026-02-16 20:57:25.33835007 +0000 UTC m=+16.973250370 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls") pod "dns-operator-86b8869b79-cdltb" (UID: "456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd") : secret "metrics-tls" not found Feb 16 20:57:17.338537 master-0 kubenswrapper[7926]: E0216 20:57:17.338165 7926 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 16 20:57:17.338537 master-0 kubenswrapper[7926]: E0216 20:57:17.338372 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert podName:2e618c5c-52be-4b52-b426-b92555dee9de nodeName:}" failed. No retries permitted until 2026-02-16 20:57:25.33836874 +0000 UTC m=+16.973269040 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert") pod "catalog-operator-588944557d-h7xl6" (UID: "2e618c5c-52be-4b52-b426-b92555dee9de") : secret "catalog-operator-serving-cert" not found Feb 16 20:57:17.338537 master-0 kubenswrapper[7926]: E0216 20:57:17.338082 7926 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 16 20:57:17.338537 master-0 kubenswrapper[7926]: E0216 20:57:17.338418 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls podName:cef33294-81fb-41a2-811d-2565f94514d1 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:25.338391411 +0000 UTC m=+16.973291711 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls") pod "ingress-operator-c588d8cb4-6ps2d" (UID: "cef33294-81fb-41a2-811d-2565f94514d1") : secret "metrics-tls" not found Feb 16 20:57:17.341289 master-0 kubenswrapper[7926]: I0216 20:57:17.339990 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert\") pod \"cluster-version-operator-76959b6567-7jlsw\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:57:17.341289 master-0 kubenswrapper[7926]: I0216 20:57:17.341035 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 
20:57:17.345387 master-0 kubenswrapper[7926]: I0216 20:57:17.345351 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 20:57:17.364569 master-0 kubenswrapper[7926]: I0216 20:57:17.364046 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:57:17.364569 master-0 kubenswrapper[7926]: I0216 20:57:17.364095 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 20:57:17.640958 master-0 kubenswrapper[7926]: I0216 20:57:17.640526 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4"] Feb 16 20:57:17.647509 master-0 kubenswrapper[7926]: I0216 20:57:17.647264 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-client-ca\") pod \"controller-manager-6bb489d9cc-dfbcs\" (UID: \"28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd\") " pod="openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs" Feb 16 20:57:17.647509 master-0 kubenswrapper[7926]: E0216 20:57:17.647426 7926 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 20:57:17.647509 master-0 kubenswrapper[7926]: E0216 20:57:17.647512 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-client-ca 
podName:28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd nodeName:}" failed. No retries permitted until 2026-02-16 20:57:19.647490403 +0000 UTC m=+11.282390703 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-client-ca") pod "controller-manager-6bb489d9cc-dfbcs" (UID: "28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd") : configmap "client-ca" not found
Feb 16 20:57:17.648382 master-0 kubenswrapper[7926]: I0216 20:57:17.647563 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-serving-cert\") pod \"controller-manager-6bb489d9cc-dfbcs\" (UID: \"28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd\") " pod="openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs"
Feb 16 20:57:17.648382 master-0 kubenswrapper[7926]: E0216 20:57:17.647779 7926 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Feb 16 20:57:17.648382 master-0 kubenswrapper[7926]: E0216 20:57:17.647858 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-serving-cert podName:28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd nodeName:}" failed. No retries permitted until 2026-02-16 20:57:19.647834361 +0000 UTC m=+11.282734771 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-serving-cert") pod "controller-manager-6bb489d9cc-dfbcs" (UID: "28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd") : secret "serving-cert" not found
Feb 16 20:57:17.919319 master-0 kubenswrapper[7926]: I0216 20:57:17.919177 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" event={"ID":"2506c282-0b37-4ece-8a0c-885d0b7f7901","Type":"ContainerStarted","Data":"0c4934055dbc002aad718ae831c2d636c9e3bd49545da85cae7eace9dea452ac"}
Feb 16 20:57:17.920426 master-0 kubenswrapper[7926]: I0216 20:57:17.920288 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" event={"ID":"3a012b98-9341-41a3-9321-0a099f8bb9da","Type":"ContainerStarted","Data":"967086e3afdf48136bba09dec7a50552d530cabf996c944b7aa7f47f1a0f30ff"}
Feb 16 20:57:17.920426 master-0 kubenswrapper[7926]: I0216 20:57:17.920381 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs"
Feb 16 20:57:17.963976 master-0 kubenswrapper[7926]: I0216 20:57:17.963881 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7585c94cb9-9n49k"]
Feb 16 20:57:17.965535 master-0 kubenswrapper[7926]: I0216 20:57:17.964662 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k"
Feb 16 20:57:17.966964 master-0 kubenswrapper[7926]: I0216 20:57:17.966943 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 16 20:57:17.967086 master-0 kubenswrapper[7926]: I0216 20:57:17.966971 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 16 20:57:17.967211 master-0 kubenswrapper[7926]: I0216 20:57:17.967011 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 16 20:57:17.967307 master-0 kubenswrapper[7926]: I0216 20:57:17.967016 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 16 20:57:17.967802 master-0 kubenswrapper[7926]: I0216 20:57:17.967765 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs"]
Feb 16 20:57:17.968400 master-0 kubenswrapper[7926]: I0216 20:57:17.968248 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 16 20:57:17.978970 master-0 kubenswrapper[7926]: I0216 20:57:17.974473 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 20:57:17.978970 master-0 kubenswrapper[7926]: I0216 20:57:17.974947 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7585c94cb9-9n49k"]
Feb 16 20:57:17.978970 master-0 kubenswrapper[7926]: I0216 20:57:17.976281 7926 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs"]
Feb 16 20:57:18.053239 master-0 kubenswrapper[7926]: I0216 20:57:18.053096 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-proxy-ca-bundles\") pod \"controller-manager-7585c94cb9-9n49k\" (UID: \"d73c2079-07e1-4465-83eb-5d39a04baf7d\") " pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k"
Feb 16 20:57:18.053239 master-0 kubenswrapper[7926]: I0216 20:57:18.053156 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d73c2079-07e1-4465-83eb-5d39a04baf7d-serving-cert\") pod \"controller-manager-7585c94cb9-9n49k\" (UID: \"d73c2079-07e1-4465-83eb-5d39a04baf7d\") " pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k"
Feb 16 20:57:18.053239 master-0 kubenswrapper[7926]: I0216 20:57:18.053194 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-config\") pod \"controller-manager-7585c94cb9-9n49k\" (UID: \"d73c2079-07e1-4465-83eb-5d39a04baf7d\") " pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k"
Feb 16 20:57:18.053606 master-0 kubenswrapper[7926]: I0216 20:57:18.053372 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-client-ca\") pod \"controller-manager-7585c94cb9-9n49k\" (UID: \"d73c2079-07e1-4465-83eb-5d39a04baf7d\") " pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k"
Feb 16 20:57:18.053606 master-0 kubenswrapper[7926]: I0216 20:57:18.053428 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scd4p\" (UniqueName: \"kubernetes.io/projected/d73c2079-07e1-4465-83eb-5d39a04baf7d-kube-api-access-scd4p\") pod \"controller-manager-7585c94cb9-9n49k\" (UID: \"d73c2079-07e1-4465-83eb-5d39a04baf7d\") " pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k"
Feb 16 20:57:18.053606 master-0 kubenswrapper[7926]: I0216 20:57:18.053493 7926 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 16 20:57:18.053606 master-0 kubenswrapper[7926]: I0216 20:57:18.053506 7926 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 16 20:57:18.154830 master-0 kubenswrapper[7926]: I0216 20:57:18.154762 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-proxy-ca-bundles\") pod \"controller-manager-7585c94cb9-9n49k\" (UID: \"d73c2079-07e1-4465-83eb-5d39a04baf7d\") " pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k"
Feb 16 20:57:18.156222 master-0 kubenswrapper[7926]: I0216 20:57:18.156176 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-proxy-ca-bundles\") pod \"controller-manager-7585c94cb9-9n49k\" (UID: \"d73c2079-07e1-4465-83eb-5d39a04baf7d\") " pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k"
Feb 16 20:57:18.156292 master-0 kubenswrapper[7926]: I0216 20:57:18.156253 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d73c2079-07e1-4465-83eb-5d39a04baf7d-serving-cert\") pod \"controller-manager-7585c94cb9-9n49k\" (UID: \"d73c2079-07e1-4465-83eb-5d39a04baf7d\") " pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k"
Feb 16 20:57:18.156359 master-0 kubenswrapper[7926]: I0216 20:57:18.156323 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-config\") pod \"controller-manager-7585c94cb9-9n49k\" (UID: \"d73c2079-07e1-4465-83eb-5d39a04baf7d\") " pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k"
Feb 16 20:57:18.156402 master-0 kubenswrapper[7926]: I0216 20:57:18.156390 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-client-ca\") pod \"controller-manager-7585c94cb9-9n49k\" (UID: \"d73c2079-07e1-4465-83eb-5d39a04baf7d\") " pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k"
Feb 16 20:57:18.156470 master-0 kubenswrapper[7926]: I0216 20:57:18.156437 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scd4p\" (UniqueName: \"kubernetes.io/projected/d73c2079-07e1-4465-83eb-5d39a04baf7d-kube-api-access-scd4p\") pod \"controller-manager-7585c94cb9-9n49k\" (UID: \"d73c2079-07e1-4465-83eb-5d39a04baf7d\") " pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k"
Feb 16 20:57:18.156909 master-0 kubenswrapper[7926]: E0216 20:57:18.156879 7926 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Feb 16 20:57:18.156956 master-0 kubenswrapper[7926]: E0216 20:57:18.156933 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d73c2079-07e1-4465-83eb-5d39a04baf7d-serving-cert podName:d73c2079-07e1-4465-83eb-5d39a04baf7d nodeName:}" failed. No retries permitted until 2026-02-16 20:57:18.656918002 +0000 UTC m=+10.291818302 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d73c2079-07e1-4465-83eb-5d39a04baf7d-serving-cert") pod "controller-manager-7585c94cb9-9n49k" (UID: "d73c2079-07e1-4465-83eb-5d39a04baf7d") : secret "serving-cert" not found
Feb 16 20:57:18.159389 master-0 kubenswrapper[7926]: E0216 20:57:18.159358 7926 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 16 20:57:18.159446 master-0 kubenswrapper[7926]: E0216 20:57:18.159403 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-client-ca podName:d73c2079-07e1-4465-83eb-5d39a04baf7d nodeName:}" failed. No retries permitted until 2026-02-16 20:57:18.659392886 +0000 UTC m=+10.294293176 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-client-ca") pod "controller-manager-7585c94cb9-9n49k" (UID: "d73c2079-07e1-4465-83eb-5d39a04baf7d") : configmap "client-ca" not found
Feb 16 20:57:18.159591 master-0 kubenswrapper[7926]: I0216 20:57:18.159559 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-config\") pod \"controller-manager-7585c94cb9-9n49k\" (UID: \"d73c2079-07e1-4465-83eb-5d39a04baf7d\") " pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k"
Feb 16 20:57:18.178848 master-0 kubenswrapper[7926]: I0216 20:57:18.178777 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scd4p\" (UniqueName: \"kubernetes.io/projected/d73c2079-07e1-4465-83eb-5d39a04baf7d-kube-api-access-scd4p\") pod \"controller-manager-7585c94cb9-9n49k\" (UID: \"d73c2079-07e1-4465-83eb-5d39a04baf7d\") " pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k"
Feb 16 20:57:18.662111 master-0 kubenswrapper[7926]: I0216 20:57:18.662051 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-client-ca\") pod \"controller-manager-7585c94cb9-9n49k\" (UID: \"d73c2079-07e1-4465-83eb-5d39a04baf7d\") " pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k"
Feb 16 20:57:18.662409 master-0 kubenswrapper[7926]: E0216 20:57:18.662332 7926 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 16 20:57:18.662525 master-0 kubenswrapper[7926]: I0216 20:57:18.662464 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d73c2079-07e1-4465-83eb-5d39a04baf7d-serving-cert\") pod \"controller-manager-7585c94cb9-9n49k\" (UID: \"d73c2079-07e1-4465-83eb-5d39a04baf7d\") " pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k"
Feb 16 20:57:18.662525 master-0 kubenswrapper[7926]: E0216 20:57:18.662515 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-client-ca podName:d73c2079-07e1-4465-83eb-5d39a04baf7d nodeName:}" failed. No retries permitted until 2026-02-16 20:57:19.662472564 +0000 UTC m=+11.297372864 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-client-ca") pod "controller-manager-7585c94cb9-9n49k" (UID: "d73c2079-07e1-4465-83eb-5d39a04baf7d") : configmap "client-ca" not found
Feb 16 20:57:18.662774 master-0 kubenswrapper[7926]: E0216 20:57:18.662746 7926 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Feb 16 20:57:18.662853 master-0 kubenswrapper[7926]: E0216 20:57:18.662834 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d73c2079-07e1-4465-83eb-5d39a04baf7d-serving-cert podName:d73c2079-07e1-4465-83eb-5d39a04baf7d nodeName:}" failed. No retries permitted until 2026-02-16 20:57:19.662809823 +0000 UTC m=+11.297710113 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d73c2079-07e1-4465-83eb-5d39a04baf7d-serving-cert") pod "controller-manager-7585c94cb9-9n49k" (UID: "d73c2079-07e1-4465-83eb-5d39a04baf7d") : secret "serving-cert" not found
Feb 16 20:57:18.746783 master-0 kubenswrapper[7926]: I0216 20:57:18.746726 7926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd" path="/var/lib/kubelet/pods/28a9ca76-2004-4f18-8c9d-54b1b9b4f1cd/volumes"
Feb 16 20:57:18.821416 master-0 kubenswrapper[7926]: I0216 20:57:18.821202 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2"]
Feb 16 20:57:18.821940 master-0 kubenswrapper[7926]: I0216 20:57:18.821801 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2"
Feb 16 20:57:18.824787 master-0 kubenswrapper[7926]: I0216 20:57:18.824162 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 16 20:57:18.824787 master-0 kubenswrapper[7926]: I0216 20:57:18.824460 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 20:57:18.824787 master-0 kubenswrapper[7926]: I0216 20:57:18.824623 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 20:57:18.824935 master-0 kubenswrapper[7926]: I0216 20:57:18.824792 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 20:57:18.827584 master-0 kubenswrapper[7926]: I0216 20:57:18.827546 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 20:57:18.843148 master-0 kubenswrapper[7926]: I0216 20:57:18.843088 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2"]
Feb 16 20:57:18.864601 master-0 kubenswrapper[7926]: I0216 20:57:18.864534 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6701f93-e666-4aaf-b1b4-b4464c586a24-serving-cert\") pod \"route-controller-manager-599565c7b6-fsxd2\" (UID: \"e6701f93-e666-4aaf-b1b4-b4464c586a24\") " pod="openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2"
Feb 16 20:57:18.864601 master-0 kubenswrapper[7926]: I0216 20:57:18.864604 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6701f93-e666-4aaf-b1b4-b4464c586a24-config\") pod \"route-controller-manager-599565c7b6-fsxd2\" (UID: \"e6701f93-e666-4aaf-b1b4-b4464c586a24\") " pod="openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2"
Feb 16 20:57:18.864858 master-0 kubenswrapper[7926]: I0216 20:57:18.864692 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4k59\" (UniqueName: \"kubernetes.io/projected/e6701f93-e666-4aaf-b1b4-b4464c586a24-kube-api-access-l4k59\") pod \"route-controller-manager-599565c7b6-fsxd2\" (UID: \"e6701f93-e666-4aaf-b1b4-b4464c586a24\") " pod="openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2"
Feb 16 20:57:18.865036 master-0 kubenswrapper[7926]: I0216 20:57:18.864997 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6701f93-e666-4aaf-b1b4-b4464c586a24-client-ca\") pod \"route-controller-manager-599565c7b6-fsxd2\" (UID: \"e6701f93-e666-4aaf-b1b4-b4464c586a24\") " pod="openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2"
Feb 16 20:57:18.965704 master-0 kubenswrapper[7926]: I0216 20:57:18.965658 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6701f93-e666-4aaf-b1b4-b4464c586a24-client-ca\") pod \"route-controller-manager-599565c7b6-fsxd2\" (UID: \"e6701f93-e666-4aaf-b1b4-b4464c586a24\") " pod="openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2"
Feb 16 20:57:18.966235 master-0 kubenswrapper[7926]: I0216 20:57:18.965746 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6701f93-e666-4aaf-b1b4-b4464c586a24-serving-cert\") pod \"route-controller-manager-599565c7b6-fsxd2\" (UID: \"e6701f93-e666-4aaf-b1b4-b4464c586a24\") " pod="openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2"
Feb 16 20:57:18.966235 master-0 kubenswrapper[7926]: I0216 20:57:18.965772 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6701f93-e666-4aaf-b1b4-b4464c586a24-config\") pod \"route-controller-manager-599565c7b6-fsxd2\" (UID: \"e6701f93-e666-4aaf-b1b4-b4464c586a24\") " pod="openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2"
Feb 16 20:57:18.966235 master-0 kubenswrapper[7926]: I0216 20:57:18.965794 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4k59\" (UniqueName: \"kubernetes.io/projected/e6701f93-e666-4aaf-b1b4-b4464c586a24-kube-api-access-l4k59\") pod \"route-controller-manager-599565c7b6-fsxd2\" (UID: \"e6701f93-e666-4aaf-b1b4-b4464c586a24\") " pod="openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2"
Feb 16 20:57:18.966235 master-0 kubenswrapper[7926]: E0216 20:57:18.965849 7926 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Feb 16 20:57:18.966235 master-0 kubenswrapper[7926]: E0216 20:57:18.965907 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e6701f93-e666-4aaf-b1b4-b4464c586a24-client-ca podName:e6701f93-e666-4aaf-b1b4-b4464c586a24 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:19.465891909 +0000 UTC m=+11.100792209 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e6701f93-e666-4aaf-b1b4-b4464c586a24-client-ca") pod "route-controller-manager-599565c7b6-fsxd2" (UID: "e6701f93-e666-4aaf-b1b4-b4464c586a24") : configmap "client-ca" not found
Feb 16 20:57:18.966235 master-0 kubenswrapper[7926]: E0216 20:57:18.966020 7926 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Feb 16 20:57:18.966235 master-0 kubenswrapper[7926]: E0216 20:57:18.966057 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6701f93-e666-4aaf-b1b4-b4464c586a24-serving-cert podName:e6701f93-e666-4aaf-b1b4-b4464c586a24 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:19.466049273 +0000 UTC m=+11.100949563 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e6701f93-e666-4aaf-b1b4-b4464c586a24-serving-cert") pod "route-controller-manager-599565c7b6-fsxd2" (UID: "e6701f93-e666-4aaf-b1b4-b4464c586a24") : secret "serving-cert" not found
Feb 16 20:57:18.966942 master-0 kubenswrapper[7926]: I0216 20:57:18.966914 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6701f93-e666-4aaf-b1b4-b4464c586a24-config\") pod \"route-controller-manager-599565c7b6-fsxd2\" (UID: \"e6701f93-e666-4aaf-b1b4-b4464c586a24\") " pod="openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2"
Feb 16 20:57:18.996310 master-0 kubenswrapper[7926]: I0216 20:57:18.996207 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4k59\" (UniqueName: \"kubernetes.io/projected/e6701f93-e666-4aaf-b1b4-b4464c586a24-kube-api-access-l4k59\") pod \"route-controller-manager-599565c7b6-fsxd2\" (UID: \"e6701f93-e666-4aaf-b1b4-b4464c586a24\") " pod="openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2"
Feb 16 20:57:18.997648 master-0 kubenswrapper[7926]: I0216 20:57:18.997065 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-68c25"
Feb 16 20:57:19.470897 master-0 kubenswrapper[7926]: I0216 20:57:19.470515 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6701f93-e666-4aaf-b1b4-b4464c586a24-client-ca\") pod \"route-controller-manager-599565c7b6-fsxd2\" (UID: \"e6701f93-e666-4aaf-b1b4-b4464c586a24\") " pod="openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2"
Feb 16 20:57:19.471218 master-0 kubenswrapper[7926]: I0216 20:57:19.470921 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6701f93-e666-4aaf-b1b4-b4464c586a24-serving-cert\") pod \"route-controller-manager-599565c7b6-fsxd2\" (UID: \"e6701f93-e666-4aaf-b1b4-b4464c586a24\") " pod="openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2"
Feb 16 20:57:19.471218 master-0 kubenswrapper[7926]: E0216 20:57:19.470666 7926 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Feb 16 20:57:19.471218 master-0 kubenswrapper[7926]: E0216 20:57:19.471031 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e6701f93-e666-4aaf-b1b4-b4464c586a24-client-ca podName:e6701f93-e666-4aaf-b1b4-b4464c586a24 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:20.471009729 +0000 UTC m=+12.105910029 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e6701f93-e666-4aaf-b1b4-b4464c586a24-client-ca") pod "route-controller-manager-599565c7b6-fsxd2" (UID: "e6701f93-e666-4aaf-b1b4-b4464c586a24") : configmap "client-ca" not found
Feb 16 20:57:19.471218 master-0 kubenswrapper[7926]: E0216 20:57:19.471040 7926 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Feb 16 20:57:19.471218 master-0 kubenswrapper[7926]: E0216 20:57:19.471104 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6701f93-e666-4aaf-b1b4-b4464c586a24-serving-cert podName:e6701f93-e666-4aaf-b1b4-b4464c586a24 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:20.471088011 +0000 UTC m=+12.105988311 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e6701f93-e666-4aaf-b1b4-b4464c586a24-serving-cert") pod "route-controller-manager-599565c7b6-fsxd2" (UID: "e6701f93-e666-4aaf-b1b4-b4464c586a24") : secret "serving-cert" not found
Feb 16 20:57:19.674118 master-0 kubenswrapper[7926]: I0216 20:57:19.674013 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d73c2079-07e1-4465-83eb-5d39a04baf7d-serving-cert\") pod \"controller-manager-7585c94cb9-9n49k\" (UID: \"d73c2079-07e1-4465-83eb-5d39a04baf7d\") " pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k"
Feb 16 20:57:19.674438 master-0 kubenswrapper[7926]: I0216 20:57:19.674268 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-client-ca\") pod \"controller-manager-7585c94cb9-9n49k\" (UID: \"d73c2079-07e1-4465-83eb-5d39a04baf7d\") " pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k"
Feb 16 20:57:19.674438 master-0 kubenswrapper[7926]: E0216 20:57:19.674352 7926 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 16 20:57:19.674438 master-0 kubenswrapper[7926]: E0216 20:57:19.674416 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-client-ca podName:d73c2079-07e1-4465-83eb-5d39a04baf7d nodeName:}" failed. No retries permitted until 2026-02-16 20:57:21.674398135 +0000 UTC m=+13.309298435 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-client-ca") pod "controller-manager-7585c94cb9-9n49k" (UID: "d73c2079-07e1-4465-83eb-5d39a04baf7d") : configmap "client-ca" not found
Feb 16 20:57:19.681480 master-0 kubenswrapper[7926]: I0216 20:57:19.681394 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d73c2079-07e1-4465-83eb-5d39a04baf7d-serving-cert\") pod \"controller-manager-7585c94cb9-9n49k\" (UID: \"d73c2079-07e1-4465-83eb-5d39a04baf7d\") " pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k"
Feb 16 20:57:19.946405 master-0 kubenswrapper[7926]: I0216 20:57:19.942528 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" event={"ID":"5e062e07-8076-444c-b476-4eb2848e9613","Type":"ContainerStarted","Data":"ee117aab23c2955afe2d46ebc740378a94898d9f452c30c51846fd6b5013569e"}
Feb 16 20:57:20.490635 master-0 kubenswrapper[7926]: I0216 20:57:20.490453 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6701f93-e666-4aaf-b1b4-b4464c586a24-client-ca\") pod \"route-controller-manager-599565c7b6-fsxd2\" (UID: \"e6701f93-e666-4aaf-b1b4-b4464c586a24\") " pod="openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2"
Feb 16 20:57:20.491619 master-0 kubenswrapper[7926]: I0216 20:57:20.490812 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6701f93-e666-4aaf-b1b4-b4464c586a24-serving-cert\") pod \"route-controller-manager-599565c7b6-fsxd2\" (UID: \"e6701f93-e666-4aaf-b1b4-b4464c586a24\") " pod="openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2"
Feb 16 20:57:20.491816 master-0 kubenswrapper[7926]: E0216 20:57:20.491439 7926 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Feb 16 20:57:20.491911 master-0 kubenswrapper[7926]: E0216 20:57:20.491884 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e6701f93-e666-4aaf-b1b4-b4464c586a24-client-ca podName:e6701f93-e666-4aaf-b1b4-b4464c586a24 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:22.491850849 +0000 UTC m=+14.126751149 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e6701f93-e666-4aaf-b1b4-b4464c586a24-client-ca") pod "route-controller-manager-599565c7b6-fsxd2" (UID: "e6701f93-e666-4aaf-b1b4-b4464c586a24") : configmap "client-ca" not found
Feb 16 20:57:20.493564 master-0 kubenswrapper[7926]: E0216 20:57:20.492516 7926 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Feb 16 20:57:20.493564 master-0 kubenswrapper[7926]: E0216 20:57:20.492571 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6701f93-e666-4aaf-b1b4-b4464c586a24-serving-cert podName:e6701f93-e666-4aaf-b1b4-b4464c586a24 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:22.4925601 +0000 UTC m=+14.127460400 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e6701f93-e666-4aaf-b1b4-b4464c586a24-serving-cert") pod "route-controller-manager-599565c7b6-fsxd2" (UID: "e6701f93-e666-4aaf-b1b4-b4464c586a24") : secret "serving-cert" not found
Feb 16 20:57:20.948369 master-0 kubenswrapper[7926]: I0216 20:57:20.948298 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" event={"ID":"3a012b98-9341-41a3-9321-0a099f8bb9da","Type":"ContainerStarted","Data":"de2563beb136a0bbda40935e3b66cf97e3510fb56b6c3e8e1dcda8f4301e2a47"}
Feb 16 20:57:21.712267 master-0 kubenswrapper[7926]: I0216 20:57:21.710587 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-client-ca\") pod \"controller-manager-7585c94cb9-9n49k\" (UID: \"d73c2079-07e1-4465-83eb-5d39a04baf7d\") " pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k"
Feb 16 20:57:21.712267 master-0 kubenswrapper[7926]: E0216 20:57:21.710753 7926 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 16 20:57:21.712267 master-0 kubenswrapper[7926]: E0216 20:57:21.710804 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-client-ca podName:d73c2079-07e1-4465-83eb-5d39a04baf7d nodeName:}" failed. No retries permitted until 2026-02-16 20:57:25.710789783 +0000 UTC m=+17.345690083 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-client-ca") pod "controller-manager-7585c94cb9-9n49k" (UID: "d73c2079-07e1-4465-83eb-5d39a04baf7d") : configmap "client-ca" not found
Feb 16 20:57:22.521681 master-0 kubenswrapper[7926]: I0216 20:57:22.518723 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6701f93-e666-4aaf-b1b4-b4464c586a24-serving-cert\") pod \"route-controller-manager-599565c7b6-fsxd2\" (UID: \"e6701f93-e666-4aaf-b1b4-b4464c586a24\") " pod="openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2"
Feb 16 20:57:22.521681 master-0 kubenswrapper[7926]: E0216 20:57:22.519061 7926 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Feb 16 20:57:22.521681 master-0 kubenswrapper[7926]: E0216 20:57:22.519129 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6701f93-e666-4aaf-b1b4-b4464c586a24-serving-cert podName:e6701f93-e666-4aaf-b1b4-b4464c586a24 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:26.519111815 +0000 UTC m=+18.154012115 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e6701f93-e666-4aaf-b1b4-b4464c586a24-serving-cert") pod "route-controller-manager-599565c7b6-fsxd2" (UID: "e6701f93-e666-4aaf-b1b4-b4464c586a24") : secret "serving-cert" not found
Feb 16 20:57:22.521681 master-0 kubenswrapper[7926]: I0216 20:57:22.519133 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6701f93-e666-4aaf-b1b4-b4464c586a24-client-ca\") pod \"route-controller-manager-599565c7b6-fsxd2\" (UID: \"e6701f93-e666-4aaf-b1b4-b4464c586a24\") " pod="openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2"
Feb 16 20:57:22.521681 master-0 kubenswrapper[7926]: E0216 20:57:22.519384 7926 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Feb 16 20:57:22.521681 master-0 kubenswrapper[7926]: E0216 20:57:22.519478 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e6701f93-e666-4aaf-b1b4-b4464c586a24-client-ca podName:e6701f93-e666-4aaf-b1b4-b4464c586a24 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:26.519449524 +0000 UTC m=+18.154349824 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e6701f93-e666-4aaf-b1b4-b4464c586a24-client-ca") pod "route-controller-manager-599565c7b6-fsxd2" (UID: "e6701f93-e666-4aaf-b1b4-b4464c586a24") : configmap "client-ca" not found
Feb 16 20:57:23.894382 master-0 kubenswrapper[7926]: I0216 20:57:23.893956 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-64c454bc85-s4b86"]
Feb 16 20:57:23.895637 master-0 kubenswrapper[7926]: I0216 20:57:23.895226 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-64c454bc85-s4b86"
Feb 16 20:57:23.898740 master-0 kubenswrapper[7926]: I0216 20:57:23.898691 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0"
Feb 16 20:57:23.898740 master-0 kubenswrapper[7926]: I0216 20:57:23.898731 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 16 20:57:23.898927 master-0 kubenswrapper[7926]: I0216 20:57:23.898892 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 16 20:57:23.899165 master-0 kubenswrapper[7926]: I0216 20:57:23.899118 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 16 20:57:23.899468 master-0 kubenswrapper[7926]: I0216 20:57:23.899443 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 16 20:57:23.900736 master-0 kubenswrapper[7926]: I0216 20:57:23.900544 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0"
Feb 16 20:57:23.901556 master-0 kubenswrapper[7926]: I0216 20:57:23.901519 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 16 20:57:23.901874 master-0 kubenswrapper[7926]: I0216 20:57:23.901835 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 16 20:57:23.904333 master-0 kubenswrapper[7926]: I0216 20:57:23.904305 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 16 20:57:23.909946 master-0 kubenswrapper[7926]: I0216 20:57:23.909858 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 16 20:57:23.916639 master-0 kubenswrapper[7926]: I0216 20:57:23.916580 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-64c454bc85-s4b86"]
Feb 16 20:57:23.934847 master-0 kubenswrapper[7926]: I0216 20:57:23.934787 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-etcd-serving-ca\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86"
Feb 16 20:57:23.934847 master-0 kubenswrapper[7926]: I0216 20:57:23.934835 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-audit-dir\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86"
Feb 16 20:57:23.934847 master-0 kubenswrapper[7926]: I0216 20:57:23.934856 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-etcd-client\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86"
Feb 16 20:57:23.935182 master-0 kubenswrapper[7926]: I0216 20:57:23.935071 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-node-pullsecrets\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86"
Feb 16 20:57:23.935182 master-0 kubenswrapper[7926]: I0216 20:57:23.935124 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pr8j\"
(UniqueName: \"kubernetes.io/projected/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-kube-api-access-5pr8j\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:23.935262 master-0 kubenswrapper[7926]: I0216 20:57:23.935249 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-encryption-config\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:23.935306 master-0 kubenswrapper[7926]: I0216 20:57:23.935274 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-config\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:23.935355 master-0 kubenswrapper[7926]: I0216 20:57:23.935325 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-trusted-ca-bundle\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:23.935439 master-0 kubenswrapper[7926]: I0216 20:57:23.935384 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-image-import-ca\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:23.935439 master-0 kubenswrapper[7926]: I0216 
20:57:23.935426 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-serving-cert\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:23.935502 master-0 kubenswrapper[7926]: I0216 20:57:23.935440 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-audit\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:23.977920 master-0 kubenswrapper[7926]: I0216 20:57:23.977871 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" event={"ID":"2506c282-0b37-4ece-8a0c-885d0b7f7901","Type":"ContainerStarted","Data":"24435a7f63a96b1a49a7d14efbc7fac8f5f69a776a662db4bff0a9f0d5933f6b"} Feb 16 20:57:24.037466 master-0 kubenswrapper[7926]: I0216 20:57:24.037021 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-encryption-config\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:24.037466 master-0 kubenswrapper[7926]: I0216 20:57:24.037062 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-config\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:24.038017 master-0 kubenswrapper[7926]: I0216 
20:57:24.037989 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-config\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:24.038138 master-0 kubenswrapper[7926]: I0216 20:57:24.038097 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-trusted-ca-bundle\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:24.038211 master-0 kubenswrapper[7926]: I0216 20:57:24.038184 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-image-import-ca\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:24.038250 master-0 kubenswrapper[7926]: I0216 20:57:24.038225 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-serving-cert\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:24.038250 master-0 kubenswrapper[7926]: I0216 20:57:24.038245 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-audit\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:24.038331 master-0 kubenswrapper[7926]: I0216 
20:57:24.038315 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-etcd-serving-ca\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:24.038383 master-0 kubenswrapper[7926]: E0216 20:57:24.038364 7926 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Feb 16 20:57:24.038433 master-0 kubenswrapper[7926]: E0216 20:57:24.038422 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-serving-cert podName:8dcb130d-f6cb-4bf9-99bd-e47adf285dd1 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:24.538406666 +0000 UTC m=+16.173306966 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-serving-cert") pod "apiserver-64c454bc85-s4b86" (UID: "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1") : secret "serving-cert" not found Feb 16 20:57:24.038471 master-0 kubenswrapper[7926]: I0216 20:57:24.038442 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-audit-dir\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:24.038471 master-0 kubenswrapper[7926]: I0216 20:57:24.038465 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-etcd-client\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:24.038552 
master-0 kubenswrapper[7926]: I0216 20:57:24.038523 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-node-pullsecrets\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:24.038552 master-0 kubenswrapper[7926]: I0216 20:57:24.038544 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pr8j\" (UniqueName: \"kubernetes.io/projected/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-kube-api-access-5pr8j\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:24.039024 master-0 kubenswrapper[7926]: I0216 20:57:24.038985 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-etcd-serving-ca\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:24.039024 master-0 kubenswrapper[7926]: I0216 20:57:24.038994 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-node-pullsecrets\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:24.039143 master-0 kubenswrapper[7926]: I0216 20:57:24.039054 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-audit-dir\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " 
pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:24.039143 master-0 kubenswrapper[7926]: E0216 20:57:24.039119 7926 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 16 20:57:24.039223 master-0 kubenswrapper[7926]: E0216 20:57:24.039160 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-audit podName:8dcb130d-f6cb-4bf9-99bd-e47adf285dd1 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:24.539146528 +0000 UTC m=+16.174046928 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-audit") pod "apiserver-64c454bc85-s4b86" (UID: "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1") : configmap "audit-0" not found Feb 16 20:57:24.039489 master-0 kubenswrapper[7926]: I0216 20:57:24.039452 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-trusted-ca-bundle\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:24.039949 master-0 kubenswrapper[7926]: I0216 20:57:24.039919 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-image-import-ca\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:24.043079 master-0 kubenswrapper[7926]: I0216 20:57:24.043039 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-encryption-config\") pod \"apiserver-64c454bc85-s4b86\" (UID: 
\"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:24.043149 master-0 kubenswrapper[7926]: I0216 20:57:24.043042 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-etcd-client\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:24.054546 master-0 kubenswrapper[7926]: I0216 20:57:24.054490 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pr8j\" (UniqueName: \"kubernetes.io/projected/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-kube-api-access-5pr8j\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:24.091487 master-0 kubenswrapper[7926]: I0216 20:57:24.091415 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-llsw4"] Feb 16 20:57:24.091967 master-0 kubenswrapper[7926]: I0216 20:57:24.091933 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.139776 master-0 kubenswrapper[7926]: I0216 20:57:24.139711 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-kubernetes\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.139776 master-0 kubenswrapper[7926]: I0216 20:57:24.139767 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-tuned\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.140024 master-0 kubenswrapper[7926]: I0216 20:57:24.139816 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-sysctl-d\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.140024 master-0 kubenswrapper[7926]: I0216 20:57:24.139834 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-host\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.140024 master-0 kubenswrapper[7926]: I0216 20:57:24.139852 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-modprobe-d\") pod 
\"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.140024 master-0 kubenswrapper[7926]: I0216 20:57:24.139869 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-systemd\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.140024 master-0 kubenswrapper[7926]: I0216 20:57:24.139884 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-tmp\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.140024 master-0 kubenswrapper[7926]: I0216 20:57:24.140013 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-var-lib-kubelet\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.140268 master-0 kubenswrapper[7926]: I0216 20:57:24.140049 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdx88\" (UniqueName: \"kubernetes.io/projected/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-kube-api-access-cdx88\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.140268 master-0 kubenswrapper[7926]: I0216 20:57:24.140077 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: 
\"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-sysconfig\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.140268 master-0 kubenswrapper[7926]: I0216 20:57:24.140156 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-run\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.140268 master-0 kubenswrapper[7926]: I0216 20:57:24.140193 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-sysctl-conf\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.140268 master-0 kubenswrapper[7926]: I0216 20:57:24.140221 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-lib-modules\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.140447 master-0 kubenswrapper[7926]: I0216 20:57:24.140271 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-sys\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.241850 master-0 kubenswrapper[7926]: I0216 20:57:24.241718 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-tmp\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.241850 master-0 kubenswrapper[7926]: I0216 20:57:24.241822 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-var-lib-kubelet\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.241850 master-0 kubenswrapper[7926]: I0216 20:57:24.241841 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdx88\" (UniqueName: \"kubernetes.io/projected/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-kube-api-access-cdx88\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.242091 master-0 kubenswrapper[7926]: I0216 20:57:24.241859 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-sysconfig\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.242125 master-0 kubenswrapper[7926]: I0216 20:57:24.242085 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-run\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.242925 master-0 kubenswrapper[7926]: I0216 20:57:24.242164 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: 
\"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-sysconfig\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.242925 master-0 kubenswrapper[7926]: I0216 20:57:24.242287 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-run\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.242925 master-0 kubenswrapper[7926]: I0216 20:57:24.242283 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-var-lib-kubelet\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.242925 master-0 kubenswrapper[7926]: I0216 20:57:24.242428 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-sysctl-conf\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.242925 master-0 kubenswrapper[7926]: I0216 20:57:24.242554 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-lib-modules\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.242925 master-0 kubenswrapper[7926]: I0216 20:57:24.242621 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-sys\") 
pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.242925 master-0 kubenswrapper[7926]: I0216 20:57:24.242674 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-sysctl-conf\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.242925 master-0 kubenswrapper[7926]: I0216 20:57:24.242698 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-kubernetes\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.242925 master-0 kubenswrapper[7926]: I0216 20:57:24.242723 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-lib-modules\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.242925 master-0 kubenswrapper[7926]: I0216 20:57:24.242782 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-sys\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.242925 master-0 kubenswrapper[7926]: I0216 20:57:24.242791 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-tuned\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " 
pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.242925 master-0 kubenswrapper[7926]: I0216 20:57:24.242894 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-sysctl-d\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.242925 master-0 kubenswrapper[7926]: I0216 20:57:24.242919 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-kubernetes\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.243287 master-0 kubenswrapper[7926]: I0216 20:57:24.242927 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-host\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.243287 master-0 kubenswrapper[7926]: I0216 20:57:24.242996 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-sysctl-d\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.243287 master-0 kubenswrapper[7926]: I0216 20:57:24.242964 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-host\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.243287 master-0 
kubenswrapper[7926]: I0216 20:57:24.243014 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-modprobe-d\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.243287 master-0 kubenswrapper[7926]: I0216 20:57:24.243065 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-systemd\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.243287 master-0 kubenswrapper[7926]: I0216 20:57:24.243136 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-modprobe-d\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.243287 master-0 kubenswrapper[7926]: I0216 20:57:24.243236 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-systemd\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.246000 master-0 kubenswrapper[7926]: I0216 20:57:24.245951 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-tmp\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.246056 master-0 kubenswrapper[7926]: I0216 20:57:24.245959 7926 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-tuned\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.259141 master-0 kubenswrapper[7926]: I0216 20:57:24.259114 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdx88\" (UniqueName: \"kubernetes.io/projected/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-kube-api-access-cdx88\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.421268 master-0 kubenswrapper[7926]: I0216 20:57:24.420871 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 20:57:24.435753 master-0 kubenswrapper[7926]: W0216 20:57:24.435690 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17aaf0e1_e9c7_486c_83fc_47d71f5e1f64.slice/crio-8dea330d1b36a07c27afdc45034426f3e213a02e1b037be44563d4a3b9efc359 WatchSource:0}: Error finding container 8dea330d1b36a07c27afdc45034426f3e213a02e1b037be44563d4a3b9efc359: Status 404 returned error can't find the container with id 8dea330d1b36a07c27afdc45034426f3e213a02e1b037be44563d4a3b9efc359 Feb 16 20:57:24.549080 master-0 kubenswrapper[7926]: I0216 20:57:24.549023 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-serving-cert\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:24.549167 master-0 kubenswrapper[7926]: I0216 20:57:24.549095 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" 
(UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-audit\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:24.549217 master-0 kubenswrapper[7926]: E0216 20:57:24.549201 7926 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 16 20:57:24.549312 master-0 kubenswrapper[7926]: E0216 20:57:24.549266 7926 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Feb 16 20:57:24.549385 master-0 kubenswrapper[7926]: E0216 20:57:24.549276 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-audit podName:8dcb130d-f6cb-4bf9-99bd-e47adf285dd1 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:25.549253903 +0000 UTC m=+17.184154213 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-audit") pod "apiserver-64c454bc85-s4b86" (UID: "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1") : configmap "audit-0" not found Feb 16 20:57:24.549439 master-0 kubenswrapper[7926]: E0216 20:57:24.549405 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-serving-cert podName:8dcb130d-f6cb-4bf9-99bd-e47adf285dd1 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:25.549377226 +0000 UTC m=+17.184277526 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-serving-cert") pod "apiserver-64c454bc85-s4b86" (UID: "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1") : secret "serving-cert" not found Feb 16 20:57:24.981557 master-0 kubenswrapper[7926]: I0216 20:57:24.981429 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-llsw4" event={"ID":"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64","Type":"ContainerStarted","Data":"5884bfcb6287b88109ccc8e0fa31ce71e568dd6b555e6cc855d0ca5064eb69cf"} Feb 16 20:57:24.981557 master-0 kubenswrapper[7926]: I0216 20:57:24.981480 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-llsw4" event={"ID":"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64","Type":"ContainerStarted","Data":"8dea330d1b36a07c27afdc45034426f3e213a02e1b037be44563d4a3b9efc359"} Feb 16 20:57:24.983770 master-0 kubenswrapper[7926]: I0216 20:57:24.983738 7926 generic.go:334] "Generic (PLEG): container finished" podID="5e062e07-8076-444c-b476-4eb2848e9613" containerID="ee117aab23c2955afe2d46ebc740378a94898d9f452c30c51846fd6b5013569e" exitCode=0 Feb 16 20:57:24.984092 master-0 kubenswrapper[7926]: I0216 20:57:24.984065 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" event={"ID":"5e062e07-8076-444c-b476-4eb2848e9613","Type":"ContainerDied","Data":"ee117aab23c2955afe2d46ebc740378a94898d9f452c30c51846fd6b5013569e"} Feb 16 20:57:24.984312 master-0 kubenswrapper[7926]: I0216 20:57:24.984287 7926 scope.go:117] "RemoveContainer" containerID="ee117aab23c2955afe2d46ebc740378a94898d9f452c30c51846fd6b5013569e" Feb 16 20:57:25.340611 master-0 kubenswrapper[7926]: I0216 20:57:25.340518 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-llsw4" podStartSLOduration=1.340497467 
podStartE2EDuration="1.340497467s" podCreationTimestamp="2026-02-16 20:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:57:25.340022174 +0000 UTC m=+16.974922474" watchObservedRunningTime="2026-02-16 20:57:25.340497467 +0000 UTC m=+16.975397767" Feb 16 20:57:25.409673 master-0 kubenswrapper[7926]: I0216 20:57:25.407467 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-z46jt\" (UID: \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" Feb 16 20:57:25.409673 master-0 kubenswrapper[7926]: I0216 20:57:25.407836 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" Feb 16 20:57:25.409673 master-0 kubenswrapper[7926]: I0216 20:57:25.407861 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn" Feb 16 20:57:25.409673 master-0 kubenswrapper[7926]: I0216 20:57:25.407886 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls\") 
pod \"dns-operator-86b8869b79-cdltb\" (UID: \"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd\") " pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb" Feb 16 20:57:25.409673 master-0 kubenswrapper[7926]: I0216 20:57:25.407927 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" Feb 16 20:57:25.409673 master-0 kubenswrapper[7926]: I0216 20:57:25.407949 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 20:57:25.409673 master-0 kubenswrapper[7926]: I0216 20:57:25.407968 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-9m94g\" (UID: \"4b035e85-b2b0-4dee-bb86-3465fc4b98a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 20:57:25.409673 master-0 kubenswrapper[7926]: I0216 20:57:25.407986 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 20:57:25.409673 master-0 kubenswrapper[7926]: I0216 
20:57:25.408176 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" Feb 16 20:57:25.409673 master-0 kubenswrapper[7926]: I0216 20:57:25.408204 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs\") pod \"network-metrics-daemon-42bw7\" (UID: \"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:57:25.409673 master-0 kubenswrapper[7926]: E0216 20:57:25.408324 7926 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 16 20:57:25.409673 master-0 kubenswrapper[7926]: E0216 20:57:25.408373 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs podName:1d453639-52ed-4a14-a2ee-02cf9acc2f7c nodeName:}" failed. No retries permitted until 2026-02-16 20:57:41.408358447 +0000 UTC m=+33.043258747 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs") pod "network-metrics-daemon-42bw7" (UID: "1d453639-52ed-4a14-a2ee-02cf9acc2f7c") : secret "metrics-daemon-secret" not found Feb 16 20:57:25.409673 master-0 kubenswrapper[7926]: E0216 20:57:25.409190 7926 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 20:57:25.409673 master-0 kubenswrapper[7926]: E0216 20:57:25.409285 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls podName:ec7dd4ea-a139-45d4-96a4-506da1567292 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:41.409261363 +0000 UTC m=+33.044161713 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-w57zn" (UID: "ec7dd4ea-a139-45d4-96a4-506da1567292") : secret "cluster-monitoring-operator-tls" not found Feb 16 20:57:25.410411 master-0 kubenswrapper[7926]: E0216 20:57:25.409717 7926 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Feb 16 20:57:25.410411 master-0 kubenswrapper[7926]: E0216 20:57:25.409812 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert podName:2e618c5c-52be-4b52-b426-b92555dee9de nodeName:}" failed. No retries permitted until 2026-02-16 20:57:41.409789178 +0000 UTC m=+33.044689538 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert") pod "catalog-operator-588944557d-h7xl6" (UID: "2e618c5c-52be-4b52-b426-b92555dee9de") : secret "catalog-operator-serving-cert" not found Feb 16 20:57:25.410411 master-0 kubenswrapper[7926]: E0216 20:57:25.409890 7926 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 20:57:25.410411 master-0 kubenswrapper[7926]: E0216 20:57:25.409920 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics podName:b28234d1-1d9a-4d9f-9ad1-e3c682bed492 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:41.409910352 +0000 UTC m=+33.044810742 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-6rmhq" (UID: "b28234d1-1d9a-4d9f-9ad1-e3c682bed492") : secret "marketplace-operator-metrics" not found Feb 16 20:57:25.410411 master-0 kubenswrapper[7926]: E0216 20:57:25.409948 7926 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 20:57:25.410411 master-0 kubenswrapper[7926]: E0216 20:57:25.410009 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert podName:4b035e85-b2b0-4dee-bb86-3465fc4b98a8 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:41.409994784 +0000 UTC m=+33.044895174 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-9m94g" (UID: "4b035e85-b2b0-4dee-bb86-3465fc4b98a8") : secret "package-server-manager-serving-cert" not found Feb 16 20:57:25.410411 master-0 kubenswrapper[7926]: E0216 20:57:25.410026 7926 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 16 20:57:25.410411 master-0 kubenswrapper[7926]: E0216 20:57:25.410090 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs podName:b27de289-c0f9-47ff-aac6-15b7bc1b178a nodeName:}" failed. No retries permitted until 2026-02-16 20:57:41.410076807 +0000 UTC m=+33.044977177 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs") pod "multus-admission-controller-7c64d55f8-z46jt" (UID: "b27de289-c0f9-47ff-aac6-15b7bc1b178a") : secret "multus-admission-controller-secret" not found Feb 16 20:57:25.410411 master-0 kubenswrapper[7926]: E0216 20:57:25.410129 7926 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Feb 16 20:57:25.410411 master-0 kubenswrapper[7926]: E0216 20:57:25.410187 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert podName:a4c9b781-14c0-469c-bb9e-0c3982a04520 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:41.410173799 +0000 UTC m=+33.045074179 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert") pod "olm-operator-6b56bd877c-vlhvq" (UID: "a4c9b781-14c0-469c-bb9e-0c3982a04520") : secret "olm-operator-serving-cert" not found Feb 16 20:57:25.414044 master-0 kubenswrapper[7926]: I0216 20:57:25.414002 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls\") pod \"dns-operator-86b8869b79-cdltb\" (UID: \"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd\") " pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb" Feb 16 20:57:25.414251 master-0 kubenswrapper[7926]: I0216 20:57:25.414217 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" Feb 16 20:57:25.436576 master-0 kubenswrapper[7926]: I0216 20:57:25.436515 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 20:57:25.455079 master-0 kubenswrapper[7926]: I0216 20:57:25.455014 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 20:57:25.456024 master-0 kubenswrapper[7926]: I0216 20:57:25.455913 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb" Feb 16 20:57:25.459116 master-0 kubenswrapper[7926]: I0216 20:57:25.459086 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" Feb 16 20:57:25.610669 master-0 kubenswrapper[7926]: I0216 20:57:25.610603 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-serving-cert\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:25.610774 master-0 kubenswrapper[7926]: I0216 20:57:25.610673 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-audit\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:25.610938 master-0 kubenswrapper[7926]: E0216 20:57:25.610880 7926 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Feb 16 20:57:25.611032 master-0 kubenswrapper[7926]: E0216 20:57:25.611004 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-serving-cert podName:8dcb130d-f6cb-4bf9-99bd-e47adf285dd1 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:27.610977481 +0000 UTC m=+19.245877781 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-serving-cert") pod "apiserver-64c454bc85-s4b86" (UID: "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1") : secret "serving-cert" not found Feb 16 20:57:25.611375 master-0 kubenswrapper[7926]: E0216 20:57:25.611338 7926 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 16 20:57:25.611375 master-0 kubenswrapper[7926]: E0216 20:57:25.611371 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-audit podName:8dcb130d-f6cb-4bf9-99bd-e47adf285dd1 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:27.611364741 +0000 UTC m=+19.246265041 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-audit") pod "apiserver-64c454bc85-s4b86" (UID: "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1") : configmap "audit-0" not found Feb 16 20:57:25.712978 master-0 kubenswrapper[7926]: I0216 20:57:25.712884 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-client-ca\") pod \"controller-manager-7585c94cb9-9n49k\" (UID: \"d73c2079-07e1-4465-83eb-5d39a04baf7d\") " pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k" Feb 16 20:57:25.713255 master-0 kubenswrapper[7926]: E0216 20:57:25.713077 7926 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 20:57:25.713255 master-0 kubenswrapper[7926]: E0216 20:57:25.713211 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-client-ca podName:d73c2079-07e1-4465-83eb-5d39a04baf7d nodeName:}" failed. 
No retries permitted until 2026-02-16 20:57:33.713178103 +0000 UTC m=+25.348078403 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-client-ca") pod "controller-manager-7585c94cb9-9n49k" (UID: "d73c2079-07e1-4465-83eb-5d39a04baf7d") : configmap "client-ca" not found Feb 16 20:57:25.991533 master-0 kubenswrapper[7926]: I0216 20:57:25.991423 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" event={"ID":"5e062e07-8076-444c-b476-4eb2848e9613","Type":"ContainerStarted","Data":"8d6fd2d30a1b00edfb997113793ad55fbf5dca8c4b949fed22018dbb444c09ad"} Feb 16 20:57:26.001105 master-0 kubenswrapper[7926]: I0216 20:57:25.999605 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb"] Feb 16 20:57:26.001899 master-0 kubenswrapper[7926]: I0216 20:57:26.001850 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d"] Feb 16 20:57:26.001973 master-0 kubenswrapper[7926]: I0216 20:57:26.001929 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-86b8869b79-cdltb"] Feb 16 20:57:26.023310 master-0 kubenswrapper[7926]: W0216 20:57:26.023255 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcef33294_81fb_41a2_811d_2565f94514d1.slice/crio-99134c6775f2c1522a1480fdf36e455e0ea6704e4324711468efadafd1a4b744 WatchSource:0}: Error finding container 99134c6775f2c1522a1480fdf36e455e0ea6704e4324711468efadafd1a4b744: Status 404 returned error can't find the container with id 99134c6775f2c1522a1480fdf36e455e0ea6704e4324711468efadafd1a4b744 Feb 16 20:57:26.024841 master-0 kubenswrapper[7926]: W0216 20:57:26.024221 7926 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod456d6155_7e1c_48d5_a3b3_4ec3bac6cdcd.slice/crio-cedd6b186b2f683612167b71883ce9d5bac09eb1edd2f0cb1e7e8286188d3035 WatchSource:0}: Error finding container cedd6b186b2f683612167b71883ce9d5bac09eb1edd2f0cb1e7e8286188d3035: Status 404 returned error can't find the container with id cedd6b186b2f683612167b71883ce9d5bac09eb1edd2f0cb1e7e8286188d3035 Feb 16 20:57:26.672146 master-0 kubenswrapper[7926]: I0216 20:57:26.671661 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6701f93-e666-4aaf-b1b4-b4464c586a24-client-ca\") pod \"route-controller-manager-599565c7b6-fsxd2\" (UID: \"e6701f93-e666-4aaf-b1b4-b4464c586a24\") " pod="openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2" Feb 16 20:57:26.672716 master-0 kubenswrapper[7926]: I0216 20:57:26.672196 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6701f93-e666-4aaf-b1b4-b4464c586a24-serving-cert\") pod \"route-controller-manager-599565c7b6-fsxd2\" (UID: \"e6701f93-e666-4aaf-b1b4-b4464c586a24\") " pod="openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2" Feb 16 20:57:26.672716 master-0 kubenswrapper[7926]: E0216 20:57:26.672592 7926 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 20:57:26.672716 master-0 kubenswrapper[7926]: E0216 20:57:26.672686 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6701f93-e666-4aaf-b1b4-b4464c586a24-serving-cert podName:e6701f93-e666-4aaf-b1b4-b4464c586a24 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:34.672664677 +0000 UTC m=+26.307564987 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e6701f93-e666-4aaf-b1b4-b4464c586a24-serving-cert") pod "route-controller-manager-599565c7b6-fsxd2" (UID: "e6701f93-e666-4aaf-b1b4-b4464c586a24") : secret "serving-cert" not found Feb 16 20:57:26.673941 master-0 kubenswrapper[7926]: E0216 20:57:26.673917 7926 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 16 20:57:26.674022 master-0 kubenswrapper[7926]: E0216 20:57:26.673971 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e6701f93-e666-4aaf-b1b4-b4464c586a24-client-ca podName:e6701f93-e666-4aaf-b1b4-b4464c586a24 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:34.673958114 +0000 UTC m=+26.308858414 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e6701f93-e666-4aaf-b1b4-b4464c586a24-client-ca") pod "route-controller-manager-599565c7b6-fsxd2" (UID: "e6701f93-e666-4aaf-b1b4-b4464c586a24") : configmap "client-ca" not found Feb 16 20:57:26.995786 master-0 kubenswrapper[7926]: I0216 20:57:26.995623 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb" event={"ID":"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd","Type":"ContainerStarted","Data":"cedd6b186b2f683612167b71883ce9d5bac09eb1edd2f0cb1e7e8286188d3035"} Feb 16 20:57:26.996568 master-0 kubenswrapper[7926]: I0216 20:57:26.996508 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" event={"ID":"9e0227bc-63f5-48be-95dc-1323a2b2e327","Type":"ContainerStarted","Data":"0855efbb779255fb187bac22b944f8f2035fd58838e6517844db44571c397aae"} Feb 16 20:57:26.997546 master-0 kubenswrapper[7926]: I0216 20:57:26.997519 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" event={"ID":"cef33294-81fb-41a2-811d-2565f94514d1","Type":"ContainerStarted","Data":"99134c6775f2c1522a1480fdf36e455e0ea6704e4324711468efadafd1a4b744"} Feb 16 20:57:27.679428 master-0 kubenswrapper[7926]: I0216 20:57:27.678744 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-64c454bc85-s4b86"] Feb 16 20:57:27.679428 master-0 kubenswrapper[7926]: E0216 20:57:27.679118 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-64c454bc85-s4b86" podUID="8dcb130d-f6cb-4bf9-99bd-e47adf285dd1" Feb 16 20:57:27.694684 master-0 kubenswrapper[7926]: I0216 20:57:27.684433 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-serving-cert\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:27.694684 master-0 kubenswrapper[7926]: I0216 20:57:27.684496 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-audit\") pod \"apiserver-64c454bc85-s4b86\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:27.694684 master-0 kubenswrapper[7926]: E0216 20:57:27.684711 7926 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Feb 16 20:57:27.694684 master-0 kubenswrapper[7926]: E0216 20:57:27.684800 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-serving-cert podName:8dcb130d-f6cb-4bf9-99bd-e47adf285dd1 nodeName:}" failed. 
No retries permitted until 2026-02-16 20:57:31.684776537 +0000 UTC m=+23.319676897 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-serving-cert") pod "apiserver-64c454bc85-s4b86" (UID: "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1") : secret "serving-cert" not found Feb 16 20:57:27.694684 master-0 kubenswrapper[7926]: E0216 20:57:27.685118 7926 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 16 20:57:27.694684 master-0 kubenswrapper[7926]: E0216 20:57:27.685190 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-audit podName:8dcb130d-f6cb-4bf9-99bd-e47adf285dd1 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:31.685167819 +0000 UTC m=+23.320068169 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-audit") pod "apiserver-64c454bc85-s4b86" (UID: "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1") : configmap "audit-0" not found Feb 16 20:57:28.001575 master-0 kubenswrapper[7926]: I0216 20:57:28.001503 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:28.009097 master-0 kubenswrapper[7926]: I0216 20:57:28.009051 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:28.192301 master-0 kubenswrapper[7926]: I0216 20:57:28.192242 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pr8j\" (UniqueName: \"kubernetes.io/projected/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-kube-api-access-5pr8j\") pod \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " Feb 16 20:57:28.192301 master-0 kubenswrapper[7926]: I0216 20:57:28.192292 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-etcd-serving-ca\") pod \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " Feb 16 20:57:28.192301 master-0 kubenswrapper[7926]: I0216 20:57:28.192312 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-etcd-client\") pod \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " Feb 16 20:57:28.192627 master-0 kubenswrapper[7926]: I0216 20:57:28.192332 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-node-pullsecrets\") pod \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " Feb 16 20:57:28.192627 master-0 kubenswrapper[7926]: I0216 20:57:28.192358 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-config\") pod \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " Feb 16 20:57:28.192627 master-0 kubenswrapper[7926]: I0216 20:57:28.192382 7926 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-trusted-ca-bundle\") pod \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " Feb 16 20:57:28.192627 master-0 kubenswrapper[7926]: I0216 20:57:28.192421 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-encryption-config\") pod \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " Feb 16 20:57:28.192627 master-0 kubenswrapper[7926]: I0216 20:57:28.192471 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-audit-dir\") pod \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " Feb 16 20:57:28.192627 master-0 kubenswrapper[7926]: I0216 20:57:28.192489 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-image-import-ca\") pod \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\" (UID: \"8dcb130d-f6cb-4bf9-99bd-e47adf285dd1\") " Feb 16 20:57:28.192959 master-0 kubenswrapper[7926]: I0216 20:57:28.192893 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1" (UID: "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:57:28.193274 master-0 kubenswrapper[7926]: I0216 20:57:28.193237 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1" (UID: "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:57:28.193274 master-0 kubenswrapper[7926]: I0216 20:57:28.193253 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1" (UID: "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:57:28.193359 master-0 kubenswrapper[7926]: I0216 20:57:28.193298 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1" (UID: "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:57:28.193359 master-0 kubenswrapper[7926]: I0216 20:57:28.193296 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1" (UID: "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:57:28.194209 master-0 kubenswrapper[7926]: I0216 20:57:28.194160 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-config" (OuterVolumeSpecName: "config") pod "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1" (UID: "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:57:28.209961 master-0 kubenswrapper[7926]: I0216 20:57:28.209892 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-kube-api-access-5pr8j" (OuterVolumeSpecName: "kube-api-access-5pr8j") pod "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1" (UID: "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1"). InnerVolumeSpecName "kube-api-access-5pr8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:57:28.209961 master-0 kubenswrapper[7926]: I0216 20:57:28.209928 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1" (UID: "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:57:28.210083 master-0 kubenswrapper[7926]: I0216 20:57:28.209965 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1" (UID: "8dcb130d-f6cb-4bf9-99bd-e47adf285dd1"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:57:28.294461 master-0 kubenswrapper[7926]: I0216 20:57:28.294340 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pr8j\" (UniqueName: \"kubernetes.io/projected/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-kube-api-access-5pr8j\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:28.294461 master-0 kubenswrapper[7926]: I0216 20:57:28.294385 7926 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:28.294461 master-0 kubenswrapper[7926]: I0216 20:57:28.294398 7926 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-etcd-client\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:28.294461 master-0 kubenswrapper[7926]: I0216 20:57:28.294410 7926 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:28.294461 master-0 kubenswrapper[7926]: I0216 20:57:28.294422 7926 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-config\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:28.294461 master-0 kubenswrapper[7926]: I0216 20:57:28.294434 7926 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:28.294461 master-0 kubenswrapper[7926]: I0216 20:57:28.294446 7926 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-encryption-config\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:28.294461 master-0 kubenswrapper[7926]: I0216 20:57:28.294459 7926 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:28.294461 master-0 kubenswrapper[7926]: I0216 20:57:28.294474 7926 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-image-import-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:29.004782 master-0 kubenswrapper[7926]: I0216 20:57:29.003917 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-64c454bc85-s4b86" Feb 16 20:57:29.040027 master-0 kubenswrapper[7926]: I0216 20:57:29.039213 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-6bdb76b9b7-z46x6"] Feb 16 20:57:29.040268 master-0 kubenswrapper[7926]: I0216 20:57:29.040247 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.042745 master-0 kubenswrapper[7926]: I0216 20:57:29.042717 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 20:57:29.043016 master-0 kubenswrapper[7926]: I0216 20:57:29.042999 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 20:57:29.044083 master-0 kubenswrapper[7926]: I0216 20:57:29.044037 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-64c454bc85-s4b86"] Feb 16 20:57:29.044634 master-0 kubenswrapper[7926]: I0216 20:57:29.044529 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 20:57:29.044963 master-0 kubenswrapper[7926]: I0216 20:57:29.044723 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 20:57:29.045059 master-0 kubenswrapper[7926]: I0216 20:57:29.045033 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 20:57:29.045495 master-0 kubenswrapper[7926]: I0216 20:57:29.045450 7926 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-64c454bc85-s4b86"] Feb 16 20:57:29.046796 master-0 kubenswrapper[7926]: I0216 20:57:29.046764 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 20:57:29.046796 master-0 kubenswrapper[7926]: I0216 20:57:29.046769 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 20:57:29.046951 master-0 kubenswrapper[7926]: I0216 20:57:29.046840 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 20:57:29.046951 master-0 kubenswrapper[7926]: I0216 20:57:29.046770 7926 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-6bdb76b9b7-z46x6"] Feb 16 20:57:29.047208 master-0 kubenswrapper[7926]: I0216 20:57:29.047149 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 20:57:29.049427 master-0 kubenswrapper[7926]: I0216 20:57:29.049393 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 20:57:29.209994 master-0 kubenswrapper[7926]: I0216 20:57:29.209933 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d2501eec-47c8-47bc-b0c9-28d94c06075b-encryption-config\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.210260 master-0 kubenswrapper[7926]: I0216 20:57:29.210020 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-trusted-ca-bundle\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.210260 master-0 kubenswrapper[7926]: I0216 20:57:29.210052 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2501eec-47c8-47bc-b0c9-28d94c06075b-serving-cert\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.210260 master-0 kubenswrapper[7926]: I0216 20:57:29.210087 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-etcd-serving-ca\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.210260 master-0 kubenswrapper[7926]: I0216 20:57:29.210106 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2501eec-47c8-47bc-b0c9-28d94c06075b-audit-dir\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.210260 master-0 kubenswrapper[7926]: I0216 20:57:29.210127 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4djt\" (UniqueName: \"kubernetes.io/projected/d2501eec-47c8-47bc-b0c9-28d94c06075b-kube-api-access-x4djt\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.210260 master-0 kubenswrapper[7926]: I0216 20:57:29.210159 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-audit\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.210260 master-0 kubenswrapper[7926]: I0216 20:57:29.210186 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d2501eec-47c8-47bc-b0c9-28d94c06075b-node-pullsecrets\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.210260 master-0 kubenswrapper[7926]: I0216 20:57:29.210227 7926 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d2501eec-47c8-47bc-b0c9-28d94c06075b-etcd-client\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.210478 master-0 kubenswrapper[7926]: I0216 20:57:29.210274 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-config\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.210478 master-0 kubenswrapper[7926]: I0216 20:57:29.210329 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-image-import-ca\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.210478 master-0 kubenswrapper[7926]: I0216 20:57:29.210365 7926 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:29.210478 master-0 kubenswrapper[7926]: I0216 20:57:29.210377 7926 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1-audit\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:29.311964 master-0 kubenswrapper[7926]: I0216 20:57:29.311827 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2501eec-47c8-47bc-b0c9-28d94c06075b-audit-dir\") pod 
\"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.311964 master-0 kubenswrapper[7926]: I0216 20:57:29.311882 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4djt\" (UniqueName: \"kubernetes.io/projected/d2501eec-47c8-47bc-b0c9-28d94c06075b-kube-api-access-x4djt\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.312191 master-0 kubenswrapper[7926]: I0216 20:57:29.311979 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2501eec-47c8-47bc-b0c9-28d94c06075b-audit-dir\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.312191 master-0 kubenswrapper[7926]: I0216 20:57:29.312053 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-audit\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.312191 master-0 kubenswrapper[7926]: I0216 20:57:29.312092 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d2501eec-47c8-47bc-b0c9-28d94c06075b-node-pullsecrets\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.312191 master-0 kubenswrapper[7926]: I0216 20:57:29.312149 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/d2501eec-47c8-47bc-b0c9-28d94c06075b-etcd-client\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.312191 master-0 kubenswrapper[7926]: I0216 20:57:29.312186 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-config\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.312331 master-0 kubenswrapper[7926]: I0216 20:57:29.312208 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d2501eec-47c8-47bc-b0c9-28d94c06075b-node-pullsecrets\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.312367 master-0 kubenswrapper[7926]: I0216 20:57:29.312338 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-image-import-ca\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.312400 master-0 kubenswrapper[7926]: I0216 20:57:29.312367 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d2501eec-47c8-47bc-b0c9-28d94c06075b-encryption-config\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.312941 master-0 kubenswrapper[7926]: I0216 20:57:29.312889 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/d2501eec-47c8-47bc-b0c9-28d94c06075b-serving-cert\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.312941 master-0 kubenswrapper[7926]: I0216 20:57:29.312938 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-trusted-ca-bundle\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.313298 master-0 kubenswrapper[7926]: E0216 20:57:29.313234 7926 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Feb 16 20:57:29.313385 master-0 kubenswrapper[7926]: E0216 20:57:29.313362 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2501eec-47c8-47bc-b0c9-28d94c06075b-serving-cert podName:d2501eec-47c8-47bc-b0c9-28d94c06075b nodeName:}" failed. No retries permitted until 2026-02-16 20:57:29.813330222 +0000 UTC m=+21.448230562 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d2501eec-47c8-47bc-b0c9-28d94c06075b-serving-cert") pod "apiserver-6bdb76b9b7-z46x6" (UID: "d2501eec-47c8-47bc-b0c9-28d94c06075b") : secret "serving-cert" not found Feb 16 20:57:29.313564 master-0 kubenswrapper[7926]: I0216 20:57:29.313496 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-etcd-serving-ca\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.315330 master-0 kubenswrapper[7926]: I0216 20:57:29.315017 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-audit\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.315330 master-0 kubenswrapper[7926]: I0216 20:57:29.315268 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-etcd-serving-ca\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.315459 master-0 kubenswrapper[7926]: I0216 20:57:29.315429 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-trusted-ca-bundle\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.318499 master-0 kubenswrapper[7926]: I0216 20:57:29.318455 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-image-import-ca\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.318589 master-0 kubenswrapper[7926]: I0216 20:57:29.318468 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-config\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.330462 master-0 kubenswrapper[7926]: I0216 20:57:29.328017 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d2501eec-47c8-47bc-b0c9-28d94c06075b-encryption-config\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.330462 master-0 kubenswrapper[7926]: I0216 20:57:29.328392 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d2501eec-47c8-47bc-b0c9-28d94c06075b-etcd-client\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.335102 master-0 kubenswrapper[7926]: I0216 20:57:29.332157 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4djt\" (UniqueName: \"kubernetes.io/projected/d2501eec-47c8-47bc-b0c9-28d94c06075b-kube-api-access-x4djt\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.824616 master-0 kubenswrapper[7926]: I0216 20:57:29.823381 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2501eec-47c8-47bc-b0c9-28d94c06075b-serving-cert\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:29.824616 master-0 kubenswrapper[7926]: E0216 20:57:29.823787 7926 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Feb 16 20:57:29.824616 master-0 kubenswrapper[7926]: E0216 20:57:29.823942 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2501eec-47c8-47bc-b0c9-28d94c06075b-serving-cert podName:d2501eec-47c8-47bc-b0c9-28d94c06075b nodeName:}" failed. No retries permitted until 2026-02-16 20:57:30.823880681 +0000 UTC m=+22.458780991 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d2501eec-47c8-47bc-b0c9-28d94c06075b-serving-cert") pod "apiserver-6bdb76b9b7-z46x6" (UID: "d2501eec-47c8-47bc-b0c9-28d94c06075b") : secret "serving-cert" not found Feb 16 20:57:29.832500 master-0 kubenswrapper[7926]: I0216 20:57:29.832437 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 16 20:57:29.839019 master-0 kubenswrapper[7926]: I0216 20:57:29.838965 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 16 20:57:29.841046 master-0 kubenswrapper[7926]: I0216 20:57:29.839351 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Feb 16 20:57:29.842625 master-0 kubenswrapper[7926]: I0216 20:57:29.842577 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Feb 16 20:57:29.924510 master-0 kubenswrapper[7926]: I0216 20:57:29.924439 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb-var-lock\") pod \"installer-1-master-0\" (UID: \"be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 16 20:57:29.924764 master-0 kubenswrapper[7926]: I0216 20:57:29.924537 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 16 20:57:29.924764 master-0 kubenswrapper[7926]: I0216 20:57:29.924678 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb-kube-api-access\") pod \"installer-1-master-0\" (UID: \"be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 16 20:57:30.026566 master-0 kubenswrapper[7926]: I0216 20:57:30.026316 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 16 20:57:30.026566 master-0 kubenswrapper[7926]: I0216 20:57:30.026523 7926 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 16 20:57:30.028724 master-0 kubenswrapper[7926]: I0216 20:57:30.026932 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb-kube-api-access\") pod \"installer-1-master-0\" (UID: \"be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 16 20:57:30.028724 master-0 kubenswrapper[7926]: I0216 20:57:30.026990 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb-var-lock\") pod \"installer-1-master-0\" (UID: \"be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 16 20:57:30.028724 master-0 kubenswrapper[7926]: I0216 20:57:30.027186 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb-var-lock\") pod \"installer-1-master-0\" (UID: \"be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 16 20:57:30.056205 master-0 kubenswrapper[7926]: I0216 20:57:30.056110 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb-kube-api-access\") pod \"installer-1-master-0\" (UID: \"be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb\") " pod="openshift-kube-scheduler/installer-1-master-0"
Feb 16 20:57:30.162206 master-0 kubenswrapper[7926]: I0216 20:57:30.162143 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Feb 16 20:57:30.567600 master-0 kubenswrapper[7926]: I0216 20:57:30.567533 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Feb 16 20:57:30.744913 master-0 kubenswrapper[7926]: I0216 20:57:30.744809 7926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dcb130d-f6cb-4bf9-99bd-e47adf285dd1" path="/var/lib/kubelet/pods/8dcb130d-f6cb-4bf9-99bd-e47adf285dd1/volumes"
Feb 16 20:57:30.844765 master-0 kubenswrapper[7926]: I0216 20:57:30.844707 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2501eec-47c8-47bc-b0c9-28d94c06075b-serving-cert\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6"
Feb 16 20:57:30.847835 master-0 kubenswrapper[7926]: I0216 20:57:30.847804 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2501eec-47c8-47bc-b0c9-28d94c06075b-serving-cert\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6"
Feb 16 20:57:30.864612 master-0 kubenswrapper[7926]: I0216 20:57:30.864189 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6"
Feb 16 20:57:31.016141 master-0 kubenswrapper[7926]: I0216 20:57:31.016040 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb","Type":"ContainerStarted","Data":"4d5f546c2421eec3805ff12860007eff73909bb7626878d72e7e0b55753734ca"}
Feb 16 20:57:31.016141 master-0 kubenswrapper[7926]: I0216 20:57:31.016103 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb","Type":"ContainerStarted","Data":"3ba1bee73a0e81eaff571d4e985ca295a9b1e963b6ed0e932ac596130ee5ae9e"}
Feb 16 20:57:31.017503 master-0 kubenswrapper[7926]: I0216 20:57:31.017429 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb" event={"ID":"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd","Type":"ContainerStarted","Data":"2386de6d7e3957c25a5bbdd2f9defa96eb2766f1baca6f041fdfd46d769c8ff9"}
Feb 16 20:57:31.017503 master-0 kubenswrapper[7926]: I0216 20:57:31.017476 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb" event={"ID":"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd","Type":"ContainerStarted","Data":"75dd3f7c4a14726f013a3bf4f169a8056c56991ba5e679317594055334246207"}
Feb 16 20:57:31.018953 master-0 kubenswrapper[7926]: I0216 20:57:31.018916 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" event={"ID":"9e0227bc-63f5-48be-95dc-1323a2b2e327","Type":"ContainerStarted","Data":"a7330b931340d1be5dba0fd54e8b246009c00f6e813142a46ee5264b4ff67461"}
Feb 16 20:57:31.021025 master-0 kubenswrapper[7926]: I0216 20:57:31.020991 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" event={"ID":"cef33294-81fb-41a2-811d-2565f94514d1","Type":"ContainerStarted","Data":"d15df1caa93fcce85a632cd318aaf9104964d846efd2e5a897c570b4ebb61cb3"}
Feb 16 20:57:31.021025 master-0 kubenswrapper[7926]: I0216 20:57:31.021022 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" event={"ID":"cef33294-81fb-41a2-811d-2565f94514d1","Type":"ContainerStarted","Data":"2b191efabecfa6e89d563189d25950b732d83b54240d68732d9bfb22ddbb8e4f"}
Feb 16 20:57:31.030828 master-0 kubenswrapper[7926]: I0216 20:57:31.030716 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=2.030701487 podStartE2EDuration="2.030701487s" podCreationTimestamp="2026-02-16 20:57:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:57:31.028852835 +0000 UTC m=+22.663753135" watchObservedRunningTime="2026-02-16 20:57:31.030701487 +0000 UTC m=+22.665601797"
Feb 16 20:57:31.145740 master-0 kubenswrapper[7926]: I0216 20:57:31.144770 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-7bbrn"]
Feb 16 20:57:31.145740 master-0 kubenswrapper[7926]: I0216 20:57:31.145606 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-7bbrn"
Feb 16 20:57:31.147512 master-0 kubenswrapper[7926]: I0216 20:57:31.147404 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 16 20:57:31.147634 master-0 kubenswrapper[7926]: I0216 20:57:31.147420 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 16 20:57:31.148075 master-0 kubenswrapper[7926]: I0216 20:57:31.147867 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 16 20:57:31.148162 master-0 kubenswrapper[7926]: I0216 20:57:31.148148 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 16 20:57:31.148760 master-0 kubenswrapper[7926]: I0216 20:57:31.148714 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-7bbrn"]
Feb 16 20:57:31.242131 master-0 kubenswrapper[7926]: I0216 20:57:31.242077 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-6bdb76b9b7-z46x6"]
Feb 16 20:57:31.251494 master-0 kubenswrapper[7926]: I0216 20:57:31.251446 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf-config-volume\") pod \"dns-default-7bbrn\" (UID: \"2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf\") " pod="openshift-dns/dns-default-7bbrn"
Feb 16 20:57:31.251714 master-0 kubenswrapper[7926]: I0216 20:57:31.251562 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf-metrics-tls\") pod \"dns-default-7bbrn\" (UID: \"2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf\") " pod="openshift-dns/dns-default-7bbrn"
Feb 16 20:57:31.251714 master-0 kubenswrapper[7926]: I0216 20:57:31.251635 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64qvl\" (UniqueName: \"kubernetes.io/projected/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf-kube-api-access-64qvl\") pod \"dns-default-7bbrn\" (UID: \"2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf\") " pod="openshift-dns/dns-default-7bbrn"
Feb 16 20:57:31.353702 master-0 kubenswrapper[7926]: I0216 20:57:31.352824 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf-metrics-tls\") pod \"dns-default-7bbrn\" (UID: \"2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf\") " pod="openshift-dns/dns-default-7bbrn"
Feb 16 20:57:31.353702 master-0 kubenswrapper[7926]: I0216 20:57:31.352899 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64qvl\" (UniqueName: \"kubernetes.io/projected/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf-kube-api-access-64qvl\") pod \"dns-default-7bbrn\" (UID: \"2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf\") " pod="openshift-dns/dns-default-7bbrn"
Feb 16 20:57:31.353702 master-0 kubenswrapper[7926]: I0216 20:57:31.352933 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf-config-volume\") pod \"dns-default-7bbrn\" (UID: \"2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf\") " pod="openshift-dns/dns-default-7bbrn"
Feb 16 20:57:31.353702 master-0 kubenswrapper[7926]: I0216 20:57:31.353561 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf-config-volume\") pod \"dns-default-7bbrn\" (UID: \"2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf\") " pod="openshift-dns/dns-default-7bbrn"
Feb 16 20:57:31.353702 master-0 kubenswrapper[7926]: E0216 20:57:31.353644 7926 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found
Feb 16 20:57:31.353702 master-0 kubenswrapper[7926]: E0216 20:57:31.353698 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf-metrics-tls podName:2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf nodeName:}" failed. No retries permitted until 2026-02-16 20:57:31.853684843 +0000 UTC m=+23.488585143 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf-metrics-tls") pod "dns-default-7bbrn" (UID: "2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf") : secret "dns-default-metrics-tls" not found
Feb 16 20:57:31.376139 master-0 kubenswrapper[7926]: I0216 20:57:31.376085 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64qvl\" (UniqueName: \"kubernetes.io/projected/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf-kube-api-access-64qvl\") pod \"dns-default-7bbrn\" (UID: \"2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf\") " pod="openshift-dns/dns-default-7bbrn"
Feb 16 20:57:31.482776 master-0 kubenswrapper[7926]: I0216 20:57:31.482581 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-zfldn"]
Feb 16 20:57:31.483147 master-0 kubenswrapper[7926]: I0216 20:57:31.483108 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zfldn"
Feb 16 20:57:31.555676 master-0 kubenswrapper[7926]: I0216 20:57:31.555575 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzm2t\" (UniqueName: \"kubernetes.io/projected/34743ce3-5eda-4c60-99cb-640dd067ebdf-kube-api-access-vzm2t\") pod \"node-resolver-zfldn\" (UID: \"34743ce3-5eda-4c60-99cb-640dd067ebdf\") " pod="openshift-dns/node-resolver-zfldn"
Feb 16 20:57:31.555676 master-0 kubenswrapper[7926]: I0216 20:57:31.555686 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/34743ce3-5eda-4c60-99cb-640dd067ebdf-hosts-file\") pod \"node-resolver-zfldn\" (UID: \"34743ce3-5eda-4c60-99cb-640dd067ebdf\") " pod="openshift-dns/node-resolver-zfldn"
Feb 16 20:57:31.657144 master-0 kubenswrapper[7926]: I0216 20:57:31.657053 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzm2t\" (UniqueName: \"kubernetes.io/projected/34743ce3-5eda-4c60-99cb-640dd067ebdf-kube-api-access-vzm2t\") pod \"node-resolver-zfldn\" (UID: \"34743ce3-5eda-4c60-99cb-640dd067ebdf\") " pod="openshift-dns/node-resolver-zfldn"
Feb 16 20:57:31.657428 master-0 kubenswrapper[7926]: I0216 20:57:31.657190 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/34743ce3-5eda-4c60-99cb-640dd067ebdf-hosts-file\") pod \"node-resolver-zfldn\" (UID: \"34743ce3-5eda-4c60-99cb-640dd067ebdf\") " pod="openshift-dns/node-resolver-zfldn"
Feb 16 20:57:31.657428 master-0 kubenswrapper[7926]: I0216 20:57:31.657359 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/34743ce3-5eda-4c60-99cb-640dd067ebdf-hosts-file\") pod \"node-resolver-zfldn\" (UID: \"34743ce3-5eda-4c60-99cb-640dd067ebdf\") " pod="openshift-dns/node-resolver-zfldn"
Feb 16 20:57:31.858734 master-0 kubenswrapper[7926]: I0216 20:57:31.858630 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf-metrics-tls\") pod \"dns-default-7bbrn\" (UID: \"2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf\") " pod="openshift-dns/dns-default-7bbrn"
Feb 16 20:57:31.858987 master-0 kubenswrapper[7926]: E0216 20:57:31.858810 7926 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found
Feb 16 20:57:31.858987 master-0 kubenswrapper[7926]: E0216 20:57:31.858871 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf-metrics-tls podName:2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf nodeName:}" failed. No retries permitted until 2026-02-16 20:57:32.858854836 +0000 UTC m=+24.493755136 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf-metrics-tls") pod "dns-default-7bbrn" (UID: "2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf") : secret "dns-default-metrics-tls" not found
Feb 16 20:57:32.025288 master-0 kubenswrapper[7926]: I0216 20:57:32.025224 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" event={"ID":"d2501eec-47c8-47bc-b0c9-28d94c06075b","Type":"ContainerStarted","Data":"db0925be9adc52361772ef921815ff9b0ca5417617347a7d9e8f0049e699014a"}
Feb 16 20:57:32.443378 master-0 kubenswrapper[7926]: I0216 20:57:32.443328 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzm2t\" (UniqueName: \"kubernetes.io/projected/34743ce3-5eda-4c60-99cb-640dd067ebdf-kube-api-access-vzm2t\") pod \"node-resolver-zfldn\" (UID: \"34743ce3-5eda-4c60-99cb-640dd067ebdf\") " pod="openshift-dns/node-resolver-zfldn"
Feb 16 20:57:32.610006 master-0 kubenswrapper[7926]: I0216 20:57:32.609940 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7585c94cb9-9n49k"]
Feb 16 20:57:32.611694 master-0 kubenswrapper[7926]: E0216 20:57:32.610282 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k" podUID="d73c2079-07e1-4465-83eb-5d39a04baf7d"
Feb 16 20:57:32.620625 master-0 kubenswrapper[7926]: I0216 20:57:32.620556 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2"]
Feb 16 20:57:32.620978 master-0 kubenswrapper[7926]: E0216 20:57:32.620941 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2" podUID="e6701f93-e666-4aaf-b1b4-b4464c586a24"
Feb 16 20:57:32.718311 master-0 kubenswrapper[7926]: I0216 20:57:32.718174 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zfldn"
Feb 16 20:57:32.733484 master-0 kubenswrapper[7926]: W0216 20:57:32.733395 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34743ce3_5eda_4c60_99cb_640dd067ebdf.slice/crio-cb7c3bcdaae372d84aa4e8a539ce094d23c02279631a56da69b150d86b62b5a5 WatchSource:0}: Error finding container cb7c3bcdaae372d84aa4e8a539ce094d23c02279631a56da69b150d86b62b5a5: Status 404 returned error can't find the container with id cb7c3bcdaae372d84aa4e8a539ce094d23c02279631a56da69b150d86b62b5a5
Feb 16 20:57:32.872527 master-0 kubenswrapper[7926]: I0216 20:57:32.872427 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf-metrics-tls\") pod \"dns-default-7bbrn\" (UID: \"2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf\") " pod="openshift-dns/dns-default-7bbrn"
Feb 16 20:57:32.878445 master-0 kubenswrapper[7926]: I0216 20:57:32.878374 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf-metrics-tls\") pod \"dns-default-7bbrn\" (UID: \"2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf\") " pod="openshift-dns/dns-default-7bbrn"
Feb 16 20:57:32.887578 master-0 kubenswrapper[7926]: I0216 20:57:32.886865 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"]
Feb 16 20:57:32.887578 master-0 kubenswrapper[7926]: I0216 20:57:32.887444 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Feb 16 20:57:32.889212 master-0 kubenswrapper[7926]: I0216 20:57:32.889177 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt"
Feb 16 20:57:32.893926 master-0 kubenswrapper[7926]: I0216 20:57:32.893812 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"]
Feb 16 20:57:32.968871 master-0 kubenswrapper[7926]: I0216 20:57:32.968739 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-7bbrn"
Feb 16 20:57:32.974140 master-0 kubenswrapper[7926]: I0216 20:57:32.974106 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d416d98-ee7c-4481-9721-861ccd91685d-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"3d416d98-ee7c-4481-9721-861ccd91685d\") " pod="openshift-etcd/installer-1-master-0"
Feb 16 20:57:32.974229 master-0 kubenswrapper[7926]: I0216 20:57:32.974202 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d416d98-ee7c-4481-9721-861ccd91685d-var-lock\") pod \"installer-1-master-0\" (UID: \"3d416d98-ee7c-4481-9721-861ccd91685d\") " pod="openshift-etcd/installer-1-master-0"
Feb 16 20:57:32.974281 master-0 kubenswrapper[7926]: I0216 20:57:32.974236 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d416d98-ee7c-4481-9721-861ccd91685d-kube-api-access\") pod \"installer-1-master-0\" (UID: \"3d416d98-ee7c-4481-9721-861ccd91685d\") " pod="openshift-etcd/installer-1-master-0"
Feb 16 20:57:33.037317 master-0 kubenswrapper[7926]: I0216 20:57:33.037244 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zfldn" event={"ID":"34743ce3-5eda-4c60-99cb-640dd067ebdf","Type":"ContainerStarted","Data":"43f0dafaf40b3911a88955e81edf78115668a44abe374303b3f2243aa138791a"}
Feb 16 20:57:33.037317 master-0 kubenswrapper[7926]: I0216 20:57:33.037262 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2"
Feb 16 20:57:33.037317 master-0 kubenswrapper[7926]: I0216 20:57:33.037305 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zfldn" event={"ID":"34743ce3-5eda-4c60-99cb-640dd067ebdf","Type":"ContainerStarted","Data":"cb7c3bcdaae372d84aa4e8a539ce094d23c02279631a56da69b150d86b62b5a5"}
Feb 16 20:57:33.037635 master-0 kubenswrapper[7926]: I0216 20:57:33.037321 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k"
Feb 16 20:57:33.050475 master-0 kubenswrapper[7926]: I0216 20:57:33.050435 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2"
Feb 16 20:57:33.053486 master-0 kubenswrapper[7926]: I0216 20:57:33.053456 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k"
Feb 16 20:57:33.075212 master-0 kubenswrapper[7926]: I0216 20:57:33.075173 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4k59\" (UniqueName: \"kubernetes.io/projected/e6701f93-e666-4aaf-b1b4-b4464c586a24-kube-api-access-l4k59\") pod \"e6701f93-e666-4aaf-b1b4-b4464c586a24\" (UID: \"e6701f93-e666-4aaf-b1b4-b4464c586a24\") "
Feb 16 20:57:33.075408 master-0 kubenswrapper[7926]: I0216 20:57:33.075274 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6701f93-e666-4aaf-b1b4-b4464c586a24-config\") pod \"e6701f93-e666-4aaf-b1b4-b4464c586a24\" (UID: \"e6701f93-e666-4aaf-b1b4-b4464c586a24\") "
Feb 16 20:57:33.075447 master-0 kubenswrapper[7926]: I0216 20:57:33.075419 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d416d98-ee7c-4481-9721-861ccd91685d-var-lock\") pod \"installer-1-master-0\" (UID: \"3d416d98-ee7c-4481-9721-861ccd91685d\") " pod="openshift-etcd/installer-1-master-0"
Feb 16 20:57:33.075479 master-0 kubenswrapper[7926]: I0216 20:57:33.075458 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d416d98-ee7c-4481-9721-861ccd91685d-kube-api-access\") pod \"installer-1-master-0\" (UID: \"3d416d98-ee7c-4481-9721-861ccd91685d\") " pod="openshift-etcd/installer-1-master-0"
Feb 16 20:57:33.076220 master-0 kubenswrapper[7926]: I0216 20:57:33.075561 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d416d98-ee7c-4481-9721-861ccd91685d-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"3d416d98-ee7c-4481-9721-861ccd91685d\") " pod="openshift-etcd/installer-1-master-0"
Feb 16 20:57:33.076220 master-0 kubenswrapper[7926]: I0216 20:57:33.075639 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d416d98-ee7c-4481-9721-861ccd91685d-var-lock\") pod \"installer-1-master-0\" (UID: \"3d416d98-ee7c-4481-9721-861ccd91685d\") " pod="openshift-etcd/installer-1-master-0"
Feb 16 20:57:33.076220 master-0 kubenswrapper[7926]: I0216 20:57:33.075949 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d416d98-ee7c-4481-9721-861ccd91685d-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"3d416d98-ee7c-4481-9721-861ccd91685d\") " pod="openshift-etcd/installer-1-master-0"
Feb 16 20:57:33.076633 master-0 kubenswrapper[7926]: I0216 20:57:33.076570 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6701f93-e666-4aaf-b1b4-b4464c586a24-config" (OuterVolumeSpecName: "config") pod "e6701f93-e666-4aaf-b1b4-b4464c586a24" (UID: "e6701f93-e666-4aaf-b1b4-b4464c586a24"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:57:33.082957 master-0 kubenswrapper[7926]: I0216 20:57:33.082898 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6701f93-e666-4aaf-b1b4-b4464c586a24-kube-api-access-l4k59" (OuterVolumeSpecName: "kube-api-access-l4k59") pod "e6701f93-e666-4aaf-b1b4-b4464c586a24" (UID: "e6701f93-e666-4aaf-b1b4-b4464c586a24"). InnerVolumeSpecName "kube-api-access-l4k59". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 20:57:33.115581 master-0 kubenswrapper[7926]: I0216 20:57:33.115439 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-zfldn" podStartSLOduration=2.115412806 podStartE2EDuration="2.115412806s" podCreationTimestamp="2026-02-16 20:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:57:33.097952366 +0000 UTC m=+24.732852666" watchObservedRunningTime="2026-02-16 20:57:33.115412806 +0000 UTC m=+24.750313106"
Feb 16 20:57:33.170687 master-0 kubenswrapper[7926]: I0216 20:57:33.169923 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d416d98-ee7c-4481-9721-861ccd91685d-kube-api-access\") pod \"installer-1-master-0\" (UID: \"3d416d98-ee7c-4481-9721-861ccd91685d\") " pod="openshift-etcd/installer-1-master-0"
Feb 16 20:57:33.176836 master-0 kubenswrapper[7926]: I0216 20:57:33.176779 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scd4p\" (UniqueName: \"kubernetes.io/projected/d73c2079-07e1-4465-83eb-5d39a04baf7d-kube-api-access-scd4p\") pod \"d73c2079-07e1-4465-83eb-5d39a04baf7d\" (UID: \"d73c2079-07e1-4465-83eb-5d39a04baf7d\") "
Feb 16 20:57:33.176939 master-0 kubenswrapper[7926]: I0216 20:57:33.176840 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d73c2079-07e1-4465-83eb-5d39a04baf7d-serving-cert\") pod \"d73c2079-07e1-4465-83eb-5d39a04baf7d\" (UID: \"d73c2079-07e1-4465-83eb-5d39a04baf7d\") "
Feb 16 20:57:33.176939 master-0 kubenswrapper[7926]: I0216 20:57:33.176897 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-config\") pod \"d73c2079-07e1-4465-83eb-5d39a04baf7d\" (UID: \"d73c2079-07e1-4465-83eb-5d39a04baf7d\") "
Feb 16 20:57:33.177038 master-0 kubenswrapper[7926]: I0216 20:57:33.176978 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-proxy-ca-bundles\") pod \"d73c2079-07e1-4465-83eb-5d39a04baf7d\" (UID: \"d73c2079-07e1-4465-83eb-5d39a04baf7d\") "
Feb 16 20:57:33.177679 master-0 kubenswrapper[7926]: I0216 20:57:33.177623 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d73c2079-07e1-4465-83eb-5d39a04baf7d" (UID: "d73c2079-07e1-4465-83eb-5d39a04baf7d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:57:33.178082 master-0 kubenswrapper[7926]: I0216 20:57:33.178051 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-config" (OuterVolumeSpecName: "config") pod "d73c2079-07e1-4465-83eb-5d39a04baf7d" (UID: "d73c2079-07e1-4465-83eb-5d39a04baf7d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 20:57:33.178127 master-0 kubenswrapper[7926]: I0216 20:57:33.178109 7926 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-config\") on node \"master-0\" DevicePath \"\""
Feb 16 20:57:33.178127 master-0 kubenswrapper[7926]: I0216 20:57:33.178122 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4k59\" (UniqueName: \"kubernetes.io/projected/e6701f93-e666-4aaf-b1b4-b4464c586a24-kube-api-access-l4k59\") on node \"master-0\" DevicePath \"\""
Feb 16 20:57:33.178191 master-0 kubenswrapper[7926]: I0216 20:57:33.178137 7926 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Feb 16 20:57:33.178191 master-0 kubenswrapper[7926]: I0216 20:57:33.178151 7926 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6701f93-e666-4aaf-b1b4-b4464c586a24-config\") on node \"master-0\" DevicePath \"\""
Feb 16 20:57:33.179255 master-0 kubenswrapper[7926]: I0216 20:57:33.179209 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d73c2079-07e1-4465-83eb-5d39a04baf7d-kube-api-access-scd4p" (OuterVolumeSpecName: "kube-api-access-scd4p") pod "d73c2079-07e1-4465-83eb-5d39a04baf7d" (UID: "d73c2079-07e1-4465-83eb-5d39a04baf7d"). InnerVolumeSpecName "kube-api-access-scd4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 20:57:33.180318 master-0 kubenswrapper[7926]: I0216 20:57:33.180240 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d73c2079-07e1-4465-83eb-5d39a04baf7d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d73c2079-07e1-4465-83eb-5d39a04baf7d" (UID: "d73c2079-07e1-4465-83eb-5d39a04baf7d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 20:57:33.238208 master-0 kubenswrapper[7926]: I0216 20:57:33.238076 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Feb 16 20:57:33.280097 master-0 kubenswrapper[7926]: I0216 20:57:33.279602 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scd4p\" (UniqueName: \"kubernetes.io/projected/d73c2079-07e1-4465-83eb-5d39a04baf7d-kube-api-access-scd4p\") on node \"master-0\" DevicePath \"\""
Feb 16 20:57:33.280097 master-0 kubenswrapper[7926]: I0216 20:57:33.279638 7926 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d73c2079-07e1-4465-83eb-5d39a04baf7d-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 16 20:57:33.375937 master-0 kubenswrapper[7926]: I0216 20:57:33.375897 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-7bbrn"]
Feb 16 20:57:33.384035 master-0 kubenswrapper[7926]: W0216 20:57:33.384003 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2dcfb4b8_1d96_4597_8e76_5c0c3a47c4cf.slice/crio-0334ad8c418e31c648e8c938f60c3ae9cf4f68761e776bef5ada2bade3f88833 WatchSource:0}: Error finding container 0334ad8c418e31c648e8c938f60c3ae9cf4f68761e776bef5ada2bade3f88833: Status 404 returned error can't find the container with id 0334ad8c418e31c648e8c938f60c3ae9cf4f68761e776bef5ada2bade3f88833
Feb 16 20:57:33.629092 master-0 kubenswrapper[7926]: I0216 20:57:33.629014 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"]
Feb 16 20:57:33.784026 master-0 kubenswrapper[7926]: I0216 20:57:33.783980 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-client-ca\") pod \"controller-manager-7585c94cb9-9n49k\" (UID: \"d73c2079-07e1-4465-83eb-5d39a04baf7d\") " pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k"
Feb 16 20:57:33.784191 master-0 kubenswrapper[7926]: E0216 20:57:33.784101 7926 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 16 20:57:33.784191 master-0 kubenswrapper[7926]: E0216 20:57:33.784162 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-client-ca podName:d73c2079-07e1-4465-83eb-5d39a04baf7d nodeName:}" failed. No retries permitted until 2026-02-16 20:57:49.784147177 +0000 UTC m=+41.419047477 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-client-ca") pod "controller-manager-7585c94cb9-9n49k" (UID: "d73c2079-07e1-4465-83eb-5d39a04baf7d") : configmap "client-ca" not found
Feb 16 20:57:34.041362 master-0 kubenswrapper[7926]: I0216 20:57:34.041287 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7bbrn" event={"ID":"2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf","Type":"ContainerStarted","Data":"0334ad8c418e31c648e8c938f60c3ae9cf4f68761e776bef5ada2bade3f88833"}
Feb 16 20:57:34.042366 master-0 kubenswrapper[7926]: I0216 20:57:34.042333 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"3d416d98-ee7c-4481-9721-861ccd91685d","Type":"ContainerStarted","Data":"8bbcb4e0fb94b168b2c18c0ad45486fda3e89c4340348d1ee5d8cea24b562c67"}
Feb 16 20:57:34.042366 master-0 kubenswrapper[7926]: I0216 20:57:34.042359 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"3d416d98-ee7c-4481-9721-861ccd91685d","Type":"ContainerStarted","Data":"363e6d9151e8f74d699facea1b9fd8436a80e76af370ce89bfd959fd35f30873"}
Feb 16 20:57:34.042495 master-0 kubenswrapper[7926]: I0216 20:57:34.042388 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2"
Feb 16 20:57:34.042495 master-0 kubenswrapper[7926]: I0216 20:57:34.042388 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7585c94cb9-9n49k"
Feb 16 20:57:34.059713 master-0 kubenswrapper[7926]: I0216 20:57:34.059593 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=2.059563452 podStartE2EDuration="2.059563452s" podCreationTimestamp="2026-02-16 20:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:57:34.05738865 +0000 UTC m=+25.692288960" watchObservedRunningTime="2026-02-16 20:57:34.059563452 +0000 UTC m=+25.694463752"
Feb 16 20:57:34.097505 master-0 kubenswrapper[7926]: I0216 20:57:34.097395 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj"]
Feb 16 20:57:34.098183 master-0 kubenswrapper[7926]: I0216 20:57:34.098090 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2"]
Feb 16 20:57:34.098183 master-0 kubenswrapper[7926]: I0216 20:57:34.098187 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj"
Feb 16 20:57:34.101064 master-0 kubenswrapper[7926]: I0216 20:57:34.101018 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 16 20:57:34.101496 master-0 kubenswrapper[7926]: I0216 20:57:34.101462 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 20:57:34.101603 master-0 kubenswrapper[7926]: I0216 20:57:34.101557 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 20:57:34.101830 master-0 kubenswrapper[7926]: I0216 20:57:34.101807 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 20:57:34.101935 master-0 kubenswrapper[7926]: I0216 20:57:34.101610 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 20:57:34.102279 master-0 kubenswrapper[7926]: I0216 20:57:34.102216 7926 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2"]
Feb 16 20:57:34.115722 master-0 kubenswrapper[7926]: I0216 20:57:34.115560 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj"]
Feb 16 20:57:34.155669 master-0 kubenswrapper[7926]: I0216 20:57:34.155566 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7585c94cb9-9n49k"]
Feb 16 20:57:34.156594 master-0 kubenswrapper[7926]: I0216 20:57:34.156542 7926 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7585c94cb9-9n49k"]
Feb 16 20:57:34.188282 master-0 kubenswrapper[7926]: I0216
20:57:34.188154 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-client-ca\") pod \"route-controller-manager-89c945d44-2smzj\" (UID: \"34eb2829-2e5d-455d-9218-cad202a49e30\") " pod="openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj" Feb 16 20:57:34.188282 master-0 kubenswrapper[7926]: I0216 20:57:34.188219 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34eb2829-2e5d-455d-9218-cad202a49e30-serving-cert\") pod \"route-controller-manager-89c945d44-2smzj\" (UID: \"34eb2829-2e5d-455d-9218-cad202a49e30\") " pod="openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj" Feb 16 20:57:34.188521 master-0 kubenswrapper[7926]: I0216 20:57:34.188297 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsqnr\" (UniqueName: \"kubernetes.io/projected/34eb2829-2e5d-455d-9218-cad202a49e30-kube-api-access-fsqnr\") pod \"route-controller-manager-89c945d44-2smzj\" (UID: \"34eb2829-2e5d-455d-9218-cad202a49e30\") " pod="openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj" Feb 16 20:57:34.188521 master-0 kubenswrapper[7926]: I0216 20:57:34.188333 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-config\") pod \"route-controller-manager-89c945d44-2smzj\" (UID: \"34eb2829-2e5d-455d-9218-cad202a49e30\") " pod="openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj" Feb 16 20:57:34.188521 master-0 kubenswrapper[7926]: I0216 20:57:34.188382 7926 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e6701f93-e666-4aaf-b1b4-b4464c586a24-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:34.188521 master-0 kubenswrapper[7926]: I0216 20:57:34.188397 7926 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d73c2079-07e1-4465-83eb-5d39a04baf7d-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:34.188521 master-0 kubenswrapper[7926]: I0216 20:57:34.188411 7926 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6701f93-e666-4aaf-b1b4-b4464c586a24-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:34.289926 master-0 kubenswrapper[7926]: I0216 20:57:34.289848 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-client-ca\") pod \"route-controller-manager-89c945d44-2smzj\" (UID: \"34eb2829-2e5d-455d-9218-cad202a49e30\") " pod="openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj" Feb 16 20:57:34.289926 master-0 kubenswrapper[7926]: I0216 20:57:34.289921 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34eb2829-2e5d-455d-9218-cad202a49e30-serving-cert\") pod \"route-controller-manager-89c945d44-2smzj\" (UID: \"34eb2829-2e5d-455d-9218-cad202a49e30\") " pod="openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj" Feb 16 20:57:34.290291 master-0 kubenswrapper[7926]: I0216 20:57:34.289993 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsqnr\" (UniqueName: \"kubernetes.io/projected/34eb2829-2e5d-455d-9218-cad202a49e30-kube-api-access-fsqnr\") pod \"route-controller-manager-89c945d44-2smzj\" (UID: \"34eb2829-2e5d-455d-9218-cad202a49e30\") " 
pod="openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj" Feb 16 20:57:34.290291 master-0 kubenswrapper[7926]: I0216 20:57:34.290025 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-config\") pod \"route-controller-manager-89c945d44-2smzj\" (UID: \"34eb2829-2e5d-455d-9218-cad202a49e30\") " pod="openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj" Feb 16 20:57:34.291128 master-0 kubenswrapper[7926]: I0216 20:57:34.291097 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-config\") pod \"route-controller-manager-89c945d44-2smzj\" (UID: \"34eb2829-2e5d-455d-9218-cad202a49e30\") " pod="openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj" Feb 16 20:57:34.291242 master-0 kubenswrapper[7926]: E0216 20:57:34.291176 7926 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 16 20:57:34.291398 master-0 kubenswrapper[7926]: E0216 20:57:34.291365 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-client-ca podName:34eb2829-2e5d-455d-9218-cad202a49e30 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:34.791325529 +0000 UTC m=+26.426225869 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-client-ca") pod "route-controller-manager-89c945d44-2smzj" (UID: "34eb2829-2e5d-455d-9218-cad202a49e30") : configmap "client-ca" not found Feb 16 20:57:34.295103 master-0 kubenswrapper[7926]: I0216 20:57:34.294797 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34eb2829-2e5d-455d-9218-cad202a49e30-serving-cert\") pod \"route-controller-manager-89c945d44-2smzj\" (UID: \"34eb2829-2e5d-455d-9218-cad202a49e30\") " pod="openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj" Feb 16 20:57:34.308755 master-0 kubenswrapper[7926]: I0216 20:57:34.308682 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsqnr\" (UniqueName: \"kubernetes.io/projected/34eb2829-2e5d-455d-9218-cad202a49e30-kube-api-access-fsqnr\") pod \"route-controller-manager-89c945d44-2smzj\" (UID: \"34eb2829-2e5d-455d-9218-cad202a49e30\") " pod="openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj" Feb 16 20:57:34.604984 master-0 kubenswrapper[7926]: I0216 20:57:34.604908 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw"] Feb 16 20:57:34.605312 master-0 kubenswrapper[7926]: I0216 20:57:34.605255 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" podUID="3a012b98-9341-41a3-9321-0a099f8bb9da" containerName="cluster-version-operator" containerID="cri-o://de2563beb136a0bbda40935e3b66cf97e3510fb56b6c3e8e1dcda8f4301e2a47" gracePeriod=130 Feb 16 20:57:34.743422 master-0 kubenswrapper[7926]: I0216 20:57:34.743366 7926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d73c2079-07e1-4465-83eb-5d39a04baf7d" 
path="/var/lib/kubelet/pods/d73c2079-07e1-4465-83eb-5d39a04baf7d/volumes" Feb 16 20:57:34.743898 master-0 kubenswrapper[7926]: I0216 20:57:34.743729 7926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6701f93-e666-4aaf-b1b4-b4464c586a24" path="/var/lib/kubelet/pods/e6701f93-e666-4aaf-b1b4-b4464c586a24/volumes" Feb 16 20:57:34.798475 master-0 kubenswrapper[7926]: I0216 20:57:34.798397 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-client-ca\") pod \"route-controller-manager-89c945d44-2smzj\" (UID: \"34eb2829-2e5d-455d-9218-cad202a49e30\") " pod="openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj" Feb 16 20:57:34.798718 master-0 kubenswrapper[7926]: E0216 20:57:34.798667 7926 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 16 20:57:34.798798 master-0 kubenswrapper[7926]: E0216 20:57:34.798761 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-client-ca podName:34eb2829-2e5d-455d-9218-cad202a49e30 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:35.798735098 +0000 UTC m=+27.433635408 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-client-ca") pod "route-controller-manager-89c945d44-2smzj" (UID: "34eb2829-2e5d-455d-9218-cad202a49e30") : configmap "client-ca" not found Feb 16 20:57:35.818930 master-0 kubenswrapper[7926]: I0216 20:57:35.818142 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-client-ca\") pod \"route-controller-manager-89c945d44-2smzj\" (UID: \"34eb2829-2e5d-455d-9218-cad202a49e30\") " pod="openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj" Feb 16 20:57:35.818930 master-0 kubenswrapper[7926]: E0216 20:57:35.818324 7926 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 16 20:57:35.818930 master-0 kubenswrapper[7926]: E0216 20:57:35.818376 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-client-ca podName:34eb2829-2e5d-455d-9218-cad202a49e30 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:37.818359291 +0000 UTC m=+29.453259581 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-client-ca") pod "route-controller-manager-89c945d44-2smzj" (UID: "34eb2829-2e5d-455d-9218-cad202a49e30") : configmap "client-ca" not found Feb 16 20:57:36.066320 master-0 kubenswrapper[7926]: I0216 20:57:36.066236 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7bbrn" event={"ID":"2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf","Type":"ContainerStarted","Data":"a3c9bdb5c46c570dfeafe9033e115957d8dc64e9abc1e952434f1790e1d55ed5"} Feb 16 20:57:36.067898 master-0 kubenswrapper[7926]: I0216 20:57:36.067860 7926 generic.go:334] "Generic (PLEG): container finished" podID="3a012b98-9341-41a3-9321-0a099f8bb9da" containerID="de2563beb136a0bbda40935e3b66cf97e3510fb56b6c3e8e1dcda8f4301e2a47" exitCode=0 Feb 16 20:57:36.067898 master-0 kubenswrapper[7926]: I0216 20:57:36.067897 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" event={"ID":"3a012b98-9341-41a3-9321-0a099f8bb9da","Type":"ContainerDied","Data":"de2563beb136a0bbda40935e3b66cf97e3510fb56b6c3e8e1dcda8f4301e2a47"} Feb 16 20:57:36.286586 master-0 kubenswrapper[7926]: I0216 20:57:36.286529 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56b4b57b4f-5nr85"] Feb 16 20:57:36.287205 master-0 kubenswrapper[7926]: I0216 20:57:36.287180 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56b4b57b4f-5nr85" Feb 16 20:57:36.289203 master-0 kubenswrapper[7926]: I0216 20:57:36.289151 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 20:57:36.289553 master-0 kubenswrapper[7926]: I0216 20:57:36.289504 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 20:57:36.289553 master-0 kubenswrapper[7926]: I0216 20:57:36.289538 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 20:57:36.290019 master-0 kubenswrapper[7926]: I0216 20:57:36.289978 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 20:57:36.292258 master-0 kubenswrapper[7926]: I0216 20:57:36.292217 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 20:57:36.295689 master-0 kubenswrapper[7926]: I0216 20:57:36.295624 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56b4b57b4f-5nr85"] Feb 16 20:57:36.297778 master-0 kubenswrapper[7926]: I0216 20:57:36.297751 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 20:57:36.323867 master-0 kubenswrapper[7926]: I0216 20:57:36.323817 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-proxy-ca-bundles\") pod \"controller-manager-56b4b57b4f-5nr85\" (UID: \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\") " pod="openshift-controller-manager/controller-manager-56b4b57b4f-5nr85" Feb 16 20:57:36.324102 master-0 kubenswrapper[7926]: I0216 20:57:36.323904 7926 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-config\") pod \"controller-manager-56b4b57b4f-5nr85\" (UID: \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\") " pod="openshift-controller-manager/controller-manager-56b4b57b4f-5nr85" Feb 16 20:57:36.324102 master-0 kubenswrapper[7926]: I0216 20:57:36.323936 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk65s\" (UniqueName: \"kubernetes.io/projected/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-kube-api-access-jk65s\") pod \"controller-manager-56b4b57b4f-5nr85\" (UID: \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\") " pod="openshift-controller-manager/controller-manager-56b4b57b4f-5nr85" Feb 16 20:57:36.324246 master-0 kubenswrapper[7926]: I0216 20:57:36.324207 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-serving-cert\") pod \"controller-manager-56b4b57b4f-5nr85\" (UID: \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\") " pod="openshift-controller-manager/controller-manager-56b4b57b4f-5nr85" Feb 16 20:57:36.324285 master-0 kubenswrapper[7926]: I0216 20:57:36.324256 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-client-ca\") pod \"controller-manager-56b4b57b4f-5nr85\" (UID: \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\") " pod="openshift-controller-manager/controller-manager-56b4b57b4f-5nr85" Feb 16 20:57:36.426196 master-0 kubenswrapper[7926]: I0216 20:57:36.426135 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-serving-cert\") pod 
\"controller-manager-56b4b57b4f-5nr85\" (UID: \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\") " pod="openshift-controller-manager/controller-manager-56b4b57b4f-5nr85" Feb 16 20:57:36.426196 master-0 kubenswrapper[7926]: I0216 20:57:36.426183 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-client-ca\") pod \"controller-manager-56b4b57b4f-5nr85\" (UID: \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\") " pod="openshift-controller-manager/controller-manager-56b4b57b4f-5nr85" Feb 16 20:57:36.426526 master-0 kubenswrapper[7926]: I0216 20:57:36.426500 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-proxy-ca-bundles\") pod \"controller-manager-56b4b57b4f-5nr85\" (UID: \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\") " pod="openshift-controller-manager/controller-manager-56b4b57b4f-5nr85" Feb 16 20:57:36.426809 master-0 kubenswrapper[7926]: E0216 20:57:36.426633 7926 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 20:57:36.427856 master-0 kubenswrapper[7926]: I0216 20:57:36.426880 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-config\") pod \"controller-manager-56b4b57b4f-5nr85\" (UID: \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\") " pod="openshift-controller-manager/controller-manager-56b4b57b4f-5nr85" Feb 16 20:57:36.427856 master-0 kubenswrapper[7926]: E0216 20:57:36.426957 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-client-ca podName:47a4e44e-ec4a-4e8c-a968-a37c81771bfc nodeName:}" failed. 
No retries permitted until 2026-02-16 20:57:36.926912642 +0000 UTC m=+28.561812992 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-client-ca") pod "controller-manager-56b4b57b4f-5nr85" (UID: "47a4e44e-ec4a-4e8c-a968-a37c81771bfc") : configmap "client-ca" not found Feb 16 20:57:36.428165 master-0 kubenswrapper[7926]: I0216 20:57:36.427890 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk65s\" (UniqueName: \"kubernetes.io/projected/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-kube-api-access-jk65s\") pod \"controller-manager-56b4b57b4f-5nr85\" (UID: \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\") " pod="openshift-controller-manager/controller-manager-56b4b57b4f-5nr85" Feb 16 20:57:36.428165 master-0 kubenswrapper[7926]: I0216 20:57:36.428096 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-proxy-ca-bundles\") pod \"controller-manager-56b4b57b4f-5nr85\" (UID: \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\") " pod="openshift-controller-manager/controller-manager-56b4b57b4f-5nr85" Feb 16 20:57:36.428410 master-0 kubenswrapper[7926]: I0216 20:57:36.428367 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-config\") pod \"controller-manager-56b4b57b4f-5nr85\" (UID: \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\") " pod="openshift-controller-manager/controller-manager-56b4b57b4f-5nr85" Feb 16 20:57:36.432265 master-0 kubenswrapper[7926]: I0216 20:57:36.432015 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-serving-cert\") pod \"controller-manager-56b4b57b4f-5nr85\" (UID: 
\"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\") " pod="openshift-controller-manager/controller-manager-56b4b57b4f-5nr85" Feb 16 20:57:36.452670 master-0 kubenswrapper[7926]: I0216 20:57:36.452589 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk65s\" (UniqueName: \"kubernetes.io/projected/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-kube-api-access-jk65s\") pod \"controller-manager-56b4b57b4f-5nr85\" (UID: \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\") " pod="openshift-controller-manager/controller-manager-56b4b57b4f-5nr85" Feb 16 20:57:36.501779 master-0 kubenswrapper[7926]: I0216 20:57:36.501714 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg"] Feb 16 20:57:36.505005 master-0 kubenswrapper[7926]: I0216 20:57:36.502640 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 20:57:36.509714 master-0 kubenswrapper[7926]: I0216 20:57:36.509130 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 16 20:57:36.509714 master-0 kubenswrapper[7926]: I0216 20:57:36.509415 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Feb 16 20:57:36.510070 master-0 kubenswrapper[7926]: I0216 20:57:36.510015 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Feb 16 20:57:36.511736 master-0 kubenswrapper[7926]: I0216 20:57:36.511284 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg"] Feb 16 20:57:36.514229 master-0 kubenswrapper[7926]: I0216 20:57:36.513831 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Feb 16 20:57:36.595944 master-0 kubenswrapper[7926]: I0216 
20:57:36.595710 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g"] Feb 16 20:57:36.596548 master-0 kubenswrapper[7926]: I0216 20:57:36.596514 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 20:57:36.598403 master-0 kubenswrapper[7926]: I0216 20:57:36.598351 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Feb 16 20:57:36.598633 master-0 kubenswrapper[7926]: I0216 20:57:36.598559 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Feb 16 20:57:36.605771 master-0 kubenswrapper[7926]: I0216 20:57:36.605716 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g"] Feb 16 20:57:36.606154 master-0 kubenswrapper[7926]: I0216 20:57:36.606125 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 16 20:57:36.634397 master-0 kubenswrapper[7926]: I0216 20:57:36.634335 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/e8194cdc-3133-49e2-9579-a747c0bf2b16-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 20:57:36.634397 master-0 kubenswrapper[7926]: I0216 20:57:36.634398 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxtft\" (UniqueName: \"kubernetes.io/projected/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-kube-api-access-vxtft\") 
pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: \"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 20:57:36.634545 master-0 kubenswrapper[7926]: I0216 20:57:36.634478 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: \"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 20:57:36.634545 master-0 kubenswrapper[7926]: I0216 20:57:36.634527 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-cache\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: \"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 20:57:36.634669 master-0 kubenswrapper[7926]: I0216 20:57:36.634607 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/e8194cdc-3133-49e2-9579-a747c0bf2b16-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 20:57:36.634730 master-0 kubenswrapper[7926]: I0216 20:57:36.634673 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: 
\"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 20:57:36.634730 master-0 kubenswrapper[7926]: I0216 20:57:36.634719 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/e8194cdc-3133-49e2-9579-a747c0bf2b16-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 20:57:36.634808 master-0 kubenswrapper[7926]: I0216 20:57:36.634737 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e8194cdc-3133-49e2-9579-a747c0bf2b16-cache\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 20:57:36.634853 master-0 kubenswrapper[7926]: I0216 20:57:36.634830 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/e8194cdc-3133-49e2-9579-a747c0bf2b16-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 20:57:36.634989 master-0 kubenswrapper[7926]: I0216 20:57:36.634873 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxvhm\" (UniqueName: \"kubernetes.io/projected/e8194cdc-3133-49e2-9579-a747c0bf2b16-kube-api-access-hxvhm\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 20:57:36.635032 master-0 
kubenswrapper[7926]: I0216 20:57:36.634986 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: \"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 20:57:36.689337 master-0 kubenswrapper[7926]: I0216 20:57:36.689295 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:57:36.736155 master-0 kubenswrapper[7926]: I0216 20:57:36.736110 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3a012b98-9341-41a3-9321-0a099f8bb9da-etc-ssl-certs\") pod \"3a012b98-9341-41a3-9321-0a099f8bb9da\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " Feb 16 20:57:36.736256 master-0 kubenswrapper[7926]: I0216 20:57:36.736177 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert\") pod \"3a012b98-9341-41a3-9321-0a099f8bb9da\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " Feb 16 20:57:36.736256 master-0 kubenswrapper[7926]: I0216 20:57:36.736198 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3a012b98-9341-41a3-9321-0a099f8bb9da-etc-cvo-updatepayloads\") pod \"3a012b98-9341-41a3-9321-0a099f8bb9da\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " Feb 16 20:57:36.736256 master-0 kubenswrapper[7926]: I0216 20:57:36.736227 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/3a012b98-9341-41a3-9321-0a099f8bb9da-service-ca\") pod \"3a012b98-9341-41a3-9321-0a099f8bb9da\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " Feb 16 20:57:36.736343 master-0 kubenswrapper[7926]: I0216 20:57:36.736255 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a012b98-9341-41a3-9321-0a099f8bb9da-kube-api-access\") pod \"3a012b98-9341-41a3-9321-0a099f8bb9da\" (UID: \"3a012b98-9341-41a3-9321-0a099f8bb9da\") " Feb 16 20:57:36.736421 master-0 kubenswrapper[7926]: I0216 20:57:36.736393 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/e8194cdc-3133-49e2-9579-a747c0bf2b16-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 20:57:36.736460 master-0 kubenswrapper[7926]: I0216 20:57:36.736427 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxtft\" (UniqueName: \"kubernetes.io/projected/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-kube-api-access-vxtft\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: \"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 20:57:36.736460 master-0 kubenswrapper[7926]: I0216 20:57:36.736453 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: \"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 20:57:36.736521 master-0 kubenswrapper[7926]: I0216 
20:57:36.736468 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/e8194cdc-3133-49e2-9579-a747c0bf2b16-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 20:57:36.736521 master-0 kubenswrapper[7926]: I0216 20:57:36.736486 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-cache\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: \"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 20:57:36.736521 master-0 kubenswrapper[7926]: I0216 20:57:36.736504 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: \"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 20:57:36.736701 master-0 kubenswrapper[7926]: I0216 20:57:36.736527 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/e8194cdc-3133-49e2-9579-a747c0bf2b16-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 20:57:36.736701 master-0 kubenswrapper[7926]: I0216 20:57:36.736542 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e8194cdc-3133-49e2-9579-a747c0bf2b16-cache\") pod 
\"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 20:57:36.736701 master-0 kubenswrapper[7926]: I0216 20:57:36.736591 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/e8194cdc-3133-49e2-9579-a747c0bf2b16-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 20:57:36.736701 master-0 kubenswrapper[7926]: I0216 20:57:36.736610 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxvhm\" (UniqueName: \"kubernetes.io/projected/e8194cdc-3133-49e2-9579-a747c0bf2b16-kube-api-access-hxvhm\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 20:57:36.736701 master-0 kubenswrapper[7926]: I0216 20:57:36.736628 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: \"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 20:57:36.736965 master-0 kubenswrapper[7926]: I0216 20:57:36.736936 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: \"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 
20:57:36.737024 master-0 kubenswrapper[7926]: I0216 20:57:36.737006 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a012b98-9341-41a3-9321-0a099f8bb9da-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "3a012b98-9341-41a3-9321-0a099f8bb9da" (UID: "3a012b98-9341-41a3-9321-0a099f8bb9da"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:57:36.738050 master-0 kubenswrapper[7926]: I0216 20:57:36.738001 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a012b98-9341-41a3-9321-0a099f8bb9da-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "3a012b98-9341-41a3-9321-0a099f8bb9da" (UID: "3a012b98-9341-41a3-9321-0a099f8bb9da"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:57:36.738681 master-0 kubenswrapper[7926]: I0216 20:57:36.738577 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-cache\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: \"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 20:57:36.738681 master-0 kubenswrapper[7926]: I0216 20:57:36.738575 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/e8194cdc-3133-49e2-9579-a747c0bf2b16-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 20:57:36.738681 master-0 kubenswrapper[7926]: I0216 20:57:36.738657 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: 
\"kubernetes.io/host-path/e8194cdc-3133-49e2-9579-a747c0bf2b16-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 20:57:36.738833 master-0 kubenswrapper[7926]: I0216 20:57:36.738811 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: \"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 20:57:36.739412 master-0 kubenswrapper[7926]: I0216 20:57:36.739281 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a012b98-9341-41a3-9321-0a099f8bb9da-service-ca" (OuterVolumeSpecName: "service-ca") pod "3a012b98-9341-41a3-9321-0a099f8bb9da" (UID: "3a012b98-9341-41a3-9321-0a099f8bb9da"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:57:36.739412 master-0 kubenswrapper[7926]: I0216 20:57:36.739334 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e8194cdc-3133-49e2-9579-a747c0bf2b16-cache\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 20:57:36.740438 master-0 kubenswrapper[7926]: I0216 20:57:36.740397 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3a012b98-9341-41a3-9321-0a099f8bb9da" (UID: "3a012b98-9341-41a3-9321-0a099f8bb9da"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:57:36.741037 master-0 kubenswrapper[7926]: I0216 20:57:36.740985 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a012b98-9341-41a3-9321-0a099f8bb9da-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3a012b98-9341-41a3-9321-0a099f8bb9da" (UID: "3a012b98-9341-41a3-9321-0a099f8bb9da"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:57:36.743420 master-0 kubenswrapper[7926]: I0216 20:57:36.743390 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: \"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 20:57:36.743896 master-0 kubenswrapper[7926]: I0216 20:57:36.743857 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/e8194cdc-3133-49e2-9579-a747c0bf2b16-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 20:57:36.750483 master-0 kubenswrapper[7926]: I0216 20:57:36.750434 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/e8194cdc-3133-49e2-9579-a747c0bf2b16-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 20:57:36.754999 master-0 kubenswrapper[7926]: I0216 20:57:36.754954 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hxvhm\" (UniqueName: \"kubernetes.io/projected/e8194cdc-3133-49e2-9579-a747c0bf2b16-kube-api-access-hxvhm\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 20:57:36.763954 master-0 kubenswrapper[7926]: I0216 20:57:36.763909 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxtft\" (UniqueName: \"kubernetes.io/projected/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-kube-api-access-vxtft\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: \"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 20:57:36.833379 master-0 kubenswrapper[7926]: I0216 20:57:36.833322 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 20:57:36.837739 master-0 kubenswrapper[7926]: I0216 20:57:36.837695 7926 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3a012b98-9341-41a3-9321-0a099f8bb9da-etc-ssl-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:36.837739 master-0 kubenswrapper[7926]: I0216 20:57:36.837735 7926 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a012b98-9341-41a3-9321-0a099f8bb9da-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:36.837853 master-0 kubenswrapper[7926]: I0216 20:57:36.837746 7926 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3a012b98-9341-41a3-9321-0a099f8bb9da-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:36.837853 master-0 kubenswrapper[7926]: I0216 20:57:36.837759 7926 reconciler_common.go:293] "Volume detached for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a012b98-9341-41a3-9321-0a099f8bb9da-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:36.837853 master-0 kubenswrapper[7926]: I0216 20:57:36.837769 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a012b98-9341-41a3-9321-0a099f8bb9da-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:36.915935 master-0 kubenswrapper[7926]: I0216 20:57:36.915877 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 20:57:36.939192 master-0 kubenswrapper[7926]: I0216 20:57:36.939151 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-client-ca\") pod \"controller-manager-56b4b57b4f-5nr85\" (UID: \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\") " pod="openshift-controller-manager/controller-manager-56b4b57b4f-5nr85" Feb 16 20:57:36.939412 master-0 kubenswrapper[7926]: E0216 20:57:36.939357 7926 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 20:57:36.939467 master-0 kubenswrapper[7926]: E0216 20:57:36.939416 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-client-ca podName:47a4e44e-ec4a-4e8c-a968-a37c81771bfc nodeName:}" failed. No retries permitted until 2026-02-16 20:57:37.939399826 +0000 UTC m=+29.574300116 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-client-ca") pod "controller-manager-56b4b57b4f-5nr85" (UID: "47a4e44e-ec4a-4e8c-a968-a37c81771bfc") : configmap "client-ca" not found Feb 16 20:57:37.073170 master-0 kubenswrapper[7926]: I0216 20:57:37.073105 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" event={"ID":"3a012b98-9341-41a3-9321-0a099f8bb9da","Type":"ContainerDied","Data":"967086e3afdf48136bba09dec7a50552d530cabf996c944b7aa7f47f1a0f30ff"} Feb 16 20:57:37.073170 master-0 kubenswrapper[7926]: I0216 20:57:37.073171 7926 scope.go:117] "RemoveContainer" containerID="de2563beb136a0bbda40935e3b66cf97e3510fb56b6c3e8e1dcda8f4301e2a47" Feb 16 20:57:37.073417 master-0 kubenswrapper[7926]: I0216 20:57:37.073277 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw" Feb 16 20:57:37.077560 master-0 kubenswrapper[7926]: I0216 20:57:37.077290 7926 generic.go:334] "Generic (PLEG): container finished" podID="d2501eec-47c8-47bc-b0c9-28d94c06075b" containerID="fac6599aca0de28d90bc133433b080122ce047275bd07a83287cf6be8f57463e" exitCode=0 Feb 16 20:57:37.077560 master-0 kubenswrapper[7926]: I0216 20:57:37.077363 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" event={"ID":"d2501eec-47c8-47bc-b0c9-28d94c06075b","Type":"ContainerDied","Data":"fac6599aca0de28d90bc133433b080122ce047275bd07a83287cf6be8f57463e"} Feb 16 20:57:37.080765 master-0 kubenswrapper[7926]: I0216 20:57:37.080697 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7bbrn" event={"ID":"2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf","Type":"ContainerStarted","Data":"9e2fd78f2965e851d3f9a8c562693cb34badc6c1a0ecf3b6d8362a8e34893103"} Feb 16 20:57:37.080958 master-0 
kubenswrapper[7926]: I0216 20:57:37.080856 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-7bbrn" Feb 16 20:57:37.088146 master-0 kubenswrapper[7926]: I0216 20:57:37.088107 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw"] Feb 16 20:57:37.091094 master-0 kubenswrapper[7926]: I0216 20:57:37.091068 7926 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw"] Feb 16 20:57:37.127077 master-0 kubenswrapper[7926]: I0216 20:57:37.127014 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-649c4f5445-n994s"] Feb 16 20:57:37.127294 master-0 kubenswrapper[7926]: E0216 20:57:37.127267 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a012b98-9341-41a3-9321-0a099f8bb9da" containerName="cluster-version-operator" Feb 16 20:57:37.127294 master-0 kubenswrapper[7926]: I0216 20:57:37.127287 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a012b98-9341-41a3-9321-0a099f8bb9da" containerName="cluster-version-operator" Feb 16 20:57:37.127419 master-0 kubenswrapper[7926]: I0216 20:57:37.127395 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a012b98-9341-41a3-9321-0a099f8bb9da" containerName="cluster-version-operator" Feb 16 20:57:37.127908 master-0 kubenswrapper[7926]: I0216 20:57:37.127866 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 20:57:37.130890 master-0 kubenswrapper[7926]: I0216 20:57:37.130447 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 20:57:37.130890 master-0 kubenswrapper[7926]: I0216 20:57:37.130687 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 20:57:37.130890 master-0 kubenswrapper[7926]: I0216 20:57:37.130702 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 20:57:37.134429 master-0 kubenswrapper[7926]: I0216 20:57:37.134311 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-7bbrn" podStartSLOduration=4.2854653129999996 podStartE2EDuration="6.134292978s" podCreationTimestamp="2026-02-16 20:57:31 +0000 UTC" firstStartedPulling="2026-02-16 20:57:33.38627118 +0000 UTC m=+25.021171480" lastFinishedPulling="2026-02-16 20:57:35.235098855 +0000 UTC m=+26.869999145" observedRunningTime="2026-02-16 20:57:37.130444059 +0000 UTC m=+28.765344369" watchObservedRunningTime="2026-02-16 20:57:37.134292978 +0000 UTC m=+28.769193278" Feb 16 20:57:37.210817 master-0 kubenswrapper[7926]: I0216 20:57:37.208840 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg"] Feb 16 20:57:37.215188 master-0 kubenswrapper[7926]: W0216 20:57:37.215125 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8194cdc_3133_49e2_9579_a747c0bf2b16.slice/crio-c6c5fc997a3d90f0f136390ca95bcbc1e110994ac3cdfcc2e3e8e90f78ca1dd9 WatchSource:0}: Error finding container c6c5fc997a3d90f0f136390ca95bcbc1e110994ac3cdfcc2e3e8e90f78ca1dd9: Status 404 returned error can't find the 
container with id c6c5fc997a3d90f0f136390ca95bcbc1e110994ac3cdfcc2e3e8e90f78ca1dd9 Feb 16 20:57:37.247175 master-0 kubenswrapper[7926]: I0216 20:57:37.247009 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 20:57:37.247175 master-0 kubenswrapper[7926]: I0216 20:57:37.247079 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 20:57:37.247175 master-0 kubenswrapper[7926]: I0216 20:57:37.247118 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-serving-cert\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 20:57:37.247175 master-0 kubenswrapper[7926]: I0216 20:57:37.247167 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-kube-api-access\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 20:57:37.247539 master-0 kubenswrapper[7926]: I0216 
20:57:37.247239 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-service-ca\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 20:57:37.292336 master-0 kubenswrapper[7926]: I0216 20:57:37.292281 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g"] Feb 16 20:57:37.317861 master-0 kubenswrapper[7926]: W0216 20:57:37.317810 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a986ba3_2aea_4133_a05b_f69d4e0d8d3b.slice/crio-33442d22098554ef2512c5bbab1d4a284aed4856345ee1eb8654ba065012ab94 WatchSource:0}: Error finding container 33442d22098554ef2512c5bbab1d4a284aed4856345ee1eb8654ba065012ab94: Status 404 returned error can't find the container with id 33442d22098554ef2512c5bbab1d4a284aed4856345ee1eb8654ba065012ab94 Feb 16 20:57:37.348462 master-0 kubenswrapper[7926]: I0216 20:57:37.348417 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-kube-api-access\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 20:57:37.348963 master-0 kubenswrapper[7926]: I0216 20:57:37.348723 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-service-ca\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " 
pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 20:57:37.348963 master-0 kubenswrapper[7926]: I0216 20:57:37.348868 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 20:57:37.348963 master-0 kubenswrapper[7926]: I0216 20:57:37.348891 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 20:57:37.348963 master-0 kubenswrapper[7926]: I0216 20:57:37.348931 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 20:57:37.349121 master-0 kubenswrapper[7926]: I0216 20:57:37.348978 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 20:57:37.349121 master-0 kubenswrapper[7926]: I0216 20:57:37.349062 7926 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-serving-cert\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 20:57:37.352061 master-0 kubenswrapper[7926]: I0216 20:57:37.351185 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-service-ca\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 20:57:37.352631 master-0 kubenswrapper[7926]: I0216 20:57:37.352388 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-serving-cert\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 20:57:37.367141 master-0 kubenswrapper[7926]: I0216 20:57:37.367108 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-kube-api-access\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 20:57:37.456243 master-0 kubenswrapper[7926]: I0216 20:57:37.456199 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 20:57:37.472206 master-0 kubenswrapper[7926]: W0216 20:57:37.471298 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5d4ac48_aed3_46b9_9b2a_d741121e05b4.slice/crio-b1c5e0970049830739dbde889218d9f83f1d9720ddba4de32c1b5bd6626ed51d WatchSource:0}: Error finding container b1c5e0970049830739dbde889218d9f83f1d9720ddba4de32c1b5bd6626ed51d: Status 404 returned error can't find the container with id b1c5e0970049830739dbde889218d9f83f1d9720ddba4de32c1b5bd6626ed51d Feb 16 20:57:37.857925 master-0 kubenswrapper[7926]: I0216 20:57:37.857733 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-client-ca\") pod \"route-controller-manager-89c945d44-2smzj\" (UID: \"34eb2829-2e5d-455d-9218-cad202a49e30\") " pod="openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj" Feb 16 20:57:37.857925 master-0 kubenswrapper[7926]: E0216 20:57:37.857847 7926 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 16 20:57:37.857925 master-0 kubenswrapper[7926]: E0216 20:57:37.857901 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-client-ca podName:34eb2829-2e5d-455d-9218-cad202a49e30 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:41.857883908 +0000 UTC m=+33.492784208 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-client-ca") pod "route-controller-manager-89c945d44-2smzj" (UID: "34eb2829-2e5d-455d-9218-cad202a49e30") : configmap "client-ca" not found Feb 16 20:57:37.959624 master-0 kubenswrapper[7926]: I0216 20:57:37.959393 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-client-ca\") pod \"controller-manager-56b4b57b4f-5nr85\" (UID: \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\") " pod="openshift-controller-manager/controller-manager-56b4b57b4f-5nr85" Feb 16 20:57:37.959624 master-0 kubenswrapper[7926]: E0216 20:57:37.959536 7926 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 20:57:37.959624 master-0 kubenswrapper[7926]: E0216 20:57:37.959597 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-client-ca podName:47a4e44e-ec4a-4e8c-a968-a37c81771bfc nodeName:}" failed. No retries permitted until 2026-02-16 20:57:39.959583006 +0000 UTC m=+31.594483296 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-client-ca") pod "controller-manager-56b4b57b4f-5nr85" (UID: "47a4e44e-ec4a-4e8c-a968-a37c81771bfc") : configmap "client-ca" not found Feb 16 20:57:38.086443 master-0 kubenswrapper[7926]: I0216 20:57:38.086382 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" event={"ID":"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b","Type":"ContainerStarted","Data":"b1ac78292de0a544c15af274111c4e933c90f41d601dad32fc19d3dacdb54345"} Feb 16 20:57:38.086443 master-0 kubenswrapper[7926]: I0216 20:57:38.086440 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" event={"ID":"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b","Type":"ContainerStarted","Data":"33442d22098554ef2512c5bbab1d4a284aed4856345ee1eb8654ba065012ab94"} Feb 16 20:57:38.090694 master-0 kubenswrapper[7926]: I0216 20:57:38.089452 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" event={"ID":"e8194cdc-3133-49e2-9579-a747c0bf2b16","Type":"ContainerStarted","Data":"a76963335874f22d97778041d73ee6a0a7e3ffd325f9fb8a457626be3c8e5238"} Feb 16 20:57:38.090694 master-0 kubenswrapper[7926]: I0216 20:57:38.089507 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" event={"ID":"e8194cdc-3133-49e2-9579-a747c0bf2b16","Type":"ContainerStarted","Data":"22968a9882928f70bec5424cc2346763d1decd6df62181dc2fb45946d7faa2c0"} Feb 16 20:57:38.090694 master-0 kubenswrapper[7926]: I0216 20:57:38.089526 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" 
event={"ID":"e8194cdc-3133-49e2-9579-a747c0bf2b16","Type":"ContainerStarted","Data":"c6c5fc997a3d90f0f136390ca95bcbc1e110994ac3cdfcc2e3e8e90f78ca1dd9"} Feb 16 20:57:38.090694 master-0 kubenswrapper[7926]: I0216 20:57:38.090607 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 20:57:38.092877 master-0 kubenswrapper[7926]: I0216 20:57:38.092346 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" event={"ID":"d2501eec-47c8-47bc-b0c9-28d94c06075b","Type":"ContainerStarted","Data":"0856f0c22b60435b85fb84c2015179efe4d4434ed38a1e900790fcd7531c6189"} Feb 16 20:57:38.092877 master-0 kubenswrapper[7926]: I0216 20:57:38.092373 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" event={"ID":"d2501eec-47c8-47bc-b0c9-28d94c06075b","Type":"ContainerStarted","Data":"4e6698adeb0259c7abcd8ca7be9fcd53fc2f448ac8a7d94023fba495185a15f8"} Feb 16 20:57:38.094481 master-0 kubenswrapper[7926]: I0216 20:57:38.094459 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" event={"ID":"a5d4ac48-aed3-46b9-9b2a-d741121e05b4","Type":"ContainerStarted","Data":"22be26c79a1d2adc3db5f6e113ba92cfcf47f9a286ce35fb6273d18f0ea1545e"} Feb 16 20:57:38.094636 master-0 kubenswrapper[7926]: I0216 20:57:38.094619 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" event={"ID":"a5d4ac48-aed3-46b9-9b2a-d741121e05b4","Type":"ContainerStarted","Data":"b1c5e0970049830739dbde889218d9f83f1d9720ddba4de32c1b5bd6626ed51d"} Feb 16 20:57:38.743606 master-0 kubenswrapper[7926]: I0216 20:57:38.743525 7926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a012b98-9341-41a3-9321-0a099f8bb9da" 
path="/var/lib/kubelet/pods/3a012b98-9341-41a3-9321-0a099f8bb9da/volumes" Feb 16 20:57:39.102488 master-0 kubenswrapper[7926]: I0216 20:57:39.102413 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" event={"ID":"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b","Type":"ContainerStarted","Data":"3ca24ad1f8d41b0227373cdca70f4d0ead865f343ffe91de92638dd9fb5c6f20"} Feb 16 20:57:39.525136 master-0 kubenswrapper[7926]: I0216 20:57:39.525052 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" podStartSLOduration=3.525032887 podStartE2EDuration="3.525032887s" podCreationTimestamp="2026-02-16 20:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:57:38.676486875 +0000 UTC m=+30.311387175" watchObservedRunningTime="2026-02-16 20:57:39.525032887 +0000 UTC m=+31.159933187" Feb 16 20:57:39.525386 master-0 kubenswrapper[7926]: I0216 20:57:39.525247 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" podStartSLOduration=2.525241373 podStartE2EDuration="2.525241373s" podCreationTimestamp="2026-02-16 20:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:57:39.512049585 +0000 UTC m=+31.146949945" watchObservedRunningTime="2026-02-16 20:57:39.525241373 +0000 UTC m=+31.160141673" Feb 16 20:57:39.789249 master-0 kubenswrapper[7926]: I0216 20:57:39.789069 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" podStartSLOduration=7.284702729 podStartE2EDuration="12.789047586s" podCreationTimestamp="2026-02-16 20:57:27 +0000 UTC" 
firstStartedPulling="2026-02-16 20:57:31.254560638 +0000 UTC m=+22.889460938" lastFinishedPulling="2026-02-16 20:57:36.758905495 +0000 UTC m=+28.393805795" observedRunningTime="2026-02-16 20:57:39.781982954 +0000 UTC m=+31.416883274" watchObservedRunningTime="2026-02-16 20:57:39.789047586 +0000 UTC m=+31.423947886" Feb 16 20:57:39.829711 master-0 kubenswrapper[7926]: I0216 20:57:39.828710 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" podStartSLOduration=3.82868617 podStartE2EDuration="3.82868617s" podCreationTimestamp="2026-02-16 20:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:57:39.822135932 +0000 UTC m=+31.457036242" watchObservedRunningTime="2026-02-16 20:57:39.82868617 +0000 UTC m=+31.463586470" Feb 16 20:57:39.839368 master-0 kubenswrapper[7926]: I0216 20:57:39.839257 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6"] Feb 16 20:57:39.841813 master-0 kubenswrapper[7926]: I0216 20:57:39.840102 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:39.854688 master-0 kubenswrapper[7926]: I0216 20:57:39.850569 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 20:57:39.854688 master-0 kubenswrapper[7926]: I0216 20:57:39.850743 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 20:57:39.854688 master-0 kubenswrapper[7926]: I0216 20:57:39.850798 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 20:57:39.854688 master-0 kubenswrapper[7926]: I0216 20:57:39.850866 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 20:57:39.854688 master-0 kubenswrapper[7926]: I0216 20:57:39.850807 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 20:57:39.854688 master-0 kubenswrapper[7926]: I0216 20:57:39.850963 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 20:57:39.854688 master-0 kubenswrapper[7926]: I0216 20:57:39.851028 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 20:57:39.854688 master-0 kubenswrapper[7926]: I0216 20:57:39.851042 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 20:57:39.854688 master-0 kubenswrapper[7926]: I0216 20:57:39.853840 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6"] Feb 16 20:57:39.892351 master-0 kubenswrapper[7926]: I0216 20:57:39.892284 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/bd49e653-3b42-4950-8f5f-2b2ecb683678-etcd-client\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:39.892351 master-0 kubenswrapper[7926]: I0216 20:57:39.892351 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd49e653-3b42-4950-8f5f-2b2ecb683678-trusted-ca-bundle\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:39.892609 master-0 kubenswrapper[7926]: I0216 20:57:39.892382 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf4qg\" (UniqueName: \"kubernetes.io/projected/bd49e653-3b42-4950-8f5f-2b2ecb683678-kube-api-access-kf4qg\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:39.892609 master-0 kubenswrapper[7926]: I0216 20:57:39.892429 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bd49e653-3b42-4950-8f5f-2b2ecb683678-etcd-serving-ca\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:39.892609 master-0 kubenswrapper[7926]: I0216 20:57:39.892447 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bd49e653-3b42-4950-8f5f-2b2ecb683678-audit-dir\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:39.892609 master-0 
kubenswrapper[7926]: I0216 20:57:39.892466 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd49e653-3b42-4950-8f5f-2b2ecb683678-serving-cert\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:39.892609 master-0 kubenswrapper[7926]: I0216 20:57:39.892510 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bd49e653-3b42-4950-8f5f-2b2ecb683678-encryption-config\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:39.892609 master-0 kubenswrapper[7926]: I0216 20:57:39.892526 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bd49e653-3b42-4950-8f5f-2b2ecb683678-audit-policies\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:39.993309 master-0 kubenswrapper[7926]: I0216 20:57:39.993251 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bd49e653-3b42-4950-8f5f-2b2ecb683678-etcd-serving-ca\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:39.993309 master-0 kubenswrapper[7926]: I0216 20:57:39.993301 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd49e653-3b42-4950-8f5f-2b2ecb683678-serving-cert\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: 
\"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:39.993309 master-0 kubenswrapper[7926]: I0216 20:57:39.993317 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bd49e653-3b42-4950-8f5f-2b2ecb683678-audit-dir\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:39.993683 master-0 kubenswrapper[7926]: I0216 20:57:39.993474 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-client-ca\") pod \"controller-manager-56b4b57b4f-5nr85\" (UID: \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\") " pod="openshift-controller-manager/controller-manager-56b4b57b4f-5nr85" Feb 16 20:57:39.993683 master-0 kubenswrapper[7926]: E0216 20:57:39.993550 7926 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 20:57:39.993683 master-0 kubenswrapper[7926]: E0216 20:57:39.993602 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-client-ca podName:47a4e44e-ec4a-4e8c-a968-a37c81771bfc nodeName:}" failed. No retries permitted until 2026-02-16 20:57:43.993587394 +0000 UTC m=+35.628487694 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-client-ca") pod "controller-manager-56b4b57b4f-5nr85" (UID: "47a4e44e-ec4a-4e8c-a968-a37c81771bfc") : configmap "client-ca" not found Feb 16 20:57:39.994511 master-0 kubenswrapper[7926]: I0216 20:57:39.994049 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bd49e653-3b42-4950-8f5f-2b2ecb683678-encryption-config\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:39.994511 master-0 kubenswrapper[7926]: I0216 20:57:39.994103 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bd49e653-3b42-4950-8f5f-2b2ecb683678-audit-policies\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:39.994511 master-0 kubenswrapper[7926]: I0216 20:57:39.994104 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bd49e653-3b42-4950-8f5f-2b2ecb683678-etcd-serving-ca\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:39.994511 master-0 kubenswrapper[7926]: I0216 20:57:39.994139 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bd49e653-3b42-4950-8f5f-2b2ecb683678-audit-dir\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:39.994511 master-0 kubenswrapper[7926]: I0216 20:57:39.994239 7926 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bd49e653-3b42-4950-8f5f-2b2ecb683678-etcd-client\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:39.994511 master-0 kubenswrapper[7926]: I0216 20:57:39.994270 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd49e653-3b42-4950-8f5f-2b2ecb683678-trusted-ca-bundle\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:39.994511 master-0 kubenswrapper[7926]: I0216 20:57:39.994489 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf4qg\" (UniqueName: \"kubernetes.io/projected/bd49e653-3b42-4950-8f5f-2b2ecb683678-kube-api-access-kf4qg\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:39.994862 master-0 kubenswrapper[7926]: I0216 20:57:39.994831 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bd49e653-3b42-4950-8f5f-2b2ecb683678-audit-policies\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:39.994910 master-0 kubenswrapper[7926]: I0216 20:57:39.994864 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd49e653-3b42-4950-8f5f-2b2ecb683678-trusted-ca-bundle\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 
20:57:39.998160 master-0 kubenswrapper[7926]: I0216 20:57:39.998118 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd49e653-3b42-4950-8f5f-2b2ecb683678-serving-cert\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:40.001035 master-0 kubenswrapper[7926]: I0216 20:57:40.000990 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bd49e653-3b42-4950-8f5f-2b2ecb683678-encryption-config\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:40.012663 master-0 kubenswrapper[7926]: I0216 20:57:40.012585 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bd49e653-3b42-4950-8f5f-2b2ecb683678-etcd-client\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:40.017129 master-0 kubenswrapper[7926]: I0216 20:57:40.017089 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf4qg\" (UniqueName: \"kubernetes.io/projected/bd49e653-3b42-4950-8f5f-2b2ecb683678-kube-api-access-kf4qg\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:40.106582 master-0 kubenswrapper[7926]: I0216 20:57:40.106441 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 20:57:40.177363 master-0 kubenswrapper[7926]: I0216 20:57:40.177305 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:40.431611 master-0 kubenswrapper[7926]: I0216 20:57:40.431430 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6"] Feb 16 20:57:40.865447 master-0 kubenswrapper[7926]: I0216 20:57:40.865344 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:40.865447 master-0 kubenswrapper[7926]: I0216 20:57:40.865419 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:40.952826 master-0 kubenswrapper[7926]: I0216 20:57:40.952711 7926 patch_prober.go:28] interesting pod/apiserver-6bdb76b9b7-z46x6 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 16 20:57:40.952826 master-0 kubenswrapper[7926]: [+]log ok Feb 16 20:57:40.952826 master-0 kubenswrapper[7926]: [+]etcd ok Feb 16 20:57:40.952826 master-0 kubenswrapper[7926]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 16 20:57:40.952826 master-0 kubenswrapper[7926]: [+]poststarthook/generic-apiserver-start-informers ok Feb 16 20:57:40.952826 master-0 kubenswrapper[7926]: [+]poststarthook/max-in-flight-filter ok Feb 16 20:57:40.952826 master-0 kubenswrapper[7926]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 16 20:57:40.952826 master-0 kubenswrapper[7926]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 16 20:57:40.952826 master-0 kubenswrapper[7926]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 16 20:57:40.952826 master-0 kubenswrapper[7926]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Feb 16 20:57:40.952826 master-0 kubenswrapper[7926]: 
[+]poststarthook/project.openshift.io-projectcache ok Feb 16 20:57:40.952826 master-0 kubenswrapper[7926]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 16 20:57:40.952826 master-0 kubenswrapper[7926]: [+]poststarthook/openshift.io-startinformers ok Feb 16 20:57:40.952826 master-0 kubenswrapper[7926]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 16 20:57:40.952826 master-0 kubenswrapper[7926]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 16 20:57:40.952826 master-0 kubenswrapper[7926]: livez check failed Feb 16 20:57:40.954450 master-0 kubenswrapper[7926]: I0216 20:57:40.952813 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" podUID="d2501eec-47c8-47bc-b0c9-28d94c06075b" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 20:57:41.111383 master-0 kubenswrapper[7926]: I0216 20:57:41.111333 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" event={"ID":"bd49e653-3b42-4950-8f5f-2b2ecb683678","Type":"ContainerStarted","Data":"02b45fb8e619cea5ccaf6f782fba75e7a7903a3e4348fde89b8d1bc48406b6c9"} Feb 16 20:57:41.412313 master-0 kubenswrapper[7926]: I0216 20:57:41.412226 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn" Feb 16 20:57:41.412313 master-0 kubenswrapper[7926]: I0216 20:57:41.412308 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert\") pod 
\"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" Feb 16 20:57:41.412721 master-0 kubenswrapper[7926]: I0216 20:57:41.412557 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 20:57:41.412721 master-0 kubenswrapper[7926]: I0216 20:57:41.412678 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" Feb 16 20:57:41.412868 master-0 kubenswrapper[7926]: I0216 20:57:41.412818 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-9m94g\" (UID: \"4b035e85-b2b0-4dee-bb86-3465fc4b98a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 20:57:41.412932 master-0 kubenswrapper[7926]: I0216 20:57:41.412888 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs\") pod \"network-metrics-daemon-42bw7\" (UID: \"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:57:41.412975 master-0 kubenswrapper[7926]: I0216 
20:57:41.412933 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-z46jt\" (UID: \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" Feb 16 20:57:41.416087 master-0 kubenswrapper[7926]: I0216 20:57:41.416025 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" Feb 16 20:57:41.416482 master-0 kubenswrapper[7926]: I0216 20:57:41.416415 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-9m94g\" (UID: \"4b035e85-b2b0-4dee-bb86-3465fc4b98a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 20:57:41.416708 master-0 kubenswrapper[7926]: I0216 20:57:41.416642 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-z46jt\" (UID: \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" Feb 16 20:57:41.417081 master-0 kubenswrapper[7926]: I0216 20:57:41.417045 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics\") pod 
\"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 20:57:41.417544 master-0 kubenswrapper[7926]: I0216 20:57:41.417492 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" Feb 16 20:57:41.417882 master-0 kubenswrapper[7926]: I0216 20:57:41.417847 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs\") pod \"network-metrics-daemon-42bw7\" (UID: \"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:57:41.420938 master-0 kubenswrapper[7926]: I0216 20:57:41.420904 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn" Feb 16 20:57:41.658239 master-0 kubenswrapper[7926]: I0216 20:57:41.658160 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 20:57:41.658239 master-0 kubenswrapper[7926]: I0216 20:57:41.658223 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 20:57:41.658514 master-0 kubenswrapper[7926]: I0216 20:57:41.658257 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" Feb 16 20:57:41.658514 master-0 kubenswrapper[7926]: I0216 20:57:41.658275 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn" Feb 16 20:57:41.658629 master-0 kubenswrapper[7926]: I0216 20:57:41.658187 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" Feb 16 20:57:41.664444 master-0 kubenswrapper[7926]: I0216 20:57:41.664360 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 20:57:41.666771 master-0 kubenswrapper[7926]: I0216 20:57:41.666750 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" Feb 16 20:57:41.920265 master-0 kubenswrapper[7926]: I0216 20:57:41.920036 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-client-ca\") pod \"route-controller-manager-89c945d44-2smzj\" (UID: \"34eb2829-2e5d-455d-9218-cad202a49e30\") " pod="openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj" Feb 16 20:57:41.920265 master-0 kubenswrapper[7926]: E0216 20:57:41.920216 7926 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 16 20:57:41.920502 master-0 kubenswrapper[7926]: E0216 20:57:41.920314 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-client-ca podName:34eb2829-2e5d-455d-9218-cad202a49e30 nodeName:}" failed. No retries permitted until 2026-02-16 20:57:49.920294335 +0000 UTC m=+41.555194635 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-client-ca") pod "route-controller-manager-89c945d44-2smzj" (UID: "34eb2829-2e5d-455d-9218-cad202a49e30") : configmap "client-ca" not found Feb 16 20:57:42.277426 master-0 kubenswrapper[7926]: I0216 20:57:42.277158 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn"] Feb 16 20:57:42.282513 master-0 kubenswrapper[7926]: I0216 20:57:42.282421 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq"] Feb 16 20:57:42.286914 master-0 kubenswrapper[7926]: I0216 20:57:42.286757 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6"] Feb 16 20:57:42.287766 master-0 kubenswrapper[7926]: I0216 20:57:42.287371 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g"] Feb 16 20:57:42.294369 master-0 kubenswrapper[7926]: W0216 20:57:42.292291 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b035e85_b2b0_4dee_bb86_3465fc4b98a8.slice/crio-d731a0126023b327423b0d92ac9091c1188b42fa4686eb6ad7cba3b766448624 WatchSource:0}: Error finding container d731a0126023b327423b0d92ac9091c1188b42fa4686eb6ad7cba3b766448624: Status 404 returned error can't find the container with id d731a0126023b327423b0d92ac9091c1188b42fa4686eb6ad7cba3b766448624 Feb 16 20:57:42.294369 master-0 kubenswrapper[7926]: W0216 20:57:42.294337 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e618c5c_52be_4b52_b426_b92555dee9de.slice/crio-d306354fd5d2178f348beb7a119f77d313ccc80e6928076b9869dfc8a33d0edf WatchSource:0}: Error finding 
container d306354fd5d2178f348beb7a119f77d313ccc80e6928076b9869dfc8a33d0edf: Status 404 returned error can't find the container with id d306354fd5d2178f348beb7a119f77d313ccc80e6928076b9869dfc8a33d0edf Feb 16 20:57:42.299559 master-0 kubenswrapper[7926]: W0216 20:57:42.295876 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec7dd4ea_a139_45d4_96a4_506da1567292.slice/crio-b2fa0e56a1525a9dc4cb1eed44cc6376b6ac0d1c2fab2be1bd2cb007a4f90f8a WatchSource:0}: Error finding container b2fa0e56a1525a9dc4cb1eed44cc6376b6ac0d1c2fab2be1bd2cb007a4f90f8a: Status 404 returned error can't find the container with id b2fa0e56a1525a9dc4cb1eed44cc6376b6ac0d1c2fab2be1bd2cb007a4f90f8a Feb 16 20:57:42.299559 master-0 kubenswrapper[7926]: W0216 20:57:42.298218 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4c9b781_14c0_469c_bb9e_0c3982a04520.slice/crio-27e39bf106b6e002c0125d685214889286fc25d34ba09141b24632bec0751f4d WatchSource:0}: Error finding container 27e39bf106b6e002c0125d685214889286fc25d34ba09141b24632bec0751f4d: Status 404 returned error can't find the container with id 27e39bf106b6e002c0125d685214889286fc25d34ba09141b24632bec0751f4d Feb 16 20:57:42.361739 master-0 kubenswrapper[7926]: I0216 20:57:42.361686 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq"] Feb 16 20:57:42.372551 master-0 kubenswrapper[7926]: I0216 20:57:42.372499 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7c64d55f8-z46jt"] Feb 16 20:57:42.375603 master-0 kubenswrapper[7926]: I0216 20:57:42.375084 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-42bw7"] Feb 16 20:57:42.903492 master-0 kubenswrapper[7926]: W0216 20:57:42.903325 7926 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb27de289_c0f9_47ff_aac6_15b7bc1b178a.slice/crio-7836160a631ad4fabd13fade7e117d0a195ed40a8c1f33bde283fef44ab0f21f WatchSource:0}: Error finding container 7836160a631ad4fabd13fade7e117d0a195ed40a8c1f33bde283fef44ab0f21f: Status 404 returned error can't find the container with id 7836160a631ad4fabd13fade7e117d0a195ed40a8c1f33bde283fef44ab0f21f Feb 16 20:57:42.906480 master-0 kubenswrapper[7926]: W0216 20:57:42.906430 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d453639_52ed_4a14_a2ee_02cf9acc2f7c.slice/crio-0048dbcae18fdbd149a49da2679d70bbb9de5e907689064aaea0ab32348a1024 WatchSource:0}: Error finding container 0048dbcae18fdbd149a49da2679d70bbb9de5e907689064aaea0ab32348a1024: Status 404 returned error can't find the container with id 0048dbcae18fdbd149a49da2679d70bbb9de5e907689064aaea0ab32348a1024 Feb 16 20:57:42.908072 master-0 kubenswrapper[7926]: W0216 20:57:42.907856 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb28234d1_1d9a_4d9f_9ad1_e3c682bed492.slice/crio-07e2ee4df3da5cd46dd10fb4afd51a212c46737743b9be4c1d162a76d568a6fd WatchSource:0}: Error finding container 07e2ee4df3da5cd46dd10fb4afd51a212c46737743b9be4c1d162a76d568a6fd: Status 404 returned error can't find the container with id 07e2ee4df3da5cd46dd10fb4afd51a212c46737743b9be4c1d162a76d568a6fd Feb 16 20:57:43.115996 master-0 kubenswrapper[7926]: I0216 20:57:43.115925 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 16 20:57:43.116504 master-0 kubenswrapper[7926]: I0216 20:57:43.116410 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb" containerName="installer" 
containerID="cri-o://4d5f546c2421eec3805ff12860007eff73909bb7626878d72e7e0b55753734ca" gracePeriod=30 Feb 16 20:57:43.128370 master-0 kubenswrapper[7926]: I0216 20:57:43.124529 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" event={"ID":"b28234d1-1d9a-4d9f-9ad1-e3c682bed492","Type":"ContainerStarted","Data":"07e2ee4df3da5cd46dd10fb4afd51a212c46737743b9be4c1d162a76d568a6fd"} Feb 16 20:57:43.128370 master-0 kubenswrapper[7926]: I0216 20:57:43.127998 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" event={"ID":"4b035e85-b2b0-4dee-bb86-3465fc4b98a8","Type":"ContainerStarted","Data":"74bcb9c0e0e6190f4682d2a1f22029d9499551420f56ffed526a997deaabbd90"} Feb 16 20:57:43.128370 master-0 kubenswrapper[7926]: I0216 20:57:43.128021 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" event={"ID":"4b035e85-b2b0-4dee-bb86-3465fc4b98a8","Type":"ContainerStarted","Data":"d731a0126023b327423b0d92ac9091c1188b42fa4686eb6ad7cba3b766448624"} Feb 16 20:57:43.142598 master-0 kubenswrapper[7926]: I0216 20:57:43.142494 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn" event={"ID":"ec7dd4ea-a139-45d4-96a4-506da1567292","Type":"ContainerStarted","Data":"b2fa0e56a1525a9dc4cb1eed44cc6376b6ac0d1c2fab2be1bd2cb007a4f90f8a"} Feb 16 20:57:43.150377 master-0 kubenswrapper[7926]: I0216 20:57:43.150338 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-42bw7" event={"ID":"1d453639-52ed-4a14-a2ee-02cf9acc2f7c","Type":"ContainerStarted","Data":"0048dbcae18fdbd149a49da2679d70bbb9de5e907689064aaea0ab32348a1024"} Feb 16 20:57:43.153020 master-0 kubenswrapper[7926]: I0216 20:57:43.152945 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" event={"ID":"2e618c5c-52be-4b52-b426-b92555dee9de","Type":"ContainerStarted","Data":"d306354fd5d2178f348beb7a119f77d313ccc80e6928076b9869dfc8a33d0edf"} Feb 16 20:57:43.155877 master-0 kubenswrapper[7926]: I0216 20:57:43.155775 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" event={"ID":"b27de289-c0f9-47ff-aac6-15b7bc1b178a","Type":"ContainerStarted","Data":"7836160a631ad4fabd13fade7e117d0a195ed40a8c1f33bde283fef44ab0f21f"} Feb 16 20:57:43.157822 master-0 kubenswrapper[7926]: I0216 20:57:43.157796 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" event={"ID":"a4c9b781-14c0-469c-bb9e-0c3982a04520","Type":"ContainerStarted","Data":"27e39bf106b6e002c0125d685214889286fc25d34ba09141b24632bec0751f4d"} Feb 16 20:57:44.053081 master-0 kubenswrapper[7926]: I0216 20:57:44.053006 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-client-ca\") pod \"controller-manager-56b4b57b4f-5nr85\" (UID: \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\") " pod="openshift-controller-manager/controller-manager-56b4b57b4f-5nr85" Feb 16 20:57:44.053605 master-0 kubenswrapper[7926]: E0216 20:57:44.053242 7926 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 20:57:44.053605 master-0 kubenswrapper[7926]: E0216 20:57:44.053392 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-client-ca podName:47a4e44e-ec4a-4e8c-a968-a37c81771bfc nodeName:}" failed. No retries permitted until 2026-02-16 20:57:52.053361746 +0000 UTC m=+43.688262046 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-client-ca") pod "controller-manager-56b4b57b4f-5nr85" (UID: "47a4e44e-ec4a-4e8c-a968-a37c81771bfc") : configmap "client-ca" not found Feb 16 20:57:44.164514 master-0 kubenswrapper[7926]: I0216 20:57:44.164455 7926 generic.go:334] "Generic (PLEG): container finished" podID="bd49e653-3b42-4950-8f5f-2b2ecb683678" containerID="68de2e1ab2cad0885d92d9f27ce9e9ae8699ab2a4e1f40736fffa8de720860f7" exitCode=0 Feb 16 20:57:44.164514 master-0 kubenswrapper[7926]: I0216 20:57:44.164503 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" event={"ID":"bd49e653-3b42-4950-8f5f-2b2ecb683678","Type":"ContainerDied","Data":"68de2e1ab2cad0885d92d9f27ce9e9ae8699ab2a4e1f40736fffa8de720860f7"} Feb 16 20:57:45.515358 master-0 kubenswrapper[7926]: I0216 20:57:45.514082 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 16 20:57:45.515358 master-0 kubenswrapper[7926]: I0216 20:57:45.514727 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 20:57:45.527721 master-0 kubenswrapper[7926]: I0216 20:57:45.523097 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 16 20:57:45.603161 master-0 kubenswrapper[7926]: I0216 20:57:45.603048 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c71d252-827d-49ba-96c8-23137e061411-kube-api-access\") pod \"installer-2-master-0\" (UID: \"2c71d252-827d-49ba-96c8-23137e061411\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 20:57:45.603161 master-0 kubenswrapper[7926]: I0216 20:57:45.603171 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2c71d252-827d-49ba-96c8-23137e061411-var-lock\") pod \"installer-2-master-0\" (UID: \"2c71d252-827d-49ba-96c8-23137e061411\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 20:57:45.603524 master-0 kubenswrapper[7926]: I0216 20:57:45.603219 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c71d252-827d-49ba-96c8-23137e061411-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"2c71d252-827d-49ba-96c8-23137e061411\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 20:57:45.704981 master-0 kubenswrapper[7926]: I0216 20:57:45.704503 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2c71d252-827d-49ba-96c8-23137e061411-var-lock\") pod \"installer-2-master-0\" (UID: \"2c71d252-827d-49ba-96c8-23137e061411\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 20:57:45.704981 master-0 kubenswrapper[7926]: I0216 20:57:45.704579 7926 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c71d252-827d-49ba-96c8-23137e061411-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"2c71d252-827d-49ba-96c8-23137e061411\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 20:57:45.704981 master-0 kubenswrapper[7926]: I0216 20:57:45.704751 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2c71d252-827d-49ba-96c8-23137e061411-var-lock\") pod \"installer-2-master-0\" (UID: \"2c71d252-827d-49ba-96c8-23137e061411\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 20:57:45.704981 master-0 kubenswrapper[7926]: I0216 20:57:45.704883 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c71d252-827d-49ba-96c8-23137e061411-kube-api-access\") pod \"installer-2-master-0\" (UID: \"2c71d252-827d-49ba-96c8-23137e061411\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 20:57:45.705483 master-0 kubenswrapper[7926]: I0216 20:57:45.705239 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c71d252-827d-49ba-96c8-23137e061411-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"2c71d252-827d-49ba-96c8-23137e061411\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 20:57:45.726288 master-0 kubenswrapper[7926]: I0216 20:57:45.724969 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c71d252-827d-49ba-96c8-23137e061411-kube-api-access\") pod \"installer-2-master-0\" (UID: \"2c71d252-827d-49ba-96c8-23137e061411\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 20:57:45.842980 master-0 kubenswrapper[7926]: I0216 20:57:45.842908 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 20:57:45.871861 master-0 kubenswrapper[7926]: I0216 20:57:45.871810 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:45.878654 master-0 kubenswrapper[7926]: I0216 20:57:45.878497 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 20:57:46.159132 master-0 kubenswrapper[7926]: I0216 20:57:46.154429 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 16 20:57:46.200247 master-0 kubenswrapper[7926]: I0216 20:57:46.200166 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-42bw7" event={"ID":"1d453639-52ed-4a14-a2ee-02cf9acc2f7c","Type":"ContainerStarted","Data":"b92936634fddc60909dc2fadd6f7f16c08dc7c7fd8fa03f673db3212a3c8c3fa"} Feb 16 20:57:46.203827 master-0 kubenswrapper[7926]: I0216 20:57:46.203742 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" event={"ID":"b27de289-c0f9-47ff-aac6-15b7bc1b178a","Type":"ContainerStarted","Data":"b6f9bd149e55332060a93dd1c773c869219679c9d52274540dd91f495e731934"} Feb 16 20:57:46.227364 master-0 kubenswrapper[7926]: I0216 20:57:46.227289 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" event={"ID":"bd49e653-3b42-4950-8f5f-2b2ecb683678","Type":"ContainerStarted","Data":"f3fdacbd5a024a974deabef99786f889a735274aa45efb3c455cc2939dd440eb"} Feb 16 20:57:46.230490 master-0 kubenswrapper[7926]: I0216 20:57:46.230065 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" 
event={"ID":"b28234d1-1d9a-4d9f-9ad1-e3c682bed492","Type":"ContainerStarted","Data":"1fdce62d33ee01800252ab5e608745339a8f0dbc0ccac60559c706daa3409f0f"} Feb 16 20:57:46.230490 master-0 kubenswrapper[7926]: I0216 20:57:46.230123 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 20:57:46.232194 master-0 kubenswrapper[7926]: I0216 20:57:46.232160 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn" event={"ID":"ec7dd4ea-a139-45d4-96a4-506da1567292","Type":"ContainerStarted","Data":"0af302812fd66c922e290b0e4c9c4e2ba2f2caf5d12a5744d3fbf47817459c17"} Feb 16 20:57:46.234820 master-0 kubenswrapper[7926]: I0216 20:57:46.234772 7926 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-6rmhq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused" start-of-body= Feb 16 20:57:46.235316 master-0 kubenswrapper[7926]: I0216 20:57:46.235052 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" podUID="b28234d1-1d9a-4d9f-9ad1-e3c682bed492" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused" Feb 16 20:57:46.251937 master-0 kubenswrapper[7926]: I0216 20:57:46.250665 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" podStartSLOduration=4.751240896 podStartE2EDuration="7.250618202s" podCreationTimestamp="2026-02-16 20:57:39 +0000 UTC" firstStartedPulling="2026-02-16 20:57:40.452972189 +0000 UTC m=+32.087872489" lastFinishedPulling="2026-02-16 20:57:42.952349495 +0000 UTC m=+34.587249795" 
observedRunningTime="2026-02-16 20:57:46.247360399 +0000 UTC m=+37.882260699" watchObservedRunningTime="2026-02-16 20:57:46.250618202 +0000 UTC m=+37.885518502" Feb 16 20:57:46.846318 master-0 kubenswrapper[7926]: I0216 20:57:46.846185 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 20:57:46.924058 master-0 kubenswrapper[7926]: I0216 20:57:46.921809 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 20:57:47.249043 master-0 kubenswrapper[7926]: I0216 20:57:47.248997 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" event={"ID":"b27de289-c0f9-47ff-aac6-15b7bc1b178a","Type":"ContainerStarted","Data":"7e2db6d71a3ac7629c39a027759be84deb42e9801284908e0ecc941bc1381254"} Feb 16 20:57:47.252586 master-0 kubenswrapper[7926]: I0216 20:57:47.252505 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"2c71d252-827d-49ba-96c8-23137e061411","Type":"ContainerStarted","Data":"ad8a25cbbf6b1e6824640df827d36c7c1ac5e5d5bd0d326da8a6d181af508a65"} Feb 16 20:57:47.252586 master-0 kubenswrapper[7926]: I0216 20:57:47.252591 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"2c71d252-827d-49ba-96c8-23137e061411","Type":"ContainerStarted","Data":"e8d809a0cea9275382caa35b9ba6baeef34102d9549f6a13daed4d5addb818b5"} Feb 16 20:57:47.259254 master-0 kubenswrapper[7926]: I0216 20:57:47.259175 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-42bw7" event={"ID":"1d453639-52ed-4a14-a2ee-02cf9acc2f7c","Type":"ContainerStarted","Data":"8bdc75ad4a8097f8c772c54e1b21d47936cb39929b68bda0391b951d52990de1"} Feb 16 20:57:47.263001 
master-0 kubenswrapper[7926]: I0216 20:57:47.262910 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 20:57:47.289962 master-0 kubenswrapper[7926]: I0216 20:57:47.288513 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=2.288485178 podStartE2EDuration="2.288485178s" podCreationTimestamp="2026-02-16 20:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:57:47.281859839 +0000 UTC m=+38.916760149" watchObservedRunningTime="2026-02-16 20:57:47.288485178 +0000 UTC m=+38.923385478" Feb 16 20:57:47.972961 master-0 kubenswrapper[7926]: I0216 20:57:47.972896 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-7bbrn" Feb 16 20:57:49.010342 master-0 kubenswrapper[7926]: I0216 20:57:49.010283 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 20:57:49.986092 master-0 kubenswrapper[7926]: I0216 20:57:49.985948 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-client-ca\") pod \"route-controller-manager-89c945d44-2smzj\" (UID: \"34eb2829-2e5d-455d-9218-cad202a49e30\") " pod="openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj" Feb 16 20:57:49.986404 master-0 kubenswrapper[7926]: E0216 20:57:49.986165 7926 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 16 20:57:49.986404 master-0 kubenswrapper[7926]: E0216 20:57:49.986286 7926 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-client-ca podName:34eb2829-2e5d-455d-9218-cad202a49e30 nodeName:}" failed. No retries permitted until 2026-02-16 20:58:05.986260106 +0000 UTC m=+57.621160466 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-client-ca") pod "route-controller-manager-89c945d44-2smzj" (UID: "34eb2829-2e5d-455d-9218-cad202a49e30") : configmap "client-ca" not found Feb 16 20:57:50.178805 master-0 kubenswrapper[7926]: I0216 20:57:50.178679 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:50.178805 master-0 kubenswrapper[7926]: I0216 20:57:50.178760 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:50.194903 master-0 kubenswrapper[7926]: I0216 20:57:50.194865 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:50.279860 master-0 kubenswrapper[7926]: I0216 20:57:50.279796 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 20:57:51.057576 master-0 kubenswrapper[7926]: I0216 20:57:51.057528 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 16 20:57:51.058094 master-0 kubenswrapper[7926]: I0216 20:57:51.058076 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 20:57:51.060456 master-0 kubenswrapper[7926]: I0216 20:57:51.060425 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 16 20:57:51.097535 master-0 kubenswrapper[7926]: I0216 20:57:51.097469 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 16 20:57:51.145695 master-0 kubenswrapper[7926]: I0216 20:57:51.101949 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b09d3c16-18e3-45b3-9d39-949d2464b300-kube-api-access\") pod \"installer-1-master-0\" (UID: \"b09d3c16-18e3-45b3-9d39-949d2464b300\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 20:57:51.145695 master-0 kubenswrapper[7926]: I0216 20:57:51.101991 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b09d3c16-18e3-45b3-9d39-949d2464b300-var-lock\") pod \"installer-1-master-0\" (UID: \"b09d3c16-18e3-45b3-9d39-949d2464b300\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 20:57:51.145695 master-0 kubenswrapper[7926]: I0216 20:57:51.102019 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b09d3c16-18e3-45b3-9d39-949d2464b300-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"b09d3c16-18e3-45b3-9d39-949d2464b300\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 20:57:51.203001 master-0 kubenswrapper[7926]: I0216 20:57:51.202937 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/b09d3c16-18e3-45b3-9d39-949d2464b300-kube-api-access\") pod \"installer-1-master-0\" (UID: \"b09d3c16-18e3-45b3-9d39-949d2464b300\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 20:57:51.203001 master-0 kubenswrapper[7926]: I0216 20:57:51.203002 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b09d3c16-18e3-45b3-9d39-949d2464b300-var-lock\") pod \"installer-1-master-0\" (UID: \"b09d3c16-18e3-45b3-9d39-949d2464b300\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 20:57:51.203001 master-0 kubenswrapper[7926]: I0216 20:57:51.203033 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b09d3c16-18e3-45b3-9d39-949d2464b300-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"b09d3c16-18e3-45b3-9d39-949d2464b300\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 20:57:51.203001 master-0 kubenswrapper[7926]: I0216 20:57:51.203171 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b09d3c16-18e3-45b3-9d39-949d2464b300-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"b09d3c16-18e3-45b3-9d39-949d2464b300\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 20:57:51.203001 master-0 kubenswrapper[7926]: I0216 20:57:51.203480 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b09d3c16-18e3-45b3-9d39-949d2464b300-var-lock\") pod \"installer-1-master-0\" (UID: \"b09d3c16-18e3-45b3-9d39-949d2464b300\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 20:57:51.231407 master-0 kubenswrapper[7926]: I0216 20:57:51.231340 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/b09d3c16-18e3-45b3-9d39-949d2464b300-kube-api-access\") pod \"installer-1-master-0\" (UID: \"b09d3c16-18e3-45b3-9d39-949d2464b300\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 20:57:51.384474 master-0 kubenswrapper[7926]: I0216 20:57:51.384334 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 20:57:51.471864 master-0 kubenswrapper[7926]: I0216 20:57:51.471797 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl"] Feb 16 20:57:51.473082 master-0 kubenswrapper[7926]: I0216 20:57:51.473054 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl" Feb 16 20:57:51.478233 master-0 kubenswrapper[7926]: I0216 20:57:51.477640 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 20:57:51.479454 master-0 kubenswrapper[7926]: I0216 20:57:51.478442 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 20:57:51.481387 master-0 kubenswrapper[7926]: I0216 20:57:51.481351 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl"] Feb 16 20:57:51.482858 master-0 kubenswrapper[7926]: I0216 20:57:51.482806 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 20:57:51.613348 master-0 kubenswrapper[7926]: I0216 20:57:51.612716 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/302156cc-9dca-4a66-9e6a-ba2c7e738c92-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-8pqbl\" (UID: \"302156cc-9dca-4a66-9e6a-ba2c7e738c92\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl" Feb 16 20:57:51.613348 master-0 kubenswrapper[7926]: I0216 20:57:51.612890 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxcg6\" (UniqueName: \"kubernetes.io/projected/302156cc-9dca-4a66-9e6a-ba2c7e738c92-kube-api-access-zxcg6\") pod \"control-plane-machine-set-operator-d8bf84b88-8pqbl\" (UID: \"302156cc-9dca-4a66-9e6a-ba2c7e738c92\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl" Feb 16 20:57:51.720010 master-0 kubenswrapper[7926]: I0216 20:57:51.719657 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/302156cc-9dca-4a66-9e6a-ba2c7e738c92-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-8pqbl\" (UID: \"302156cc-9dca-4a66-9e6a-ba2c7e738c92\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl" Feb 16 20:57:51.720010 master-0 kubenswrapper[7926]: I0216 20:57:51.719759 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxcg6\" (UniqueName: \"kubernetes.io/projected/302156cc-9dca-4a66-9e6a-ba2c7e738c92-kube-api-access-zxcg6\") pod \"control-plane-machine-set-operator-d8bf84b88-8pqbl\" (UID: \"302156cc-9dca-4a66-9e6a-ba2c7e738c92\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl" Feb 16 20:57:51.726528 master-0 kubenswrapper[7926]: I0216 20:57:51.726378 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/302156cc-9dca-4a66-9e6a-ba2c7e738c92-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-8pqbl\" (UID: \"302156cc-9dca-4a66-9e6a-ba2c7e738c92\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl" Feb 16 20:57:51.739746 master-0 kubenswrapper[7926]: I0216 20:57:51.739700 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxcg6\" (UniqueName: \"kubernetes.io/projected/302156cc-9dca-4a66-9e6a-ba2c7e738c92-kube-api-access-zxcg6\") pod \"control-plane-machine-set-operator-d8bf84b88-8pqbl\" (UID: \"302156cc-9dca-4a66-9e6a-ba2c7e738c92\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl" Feb 16 20:57:51.800415 master-0 kubenswrapper[7926]: I0216 20:57:51.800357 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl" Feb 16 20:57:51.967241 master-0 kubenswrapper[7926]: I0216 20:57:51.966952 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56b4b57b4f-5nr85"] Feb 16 20:57:51.967475 master-0 kubenswrapper[7926]: E0216 20:57:51.967336 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-56b4b57b4f-5nr85" podUID="47a4e44e-ec4a-4e8c-a968-a37c81771bfc" Feb 16 20:57:52.008958 master-0 kubenswrapper[7926]: I0216 20:57:52.008459 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj"] Feb 16 20:57:52.008958 master-0 kubenswrapper[7926]: E0216 20:57:52.008896 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" 
pod="openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj" podUID="34eb2829-2e5d-455d-9218-cad202a49e30" Feb 16 20:57:52.027111 master-0 kubenswrapper[7926]: I0216 20:57:52.026186 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 16 20:57:52.054236 master-0 kubenswrapper[7926]: W0216 20:57:52.053914 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb09d3c16_18e3_45b3_9d39_949d2464b300.slice/crio-a1a7ba08e2cc5089762afc7ce295fbadf271a58f2006a34cf3be8f3b16ca4e70 WatchSource:0}: Error finding container a1a7ba08e2cc5089762afc7ce295fbadf271a58f2006a34cf3be8f3b16ca4e70: Status 404 returned error can't find the container with id a1a7ba08e2cc5089762afc7ce295fbadf271a58f2006a34cf3be8f3b16ca4e70 Feb 16 20:57:52.125332 master-0 kubenswrapper[7926]: I0216 20:57:52.125242 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-client-ca\") pod \"controller-manager-56b4b57b4f-5nr85\" (UID: \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\") " pod="openshift-controller-manager/controller-manager-56b4b57b4f-5nr85" Feb 16 20:57:52.126430 master-0 kubenswrapper[7926]: I0216 20:57:52.126389 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-client-ca\") pod \"controller-manager-56b4b57b4f-5nr85\" (UID: \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\") " pod="openshift-controller-manager/controller-manager-56b4b57b4f-5nr85" Feb 16 20:57:52.287756 master-0 kubenswrapper[7926]: I0216 20:57:52.287703 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" 
event={"ID":"a4c9b781-14c0-469c-bb9e-0c3982a04520","Type":"ContainerStarted","Data":"6040ea1798d8d929c837e96747106c868fc9107367ded8384ee5318d0125dfe3"} Feb 16 20:57:52.288189 master-0 kubenswrapper[7926]: I0216 20:57:52.288113 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" Feb 16 20:57:52.293262 master-0 kubenswrapper[7926]: I0216 20:57:52.293216 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" event={"ID":"2e618c5c-52be-4b52-b426-b92555dee9de","Type":"ContainerStarted","Data":"89713a48ebda2d81dc73c8e6307d140eac3f186d0e349480425338bd881c9d90"} Feb 16 20:57:52.294197 master-0 kubenswrapper[7926]: I0216 20:57:52.294173 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" Feb 16 20:57:52.298051 master-0 kubenswrapper[7926]: I0216 20:57:52.297994 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" Feb 16 20:57:52.298570 master-0 kubenswrapper[7926]: I0216 20:57:52.298513 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"b09d3c16-18e3-45b3-9d39-949d2464b300","Type":"ContainerStarted","Data":"a1a7ba08e2cc5089762afc7ce295fbadf271a58f2006a34cf3be8f3b16ca4e70"} Feb 16 20:57:52.307623 master-0 kubenswrapper[7926]: I0216 20:57:52.305469 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" Feb 16 20:57:52.308657 master-0 kubenswrapper[7926]: I0216 20:57:52.308595 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj" Feb 16 20:57:52.308728 master-0 kubenswrapper[7926]: I0216 20:57:52.308682 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56b4b57b4f-5nr85" Feb 16 20:57:52.308846 master-0 kubenswrapper[7926]: I0216 20:57:52.308586 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" event={"ID":"4b035e85-b2b0-4dee-bb86-3465fc4b98a8","Type":"ContainerStarted","Data":"95cb75164641c9de6a0109a60c606bf650f57a11a7796ffdbcb05ca7aa385e4c"} Feb 16 20:57:52.320080 master-0 kubenswrapper[7926]: I0216 20:57:52.317231 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl"] Feb 16 20:57:52.346560 master-0 kubenswrapper[7926]: I0216 20:57:52.346516 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj" Feb 16 20:57:52.358587 master-0 kubenswrapper[7926]: I0216 20:57:52.358529 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56b4b57b4f-5nr85" Feb 16 20:57:52.428962 master-0 kubenswrapper[7926]: I0216 20:57:52.428921 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-serving-cert\") pod \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\" (UID: \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\") " Feb 16 20:57:52.429145 master-0 kubenswrapper[7926]: I0216 20:57:52.429013 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34eb2829-2e5d-455d-9218-cad202a49e30-serving-cert\") pod \"34eb2829-2e5d-455d-9218-cad202a49e30\" (UID: \"34eb2829-2e5d-455d-9218-cad202a49e30\") " Feb 16 20:57:52.429145 master-0 kubenswrapper[7926]: I0216 20:57:52.429042 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-client-ca\") pod \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\" (UID: \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\") " Feb 16 20:57:52.429145 master-0 kubenswrapper[7926]: I0216 20:57:52.429092 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-config\") pod \"34eb2829-2e5d-455d-9218-cad202a49e30\" (UID: \"34eb2829-2e5d-455d-9218-cad202a49e30\") " Feb 16 20:57:52.429145 master-0 kubenswrapper[7926]: I0216 20:57:52.429112 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-config\") pod \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\" (UID: \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\") " Feb 16 20:57:52.429311 master-0 kubenswrapper[7926]: I0216 20:57:52.429177 7926 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fsqnr\" (UniqueName: \"kubernetes.io/projected/34eb2829-2e5d-455d-9218-cad202a49e30-kube-api-access-fsqnr\") pod \"34eb2829-2e5d-455d-9218-cad202a49e30\" (UID: \"34eb2829-2e5d-455d-9218-cad202a49e30\") " Feb 16 20:57:52.429311 master-0 kubenswrapper[7926]: I0216 20:57:52.429270 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk65s\" (UniqueName: \"kubernetes.io/projected/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-kube-api-access-jk65s\") pod \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\" (UID: \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\") " Feb 16 20:57:52.429390 master-0 kubenswrapper[7926]: I0216 20:57:52.429330 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-proxy-ca-bundles\") pod \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\" (UID: \"47a4e44e-ec4a-4e8c-a968-a37c81771bfc\") " Feb 16 20:57:52.430192 master-0 kubenswrapper[7926]: I0216 20:57:52.430160 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-config" (OuterVolumeSpecName: "config") pod "47a4e44e-ec4a-4e8c-a968-a37c81771bfc" (UID: "47a4e44e-ec4a-4e8c-a968-a37c81771bfc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:57:52.430269 master-0 kubenswrapper[7926]: I0216 20:57:52.430203 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-config" (OuterVolumeSpecName: "config") pod "34eb2829-2e5d-455d-9218-cad202a49e30" (UID: "34eb2829-2e5d-455d-9218-cad202a49e30"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:57:52.430416 master-0 kubenswrapper[7926]: I0216 20:57:52.430386 7926 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-config\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:52.430416 master-0 kubenswrapper[7926]: I0216 20:57:52.430409 7926 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-config\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:52.430569 master-0 kubenswrapper[7926]: I0216 20:57:52.430500 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "47a4e44e-ec4a-4e8c-a968-a37c81771bfc" (UID: "47a4e44e-ec4a-4e8c-a968-a37c81771bfc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:57:52.430699 master-0 kubenswrapper[7926]: I0216 20:57:52.430634 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-client-ca" (OuterVolumeSpecName: "client-ca") pod "47a4e44e-ec4a-4e8c-a968-a37c81771bfc" (UID: "47a4e44e-ec4a-4e8c-a968-a37c81771bfc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 20:57:52.431959 master-0 kubenswrapper[7926]: I0216 20:57:52.431916 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "47a4e44e-ec4a-4e8c-a968-a37c81771bfc" (UID: "47a4e44e-ec4a-4e8c-a968-a37c81771bfc"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:57:52.432556 master-0 kubenswrapper[7926]: I0216 20:57:52.432524 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-kube-api-access-jk65s" (OuterVolumeSpecName: "kube-api-access-jk65s") pod "47a4e44e-ec4a-4e8c-a968-a37c81771bfc" (UID: "47a4e44e-ec4a-4e8c-a968-a37c81771bfc"). InnerVolumeSpecName "kube-api-access-jk65s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:57:52.433422 master-0 kubenswrapper[7926]: I0216 20:57:52.433379 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34eb2829-2e5d-455d-9218-cad202a49e30-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "34eb2829-2e5d-455d-9218-cad202a49e30" (UID: "34eb2829-2e5d-455d-9218-cad202a49e30"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 20:57:52.433490 master-0 kubenswrapper[7926]: I0216 20:57:52.433464 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34eb2829-2e5d-455d-9218-cad202a49e30-kube-api-access-fsqnr" (OuterVolumeSpecName: "kube-api-access-fsqnr") pod "34eb2829-2e5d-455d-9218-cad202a49e30" (UID: "34eb2829-2e5d-455d-9218-cad202a49e30"). InnerVolumeSpecName "kube-api-access-fsqnr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:57:52.531536 master-0 kubenswrapper[7926]: I0216 20:57:52.531472 7926 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:52.531536 master-0 kubenswrapper[7926]: I0216 20:57:52.531517 7926 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:52.531536 master-0 kubenswrapper[7926]: I0216 20:57:52.531526 7926 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34eb2829-2e5d-455d-9218-cad202a49e30-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:52.531536 master-0 kubenswrapper[7926]: I0216 20:57:52.531537 7926 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:52.531536 master-0 kubenswrapper[7926]: I0216 20:57:52.531547 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsqnr\" (UniqueName: \"kubernetes.io/projected/34eb2829-2e5d-455d-9218-cad202a49e30-kube-api-access-fsqnr\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:52.531536 master-0 kubenswrapper[7926]: I0216 20:57:52.531558 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk65s\" (UniqueName: \"kubernetes.io/projected/47a4e44e-ec4a-4e8c-a968-a37c81771bfc-kube-api-access-jk65s\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:52.979376 master-0 kubenswrapper[7926]: I0216 20:57:52.979192 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b8vtc"] Feb 16 20:57:52.980017 master-0 
kubenswrapper[7926]: I0216 20:57:52.979981 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b8vtc" Feb 16 20:57:52.996144 master-0 kubenswrapper[7926]: I0216 20:57:52.996083 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b8vtc"] Feb 16 20:57:53.038062 master-0 kubenswrapper[7926]: I0216 20:57:53.037968 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03593410-baa5-4edb-9d73-242a74f82987-utilities\") pod \"certified-operators-b8vtc\" (UID: \"03593410-baa5-4edb-9d73-242a74f82987\") " pod="openshift-marketplace/certified-operators-b8vtc" Feb 16 20:57:53.038062 master-0 kubenswrapper[7926]: I0216 20:57:53.038041 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jppbr\" (UniqueName: \"kubernetes.io/projected/03593410-baa5-4edb-9d73-242a74f82987-kube-api-access-jppbr\") pod \"certified-operators-b8vtc\" (UID: \"03593410-baa5-4edb-9d73-242a74f82987\") " pod="openshift-marketplace/certified-operators-b8vtc" Feb 16 20:57:53.038062 master-0 kubenswrapper[7926]: I0216 20:57:53.038072 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03593410-baa5-4edb-9d73-242a74f82987-catalog-content\") pod \"certified-operators-b8vtc\" (UID: \"03593410-baa5-4edb-9d73-242a74f82987\") " pod="openshift-marketplace/certified-operators-b8vtc" Feb 16 20:57:53.116448 master-0 kubenswrapper[7926]: I0216 20:57:53.116385 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 16 20:57:53.117052 master-0 kubenswrapper[7926]: I0216 20:57:53.116607 7926 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-scheduler/installer-2-master-0" podUID="2c71d252-827d-49ba-96c8-23137e061411" containerName="installer" containerID="cri-o://ad8a25cbbf6b1e6824640df827d36c7c1ac5e5d5bd0d326da8a6d181af508a65" gracePeriod=30 Feb 16 20:57:53.139660 master-0 kubenswrapper[7926]: I0216 20:57:53.139591 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03593410-baa5-4edb-9d73-242a74f82987-utilities\") pod \"certified-operators-b8vtc\" (UID: \"03593410-baa5-4edb-9d73-242a74f82987\") " pod="openshift-marketplace/certified-operators-b8vtc" Feb 16 20:57:53.139843 master-0 kubenswrapper[7926]: I0216 20:57:53.139662 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jppbr\" (UniqueName: \"kubernetes.io/projected/03593410-baa5-4edb-9d73-242a74f82987-kube-api-access-jppbr\") pod \"certified-operators-b8vtc\" (UID: \"03593410-baa5-4edb-9d73-242a74f82987\") " pod="openshift-marketplace/certified-operators-b8vtc" Feb 16 20:57:53.139843 master-0 kubenswrapper[7926]: I0216 20:57:53.139700 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03593410-baa5-4edb-9d73-242a74f82987-catalog-content\") pod \"certified-operators-b8vtc\" (UID: \"03593410-baa5-4edb-9d73-242a74f82987\") " pod="openshift-marketplace/certified-operators-b8vtc" Feb 16 20:57:53.140279 master-0 kubenswrapper[7926]: I0216 20:57:53.140250 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03593410-baa5-4edb-9d73-242a74f82987-catalog-content\") pod \"certified-operators-b8vtc\" (UID: \"03593410-baa5-4edb-9d73-242a74f82987\") " pod="openshift-marketplace/certified-operators-b8vtc" Feb 16 20:57:53.140563 master-0 kubenswrapper[7926]: I0216 20:57:53.140533 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03593410-baa5-4edb-9d73-242a74f82987-utilities\") pod \"certified-operators-b8vtc\" (UID: \"03593410-baa5-4edb-9d73-242a74f82987\") " pod="openshift-marketplace/certified-operators-b8vtc" Feb 16 20:57:53.165155 master-0 kubenswrapper[7926]: I0216 20:57:53.165098 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jppbr\" (UniqueName: \"kubernetes.io/projected/03593410-baa5-4edb-9d73-242a74f82987-kube-api-access-jppbr\") pod \"certified-operators-b8vtc\" (UID: \"03593410-baa5-4edb-9d73-242a74f82987\") " pod="openshift-marketplace/certified-operators-b8vtc" Feb 16 20:57:53.184841 master-0 kubenswrapper[7926]: I0216 20:57:53.183287 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xv645"] Feb 16 20:57:53.186935 master-0 kubenswrapper[7926]: I0216 20:57:53.186749 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xv645" Feb 16 20:57:53.204021 master-0 kubenswrapper[7926]: I0216 20:57:53.203972 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xv645"] Feb 16 20:57:53.240805 master-0 kubenswrapper[7926]: I0216 20:57:53.240678 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ec2c8c-e32c-4d18-ad78-0ef1f19557af-utilities\") pod \"community-operators-xv645\" (UID: \"97ec2c8c-e32c-4d18-ad78-0ef1f19557af\") " pod="openshift-marketplace/community-operators-xv645" Feb 16 20:57:53.240805 master-0 kubenswrapper[7926]: I0216 20:57:53.240764 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrtld\" (UniqueName: \"kubernetes.io/projected/97ec2c8c-e32c-4d18-ad78-0ef1f19557af-kube-api-access-hrtld\") pod \"community-operators-xv645\" (UID: 
\"97ec2c8c-e32c-4d18-ad78-0ef1f19557af\") " pod="openshift-marketplace/community-operators-xv645" Feb 16 20:57:53.241012 master-0 kubenswrapper[7926]: I0216 20:57:53.240808 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97ec2c8c-e32c-4d18-ad78-0ef1f19557af-catalog-content\") pod \"community-operators-xv645\" (UID: \"97ec2c8c-e32c-4d18-ad78-0ef1f19557af\") " pod="openshift-marketplace/community-operators-xv645" Feb 16 20:57:53.296576 master-0 kubenswrapper[7926]: I0216 20:57:53.296152 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b8vtc" Feb 16 20:57:53.315971 master-0 kubenswrapper[7926]: I0216 20:57:53.315190 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"b09d3c16-18e3-45b3-9d39-949d2464b300","Type":"ContainerStarted","Data":"ab3f1bdaa87534b4aa1ea4a058dea3457c695cfe1da23ed41ae2ee089315bd08"} Feb 16 20:57:53.318980 master-0 kubenswrapper[7926]: I0216 20:57:53.318933 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_2c71d252-827d-49ba-96c8-23137e061411/installer/0.log" Feb 16 20:57:53.319104 master-0 kubenswrapper[7926]: I0216 20:57:53.319008 7926 generic.go:334] "Generic (PLEG): container finished" podID="2c71d252-827d-49ba-96c8-23137e061411" containerID="ad8a25cbbf6b1e6824640df827d36c7c1ac5e5d5bd0d326da8a6d181af508a65" exitCode=1 Feb 16 20:57:53.319172 master-0 kubenswrapper[7926]: I0216 20:57:53.319137 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"2c71d252-827d-49ba-96c8-23137e061411","Type":"ContainerDied","Data":"ad8a25cbbf6b1e6824640df827d36c7c1ac5e5d5bd0d326da8a6d181af508a65"} Feb 16 20:57:53.320590 master-0 kubenswrapper[7926]: I0216 20:57:53.320201 7926 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl" event={"ID":"302156cc-9dca-4a66-9e6a-ba2c7e738c92","Type":"ContainerStarted","Data":"d2b7935cea946c9f051bb808d0bcec166c533127cc006510308f2ece80cabd7f"} Feb 16 20:57:53.320590 master-0 kubenswrapper[7926]: I0216 20:57:53.320297 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56b4b57b4f-5nr85" Feb 16 20:57:53.320937 master-0 kubenswrapper[7926]: I0216 20:57:53.320902 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj" Feb 16 20:57:53.322269 master-0 kubenswrapper[7926]: I0216 20:57:53.322091 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 20:57:53.342739 master-0 kubenswrapper[7926]: I0216 20:57:53.342635 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ec2c8c-e32c-4d18-ad78-0ef1f19557af-utilities\") pod \"community-operators-xv645\" (UID: \"97ec2c8c-e32c-4d18-ad78-0ef1f19557af\") " pod="openshift-marketplace/community-operators-xv645" Feb 16 20:57:53.342739 master-0 kubenswrapper[7926]: I0216 20:57:53.342739 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrtld\" (UniqueName: \"kubernetes.io/projected/97ec2c8c-e32c-4d18-ad78-0ef1f19557af-kube-api-access-hrtld\") pod \"community-operators-xv645\" (UID: \"97ec2c8c-e32c-4d18-ad78-0ef1f19557af\") " pod="openshift-marketplace/community-operators-xv645" Feb 16 20:57:53.342983 master-0 kubenswrapper[7926]: I0216 20:57:53.342779 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/97ec2c8c-e32c-4d18-ad78-0ef1f19557af-catalog-content\") pod \"community-operators-xv645\" (UID: \"97ec2c8c-e32c-4d18-ad78-0ef1f19557af\") " pod="openshift-marketplace/community-operators-xv645" Feb 16 20:57:53.343376 master-0 kubenswrapper[7926]: I0216 20:57:53.343324 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ec2c8c-e32c-4d18-ad78-0ef1f19557af-utilities\") pod \"community-operators-xv645\" (UID: \"97ec2c8c-e32c-4d18-ad78-0ef1f19557af\") " pod="openshift-marketplace/community-operators-xv645" Feb 16 20:57:53.347591 master-0 kubenswrapper[7926]: I0216 20:57:53.347511 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=2.347483964 podStartE2EDuration="2.347483964s" podCreationTimestamp="2026-02-16 20:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:57:53.345432095 +0000 UTC m=+44.980332425" watchObservedRunningTime="2026-02-16 20:57:53.347483964 +0000 UTC m=+44.982384264" Feb 16 20:57:53.347785 master-0 kubenswrapper[7926]: I0216 20:57:53.347592 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97ec2c8c-e32c-4d18-ad78-0ef1f19557af-catalog-content\") pod \"community-operators-xv645\" (UID: \"97ec2c8c-e32c-4d18-ad78-0ef1f19557af\") " pod="openshift-marketplace/community-operators-xv645" Feb 16 20:57:53.363224 master-0 kubenswrapper[7926]: I0216 20:57:53.363171 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrtld\" (UniqueName: \"kubernetes.io/projected/97ec2c8c-e32c-4d18-ad78-0ef1f19557af-kube-api-access-hrtld\") pod \"community-operators-xv645\" (UID: \"97ec2c8c-e32c-4d18-ad78-0ef1f19557af\") " 
pod="openshift-marketplace/community-operators-xv645" Feb 16 20:57:53.410352 master-0 kubenswrapper[7926]: I0216 20:57:53.410096 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c6548b89f-s8dv7"] Feb 16 20:57:53.410945 master-0 kubenswrapper[7926]: I0216 20:57:53.410918 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" Feb 16 20:57:53.416288 master-0 kubenswrapper[7926]: I0216 20:57:53.414785 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56b4b57b4f-5nr85"] Feb 16 20:57:53.416288 master-0 kubenswrapper[7926]: I0216 20:57:53.415304 7926 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-56b4b57b4f-5nr85"] Feb 16 20:57:53.434667 master-0 kubenswrapper[7926]: I0216 20:57:53.434596 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 20:57:53.434926 master-0 kubenswrapper[7926]: I0216 20:57:53.434903 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 20:57:53.435083 master-0 kubenswrapper[7926]: I0216 20:57:53.435057 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 20:57:53.443705 master-0 kubenswrapper[7926]: I0216 20:57:53.435412 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 20:57:53.443705 master-0 kubenswrapper[7926]: I0216 20:57:53.435503 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 20:57:53.443705 master-0 kubenswrapper[7926]: I0216 20:57:53.436881 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-7c6548b89f-s8dv7"] Feb 16 20:57:53.451841 master-0 kubenswrapper[7926]: I0216 20:57:53.451637 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 20:57:53.485134 master-0 kubenswrapper[7926]: I0216 20:57:53.485083 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj"] Feb 16 20:57:53.488287 master-0 kubenswrapper[7926]: I0216 20:57:53.488250 7926 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj"] Feb 16 20:57:53.546089 master-0 kubenswrapper[7926]: I0216 20:57:53.546011 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zv2b\" (UniqueName: \"kubernetes.io/projected/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-kube-api-access-7zv2b\") pod \"controller-manager-7c6548b89f-s8dv7\" (UID: \"57b94ed4-8f0b-4223-bdaf-4316859d8ad3\") " pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" Feb 16 20:57:53.546089 master-0 kubenswrapper[7926]: I0216 20:57:53.546068 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-proxy-ca-bundles\") pod \"controller-manager-7c6548b89f-s8dv7\" (UID: \"57b94ed4-8f0b-4223-bdaf-4316859d8ad3\") " pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" Feb 16 20:57:53.546089 master-0 kubenswrapper[7926]: I0216 20:57:53.546106 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-serving-cert\") pod \"controller-manager-7c6548b89f-s8dv7\" (UID: \"57b94ed4-8f0b-4223-bdaf-4316859d8ad3\") " 
pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" Feb 16 20:57:53.546414 master-0 kubenswrapper[7926]: I0216 20:57:53.546141 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-client-ca\") pod \"controller-manager-7c6548b89f-s8dv7\" (UID: \"57b94ed4-8f0b-4223-bdaf-4316859d8ad3\") " pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" Feb 16 20:57:53.546414 master-0 kubenswrapper[7926]: I0216 20:57:53.546166 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-config\") pod \"controller-manager-7c6548b89f-s8dv7\" (UID: \"57b94ed4-8f0b-4223-bdaf-4316859d8ad3\") " pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" Feb 16 20:57:53.546414 master-0 kubenswrapper[7926]: I0216 20:57:53.546245 7926 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34eb2829-2e5d-455d-9218-cad202a49e30-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:53.563529 master-0 kubenswrapper[7926]: I0216 20:57:53.563029 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xv645" Feb 16 20:57:53.609380 master-0 kubenswrapper[7926]: I0216 20:57:53.609339 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_2c71d252-827d-49ba-96c8-23137e061411/installer/0.log" Feb 16 20:57:53.609584 master-0 kubenswrapper[7926]: I0216 20:57:53.609416 7926 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 20:57:53.649320 master-0 kubenswrapper[7926]: I0216 20:57:53.647562 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-config\") pod \"controller-manager-7c6548b89f-s8dv7\" (UID: \"57b94ed4-8f0b-4223-bdaf-4316859d8ad3\") " pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" Feb 16 20:57:53.649320 master-0 kubenswrapper[7926]: I0216 20:57:53.647659 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zv2b\" (UniqueName: \"kubernetes.io/projected/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-kube-api-access-7zv2b\") pod \"controller-manager-7c6548b89f-s8dv7\" (UID: \"57b94ed4-8f0b-4223-bdaf-4316859d8ad3\") " pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" Feb 16 20:57:53.649320 master-0 kubenswrapper[7926]: I0216 20:57:53.647697 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-proxy-ca-bundles\") pod \"controller-manager-7c6548b89f-s8dv7\" (UID: \"57b94ed4-8f0b-4223-bdaf-4316859d8ad3\") " pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" Feb 16 20:57:53.649320 master-0 kubenswrapper[7926]: I0216 20:57:53.647741 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-serving-cert\") pod \"controller-manager-7c6548b89f-s8dv7\" (UID: \"57b94ed4-8f0b-4223-bdaf-4316859d8ad3\") " pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" Feb 16 20:57:53.649320 master-0 kubenswrapper[7926]: I0216 20:57:53.647777 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-client-ca\") pod \"controller-manager-7c6548b89f-s8dv7\" (UID: \"57b94ed4-8f0b-4223-bdaf-4316859d8ad3\") " pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" Feb 16 20:57:53.649320 master-0 kubenswrapper[7926]: I0216 20:57:53.648904 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-client-ca\") pod \"controller-manager-7c6548b89f-s8dv7\" (UID: \"57b94ed4-8f0b-4223-bdaf-4316859d8ad3\") " pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" Feb 16 20:57:53.651185 master-0 kubenswrapper[7926]: I0216 20:57:53.651056 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-config\") pod \"controller-manager-7c6548b89f-s8dv7\" (UID: \"57b94ed4-8f0b-4223-bdaf-4316859d8ad3\") " pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" Feb 16 20:57:53.651581 master-0 kubenswrapper[7926]: I0216 20:57:53.651526 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-proxy-ca-bundles\") pod \"controller-manager-7c6548b89f-s8dv7\" (UID: \"57b94ed4-8f0b-4223-bdaf-4316859d8ad3\") " pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" Feb 16 20:57:53.654018 master-0 kubenswrapper[7926]: I0216 20:57:53.653964 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-serving-cert\") pod \"controller-manager-7c6548b89f-s8dv7\" (UID: \"57b94ed4-8f0b-4223-bdaf-4316859d8ad3\") " pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" Feb 16 20:57:53.674847 master-0 kubenswrapper[7926]: I0216 20:57:53.672765 7926 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zv2b\" (UniqueName: \"kubernetes.io/projected/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-kube-api-access-7zv2b\") pod \"controller-manager-7c6548b89f-s8dv7\" (UID: \"57b94ed4-8f0b-4223-bdaf-4316859d8ad3\") " pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" Feb 16 20:57:53.730743 master-0 kubenswrapper[7926]: I0216 20:57:53.730352 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" Feb 16 20:57:53.749130 master-0 kubenswrapper[7926]: I0216 20:57:53.749069 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c71d252-827d-49ba-96c8-23137e061411-kube-api-access\") pod \"2c71d252-827d-49ba-96c8-23137e061411\" (UID: \"2c71d252-827d-49ba-96c8-23137e061411\") " Feb 16 20:57:53.749503 master-0 kubenswrapper[7926]: I0216 20:57:53.749160 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2c71d252-827d-49ba-96c8-23137e061411-var-lock\") pod \"2c71d252-827d-49ba-96c8-23137e061411\" (UID: \"2c71d252-827d-49ba-96c8-23137e061411\") " Feb 16 20:57:53.749503 master-0 kubenswrapper[7926]: I0216 20:57:53.749315 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c71d252-827d-49ba-96c8-23137e061411-kubelet-dir\") pod \"2c71d252-827d-49ba-96c8-23137e061411\" (UID: \"2c71d252-827d-49ba-96c8-23137e061411\") " Feb 16 20:57:53.749503 master-0 kubenswrapper[7926]: I0216 20:57:53.749349 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c71d252-827d-49ba-96c8-23137e061411-var-lock" (OuterVolumeSpecName: "var-lock") pod "2c71d252-827d-49ba-96c8-23137e061411" (UID: 
"2c71d252-827d-49ba-96c8-23137e061411"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:57:53.749641 master-0 kubenswrapper[7926]: I0216 20:57:53.749524 7926 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2c71d252-827d-49ba-96c8-23137e061411-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:53.749641 master-0 kubenswrapper[7926]: I0216 20:57:53.749525 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c71d252-827d-49ba-96c8-23137e061411-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2c71d252-827d-49ba-96c8-23137e061411" (UID: "2c71d252-827d-49ba-96c8-23137e061411"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:57:53.752534 master-0 kubenswrapper[7926]: I0216 20:57:53.752487 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c71d252-827d-49ba-96c8-23137e061411-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2c71d252-827d-49ba-96c8-23137e061411" (UID: "2c71d252-827d-49ba-96c8-23137e061411"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:57:53.759914 master-0 kubenswrapper[7926]: I0216 20:57:53.759872 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b8vtc"] Feb 16 20:57:53.850773 master-0 kubenswrapper[7926]: I0216 20:57:53.850715 7926 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c71d252-827d-49ba-96c8-23137e061411-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:53.850773 master-0 kubenswrapper[7926]: I0216 20:57:53.850753 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2c71d252-827d-49ba-96c8-23137e061411-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 20:57:53.971773 master-0 kubenswrapper[7926]: I0216 20:57:53.970248 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xv645"] Feb 16 20:57:53.978081 master-0 kubenswrapper[7926]: W0216 20:57:53.978021 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97ec2c8c_e32c_4d18_ad78_0ef1f19557af.slice/crio-2079e16eb1d12b11a9c3315a75882203ebe24ec85035afd7621338cd504578d4 WatchSource:0}: Error finding container 2079e16eb1d12b11a9c3315a75882203ebe24ec85035afd7621338cd504578d4: Status 404 returned error can't find the container with id 2079e16eb1d12b11a9c3315a75882203ebe24ec85035afd7621338cd504578d4 Feb 16 20:57:54.290756 master-0 kubenswrapper[7926]: I0216 20:57:54.289882 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c6548b89f-s8dv7"] Feb 16 20:57:54.341610 master-0 kubenswrapper[7926]: I0216 20:57:54.341566 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_2c71d252-827d-49ba-96c8-23137e061411/installer/0.log" Feb 16 
20:57:54.342308 master-0 kubenswrapper[7926]: I0216 20:57:54.341703 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"2c71d252-827d-49ba-96c8-23137e061411","Type":"ContainerDied","Data":"e8d809a0cea9275382caa35b9ba6baeef34102d9549f6a13daed4d5addb818b5"} Feb 16 20:57:54.342308 master-0 kubenswrapper[7926]: I0216 20:57:54.341745 7926 scope.go:117] "RemoveContainer" containerID="ad8a25cbbf6b1e6824640df827d36c7c1ac5e5d5bd0d326da8a6d181af508a65" Feb 16 20:57:54.342308 master-0 kubenswrapper[7926]: I0216 20:57:54.341869 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 20:57:54.349278 master-0 kubenswrapper[7926]: I0216 20:57:54.349221 7926 generic.go:334] "Generic (PLEG): container finished" podID="03593410-baa5-4edb-9d73-242a74f82987" containerID="df640a25b3ddb3199360ab01328f62e3d346f3e50e79a2d6fa8fbf82c9ea5172" exitCode=0 Feb 16 20:57:54.349278 master-0 kubenswrapper[7926]: I0216 20:57:54.349266 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8vtc" event={"ID":"03593410-baa5-4edb-9d73-242a74f82987","Type":"ContainerDied","Data":"df640a25b3ddb3199360ab01328f62e3d346f3e50e79a2d6fa8fbf82c9ea5172"} Feb 16 20:57:54.349404 master-0 kubenswrapper[7926]: I0216 20:57:54.349308 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8vtc" event={"ID":"03593410-baa5-4edb-9d73-242a74f82987","Type":"ContainerStarted","Data":"bcbf76c12e0a665429c3b7495c8c421337d9ebf01882b382cef96d39701094b1"} Feb 16 20:57:54.351556 master-0 kubenswrapper[7926]: I0216 20:57:54.350746 7926 generic.go:334] "Generic (PLEG): container finished" podID="97ec2c8c-e32c-4d18-ad78-0ef1f19557af" containerID="1291ada8598d43ef0cbbde81989f1e8de61f7c3c643ca6fbf77da577e15fdf5b" exitCode=0 Feb 16 20:57:54.351556 master-0 kubenswrapper[7926]: I0216 
20:57:54.350787 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xv645" event={"ID":"97ec2c8c-e32c-4d18-ad78-0ef1f19557af","Type":"ContainerDied","Data":"1291ada8598d43ef0cbbde81989f1e8de61f7c3c643ca6fbf77da577e15fdf5b"} Feb 16 20:57:54.351556 master-0 kubenswrapper[7926]: I0216 20:57:54.350815 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xv645" event={"ID":"97ec2c8c-e32c-4d18-ad78-0ef1f19557af","Type":"ContainerStarted","Data":"2079e16eb1d12b11a9c3315a75882203ebe24ec85035afd7621338cd504578d4"} Feb 16 20:57:54.401193 master-0 kubenswrapper[7926]: I0216 20:57:54.401125 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 16 20:57:54.403300 master-0 kubenswrapper[7926]: I0216 20:57:54.403181 7926 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 16 20:57:54.583155 master-0 kubenswrapper[7926]: I0216 20:57:54.581270 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w2lj6"] Feb 16 20:57:54.583155 master-0 kubenswrapper[7926]: E0216 20:57:54.581544 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c71d252-827d-49ba-96c8-23137e061411" containerName="installer" Feb 16 20:57:54.583155 master-0 kubenswrapper[7926]: I0216 20:57:54.581560 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c71d252-827d-49ba-96c8-23137e061411" containerName="installer" Feb 16 20:57:54.583155 master-0 kubenswrapper[7926]: I0216 20:57:54.581685 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c71d252-827d-49ba-96c8-23137e061411" containerName="installer" Feb 16 20:57:54.583155 master-0 kubenswrapper[7926]: I0216 20:57:54.582440 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w2lj6" Feb 16 20:57:54.593791 master-0 kubenswrapper[7926]: I0216 20:57:54.593751 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w2lj6"] Feb 16 20:57:54.663564 master-0 kubenswrapper[7926]: I0216 20:57:54.663195 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4a6dcba-776f-48ba-b824-90ed5ae3abee-catalog-content\") pod \"redhat-marketplace-w2lj6\" (UID: \"d4a6dcba-776f-48ba-b824-90ed5ae3abee\") " pod="openshift-marketplace/redhat-marketplace-w2lj6" Feb 16 20:57:54.663564 master-0 kubenswrapper[7926]: I0216 20:57:54.663243 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4a6dcba-776f-48ba-b824-90ed5ae3abee-utilities\") pod \"redhat-marketplace-w2lj6\" (UID: \"d4a6dcba-776f-48ba-b824-90ed5ae3abee\") " pod="openshift-marketplace/redhat-marketplace-w2lj6" Feb 16 20:57:54.663564 master-0 kubenswrapper[7926]: I0216 20:57:54.663313 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9l69\" (UniqueName: \"kubernetes.io/projected/d4a6dcba-776f-48ba-b824-90ed5ae3abee-kube-api-access-l9l69\") pod \"redhat-marketplace-w2lj6\" (UID: \"d4a6dcba-776f-48ba-b824-90ed5ae3abee\") " pod="openshift-marketplace/redhat-marketplace-w2lj6" Feb 16 20:57:54.747723 master-0 kubenswrapper[7926]: I0216 20:57:54.747604 7926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c71d252-827d-49ba-96c8-23137e061411" path="/var/lib/kubelet/pods/2c71d252-827d-49ba-96c8-23137e061411/volumes" Feb 16 20:57:54.748882 master-0 kubenswrapper[7926]: I0216 20:57:54.748831 7926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34eb2829-2e5d-455d-9218-cad202a49e30" 
path="/var/lib/kubelet/pods/34eb2829-2e5d-455d-9218-cad202a49e30/volumes" Feb 16 20:57:54.749901 master-0 kubenswrapper[7926]: I0216 20:57:54.749852 7926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47a4e44e-ec4a-4e8c-a968-a37c81771bfc" path="/var/lib/kubelet/pods/47a4e44e-ec4a-4e8c-a968-a37c81771bfc/volumes" Feb 16 20:57:54.764211 master-0 kubenswrapper[7926]: I0216 20:57:54.764155 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9l69\" (UniqueName: \"kubernetes.io/projected/d4a6dcba-776f-48ba-b824-90ed5ae3abee-kube-api-access-l9l69\") pod \"redhat-marketplace-w2lj6\" (UID: \"d4a6dcba-776f-48ba-b824-90ed5ae3abee\") " pod="openshift-marketplace/redhat-marketplace-w2lj6" Feb 16 20:57:54.764291 master-0 kubenswrapper[7926]: I0216 20:57:54.764260 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4a6dcba-776f-48ba-b824-90ed5ae3abee-catalog-content\") pod \"redhat-marketplace-w2lj6\" (UID: \"d4a6dcba-776f-48ba-b824-90ed5ae3abee\") " pod="openshift-marketplace/redhat-marketplace-w2lj6" Feb 16 20:57:54.764321 master-0 kubenswrapper[7926]: I0216 20:57:54.764297 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4a6dcba-776f-48ba-b824-90ed5ae3abee-utilities\") pod \"redhat-marketplace-w2lj6\" (UID: \"d4a6dcba-776f-48ba-b824-90ed5ae3abee\") " pod="openshift-marketplace/redhat-marketplace-w2lj6" Feb 16 20:57:54.764905 master-0 kubenswrapper[7926]: I0216 20:57:54.764861 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4a6dcba-776f-48ba-b824-90ed5ae3abee-catalog-content\") pod \"redhat-marketplace-w2lj6\" (UID: \"d4a6dcba-776f-48ba-b824-90ed5ae3abee\") " pod="openshift-marketplace/redhat-marketplace-w2lj6" Feb 16 20:57:54.766082 master-0 
kubenswrapper[7926]: I0216 20:57:54.766052 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4a6dcba-776f-48ba-b824-90ed5ae3abee-utilities\") pod \"redhat-marketplace-w2lj6\" (UID: \"d4a6dcba-776f-48ba-b824-90ed5ae3abee\") " pod="openshift-marketplace/redhat-marketplace-w2lj6" Feb 16 20:57:54.779140 master-0 kubenswrapper[7926]: I0216 20:57:54.779084 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9l69\" (UniqueName: \"kubernetes.io/projected/d4a6dcba-776f-48ba-b824-90ed5ae3abee-kube-api-access-l9l69\") pod \"redhat-marketplace-w2lj6\" (UID: \"d4a6dcba-776f-48ba-b824-90ed5ae3abee\") " pod="openshift-marketplace/redhat-marketplace-w2lj6" Feb 16 20:57:54.957060 master-0 kubenswrapper[7926]: I0216 20:57:54.956972 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w2lj6" Feb 16 20:57:55.130669 master-0 kubenswrapper[7926]: I0216 20:57:55.126886 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q"] Feb 16 20:57:55.130669 master-0 kubenswrapper[7926]: I0216 20:57:55.127568 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" Feb 16 20:57:55.133759 master-0 kubenswrapper[7926]: I0216 20:57:55.130975 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 20:57:55.133759 master-0 kubenswrapper[7926]: I0216 20:57:55.131380 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 20:57:55.133759 master-0 kubenswrapper[7926]: I0216 20:57:55.132819 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 20:57:55.133759 master-0 kubenswrapper[7926]: I0216 20:57:55.133032 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 20:57:55.133759 master-0 kubenswrapper[7926]: I0216 20:57:55.133151 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 20:57:55.269873 master-0 kubenswrapper[7926]: I0216 20:57:55.269801 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c62bb2b4-1469-4e0d-810f-cd6e21ee908a-config\") pod \"machine-approver-6c46d95f74-2nz2q\" (UID: \"c62bb2b4-1469-4e0d-810f-cd6e21ee908a\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" Feb 16 20:57:55.269873 master-0 kubenswrapper[7926]: I0216 20:57:55.269855 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qqkf\" (UniqueName: \"kubernetes.io/projected/c62bb2b4-1469-4e0d-810f-cd6e21ee908a-kube-api-access-4qqkf\") pod \"machine-approver-6c46d95f74-2nz2q\" (UID: \"c62bb2b4-1469-4e0d-810f-cd6e21ee908a\") " 
pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" Feb 16 20:57:55.269873 master-0 kubenswrapper[7926]: I0216 20:57:55.269887 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c62bb2b4-1469-4e0d-810f-cd6e21ee908a-auth-proxy-config\") pod \"machine-approver-6c46d95f74-2nz2q\" (UID: \"c62bb2b4-1469-4e0d-810f-cd6e21ee908a\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" Feb 16 20:57:55.270219 master-0 kubenswrapper[7926]: I0216 20:57:55.269920 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c62bb2b4-1469-4e0d-810f-cd6e21ee908a-machine-approver-tls\") pod \"machine-approver-6c46d95f74-2nz2q\" (UID: \"c62bb2b4-1469-4e0d-810f-cd6e21ee908a\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" Feb 16 20:57:55.358085 master-0 kubenswrapper[7926]: I0216 20:57:55.357906 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl" event={"ID":"302156cc-9dca-4a66-9e6a-ba2c7e738c92","Type":"ContainerStarted","Data":"03d8daaa264d52b607ef3a2e1ee4da18d94e4e7433715288335ef0a92bd90db1"} Feb 16 20:57:55.358981 master-0 kubenswrapper[7926]: I0216 20:57:55.358908 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" event={"ID":"57b94ed4-8f0b-4223-bdaf-4316859d8ad3","Type":"ContainerStarted","Data":"b1181fe67b605ba3682cb72aadab485f579f30f6cec1251b516fac8e19f9c298"} Feb 16 20:57:55.371394 master-0 kubenswrapper[7926]: I0216 20:57:55.371334 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c62bb2b4-1469-4e0d-810f-cd6e21ee908a-machine-approver-tls\") pod 
\"machine-approver-6c46d95f74-2nz2q\" (UID: \"c62bb2b4-1469-4e0d-810f-cd6e21ee908a\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" Feb 16 20:57:55.371638 master-0 kubenswrapper[7926]: I0216 20:57:55.371417 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c62bb2b4-1469-4e0d-810f-cd6e21ee908a-config\") pod \"machine-approver-6c46d95f74-2nz2q\" (UID: \"c62bb2b4-1469-4e0d-810f-cd6e21ee908a\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" Feb 16 20:57:55.371638 master-0 kubenswrapper[7926]: I0216 20:57:55.371438 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qqkf\" (UniqueName: \"kubernetes.io/projected/c62bb2b4-1469-4e0d-810f-cd6e21ee908a-kube-api-access-4qqkf\") pod \"machine-approver-6c46d95f74-2nz2q\" (UID: \"c62bb2b4-1469-4e0d-810f-cd6e21ee908a\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" Feb 16 20:57:55.371638 master-0 kubenswrapper[7926]: I0216 20:57:55.371469 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c62bb2b4-1469-4e0d-810f-cd6e21ee908a-auth-proxy-config\") pod \"machine-approver-6c46d95f74-2nz2q\" (UID: \"c62bb2b4-1469-4e0d-810f-cd6e21ee908a\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" Feb 16 20:57:55.372221 master-0 kubenswrapper[7926]: I0216 20:57:55.372192 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c62bb2b4-1469-4e0d-810f-cd6e21ee908a-auth-proxy-config\") pod \"machine-approver-6c46d95f74-2nz2q\" (UID: \"c62bb2b4-1469-4e0d-810f-cd6e21ee908a\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" Feb 16 20:57:55.373705 master-0 kubenswrapper[7926]: I0216 20:57:55.373630 7926 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c62bb2b4-1469-4e0d-810f-cd6e21ee908a-config\") pod \"machine-approver-6c46d95f74-2nz2q\" (UID: \"c62bb2b4-1469-4e0d-810f-cd6e21ee908a\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" Feb 16 20:57:55.376711 master-0 kubenswrapper[7926]: I0216 20:57:55.376638 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c62bb2b4-1469-4e0d-810f-cd6e21ee908a-machine-approver-tls\") pod \"machine-approver-6c46d95f74-2nz2q\" (UID: \"c62bb2b4-1469-4e0d-810f-cd6e21ee908a\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" Feb 16 20:57:55.382524 master-0 kubenswrapper[7926]: I0216 20:57:55.382425 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl" podStartSLOduration=1.954419025 podStartE2EDuration="4.382398608s" podCreationTimestamp="2026-02-16 20:57:51 +0000 UTC" firstStartedPulling="2026-02-16 20:57:52.337873126 +0000 UTC m=+43.972773426" lastFinishedPulling="2026-02-16 20:57:54.765852719 +0000 UTC m=+46.400753009" observedRunningTime="2026-02-16 20:57:55.378875698 +0000 UTC m=+47.013776048" watchObservedRunningTime="2026-02-16 20:57:55.382398608 +0000 UTC m=+47.017298938" Feb 16 20:57:55.391710 master-0 kubenswrapper[7926]: I0216 20:57:55.391615 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qqkf\" (UniqueName: \"kubernetes.io/projected/c62bb2b4-1469-4e0d-810f-cd6e21ee908a-kube-api-access-4qqkf\") pod \"machine-approver-6c46d95f74-2nz2q\" (UID: \"c62bb2b4-1469-4e0d-810f-cd6e21ee908a\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" Feb 16 20:57:55.417023 master-0 kubenswrapper[7926]: I0216 20:57:55.416970 7926 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-w2lj6"] Feb 16 20:57:55.446980 master-0 kubenswrapper[7926]: I0216 20:57:55.446903 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" Feb 16 20:57:55.474939 master-0 kubenswrapper[7926]: W0216 20:57:55.474896 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc62bb2b4_1469_4e0d_810f_cd6e21ee908a.slice/crio-8bac8203193652171deab4e559a0035a72359c725620330467c4f253c536e2dc WatchSource:0}: Error finding container 8bac8203193652171deab4e559a0035a72359c725620330467c4f253c536e2dc: Status 404 returned error can't find the container with id 8bac8203193652171deab4e559a0035a72359c725620330467c4f253c536e2dc Feb 16 20:57:55.779466 master-0 kubenswrapper[7926]: I0216 20:57:55.779374 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dhh2p"] Feb 16 20:57:55.780790 master-0 kubenswrapper[7926]: I0216 20:57:55.780754 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dhh2p" Feb 16 20:57:55.800924 master-0 kubenswrapper[7926]: I0216 20:57:55.800859 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dhh2p"] Feb 16 20:57:55.877608 master-0 kubenswrapper[7926]: I0216 20:57:55.877454 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9566b108-44e1-4d9e-8984-4c396dc4408c-catalog-content\") pod \"redhat-operators-dhh2p\" (UID: \"9566b108-44e1-4d9e-8984-4c396dc4408c\") " pod="openshift-marketplace/redhat-operators-dhh2p" Feb 16 20:57:55.877608 master-0 kubenswrapper[7926]: I0216 20:57:55.877528 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9566b108-44e1-4d9e-8984-4c396dc4408c-utilities\") pod \"redhat-operators-dhh2p\" (UID: \"9566b108-44e1-4d9e-8984-4c396dc4408c\") " pod="openshift-marketplace/redhat-operators-dhh2p" Feb 16 20:57:55.878492 master-0 kubenswrapper[7926]: I0216 20:57:55.877694 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j92dc\" (UniqueName: \"kubernetes.io/projected/9566b108-44e1-4d9e-8984-4c396dc4408c-kube-api-access-j92dc\") pod \"redhat-operators-dhh2p\" (UID: \"9566b108-44e1-4d9e-8984-4c396dc4408c\") " pod="openshift-marketplace/redhat-operators-dhh2p" Feb 16 20:57:55.916389 master-0 kubenswrapper[7926]: I0216 20:57:55.916280 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 16 20:57:55.917213 master-0 kubenswrapper[7926]: I0216 20:57:55.917172 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Feb 16 20:57:55.944619 master-0 kubenswrapper[7926]: I0216 20:57:55.944528 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 16 20:57:55.978663 master-0 kubenswrapper[7926]: I0216 20:57:55.978593 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a29a1022-5f54-49a2-99f6-d19eb2773890-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"a29a1022-5f54-49a2-99f6-d19eb2773890\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 16 20:57:55.978919 master-0 kubenswrapper[7926]: I0216 20:57:55.978716 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9566b108-44e1-4d9e-8984-4c396dc4408c-catalog-content\") pod \"redhat-operators-dhh2p\" (UID: \"9566b108-44e1-4d9e-8984-4c396dc4408c\") " pod="openshift-marketplace/redhat-operators-dhh2p" Feb 16 20:57:55.978919 master-0 kubenswrapper[7926]: I0216 20:57:55.978750 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9566b108-44e1-4d9e-8984-4c396dc4408c-utilities\") pod \"redhat-operators-dhh2p\" (UID: \"9566b108-44e1-4d9e-8984-4c396dc4408c\") " pod="openshift-marketplace/redhat-operators-dhh2p" Feb 16 20:57:55.978919 master-0 kubenswrapper[7926]: I0216 20:57:55.978776 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a29a1022-5f54-49a2-99f6-d19eb2773890-var-lock\") pod \"installer-3-master-0\" (UID: \"a29a1022-5f54-49a2-99f6-d19eb2773890\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 16 20:57:55.978919 master-0 kubenswrapper[7926]: I0216 20:57:55.978813 7926 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a29a1022-5f54-49a2-99f6-d19eb2773890-kube-api-access\") pod \"installer-3-master-0\" (UID: \"a29a1022-5f54-49a2-99f6-d19eb2773890\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 16 20:57:55.978919 master-0 kubenswrapper[7926]: I0216 20:57:55.978891 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j92dc\" (UniqueName: \"kubernetes.io/projected/9566b108-44e1-4d9e-8984-4c396dc4408c-kube-api-access-j92dc\") pod \"redhat-operators-dhh2p\" (UID: \"9566b108-44e1-4d9e-8984-4c396dc4408c\") " pod="openshift-marketplace/redhat-operators-dhh2p"
Feb 16 20:57:55.980661 master-0 kubenswrapper[7926]: I0216 20:57:55.980567 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9566b108-44e1-4d9e-8984-4c396dc4408c-utilities\") pod \"redhat-operators-dhh2p\" (UID: \"9566b108-44e1-4d9e-8984-4c396dc4408c\") " pod="openshift-marketplace/redhat-operators-dhh2p"
Feb 16 20:57:55.980661 master-0 kubenswrapper[7926]: I0216 20:57:55.980618 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9566b108-44e1-4d9e-8984-4c396dc4408c-catalog-content\") pod \"redhat-operators-dhh2p\" (UID: \"9566b108-44e1-4d9e-8984-4c396dc4408c\") " pod="openshift-marketplace/redhat-operators-dhh2p"
Feb 16 20:57:55.998928 master-0 kubenswrapper[7926]: I0216 20:57:55.998853 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j92dc\" (UniqueName: \"kubernetes.io/projected/9566b108-44e1-4d9e-8984-4c396dc4408c-kube-api-access-j92dc\") pod \"redhat-operators-dhh2p\" (UID: \"9566b108-44e1-4d9e-8984-4c396dc4408c\") " pod="openshift-marketplace/redhat-operators-dhh2p"
Feb 16 20:57:56.079980 master-0 kubenswrapper[7926]: I0216 20:57:56.079875 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a29a1022-5f54-49a2-99f6-d19eb2773890-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"a29a1022-5f54-49a2-99f6-d19eb2773890\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 16 20:57:56.079980 master-0 kubenswrapper[7926]: I0216 20:57:56.079979 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a29a1022-5f54-49a2-99f6-d19eb2773890-var-lock\") pod \"installer-3-master-0\" (UID: \"a29a1022-5f54-49a2-99f6-d19eb2773890\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 16 20:57:56.080521 master-0 kubenswrapper[7926]: I0216 20:57:56.080124 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a29a1022-5f54-49a2-99f6-d19eb2773890-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"a29a1022-5f54-49a2-99f6-d19eb2773890\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 16 20:57:56.080521 master-0 kubenswrapper[7926]: I0216 20:57:56.080269 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a29a1022-5f54-49a2-99f6-d19eb2773890-var-lock\") pod \"installer-3-master-0\" (UID: \"a29a1022-5f54-49a2-99f6-d19eb2773890\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 16 20:57:56.080521 master-0 kubenswrapper[7926]: I0216 20:57:56.080286 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a29a1022-5f54-49a2-99f6-d19eb2773890-kube-api-access\") pod \"installer-3-master-0\" (UID: \"a29a1022-5f54-49a2-99f6-d19eb2773890\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 16 20:57:56.095499 master-0 kubenswrapper[7926]: I0216 20:57:56.095398 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a29a1022-5f54-49a2-99f6-d19eb2773890-kube-api-access\") pod \"installer-3-master-0\" (UID: \"a29a1022-5f54-49a2-99f6-d19eb2773890\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 16 20:57:56.118318 master-0 kubenswrapper[7926]: I0216 20:57:56.118252 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dhh2p"
Feb 16 20:57:56.271163 master-0 kubenswrapper[7926]: I0216 20:57:56.271094 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Feb 16 20:57:56.373459 master-0 kubenswrapper[7926]: I0216 20:57:56.373386 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" event={"ID":"c62bb2b4-1469-4e0d-810f-cd6e21ee908a","Type":"ContainerStarted","Data":"94ccf2a93e956c3b518b8dcb871e7b9e7ffe5710f70065964156b889eef86eb5"}
Feb 16 20:57:56.373459 master-0 kubenswrapper[7926]: I0216 20:57:56.373455 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" event={"ID":"c62bb2b4-1469-4e0d-810f-cd6e21ee908a","Type":"ContainerStarted","Data":"8bac8203193652171deab4e559a0035a72359c725620330467c4f253c536e2dc"}
Feb 16 20:57:56.375701 master-0 kubenswrapper[7926]: I0216 20:57:56.375659 7926 generic.go:334] "Generic (PLEG): container finished" podID="d4a6dcba-776f-48ba-b824-90ed5ae3abee" containerID="46a258c72aa6c608e1111ee27c8210db148e57a72d2483481ebdbb919bab9811" exitCode=0
Feb 16 20:57:56.375801 master-0 kubenswrapper[7926]: I0216 20:57:56.375760 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w2lj6" event={"ID":"d4a6dcba-776f-48ba-b824-90ed5ae3abee","Type":"ContainerDied","Data":"46a258c72aa6c608e1111ee27c8210db148e57a72d2483481ebdbb919bab9811"}
Feb 16 20:57:56.375801 master-0 kubenswrapper[7926]: I0216 20:57:56.375778 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w2lj6" event={"ID":"d4a6dcba-776f-48ba-b824-90ed5ae3abee","Type":"ContainerStarted","Data":"098908342bebd0f7f9ce0402f1dd0bea51ed6e67a6ee85624a16f82f857d60f9"}
Feb 16 20:57:56.454613 master-0 kubenswrapper[7926]: I0216 20:57:56.454501 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf"]
Feb 16 20:57:56.455855 master-0 kubenswrapper[7926]: I0216 20:57:56.455796 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf"
Feb 16 20:57:56.459919 master-0 kubenswrapper[7926]: I0216 20:57:56.459870 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 20:57:56.460085 master-0 kubenswrapper[7926]: I0216 20:57:56.460030 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 20:57:56.460362 master-0 kubenswrapper[7926]: I0216 20:57:56.460326 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 20:57:56.460362 master-0 kubenswrapper[7926]: I0216 20:57:56.460346 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 16 20:57:56.460451 master-0 kubenswrapper[7926]: I0216 20:57:56.460360 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 20:57:56.592057 master-0 kubenswrapper[7926]: I0216 20:57:56.591989 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4db59450-da78-4879-ada8-ca3fc49fb7a7-config\") pod \"route-controller-manager-749ccd9c56-wzsnf\" (UID: \"4db59450-da78-4879-ada8-ca3fc49fb7a7\") " pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf"
Feb 16 20:57:56.592057 master-0 kubenswrapper[7926]: I0216 20:57:56.592049 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4db59450-da78-4879-ada8-ca3fc49fb7a7-serving-cert\") pod \"route-controller-manager-749ccd9c56-wzsnf\" (UID: \"4db59450-da78-4879-ada8-ca3fc49fb7a7\") " pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf"
Feb 16 20:57:56.592057 master-0 kubenswrapper[7926]: I0216 20:57:56.592074 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67nzn\" (UniqueName: \"kubernetes.io/projected/4db59450-da78-4879-ada8-ca3fc49fb7a7-kube-api-access-67nzn\") pod \"route-controller-manager-749ccd9c56-wzsnf\" (UID: \"4db59450-da78-4879-ada8-ca3fc49fb7a7\") " pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf"
Feb 16 20:57:56.592547 master-0 kubenswrapper[7926]: I0216 20:57:56.592120 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4db59450-da78-4879-ada8-ca3fc49fb7a7-client-ca\") pod \"route-controller-manager-749ccd9c56-wzsnf\" (UID: \"4db59450-da78-4879-ada8-ca3fc49fb7a7\") " pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf"
Feb 16 20:57:56.677332 master-0 kubenswrapper[7926]: I0216 20:57:56.677189 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf"]
Feb 16 20:57:56.693664 master-0 kubenswrapper[7926]: I0216 20:57:56.693587 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4db59450-da78-4879-ada8-ca3fc49fb7a7-client-ca\") pod \"route-controller-manager-749ccd9c56-wzsnf\" (UID: \"4db59450-da78-4879-ada8-ca3fc49fb7a7\") " pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf"
Feb 16 20:57:56.693927 master-0 kubenswrapper[7926]: I0216 20:57:56.693733 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4db59450-da78-4879-ada8-ca3fc49fb7a7-config\") pod \"route-controller-manager-749ccd9c56-wzsnf\" (UID: \"4db59450-da78-4879-ada8-ca3fc49fb7a7\") " pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf"
Feb 16 20:57:56.693927 master-0 kubenswrapper[7926]: I0216 20:57:56.693766 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4db59450-da78-4879-ada8-ca3fc49fb7a7-serving-cert\") pod \"route-controller-manager-749ccd9c56-wzsnf\" (UID: \"4db59450-da78-4879-ada8-ca3fc49fb7a7\") " pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf"
Feb 16 20:57:56.693927 master-0 kubenswrapper[7926]: I0216 20:57:56.693790 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67nzn\" (UniqueName: \"kubernetes.io/projected/4db59450-da78-4879-ada8-ca3fc49fb7a7-kube-api-access-67nzn\") pod \"route-controller-manager-749ccd9c56-wzsnf\" (UID: \"4db59450-da78-4879-ada8-ca3fc49fb7a7\") " pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf"
Feb 16 20:57:56.694849 master-0 kubenswrapper[7926]: I0216 20:57:56.694801 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4db59450-da78-4879-ada8-ca3fc49fb7a7-client-ca\") pod \"route-controller-manager-749ccd9c56-wzsnf\" (UID: \"4db59450-da78-4879-ada8-ca3fc49fb7a7\") " pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf"
Feb 16 20:57:56.695438 master-0 kubenswrapper[7926]: I0216 20:57:56.695397 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4db59450-da78-4879-ada8-ca3fc49fb7a7-config\") pod \"route-controller-manager-749ccd9c56-wzsnf\" (UID: \"4db59450-da78-4879-ada8-ca3fc49fb7a7\") " pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf"
Feb 16 20:57:56.697434 master-0 kubenswrapper[7926]: I0216 20:57:56.697398 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4db59450-da78-4879-ada8-ca3fc49fb7a7-serving-cert\") pod \"route-controller-manager-749ccd9c56-wzsnf\" (UID: \"4db59450-da78-4879-ada8-ca3fc49fb7a7\") " pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf"
Feb 16 20:57:56.935774 master-0 kubenswrapper[7926]: I0216 20:57:56.933705 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dhh2p"]
Feb 16 20:57:56.944671 master-0 kubenswrapper[7926]: I0216 20:57:56.943918 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Feb 16 20:57:56.951781 master-0 kubenswrapper[7926]: I0216 20:57:56.950515 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67nzn\" (UniqueName: \"kubernetes.io/projected/4db59450-da78-4879-ada8-ca3fc49fb7a7-kube-api-access-67nzn\") pod \"route-controller-manager-749ccd9c56-wzsnf\" (UID: \"4db59450-da78-4879-ada8-ca3fc49fb7a7\") " pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf"
Feb 16 20:57:57.095671 master-0 kubenswrapper[7926]: I0216 20:57:57.092429 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf"
Feb 16 20:57:57.382969 master-0 kubenswrapper[7926]: I0216 20:57:57.382892 7926 generic.go:334] "Generic (PLEG): container finished" podID="9566b108-44e1-4d9e-8984-4c396dc4408c" containerID="e4407304a8565029141f8bd91a4f0c4e3f383f6d77ed1524d0cd3a581fa9e7f7" exitCode=0
Feb 16 20:57:57.382969 master-0 kubenswrapper[7926]: I0216 20:57:57.382981 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhh2p" event={"ID":"9566b108-44e1-4d9e-8984-4c396dc4408c","Type":"ContainerDied","Data":"e4407304a8565029141f8bd91a4f0c4e3f383f6d77ed1524d0cd3a581fa9e7f7"}
Feb 16 20:57:57.384551 master-0 kubenswrapper[7926]: I0216 20:57:57.383011 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhh2p" event={"ID":"9566b108-44e1-4d9e-8984-4c396dc4408c","Type":"ContainerStarted","Data":"3e97182ddf5896a5823851bd32b3058169dbc5cfba0d9d88f02cc81a737767a7"}
Feb 16 20:57:57.385504 master-0 kubenswrapper[7926]: I0216 20:57:57.385305 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"a29a1022-5f54-49a2-99f6-d19eb2773890","Type":"ContainerStarted","Data":"3827d0c6e638278acbaf186f1e2b6637d86efe902a1f4b0978ddea9297c39ba8"}
Feb 16 20:57:57.385504 master-0 kubenswrapper[7926]: I0216 20:57:57.385338 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"a29a1022-5f54-49a2-99f6-d19eb2773890","Type":"ContainerStarted","Data":"8b81ed40814b01f51e71b5774683179011cff5033132cba8aaa86719247d405f"}
Feb 16 20:57:57.930808 master-0 kubenswrapper[7926]: I0216 20:57:57.930736 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Feb 16 20:57:57.931384 master-0 kubenswrapper[7926]: I0216 20:57:57.931329 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Feb 16 20:57:57.967956 master-0 kubenswrapper[7926]: I0216 20:57:57.965488 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 16 20:57:57.979250 master-0 kubenswrapper[7926]: I0216 20:57:57.978785 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf"]
Feb 16 20:57:57.980754 master-0 kubenswrapper[7926]: I0216 20:57:57.980737 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Feb 16 20:57:58.012524 master-0 kubenswrapper[7926]: I0216 20:57:58.012438 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=3.012403849 podStartE2EDuration="3.012403849s" podCreationTimestamp="2026-02-16 20:57:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:57:58.01142631 +0000 UTC m=+49.646326620" watchObservedRunningTime="2026-02-16 20:57:58.012403849 +0000 UTC m=+49.647304149"
Feb 16 20:57:58.031572 master-0 kubenswrapper[7926]: I0216 20:57:58.031426 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965-var-lock\") pod \"installer-1-master-0\" (UID: \"9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 16 20:57:58.031572 master-0 kubenswrapper[7926]: I0216 20:57:58.031564 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965-kube-api-access\") pod \"installer-1-master-0\" (UID: \"9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 16 20:57:58.032798 master-0 kubenswrapper[7926]: I0216 20:57:58.032752 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 16 20:57:58.066252 master-0 kubenswrapper[7926]: I0216 20:57:58.064634 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf"]
Feb 16 20:57:58.066733 master-0 kubenswrapper[7926]: I0216 20:57:58.066713 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf"
Feb 16 20:57:58.073589 master-0 kubenswrapper[7926]: I0216 20:57:58.073533 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Feb 16 20:57:58.073950 master-0 kubenswrapper[7926]: I0216 20:57:58.073673 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Feb 16 20:57:58.074356 master-0 kubenswrapper[7926]: I0216 20:57:58.074321 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Feb 16 20:57:58.077023 master-0 kubenswrapper[7926]: I0216 20:57:58.076945 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf"]
Feb 16 20:57:58.096169 master-0 kubenswrapper[7926]: I0216 20:57:58.096108 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Feb 16 20:57:58.134070 master-0 kubenswrapper[7926]: I0216 20:57:58.134007 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 16 20:57:58.134070 master-0 kubenswrapper[7926]: I0216 20:57:58.134073 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965-var-lock\") pod \"installer-1-master-0\" (UID: \"9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 16 20:57:58.134884 master-0 kubenswrapper[7926]: I0216 20:57:58.134096 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965-kube-api-access\") pod \"installer-1-master-0\" (UID: \"9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 16 20:57:58.134884 master-0 kubenswrapper[7926]: I0216 20:57:58.134131 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldzxc\" (UniqueName: \"kubernetes.io/projected/03a5021d-8a5c-4011-a9f9-c5eb38d5f236-kube-api-access-ldzxc\") pod \"cloud-credential-operator-595c8f9ff-7mpsf\" (UID: \"03a5021d-8a5c-4011-a9f9-c5eb38d5f236\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf"
Feb 16 20:57:58.134884 master-0 kubenswrapper[7926]: I0216 20:57:58.134164 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03a5021d-8a5c-4011-a9f9-c5eb38d5f236-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-7mpsf\" (UID: \"03a5021d-8a5c-4011-a9f9-c5eb38d5f236\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf"
Feb 16 20:57:58.134884 master-0 kubenswrapper[7926]: I0216 20:57:58.134202 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/03a5021d-8a5c-4011-a9f9-c5eb38d5f236-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-7mpsf\" (UID: \"03a5021d-8a5c-4011-a9f9-c5eb38d5f236\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf"
Feb 16 20:57:58.134884 master-0 kubenswrapper[7926]: I0216 20:57:58.134300 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 16 20:57:58.134884 master-0 kubenswrapper[7926]: I0216 20:57:58.134332 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965-var-lock\") pod \"installer-1-master-0\" (UID: \"9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 16 20:57:58.155249 master-0 kubenswrapper[7926]: I0216 20:57:58.155122 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965-kube-api-access\") pod \"installer-1-master-0\" (UID: \"9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965\") " pod="openshift-kube-apiserver/installer-1-master-0"
Feb 16 20:57:58.235143 master-0 kubenswrapper[7926]: I0216 20:57:58.235074 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldzxc\" (UniqueName: \"kubernetes.io/projected/03a5021d-8a5c-4011-a9f9-c5eb38d5f236-kube-api-access-ldzxc\") pod \"cloud-credential-operator-595c8f9ff-7mpsf\" (UID: \"03a5021d-8a5c-4011-a9f9-c5eb38d5f236\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf"
Feb 16 20:57:58.235143 master-0 kubenswrapper[7926]: I0216 20:57:58.235143 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03a5021d-8a5c-4011-a9f9-c5eb38d5f236-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-7mpsf\" (UID: \"03a5021d-8a5c-4011-a9f9-c5eb38d5f236\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf"
Feb 16 20:57:58.235587 master-0 kubenswrapper[7926]: I0216 20:57:58.235390 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/03a5021d-8a5c-4011-a9f9-c5eb38d5f236-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-7mpsf\" (UID: \"03a5021d-8a5c-4011-a9f9-c5eb38d5f236\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf"
Feb 16 20:57:58.236543 master-0 kubenswrapper[7926]: I0216 20:57:58.236501 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03a5021d-8a5c-4011-a9f9-c5eb38d5f236-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-7mpsf\" (UID: \"03a5021d-8a5c-4011-a9f9-c5eb38d5f236\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf"
Feb 16 20:57:58.240208 master-0 kubenswrapper[7926]: I0216 20:57:58.240178 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/03a5021d-8a5c-4011-a9f9-c5eb38d5f236-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-7mpsf\" (UID: \"03a5021d-8a5c-4011-a9f9-c5eb38d5f236\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf"
Feb 16 20:57:58.257982 master-0 kubenswrapper[7926]: I0216 20:57:58.257942 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldzxc\" (UniqueName: \"kubernetes.io/projected/03a5021d-8a5c-4011-a9f9-c5eb38d5f236-kube-api-access-ldzxc\") pod \"cloud-credential-operator-595c8f9ff-7mpsf\" (UID: \"03a5021d-8a5c-4011-a9f9-c5eb38d5f236\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf"
Feb 16 20:57:58.319043 master-0 kubenswrapper[7926]: I0216 20:57:58.318974 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Feb 16 20:57:58.405524 master-0 kubenswrapper[7926]: I0216 20:57:58.405415 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf"
Feb 16 20:57:58.790539 master-0 kubenswrapper[7926]: W0216 20:57:58.790454 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4db59450_da78_4879_ada8_ca3fc49fb7a7.slice/crio-ff3056d39fbc51a0db62d052e0051f801497ab64b4c704d9bed90917e0c30ddd WatchSource:0}: Error finding container ff3056d39fbc51a0db62d052e0051f801497ab64b4c704d9bed90917e0c30ddd: Status 404 returned error can't find the container with id ff3056d39fbc51a0db62d052e0051f801497ab64b4c704d9bed90917e0c30ddd
Feb 16 20:57:58.826951 master-0 kubenswrapper[7926]: I0216 20:57:58.826855 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl"]
Feb 16 20:57:58.828042 master-0 kubenswrapper[7926]: I0216 20:57:58.828011 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl"
Feb 16 20:57:58.838872 master-0 kubenswrapper[7926]: I0216 20:57:58.831441 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 16 20:57:58.838872 master-0 kubenswrapper[7926]: I0216 20:57:58.832161 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 16 20:57:58.838872 master-0 kubenswrapper[7926]: I0216 20:57:58.832695 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 16 20:57:58.838872 master-0 kubenswrapper[7926]: I0216 20:57:58.834048 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl"]
Feb 16 20:57:58.945204 master-0 kubenswrapper[7926]: I0216 20:57:58.945049 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/55095f4f-cac0-456c-9ccc-45869392408c-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-d7lfl\" (UID: \"55095f4f-cac0-456c-9ccc-45869392408c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl"
Feb 16 20:57:58.945204 master-0 kubenswrapper[7926]: I0216 20:57:58.945175 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hnc6\" (UniqueName: \"kubernetes.io/projected/55095f4f-cac0-456c-9ccc-45869392408c-kube-api-access-7hnc6\") pod \"cluster-samples-operator-f8cbff74c-d7lfl\" (UID: \"55095f4f-cac0-456c-9ccc-45869392408c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl"
Feb 16 20:57:59.047080 master-0 kubenswrapper[7926]: I0216 20:57:59.047017 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/55095f4f-cac0-456c-9ccc-45869392408c-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-d7lfl\" (UID: \"55095f4f-cac0-456c-9ccc-45869392408c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl"
Feb 16 20:57:59.047080 master-0 kubenswrapper[7926]: I0216 20:57:59.047094 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hnc6\" (UniqueName: \"kubernetes.io/projected/55095f4f-cac0-456c-9ccc-45869392408c-kube-api-access-7hnc6\") pod \"cluster-samples-operator-f8cbff74c-d7lfl\" (UID: \"55095f4f-cac0-456c-9ccc-45869392408c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl"
Feb 16 20:57:59.051722 master-0 kubenswrapper[7926]: I0216 20:57:59.051697 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/55095f4f-cac0-456c-9ccc-45869392408c-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-d7lfl\" (UID: \"55095f4f-cac0-456c-9ccc-45869392408c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl"
Feb 16 20:57:59.079293 master-0 kubenswrapper[7926]: I0216 20:57:59.078946 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hnc6\" (UniqueName: \"kubernetes.io/projected/55095f4f-cac0-456c-9ccc-45869392408c-kube-api-access-7hnc6\") pod \"cluster-samples-operator-f8cbff74c-d7lfl\" (UID: \"55095f4f-cac0-456c-9ccc-45869392408c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl"
Feb 16 20:57:59.159078 master-0 kubenswrapper[7926]: I0216 20:57:59.158998 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl"
Feb 16 20:57:59.619627 master-0 kubenswrapper[7926]: I0216 20:57:59.616418 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" event={"ID":"4db59450-da78-4879-ada8-ca3fc49fb7a7","Type":"ContainerStarted","Data":"ff3056d39fbc51a0db62d052e0051f801497ab64b4c704d9bed90917e0c30ddd"}
Feb 16 20:57:59.642130 master-0 kubenswrapper[7926]: I0216 20:57:59.640779 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz"]
Feb 16 20:57:59.642130 master-0 kubenswrapper[7926]: I0216 20:57:59.641772 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz"
Feb 16 20:57:59.654141 master-0 kubenswrapper[7926]: I0216 20:57:59.648955 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Feb 16 20:57:59.654141 master-0 kubenswrapper[7926]: I0216 20:57:59.648979 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Feb 16 20:57:59.654141 master-0 kubenswrapper[7926]: I0216 20:57:59.649361 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Feb 16 20:57:59.654141 master-0 kubenswrapper[7926]: I0216 20:57:59.650714 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Feb 16 20:57:59.657542 master-0 kubenswrapper[7926]: I0216 20:57:59.657463 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz"]
Feb 16 20:57:59.785770 master-0 kubenswrapper[7926]: I0216 20:57:59.785680 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b648d9e-a892-4951-b0e2-fed6b16273d4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz"
Feb 16 20:57:59.785770 master-0 kubenswrapper[7926]: I0216 20:57:59.785772 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgj2q\" (UniqueName: \"kubernetes.io/projected/8b648d9e-a892-4951-b0e2-fed6b16273d4-kube-api-access-sgj2q\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz"
Feb 16 20:57:59.786235 master-0 kubenswrapper[7926]: I0216 20:57:59.785811 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b648d9e-a892-4951-b0e2-fed6b16273d4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz"
Feb 16 20:57:59.786235 master-0 kubenswrapper[7926]: I0216 20:57:59.785844 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b648d9e-a892-4951-b0e2-fed6b16273d4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz"
Feb 16 20:57:59.786235 master-0 kubenswrapper[7926]: I0216 20:57:59.785875 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b648d9e-a892-4951-b0e2-fed6b16273d4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz"
Feb 16 20:57:59.887551 master-0 kubenswrapper[7926]: I0216 20:57:59.887401 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgj2q\" (UniqueName: \"kubernetes.io/projected/8b648d9e-a892-4951-b0e2-fed6b16273d4-kube-api-access-sgj2q\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz"
Feb 16 20:57:59.887551 master-0 kubenswrapper[7926]: I0216 20:57:59.887521 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b648d9e-a892-4951-b0e2-fed6b16273d4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz"
Feb 16 20:57:59.887551 master-0 kubenswrapper[7926]: I0216 20:57:59.887556 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b648d9e-a892-4951-b0e2-fed6b16273d4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz"
Feb 16 20:57:59.887883 master-0 kubenswrapper[7926]: I0216 20:57:59.887608 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b648d9e-a892-4951-b0e2-fed6b16273d4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz"
Feb 16 20:57:59.887883 master-0 kubenswrapper[7926]: I0216 20:57:59.887698 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b648d9e-a892-4951-b0e2-fed6b16273d4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz"
Feb 16 20:57:59.890173 master-0 kubenswrapper[7926]: I0216 20:57:59.890149 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b648d9e-a892-4951-b0e2-fed6b16273d4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz"
Feb 16 20:57:59.891224 master-0 kubenswrapper[7926]: I0216 20:57:59.891160 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b648d9e-a892-4951-b0e2-fed6b16273d4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz"
Feb 16 20:57:59.895918 master-0 kubenswrapper[7926]: I0216 20:57:59.895855 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b648d9e-a892-4951-b0e2-fed6b16273d4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz"
Feb 16 20:57:59.905262 master-0 kubenswrapper[7926]: I0216 20:57:59.900215 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b648d9e-a892-4951-b0e2-fed6b16273d4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz"
Feb 16 20:57:59.917092 master-0 kubenswrapper[7926]: I0216 20:57:59.916667 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgj2q\" (UniqueName: \"kubernetes.io/projected/8b648d9e-a892-4951-b0e2-fed6b16273d4-kube-api-access-sgj2q\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz"
Feb 16 20:57:59.977773 master-0 kubenswrapper[7926]: I0216 20:57:59.976331 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" Feb 16 20:58:00.351611 master-0 kubenswrapper[7926]: I0216 20:58:00.351551 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Feb 16 20:58:00.453041 master-0 kubenswrapper[7926]: W0216 20:58:00.439636 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03a5021d_8a5c_4011_a9f9_c5eb38d5f236.slice/crio-cc46ef0ea78121e3debb45555162f099169024a83053e72fed30ccbe4c22554d WatchSource:0}: Error finding container cc46ef0ea78121e3debb45555162f099169024a83053e72fed30ccbe4c22554d: Status 404 returned error can't find the container with id cc46ef0ea78121e3debb45555162f099169024a83053e72fed30ccbe4c22554d Feb 16 20:58:00.459727 master-0 kubenswrapper[7926]: I0216 20:58:00.454282 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf"] Feb 16 20:58:00.469421 master-0 kubenswrapper[7926]: I0216 20:58:00.468789 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl"] Feb 16 20:58:00.522153 master-0 kubenswrapper[7926]: I0216 20:58:00.519634 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd"] Feb 16 20:58:00.535206 master-0 kubenswrapper[7926]: I0216 20:58:00.534783 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" Feb 16 20:58:00.547280 master-0 kubenswrapper[7926]: I0216 20:58:00.546097 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd"] Feb 16 20:58:00.558971 master-0 kubenswrapper[7926]: I0216 20:58:00.558885 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Feb 16 20:58:00.559497 master-0 kubenswrapper[7926]: I0216 20:58:00.559406 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Feb 16 20:58:00.600510 master-0 kubenswrapper[7926]: I0216 20:58:00.599662 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz"] Feb 16 20:58:00.668245 master-0 kubenswrapper[7926]: I0216 20:58:00.668002 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" event={"ID":"57b94ed4-8f0b-4223-bdaf-4316859d8ad3","Type":"ContainerStarted","Data":"03a2959cd7d7099deb65fa1d96597cd3ebf6031635df4c580705d88b4f782bc3"} Feb 16 20:58:00.668245 master-0 kubenswrapper[7926]: I0216 20:58:00.668082 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" Feb 16 20:58:00.680883 master-0 kubenswrapper[7926]: I0216 20:58:00.677919 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965","Type":"ContainerStarted","Data":"b0c2e1a17593c2d9cad62fca4b76d1bcb53b42211c4063cb3d0e8c42005672a2"} Feb 16 20:58:00.681638 master-0 kubenswrapper[7926]: I0216 20:58:00.681599 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" event={"ID":"c62bb2b4-1469-4e0d-810f-cd6e21ee908a","Type":"ContainerStarted","Data":"f620d164d8f2ed90825e926c6ef1b62a164af6f143a6bcf2e3725b1b1b8889f4"} Feb 16 20:58:00.681733 master-0 kubenswrapper[7926]: I0216 20:58:00.681682 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" Feb 16 20:58:00.686212 master-0 kubenswrapper[7926]: I0216 20:58:00.684952 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl" event={"ID":"55095f4f-cac0-456c-9ccc-45869392408c","Type":"ContainerStarted","Data":"846c42631e11b31d77d6f927ca22e80b7cd7d920231f1d2b9f1cfa12101d157e"} Feb 16 20:58:00.714373 master-0 kubenswrapper[7926]: I0216 20:58:00.713775 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d7d0416-5f50-42bd-826b-92eecf9adcec-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-557vd\" (UID: \"1d7d0416-5f50-42bd-826b-92eecf9adcec\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" Feb 16 20:58:00.714373 master-0 kubenswrapper[7926]: I0216 20:58:00.713848 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d7d0416-5f50-42bd-826b-92eecf9adcec-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-557vd\" (UID: \"1d7d0416-5f50-42bd-826b-92eecf9adcec\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" Feb 16 20:58:00.714373 master-0 kubenswrapper[7926]: I0216 20:58:00.713876 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25mkq\" (UniqueName: 
\"kubernetes.io/projected/1d7d0416-5f50-42bd-826b-92eecf9adcec-kube-api-access-25mkq\") pod \"cluster-autoscaler-operator-67fd9768b5-557vd\" (UID: \"1d7d0416-5f50-42bd-826b-92eecf9adcec\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" Feb 16 20:58:00.714988 master-0 kubenswrapper[7926]: I0216 20:58:00.714943 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf" event={"ID":"03a5021d-8a5c-4011-a9f9-c5eb38d5f236","Type":"ContainerStarted","Data":"82cd9aa58410168c822720c80bd115f16de52bc6d9131fe728eb5bdd7b5e78b0"} Feb 16 20:58:00.715093 master-0 kubenswrapper[7926]: I0216 20:58:00.714996 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf" event={"ID":"03a5021d-8a5c-4011-a9f9-c5eb38d5f236","Type":"ContainerStarted","Data":"cc46ef0ea78121e3debb45555162f099169024a83053e72fed30ccbe4c22554d"} Feb 16 20:58:00.799840 master-0 kubenswrapper[7926]: I0216 20:58:00.799708 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" podStartSLOduration=4.683269552 podStartE2EDuration="9.799681985s" podCreationTimestamp="2026-02-16 20:57:51 +0000 UTC" firstStartedPulling="2026-02-16 20:57:54.730387195 +0000 UTC m=+46.365287536" lastFinishedPulling="2026-02-16 20:57:59.846799669 +0000 UTC m=+51.481699969" observedRunningTime="2026-02-16 20:58:00.705031669 +0000 UTC m=+52.339931969" watchObservedRunningTime="2026-02-16 20:58:00.799681985 +0000 UTC m=+52.434582295" Feb 16 20:58:00.816461 master-0 kubenswrapper[7926]: I0216 20:58:00.816365 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d7d0416-5f50-42bd-826b-92eecf9adcec-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-557vd\" (UID: 
\"1d7d0416-5f50-42bd-826b-92eecf9adcec\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" Feb 16 20:58:00.816461 master-0 kubenswrapper[7926]: I0216 20:58:00.816421 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25mkq\" (UniqueName: \"kubernetes.io/projected/1d7d0416-5f50-42bd-826b-92eecf9adcec-kube-api-access-25mkq\") pod \"cluster-autoscaler-operator-67fd9768b5-557vd\" (UID: \"1d7d0416-5f50-42bd-826b-92eecf9adcec\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" Feb 16 20:58:00.816810 master-0 kubenswrapper[7926]: I0216 20:58:00.816490 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d7d0416-5f50-42bd-826b-92eecf9adcec-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-557vd\" (UID: \"1d7d0416-5f50-42bd-826b-92eecf9adcec\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" Feb 16 20:58:00.818988 master-0 kubenswrapper[7926]: I0216 20:58:00.818962 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d7d0416-5f50-42bd-826b-92eecf9adcec-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-557vd\" (UID: \"1d7d0416-5f50-42bd-826b-92eecf9adcec\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" Feb 16 20:58:00.847152 master-0 kubenswrapper[7926]: I0216 20:58:00.840426 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d7d0416-5f50-42bd-826b-92eecf9adcec-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-557vd\" (UID: \"1d7d0416-5f50-42bd-826b-92eecf9adcec\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" Feb 16 20:58:00.883793 master-0 kubenswrapper[7926]: I0216 20:58:00.881906 7926 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" podStartSLOduration=1.727054527 podStartE2EDuration="5.881866305s" podCreationTimestamp="2026-02-16 20:57:55 +0000 UTC" firstStartedPulling="2026-02-16 20:57:55.690947031 +0000 UTC m=+47.325847331" lastFinishedPulling="2026-02-16 20:57:59.845758809 +0000 UTC m=+51.480659109" observedRunningTime="2026-02-16 20:58:00.876724978 +0000 UTC m=+52.511625288" watchObservedRunningTime="2026-02-16 20:58:00.881866305 +0000 UTC m=+52.516766605" Feb 16 20:58:00.921757 master-0 kubenswrapper[7926]: I0216 20:58:00.916392 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25mkq\" (UniqueName: \"kubernetes.io/projected/1d7d0416-5f50-42bd-826b-92eecf9adcec-kube-api-access-25mkq\") pod \"cluster-autoscaler-operator-67fd9768b5-557vd\" (UID: \"1d7d0416-5f50-42bd-826b-92eecf9adcec\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" Feb 16 20:58:01.160332 master-0 kubenswrapper[7926]: I0216 20:58:01.160256 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-cb4f7b4cf-h8f7q"] Feb 16 20:58:01.162179 master-0 kubenswrapper[7926]: I0216 20:58:01.162130 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" Feb 16 20:58:01.169281 master-0 kubenswrapper[7926]: I0216 20:58:01.169213 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-cb4f7b4cf-h8f7q"] Feb 16 20:58:01.169386 master-0 kubenswrapper[7926]: I0216 20:58:01.169325 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Feb 16 20:58:01.170119 master-0 kubenswrapper[7926]: I0216 20:58:01.170087 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Feb 16 20:58:01.170424 master-0 kubenswrapper[7926]: I0216 20:58:01.170397 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Feb 16 20:58:01.170539 master-0 kubenswrapper[7926]: I0216 20:58:01.170496 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Feb 16 20:58:01.182630 master-0 kubenswrapper[7926]: I0216 20:58:01.182403 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Feb 16 20:58:01.222325 master-0 kubenswrapper[7926]: I0216 20:58:01.222225 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" Feb 16 20:58:01.246069 master-0 kubenswrapper[7926]: I0216 20:58:01.246028 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m"] Feb 16 20:58:01.249105 master-0 kubenswrapper[7926]: I0216 20:58:01.249081 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m" Feb 16 20:58:01.252329 master-0 kubenswrapper[7926]: I0216 20:58:01.252215 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Feb 16 20:58:01.258290 master-0 kubenswrapper[7926]: I0216 20:58:01.256328 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s"] Feb 16 20:58:01.258290 master-0 kubenswrapper[7926]: I0216 20:58:01.257161 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" Feb 16 20:58:01.260787 master-0 kubenswrapper[7926]: I0216 20:58:01.260306 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 20:58:01.260787 master-0 kubenswrapper[7926]: I0216 20:58:01.260428 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 20:58:01.260787 master-0 kubenswrapper[7926]: I0216 20:58:01.260564 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 20:58:01.260956 master-0 kubenswrapper[7926]: I0216 20:58:01.260942 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 20:58:01.262745 master-0 kubenswrapper[7926]: I0216 20:58:01.262731 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 20:58:01.273970 master-0 kubenswrapper[7926]: I0216 20:58:01.273552 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m"] 
Feb 16 20:58:01.278411 master-0 kubenswrapper[7926]: I0216 20:58:01.278262 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s"] Feb 16 20:58:01.326329 master-0 kubenswrapper[7926]: I0216 20:58:01.326293 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npfk7\" (UniqueName: \"kubernetes.io/projected/e9615af2-cad5-4705-9c2f-6f3c97026100-kube-api-access-npfk7\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" Feb 16 20:58:01.326563 master-0 kubenswrapper[7926]: I0216 20:58:01.326516 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9615af2-cad5-4705-9c2f-6f3c97026100-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" Feb 16 20:58:01.326629 master-0 kubenswrapper[7926]: I0216 20:58:01.326595 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9615af2-cad5-4705-9c2f-6f3c97026100-serving-cert\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" Feb 16 20:58:01.327950 master-0 kubenswrapper[7926]: I0216 20:58:01.326745 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx8bf\" (UniqueName: \"kubernetes.io/projected/aa2e9bbc-3962-45f5-a7cc-2dc059409e70-kube-api-access-wx8bf\") pod \"cluster-storage-operator-75b869db96-g4w5m\" (UID: \"aa2e9bbc-3962-45f5-a7cc-2dc059409e70\") " 
pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m" Feb 16 20:58:01.327950 master-0 kubenswrapper[7926]: I0216 20:58:01.326801 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa2e9bbc-3962-45f5-a7cc-2dc059409e70-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-g4w5m\" (UID: \"aa2e9bbc-3962-45f5-a7cc-2dc059409e70\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m" Feb 16 20:58:01.327950 master-0 kubenswrapper[7926]: I0216 20:58:01.326833 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/e9615af2-cad5-4705-9c2f-6f3c97026100-snapshots\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" Feb 16 20:58:01.327950 master-0 kubenswrapper[7926]: I0216 20:58:01.326891 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9615af2-cad5-4705-9c2f-6f3c97026100-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" Feb 16 20:58:01.427907 master-0 kubenswrapper[7926]: I0216 20:58:01.427852 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9615af2-cad5-4705-9c2f-6f3c97026100-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" Feb 16 20:58:01.428087 master-0 kubenswrapper[7926]: I0216 20:58:01.427914 7926 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff193060-a272-4e4e-990a-83ac410f523d-proxy-tls\") pod \"machine-config-operator-84976bb859-jwh5s\" (UID: \"ff193060-a272-4e4e-990a-83ac410f523d\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" Feb 16 20:58:01.428087 master-0 kubenswrapper[7926]: I0216 20:58:01.427947 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npfk7\" (UniqueName: \"kubernetes.io/projected/e9615af2-cad5-4705-9c2f-6f3c97026100-kube-api-access-npfk7\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" Feb 16 20:58:01.428087 master-0 kubenswrapper[7926]: I0216 20:58:01.427967 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9615af2-cad5-4705-9c2f-6f3c97026100-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" Feb 16 20:58:01.428087 master-0 kubenswrapper[7926]: I0216 20:58:01.427991 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9615af2-cad5-4705-9c2f-6f3c97026100-serving-cert\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" Feb 16 20:58:01.428087 master-0 kubenswrapper[7926]: I0216 20:58:01.428033 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmhq9\" (UniqueName: \"kubernetes.io/projected/ff193060-a272-4e4e-990a-83ac410f523d-kube-api-access-wmhq9\") pod \"machine-config-operator-84976bb859-jwh5s\" 
(UID: \"ff193060-a272-4e4e-990a-83ac410f523d\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" Feb 16 20:58:01.428087 master-0 kubenswrapper[7926]: I0216 20:58:01.428056 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ff193060-a272-4e4e-990a-83ac410f523d-images\") pod \"machine-config-operator-84976bb859-jwh5s\" (UID: \"ff193060-a272-4e4e-990a-83ac410f523d\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" Feb 16 20:58:01.428087 master-0 kubenswrapper[7926]: I0216 20:58:01.428076 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx8bf\" (UniqueName: \"kubernetes.io/projected/aa2e9bbc-3962-45f5-a7cc-2dc059409e70-kube-api-access-wx8bf\") pod \"cluster-storage-operator-75b869db96-g4w5m\" (UID: \"aa2e9bbc-3962-45f5-a7cc-2dc059409e70\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m" Feb 16 20:58:01.428361 master-0 kubenswrapper[7926]: I0216 20:58:01.428100 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa2e9bbc-3962-45f5-a7cc-2dc059409e70-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-g4w5m\" (UID: \"aa2e9bbc-3962-45f5-a7cc-2dc059409e70\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m" Feb 16 20:58:01.428361 master-0 kubenswrapper[7926]: I0216 20:58:01.428121 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/e9615af2-cad5-4705-9c2f-6f3c97026100-snapshots\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" Feb 16 20:58:01.428361 master-0 
kubenswrapper[7926]: I0216 20:58:01.428141 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ff193060-a272-4e4e-990a-83ac410f523d-auth-proxy-config\") pod \"machine-config-operator-84976bb859-jwh5s\" (UID: \"ff193060-a272-4e4e-990a-83ac410f523d\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" Feb 16 20:58:01.429146 master-0 kubenswrapper[7926]: I0216 20:58:01.429122 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9615af2-cad5-4705-9c2f-6f3c97026100-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" Feb 16 20:58:01.430775 master-0 kubenswrapper[7926]: I0216 20:58:01.430709 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9615af2-cad5-4705-9c2f-6f3c97026100-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" Feb 16 20:58:01.433522 master-0 kubenswrapper[7926]: I0216 20:58:01.433473 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/e9615af2-cad5-4705-9c2f-6f3c97026100-snapshots\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" Feb 16 20:58:01.457427 master-0 kubenswrapper[7926]: I0216 20:58:01.457368 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa2e9bbc-3962-45f5-a7cc-2dc059409e70-cluster-storage-operator-serving-cert\") pod 
\"cluster-storage-operator-75b869db96-g4w5m\" (UID: \"aa2e9bbc-3962-45f5-a7cc-2dc059409e70\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m"
Feb 16 20:58:01.460846 master-0 kubenswrapper[7926]: I0216 20:58:01.458121 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9615af2-cad5-4705-9c2f-6f3c97026100-serving-cert\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q"
Feb 16 20:58:01.467010 master-0 kubenswrapper[7926]: I0216 20:58:01.466711 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npfk7\" (UniqueName: \"kubernetes.io/projected/e9615af2-cad5-4705-9c2f-6f3c97026100-kube-api-access-npfk7\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q"
Feb 16 20:58:01.480157 master-0 kubenswrapper[7926]: I0216 20:58:01.478317 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx8bf\" (UniqueName: \"kubernetes.io/projected/aa2e9bbc-3962-45f5-a7cc-2dc059409e70-kube-api-access-wx8bf\") pod \"cluster-storage-operator-75b869db96-g4w5m\" (UID: \"aa2e9bbc-3962-45f5-a7cc-2dc059409e70\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m"
Feb 16 20:58:01.531416 master-0 kubenswrapper[7926]: I0216 20:58:01.529218 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ff193060-a272-4e4e-990a-83ac410f523d-images\") pod \"machine-config-operator-84976bb859-jwh5s\" (UID: \"ff193060-a272-4e4e-990a-83ac410f523d\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s"
Feb 16 20:58:01.531416 master-0 kubenswrapper[7926]: I0216 20:58:01.529281 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ff193060-a272-4e4e-990a-83ac410f523d-auth-proxy-config\") pod \"machine-config-operator-84976bb859-jwh5s\" (UID: \"ff193060-a272-4e4e-990a-83ac410f523d\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s"
Feb 16 20:58:01.531416 master-0 kubenswrapper[7926]: I0216 20:58:01.529312 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff193060-a272-4e4e-990a-83ac410f523d-proxy-tls\") pod \"machine-config-operator-84976bb859-jwh5s\" (UID: \"ff193060-a272-4e4e-990a-83ac410f523d\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s"
Feb 16 20:58:01.531416 master-0 kubenswrapper[7926]: I0216 20:58:01.529354 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmhq9\" (UniqueName: \"kubernetes.io/projected/ff193060-a272-4e4e-990a-83ac410f523d-kube-api-access-wmhq9\") pod \"machine-config-operator-84976bb859-jwh5s\" (UID: \"ff193060-a272-4e4e-990a-83ac410f523d\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s"
Feb 16 20:58:01.531416 master-0 kubenswrapper[7926]: I0216 20:58:01.530610 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ff193060-a272-4e4e-990a-83ac410f523d-images\") pod \"machine-config-operator-84976bb859-jwh5s\" (UID: \"ff193060-a272-4e4e-990a-83ac410f523d\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s"
Feb 16 20:58:01.531416 master-0 kubenswrapper[7926]: I0216 20:58:01.531202 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ff193060-a272-4e4e-990a-83ac410f523d-auth-proxy-config\") pod \"machine-config-operator-84976bb859-jwh5s\" (UID: \"ff193060-a272-4e4e-990a-83ac410f523d\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s"
Feb 16 20:58:01.551326 master-0 kubenswrapper[7926]: I0216 20:58:01.551159 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff193060-a272-4e4e-990a-83ac410f523d-proxy-tls\") pod \"machine-config-operator-84976bb859-jwh5s\" (UID: \"ff193060-a272-4e4e-990a-83ac410f523d\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s"
Feb 16 20:58:01.553609 master-0 kubenswrapper[7926]: I0216 20:58:01.553339 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmhq9\" (UniqueName: \"kubernetes.io/projected/ff193060-a272-4e4e-990a-83ac410f523d-kube-api-access-wmhq9\") pod \"machine-config-operator-84976bb859-jwh5s\" (UID: \"ff193060-a272-4e4e-990a-83ac410f523d\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s"
Feb 16 20:58:01.567738 master-0 kubenswrapper[7926]: I0216 20:58:01.567356 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q"
Feb 16 20:58:01.647260 master-0 kubenswrapper[7926]: I0216 20:58:01.647124 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m"
Feb 16 20:58:01.668899 master-0 kubenswrapper[7926]: I0216 20:58:01.668762 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s"
Feb 16 20:58:01.739581 master-0 kubenswrapper[7926]: I0216 20:58:01.738829 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965","Type":"ContainerStarted","Data":"5f4f1f7bf4711de84107b1c6040a91b2b71847aa5f151a70149a5a43fdbb16fc"}
Feb 16 20:58:01.749757 master-0 kubenswrapper[7926]: I0216 20:58:01.748745 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb/installer/0.log"
Feb 16 20:58:01.749757 master-0 kubenswrapper[7926]: I0216 20:58:01.748779 7926 generic.go:334] "Generic (PLEG): container finished" podID="be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb" containerID="4d5f546c2421eec3805ff12860007eff73909bb7626878d72e7e0b55753734ca" exitCode=1
Feb 16 20:58:01.749757 master-0 kubenswrapper[7926]: I0216 20:58:01.748842 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb","Type":"ContainerDied","Data":"4d5f546c2421eec3805ff12860007eff73909bb7626878d72e7e0b55753734ca"}
Feb 16 20:58:01.751829 master-0 kubenswrapper[7926]: I0216 20:58:01.751797 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" event={"ID":"8b648d9e-a892-4951-b0e2-fed6b16273d4","Type":"ContainerStarted","Data":"74e6be5033443384ea4bd5754c8e506826ab77e1e025ae4e7b5a3735350d70f2"}
Feb 16 20:58:01.761827 master-0 kubenswrapper[7926]: I0216 20:58:01.760225 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=4.760211019 podStartE2EDuration="4.760211019s" podCreationTimestamp="2026-02-16 20:57:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 20:58:01.757735469 +0000 UTC m=+53.392635769" watchObservedRunningTime="2026-02-16 20:58:01.760211019 +0000 UTC m=+53.395111319"
Feb 16 20:58:01.904830 master-0 kubenswrapper[7926]: I0216 20:58:01.904082 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd"]
Feb 16 20:58:01.957319 master-0 kubenswrapper[7926]: W0216 20:58:01.956937 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d7d0416_5f50_42bd_826b_92eecf9adcec.slice/crio-1ff8802ad134d499fee700156b80ec71b617c31ecfda4162eeae2f5521b198f8 WatchSource:0}: Error finding container 1ff8802ad134d499fee700156b80ec71b617c31ecfda4162eeae2f5521b198f8: Status 404 returned error can't find the container with id 1ff8802ad134d499fee700156b80ec71b617c31ecfda4162eeae2f5521b198f8
Feb 16 20:58:02.278359 master-0 kubenswrapper[7926]: I0216 20:58:02.278316 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb/installer/0.log"
Feb 16 20:58:02.278571 master-0 kubenswrapper[7926]: I0216 20:58:02.278398 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Feb 16 20:58:02.453123 master-0 kubenswrapper[7926]: I0216 20:58:02.453053 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb-var-lock\") pod \"be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb\" (UID: \"be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb\") "
Feb 16 20:58:02.453360 master-0 kubenswrapper[7926]: I0216 20:58:02.453256 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb-kube-api-access\") pod \"be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb\" (UID: \"be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb\") "
Feb 16 20:58:02.453360 master-0 kubenswrapper[7926]: I0216 20:58:02.453355 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb-kubelet-dir\") pod \"be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb\" (UID: \"be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb\") "
Feb 16 20:58:02.453460 master-0 kubenswrapper[7926]: I0216 20:58:02.453236 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb-var-lock" (OuterVolumeSpecName: "var-lock") pod "be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb" (UID: "be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 20:58:02.453586 master-0 kubenswrapper[7926]: I0216 20:58:02.453534 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb" (UID: "be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 20:58:02.454592 master-0 kubenswrapper[7926]: I0216 20:58:02.454200 7926 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 20:58:02.454690 master-0 kubenswrapper[7926]: I0216 20:58:02.454599 7926 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 16 20:58:02.457532 master-0 kubenswrapper[7926]: I0216 20:58:02.457485 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb" (UID: "be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 20:58:02.564408 master-0 kubenswrapper[7926]: I0216 20:58:02.564249 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 16 20:58:02.766747 master-0 kubenswrapper[7926]: I0216 20:58:02.766683 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb/installer/0.log"
Feb 16 20:58:02.767818 master-0 kubenswrapper[7926]: I0216 20:58:02.767107 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Feb 16 20:58:02.775903 master-0 kubenswrapper[7926]: I0216 20:58:02.775838 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" event={"ID":"1d7d0416-5f50-42bd-826b-92eecf9adcec","Type":"ContainerStarted","Data":"292b6b8cf180e68ad44412d08d309be8106bcaf05b10681c44231906c9b5f8fa"}
Feb 16 20:58:02.775903 master-0 kubenswrapper[7926]: I0216 20:58:02.775898 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" event={"ID":"1d7d0416-5f50-42bd-826b-92eecf9adcec","Type":"ContainerStarted","Data":"1ff8802ad134d499fee700156b80ec71b617c31ecfda4162eeae2f5521b198f8"}
Feb 16 20:58:02.775903 master-0 kubenswrapper[7926]: I0216 20:58:02.775913 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb","Type":"ContainerDied","Data":"3ba1bee73a0e81eaff571d4e985ca295a9b1e963b6ed0e932ac596130ee5ae9e"}
Feb 16 20:58:02.776165 master-0 kubenswrapper[7926]: I0216 20:58:02.775950 7926 scope.go:117] "RemoveContainer" containerID="4d5f546c2421eec3805ff12860007eff73909bb7626878d72e7e0b55753734ca"
Feb 16 20:58:03.170546 master-0 kubenswrapper[7926]: I0216 20:58:03.170335 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xv645"]
Feb 16 20:58:03.303398 master-0 kubenswrapper[7926]: I0216 20:58:03.303084 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-cb4f7b4cf-h8f7q"]
Feb 16 20:58:03.308960 master-0 kubenswrapper[7926]: I0216 20:58:03.308883 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s"]
Feb 16 20:58:03.345470 master-0 kubenswrapper[7926]: I0216 20:58:03.345397 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m"]
Feb 16 20:58:03.361314 master-0 kubenswrapper[7926]: I0216 20:58:03.361166 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j5kwc"]
Feb 16 20:58:03.369333 master-0 kubenswrapper[7926]: E0216 20:58:03.368212 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb" containerName="installer"
Feb 16 20:58:03.369333 master-0 kubenswrapper[7926]: I0216 20:58:03.368258 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb" containerName="installer"
Feb 16 20:58:03.369333 master-0 kubenswrapper[7926]: I0216 20:58:03.368452 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb" containerName="installer"
Feb 16 20:58:03.371234 master-0 kubenswrapper[7926]: I0216 20:58:03.370174 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j5kwc"
Feb 16 20:58:03.372967 master-0 kubenswrapper[7926]: I0216 20:58:03.372904 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-f8prb"
Feb 16 20:58:03.373263 master-0 kubenswrapper[7926]: I0216 20:58:03.373235 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Feb 16 20:58:03.381970 master-0 kubenswrapper[7926]: I0216 20:58:03.381473 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j5kwc"]
Feb 16 20:58:03.490762 master-0 kubenswrapper[7926]: I0216 20:58:03.469005 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Feb 16 20:58:03.490762 master-0 kubenswrapper[7926]: I0216 20:58:03.473832 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-3-master-0" podUID="a29a1022-5f54-49a2-99f6-d19eb2773890" containerName="installer" containerID="cri-o://3827d0c6e638278acbaf186f1e2b6637d86efe902a1f4b0978ddea9297c39ba8" gracePeriod=30
Feb 16 20:58:03.490762 master-0 kubenswrapper[7926]: I0216 20:58:03.483394 7926 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Feb 16 20:58:03.490762 master-0 kubenswrapper[7926]: I0216 20:58:03.489102 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce229d27-837d-4a98-80fc-d56877ae39b8-utilities\") pod \"community-operators-j5kwc\" (UID: \"ce229d27-837d-4a98-80fc-d56877ae39b8\") " pod="openshift-marketplace/community-operators-j5kwc"
Feb 16 20:58:03.490762 master-0 kubenswrapper[7926]: I0216 20:58:03.489245 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce229d27-837d-4a98-80fc-d56877ae39b8-catalog-content\") pod \"community-operators-j5kwc\" (UID: \"ce229d27-837d-4a98-80fc-d56877ae39b8\") " pod="openshift-marketplace/community-operators-j5kwc"
Feb 16 20:58:03.490762 master-0 kubenswrapper[7926]: I0216 20:58:03.489277 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcwzq\" (UniqueName: \"kubernetes.io/projected/ce229d27-837d-4a98-80fc-d56877ae39b8-kube-api-access-dcwzq\") pod \"community-operators-j5kwc\" (UID: \"ce229d27-837d-4a98-80fc-d56877ae39b8\") " pod="openshift-marketplace/community-operators-j5kwc"
Feb 16 20:58:03.490762 master-0 kubenswrapper[7926]: I0216 20:58:03.489749 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl"]
Feb 16 20:58:03.498714 master-0 kubenswrapper[7926]: I0216 20:58:03.492129 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl"
Feb 16 20:58:03.515062 master-0 kubenswrapper[7926]: I0216 20:58:03.505408 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-jswsr"
Feb 16 20:58:03.515062 master-0 kubenswrapper[7926]: I0216 20:58:03.507018 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Feb 16 20:58:03.515062 master-0 kubenswrapper[7926]: I0216 20:58:03.508154 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 20:58:03.515062 master-0 kubenswrapper[7926]: I0216 20:58:03.508337 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Feb 16 20:58:03.515062 master-0 kubenswrapper[7926]: I0216 20:58:03.508508 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Feb 16 20:58:03.515062 master-0 kubenswrapper[7926]: I0216 20:58:03.508723 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Feb 16 20:58:03.515687 master-0 kubenswrapper[7926]: I0216 20:58:03.515208 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4"]
Feb 16 20:58:03.526769 master-0 kubenswrapper[7926]: I0216 20:58:03.515948 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4"
Feb 16 20:58:03.526769 master-0 kubenswrapper[7926]: I0216 20:58:03.518248 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-6xcjr"
Feb 16 20:58:03.536751 master-0 kubenswrapper[7926]: I0216 20:58:03.532460 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 16 20:58:03.550808 master-0 kubenswrapper[7926]: I0216 20:58:03.544732 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4"]
Feb 16 20:58:03.605731 master-0 kubenswrapper[7926]: I0216 20:58:03.591228 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcwzq\" (UniqueName: \"kubernetes.io/projected/ce229d27-837d-4a98-80fc-d56877ae39b8-kube-api-access-dcwzq\") pod \"community-operators-j5kwc\" (UID: \"ce229d27-837d-4a98-80fc-d56877ae39b8\") " pod="openshift-marketplace/community-operators-j5kwc"
Feb 16 20:58:03.605731 master-0 kubenswrapper[7926]: I0216 20:58:03.591299 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtrzq\" (UniqueName: \"kubernetes.io/projected/319dc882-e1f5-40f9-99f4-2bae028337e5-kube-api-access-mtrzq\") pod \"packageserver-78d4b6b677-npmx4\" (UID: \"319dc882-e1f5-40f9-99f4-2bae028337e5\") " pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4"
Feb 16 20:58:03.605731 master-0 kubenswrapper[7926]: I0216 20:58:03.592255 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl\" (UID: \"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl"
Feb 16 20:58:03.605731 master-0 kubenswrapper[7926]: I0216 20:58:03.592340 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/319dc882-e1f5-40f9-99f4-2bae028337e5-apiservice-cert\") pod \"packageserver-78d4b6b677-npmx4\" (UID: \"319dc882-e1f5-40f9-99f4-2bae028337e5\") " pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4"
Feb 16 20:58:03.605731 master-0 kubenswrapper[7926]: I0216 20:58:03.592373 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce229d27-837d-4a98-80fc-d56877ae39b8-utilities\") pod \"community-operators-j5kwc\" (UID: \"ce229d27-837d-4a98-80fc-d56877ae39b8\") " pod="openshift-marketplace/community-operators-j5kwc"
Feb 16 20:58:03.605731 master-0 kubenswrapper[7926]: I0216 20:58:03.592395 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9hg4\" (UniqueName: \"kubernetes.io/projected/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-kube-api-access-w9hg4\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl\" (UID: \"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl"
Feb 16 20:58:03.605731 master-0 kubenswrapper[7926]: I0216 20:58:03.592941 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce229d27-837d-4a98-80fc-d56877ae39b8-utilities\") pod \"community-operators-j5kwc\" (UID: \"ce229d27-837d-4a98-80fc-d56877ae39b8\") " pod="openshift-marketplace/community-operators-j5kwc"
Feb 16 20:58:03.605731 master-0 kubenswrapper[7926]: I0216 20:58:03.593055 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/319dc882-e1f5-40f9-99f4-2bae028337e5-tmpfs\") pod \"packageserver-78d4b6b677-npmx4\" (UID: \"319dc882-e1f5-40f9-99f4-2bae028337e5\") " pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4"
Feb 16 20:58:03.605731 master-0 kubenswrapper[7926]: I0216 20:58:03.593140 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl\" (UID: \"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl"
Feb 16 20:58:03.605731 master-0 kubenswrapper[7926]: I0216 20:58:03.593202 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-images\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl\" (UID: \"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl"
Feb 16 20:58:03.605731 master-0 kubenswrapper[7926]: I0216 20:58:03.593225 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl\" (UID: \"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl"
Feb 16 20:58:03.605731 master-0 kubenswrapper[7926]: I0216 20:58:03.593272 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/319dc882-e1f5-40f9-99f4-2bae028337e5-webhook-cert\") pod \"packageserver-78d4b6b677-npmx4\" (UID: \"319dc882-e1f5-40f9-99f4-2bae028337e5\") " pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4"
Feb 16 20:58:03.605731 master-0 kubenswrapper[7926]: I0216 20:58:03.593314 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce229d27-837d-4a98-80fc-d56877ae39b8-catalog-content\") pod \"community-operators-j5kwc\" (UID: \"ce229d27-837d-4a98-80fc-d56877ae39b8\") " pod="openshift-marketplace/community-operators-j5kwc"
Feb 16 20:58:03.605731 master-0 kubenswrapper[7926]: I0216 20:58:03.593628 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce229d27-837d-4a98-80fc-d56877ae39b8-catalog-content\") pod \"community-operators-j5kwc\" (UID: \"ce229d27-837d-4a98-80fc-d56877ae39b8\") " pod="openshift-marketplace/community-operators-j5kwc"
Feb 16 20:58:03.691715 master-0 kubenswrapper[7926]: I0216 20:58:03.676897 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcwzq\" (UniqueName: \"kubernetes.io/projected/ce229d27-837d-4a98-80fc-d56877ae39b8-kube-api-access-dcwzq\") pod \"community-operators-j5kwc\" (UID: \"ce229d27-837d-4a98-80fc-d56877ae39b8\") " pod="openshift-marketplace/community-operators-j5kwc"
Feb 16 20:58:03.705715 master-0 kubenswrapper[7926]: I0216 20:58:03.692146 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j5kwc"
Feb 16 20:58:03.705715 master-0 kubenswrapper[7926]: I0216 20:58:03.694368 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/319dc882-e1f5-40f9-99f4-2bae028337e5-apiservice-cert\") pod \"packageserver-78d4b6b677-npmx4\" (UID: \"319dc882-e1f5-40f9-99f4-2bae028337e5\") " pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4"
Feb 16 20:58:03.705715 master-0 kubenswrapper[7926]: I0216 20:58:03.694437 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9hg4\" (UniqueName: \"kubernetes.io/projected/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-kube-api-access-w9hg4\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl\" (UID: \"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl"
Feb 16 20:58:03.705715 master-0 kubenswrapper[7926]: I0216 20:58:03.694474 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/319dc882-e1f5-40f9-99f4-2bae028337e5-tmpfs\") pod \"packageserver-78d4b6b677-npmx4\" (UID: \"319dc882-e1f5-40f9-99f4-2bae028337e5\") " pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4"
Feb 16 20:58:03.705715 master-0 kubenswrapper[7926]: I0216 20:58:03.694504 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl\" (UID: \"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl"
Feb 16 20:58:03.705715 master-0 kubenswrapper[7926]: I0216 20:58:03.694534 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-images\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl\" (UID: \"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl"
Feb 16 20:58:03.705715 master-0 kubenswrapper[7926]: I0216 20:58:03.694550 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl\" (UID: \"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl"
Feb 16 20:58:03.705715 master-0 kubenswrapper[7926]: I0216 20:58:03.694575 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/319dc882-e1f5-40f9-99f4-2bae028337e5-webhook-cert\") pod \"packageserver-78d4b6b677-npmx4\" (UID: \"319dc882-e1f5-40f9-99f4-2bae028337e5\") " pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4"
Feb 16 20:58:03.705715 master-0 kubenswrapper[7926]: I0216 20:58:03.694608 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtrzq\" (UniqueName: \"kubernetes.io/projected/319dc882-e1f5-40f9-99f4-2bae028337e5-kube-api-access-mtrzq\") pod \"packageserver-78d4b6b677-npmx4\" (UID: \"319dc882-e1f5-40f9-99f4-2bae028337e5\") " pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4"
Feb 16 20:58:03.705715 master-0 kubenswrapper[7926]: I0216 20:58:03.694627 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl\" (UID: \"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl"
Feb 16 20:58:03.705715 master-0 kubenswrapper[7926]: I0216 20:58:03.695385 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl\" (UID: \"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl"
Feb 16 20:58:03.705715 master-0 kubenswrapper[7926]: I0216 20:58:03.696510 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-images\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl\" (UID: \"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl"
Feb 16 20:58:03.705715 master-0 kubenswrapper[7926]: I0216 20:58:03.696596 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl\" (UID: \"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl"
Feb 16 20:58:03.705715 master-0 kubenswrapper[7926]: I0216 20:58:03.701916 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/319dc882-e1f5-40f9-99f4-2bae028337e5-tmpfs\") pod \"packageserver-78d4b6b677-npmx4\" (UID: \"319dc882-e1f5-40f9-99f4-2bae028337e5\") " pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4"
Feb 16 20:58:03.708697 master-0 kubenswrapper[7926]: I0216 20:58:03.707242 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl\" (UID: \"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl"
Feb 16 20:58:03.728715 master-0 kubenswrapper[7926]: I0216 20:58:03.717538 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/319dc882-e1f5-40f9-99f4-2bae028337e5-apiservice-cert\") pod \"packageserver-78d4b6b677-npmx4\" (UID: \"319dc882-e1f5-40f9-99f4-2bae028337e5\") " pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4"
Feb 16 20:58:03.728715 master-0 kubenswrapper[7926]: I0216 20:58:03.724847 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/319dc882-e1f5-40f9-99f4-2bae028337e5-webhook-cert\") pod \"packageserver-78d4b6b677-npmx4\" (UID: \"319dc882-e1f5-40f9-99f4-2bae028337e5\") " pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4"
Feb 16 20:58:03.765727 master-0 kubenswrapper[7926]: I0216 20:58:03.752784 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtrzq\" (UniqueName: \"kubernetes.io/projected/319dc882-e1f5-40f9-99f4-2bae028337e5-kube-api-access-mtrzq\") pod \"packageserver-78d4b6b677-npmx4\" (UID: \"319dc882-e1f5-40f9-99f4-2bae028337e5\") " pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4"
Feb 16 20:58:03.765727 master-0 kubenswrapper[7926]: I0216 20:58:03.760789 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9hg4\" (UniqueName: \"kubernetes.io/projected/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-kube-api-access-w9hg4\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl\" (UID: \"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl"
Feb 16 20:58:03.799980 master-0 kubenswrapper[7926]: I0216 20:58:03.796332 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w2lj6"]
Feb 16 20:58:03.826816 master-0 kubenswrapper[7926]: I0216 20:58:03.816492 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_a29a1022-5f54-49a2-99f6-d19eb2773890/installer/0.log"
Feb 16 20:58:03.826816 master-0 kubenswrapper[7926]: I0216 20:58:03.816587 7926 generic.go:334] "Generic (PLEG): container finished" podID="a29a1022-5f54-49a2-99f6-d19eb2773890" containerID="3827d0c6e638278acbaf186f1e2b6637d86efe902a1f4b0978ddea9297c39ba8" exitCode=1
Feb 16 20:58:03.826816 master-0 kubenswrapper[7926]: I0216 20:58:03.816739 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"a29a1022-5f54-49a2-99f6-d19eb2773890","Type":"ContainerDied","Data":"3827d0c6e638278acbaf186f1e2b6637d86efe902a1f4b0978ddea9297c39ba8"}
Feb 16 20:58:03.901211 master-0 kubenswrapper[7926]: I0216 20:58:03.900857 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl"
Feb 16 20:58:03.961685 master-0 kubenswrapper[7926]: I0216 20:58:03.959725 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4"
Feb 16 20:58:03.995555 master-0 kubenswrapper[7926]: I0216 20:58:03.994327 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb"]
Feb 16 20:58:03.995555 master-0 kubenswrapper[7926]: I0216 20:58:03.995163 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb"
Feb 16 20:58:04.005513 master-0 kubenswrapper[7926]: I0216 20:58:04.005135 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 16 20:58:04.005513 master-0 kubenswrapper[7926]: I0216 20:58:04.005420 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 16 20:58:04.005757 master-0 kubenswrapper[7926]: I0216 20:58:04.005608 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-xg8bz"
Feb 16 20:58:04.005808 master-0 kubenswrapper[7926]: I0216 20:58:04.005756 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 16 20:58:04.033102 master-0 kubenswrapper[7926]: I0216 20:58:04.029346 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb"]
Feb 16 20:58:04.107208 master-0 kubenswrapper[7926]: I0216 20:58:04.107039 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ba294358-051a-4f09-b182-710d3d6778c5-images\") pod \"machine-api-operator-bd7dd5c46-27jwb\" (UID: \"ba294358-051a-4f09-b182-710d3d6778c5\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb"
Feb 16 20:58:04.107208 master-0 kubenswrapper[7926]: I0216 20:58:04.107039 7926 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba294358-051a-4f09-b182-710d3d6778c5-config\") pod \"machine-api-operator-bd7dd5c46-27jwb\" (UID: \"ba294358-051a-4f09-b182-710d3d6778c5\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 20:58:04.107208 master-0 kubenswrapper[7926]: I0216 20:58:04.107148 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf2w4\" (UniqueName: \"kubernetes.io/projected/ba294358-051a-4f09-b182-710d3d6778c5-kube-api-access-qf2w4\") pod \"machine-api-operator-bd7dd5c46-27jwb\" (UID: \"ba294358-051a-4f09-b182-710d3d6778c5\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 20:58:04.107208 master-0 kubenswrapper[7926]: I0216 20:58:04.107194 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ba294358-051a-4f09-b182-710d3d6778c5-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-27jwb\" (UID: \"ba294358-051a-4f09-b182-710d3d6778c5\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 20:58:04.189079 master-0 kubenswrapper[7926]: I0216 20:58:04.187072 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sn2nh"] Feb 16 20:58:04.189079 master-0 kubenswrapper[7926]: I0216 20:58:04.188046 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 20:58:04.191358 master-0 kubenswrapper[7926]: I0216 20:58:04.190701 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x77sl" Feb 16 20:58:04.200213 master-0 kubenswrapper[7926]: I0216 20:58:04.200159 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sn2nh"] Feb 16 20:58:04.208438 master-0 kubenswrapper[7926]: I0216 20:58:04.207927 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ba294358-051a-4f09-b182-710d3d6778c5-images\") pod \"machine-api-operator-bd7dd5c46-27jwb\" (UID: \"ba294358-051a-4f09-b182-710d3d6778c5\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 20:58:04.208438 master-0 kubenswrapper[7926]: I0216 20:58:04.207977 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba294358-051a-4f09-b182-710d3d6778c5-config\") pod \"machine-api-operator-bd7dd5c46-27jwb\" (UID: \"ba294358-051a-4f09-b182-710d3d6778c5\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 20:58:04.208438 master-0 kubenswrapper[7926]: I0216 20:58:04.208015 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf2w4\" (UniqueName: \"kubernetes.io/projected/ba294358-051a-4f09-b182-710d3d6778c5-kube-api-access-qf2w4\") pod \"machine-api-operator-bd7dd5c46-27jwb\" (UID: \"ba294358-051a-4f09-b182-710d3d6778c5\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 20:58:04.208438 master-0 kubenswrapper[7926]: I0216 20:58:04.208060 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/ba294358-051a-4f09-b182-710d3d6778c5-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-27jwb\" (UID: \"ba294358-051a-4f09-b182-710d3d6778c5\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 20:58:04.209480 master-0 kubenswrapper[7926]: I0216 20:58:04.209437 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ba294358-051a-4f09-b182-710d3d6778c5-images\") pod \"machine-api-operator-bd7dd5c46-27jwb\" (UID: \"ba294358-051a-4f09-b182-710d3d6778c5\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 20:58:04.213636 master-0 kubenswrapper[7926]: I0216 20:58:04.209900 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba294358-051a-4f09-b182-710d3d6778c5-config\") pod \"machine-api-operator-bd7dd5c46-27jwb\" (UID: \"ba294358-051a-4f09-b182-710d3d6778c5\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 20:58:04.215361 master-0 kubenswrapper[7926]: I0216 20:58:04.214516 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ba294358-051a-4f09-b182-710d3d6778c5-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-27jwb\" (UID: \"ba294358-051a-4f09-b182-710d3d6778c5\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 20:58:04.235786 master-0 kubenswrapper[7926]: I0216 20:58:04.228985 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf2w4\" (UniqueName: \"kubernetes.io/projected/ba294358-051a-4f09-b182-710d3d6778c5-kube-api-access-qf2w4\") pod \"machine-api-operator-bd7dd5c46-27jwb\" (UID: \"ba294358-051a-4f09-b182-710d3d6778c5\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 20:58:04.315574 master-0 kubenswrapper[7926]: I0216 
20:58:04.311758 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f275e79f-923c-4d3a-8ed4-084a122ddcf4-utilities\") pod \"redhat-marketplace-sn2nh\" (UID: \"f275e79f-923c-4d3a-8ed4-084a122ddcf4\") " pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 20:58:04.315574 master-0 kubenswrapper[7926]: I0216 20:58:04.311828 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f275e79f-923c-4d3a-8ed4-084a122ddcf4-catalog-content\") pod \"redhat-marketplace-sn2nh\" (UID: \"f275e79f-923c-4d3a-8ed4-084a122ddcf4\") " pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 20:58:04.315574 master-0 kubenswrapper[7926]: I0216 20:58:04.311955 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmn29\" (UniqueName: \"kubernetes.io/projected/f275e79f-923c-4d3a-8ed4-084a122ddcf4-kube-api-access-cmn29\") pod \"redhat-marketplace-sn2nh\" (UID: \"f275e79f-923c-4d3a-8ed4-084a122ddcf4\") " pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 20:58:04.347216 master-0 kubenswrapper[7926]: I0216 20:58:04.347156 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 20:58:04.413599 master-0 kubenswrapper[7926]: I0216 20:58:04.413537 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f275e79f-923c-4d3a-8ed4-084a122ddcf4-catalog-content\") pod \"redhat-marketplace-sn2nh\" (UID: \"f275e79f-923c-4d3a-8ed4-084a122ddcf4\") " pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 20:58:04.413599 master-0 kubenswrapper[7926]: I0216 20:58:04.413589 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmn29\" (UniqueName: \"kubernetes.io/projected/f275e79f-923c-4d3a-8ed4-084a122ddcf4-kube-api-access-cmn29\") pod \"redhat-marketplace-sn2nh\" (UID: \"f275e79f-923c-4d3a-8ed4-084a122ddcf4\") " pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 20:58:04.413870 master-0 kubenswrapper[7926]: I0216 20:58:04.413681 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f275e79f-923c-4d3a-8ed4-084a122ddcf4-utilities\") pod \"redhat-marketplace-sn2nh\" (UID: \"f275e79f-923c-4d3a-8ed4-084a122ddcf4\") " pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 20:58:04.414261 master-0 kubenswrapper[7926]: I0216 20:58:04.414232 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f275e79f-923c-4d3a-8ed4-084a122ddcf4-utilities\") pod \"redhat-marketplace-sn2nh\" (UID: \"f275e79f-923c-4d3a-8ed4-084a122ddcf4\") " pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 20:58:04.414502 master-0 kubenswrapper[7926]: I0216 20:58:04.414477 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f275e79f-923c-4d3a-8ed4-084a122ddcf4-catalog-content\") pod \"redhat-marketplace-sn2nh\" (UID: 
\"f275e79f-923c-4d3a-8ed4-084a122ddcf4\") " pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 20:58:04.432130 master-0 kubenswrapper[7926]: I0216 20:58:04.432074 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmn29\" (UniqueName: \"kubernetes.io/projected/f275e79f-923c-4d3a-8ed4-084a122ddcf4-kube-api-access-cmn29\") pod \"redhat-marketplace-sn2nh\" (UID: \"f275e79f-923c-4d3a-8ed4-084a122ddcf4\") " pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 20:58:04.525600 master-0 kubenswrapper[7926]: I0216 20:58:04.525496 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 20:58:04.772528 master-0 kubenswrapper[7926]: I0216 20:58:04.772397 7926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb" path="/var/lib/kubelet/pods/be2039f9-f8ac-4046-a3c9-ad92fd7fa4cb/volumes" Feb 16 20:58:04.911574 master-0 kubenswrapper[7926]: I0216 20:58:04.911520 7926 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 16 20:58:04.914829 master-0 kubenswrapper[7926]: I0216 20:58:04.911791 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcdctl" containerID="cri-o://fea56a548bb1b40870646931b3ee24bfa53d974b5b14be8ecc57115395d0831e" gracePeriod=30 Feb 16 20:58:04.914829 master-0 kubenswrapper[7926]: I0216 20:58:04.911952 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcd" containerID="cri-o://a3ef8c2f17e0843dbc7265db7f67c564c2c97d41bf1c253c3466338241e2b204" gracePeriod=30 Feb 16 20:58:04.917810 master-0 kubenswrapper[7926]: I0216 20:58:04.917777 7926 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-etcd/etcd-master-0"] Feb 16 20:58:04.918038 master-0 kubenswrapper[7926]: E0216 20:58:04.918023 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcdctl" Feb 16 20:58:04.918038 master-0 kubenswrapper[7926]: I0216 20:58:04.918039 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcdctl" Feb 16 20:58:04.918207 master-0 kubenswrapper[7926]: E0216 20:58:04.918053 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcd" Feb 16 20:58:04.918207 master-0 kubenswrapper[7926]: I0216 20:58:04.918062 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcd" Feb 16 20:58:04.918329 master-0 kubenswrapper[7926]: I0216 20:58:04.918233 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcdctl" Feb 16 20:58:04.918329 master-0 kubenswrapper[7926]: I0216 20:58:04.918249 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcd" Feb 16 20:58:04.935753 master-0 kubenswrapper[7926]: I0216 20:58:04.924908 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 16 20:58:05.031062 master-0 kubenswrapper[7926]: I0216 20:58:05.030903 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-cert-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 16 20:58:05.031062 master-0 kubenswrapper[7926]: I0216 20:58:05.030981 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-usr-local-bin\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 16 20:58:05.031062 master-0 kubenswrapper[7926]: I0216 20:58:05.031007 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-data-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 16 20:58:05.031356 master-0 kubenswrapper[7926]: I0216 20:58:05.031131 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-static-pod-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 16 20:58:05.031356 master-0 kubenswrapper[7926]: I0216 20:58:05.031285 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-resource-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 16 20:58:05.031356 master-0 
kubenswrapper[7926]: I0216 20:58:05.031317 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-log-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 16 20:58:05.137913 master-0 kubenswrapper[7926]: I0216 20:58:05.137836 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-resource-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 16 20:58:05.137913 master-0 kubenswrapper[7926]: I0216 20:58:05.137917 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-log-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 16 20:58:05.138237 master-0 kubenswrapper[7926]: I0216 20:58:05.137951 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-resource-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 16 20:58:05.138237 master-0 kubenswrapper[7926]: I0216 20:58:05.137964 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-log-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 16 20:58:05.138237 master-0 kubenswrapper[7926]: I0216 20:58:05.137980 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-cert-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 16 20:58:05.138237 master-0 kubenswrapper[7926]: I0216 20:58:05.138044 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-cert-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 16 20:58:05.138237 master-0 kubenswrapper[7926]: I0216 20:58:05.138121 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-usr-local-bin\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 16 20:58:05.138237 master-0 kubenswrapper[7926]: I0216 20:58:05.138149 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-data-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 16 20:58:05.138237 master-0 kubenswrapper[7926]: I0216 20:58:05.138243 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-static-pod-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 16 20:58:05.138448 master-0 kubenswrapper[7926]: I0216 20:58:05.138302 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-data-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 16 20:58:05.138448 master-0 
kubenswrapper[7926]: I0216 20:58:05.138314 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-static-pod-dir\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 16 20:58:05.138448 master-0 kubenswrapper[7926]: I0216 20:58:05.138345 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-usr-local-bin\") pod \"etcd-master-0\" (UID: \"401699cb53e7098157e808a83125b0e4\") " pod="openshift-etcd/etcd-master-0" Feb 16 20:58:06.831787 master-0 kubenswrapper[7926]: W0216 20:58:06.831522 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff193060_a272_4e4e_990a_83ac410f523d.slice/crio-3edd59cb6b6314e671425a245027b79b2d561376466e447c62b29ac14f08bcff WatchSource:0}: Error finding container 3edd59cb6b6314e671425a245027b79b2d561376466e447c62b29ac14f08bcff: Status 404 returned error can't find the container with id 3edd59cb6b6314e671425a245027b79b2d561376466e447c62b29ac14f08bcff Feb 16 20:58:06.845267 master-0 kubenswrapper[7926]: I0216 20:58:06.845154 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" event={"ID":"ff193060-a272-4e4e-990a-83ac410f523d","Type":"ContainerStarted","Data":"3edd59cb6b6314e671425a245027b79b2d561376466e447c62b29ac14f08bcff"} Feb 16 20:58:07.356852 master-0 kubenswrapper[7926]: W0216 20:58:07.356774 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9615af2_cad5_4705_9c2f_6f3c97026100.slice/crio-db8564acd67a0d7a69c00ddf2a89b541dc8e61594341a8f533db80c14da1c414 WatchSource:0}: Error finding container 
db8564acd67a0d7a69c00ddf2a89b541dc8e61594341a8f533db80c14da1c414: Status 404 returned error can't find the container with id db8564acd67a0d7a69c00ddf2a89b541dc8e61594341a8f533db80c14da1c414 Feb 16 20:58:07.363220 master-0 kubenswrapper[7926]: W0216 20:58:07.363172 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa2e9bbc_3962_45f5_a7cc_2dc059409e70.slice/crio-e1d55dfca25559f503e3ffffa2f5f036874c5ff002f21e1743ae94ece4a5c2a9 WatchSource:0}: Error finding container e1d55dfca25559f503e3ffffa2f5f036874c5ff002f21e1743ae94ece4a5c2a9: Status 404 returned error can't find the container with id e1d55dfca25559f503e3ffffa2f5f036874c5ff002f21e1743ae94ece4a5c2a9 Feb 16 20:58:07.858613 master-0 kubenswrapper[7926]: I0216 20:58:07.858461 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m" event={"ID":"aa2e9bbc-3962-45f5-a7cc-2dc059409e70","Type":"ContainerStarted","Data":"e1d55dfca25559f503e3ffffa2f5f036874c5ff002f21e1743ae94ece4a5c2a9"} Feb 16 20:58:07.863303 master-0 kubenswrapper[7926]: I0216 20:58:07.863232 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" event={"ID":"e9615af2-cad5-4705-9c2f-6f3c97026100","Type":"ContainerStarted","Data":"db8564acd67a0d7a69c00ddf2a89b541dc8e61594341a8f533db80c14da1c414"} Feb 16 20:58:09.157263 master-0 kubenswrapper[7926]: I0216 20:58:09.157208 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_a29a1022-5f54-49a2-99f6-d19eb2773890/installer/0.log" Feb 16 20:58:09.157263 master-0 kubenswrapper[7926]: I0216 20:58:09.157290 7926 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Feb 16 20:58:09.308128 master-0 kubenswrapper[7926]: I0216 20:58:09.308070 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a29a1022-5f54-49a2-99f6-d19eb2773890-kubelet-dir\") pod \"a29a1022-5f54-49a2-99f6-d19eb2773890\" (UID: \"a29a1022-5f54-49a2-99f6-d19eb2773890\") " Feb 16 20:58:09.308128 master-0 kubenswrapper[7926]: I0216 20:58:09.308138 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a29a1022-5f54-49a2-99f6-d19eb2773890-kube-api-access\") pod \"a29a1022-5f54-49a2-99f6-d19eb2773890\" (UID: \"a29a1022-5f54-49a2-99f6-d19eb2773890\") " Feb 16 20:58:09.308543 master-0 kubenswrapper[7926]: I0216 20:58:09.308223 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a29a1022-5f54-49a2-99f6-d19eb2773890-var-lock\") pod \"a29a1022-5f54-49a2-99f6-d19eb2773890\" (UID: \"a29a1022-5f54-49a2-99f6-d19eb2773890\") " Feb 16 20:58:09.308543 master-0 kubenswrapper[7926]: I0216 20:58:09.308246 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a29a1022-5f54-49a2-99f6-d19eb2773890-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a29a1022-5f54-49a2-99f6-d19eb2773890" (UID: "a29a1022-5f54-49a2-99f6-d19eb2773890"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:58:09.308543 master-0 kubenswrapper[7926]: I0216 20:58:09.308436 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a29a1022-5f54-49a2-99f6-d19eb2773890-var-lock" (OuterVolumeSpecName: "var-lock") pod "a29a1022-5f54-49a2-99f6-d19eb2773890" (UID: "a29a1022-5f54-49a2-99f6-d19eb2773890"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:58:09.308712 master-0 kubenswrapper[7926]: I0216 20:58:09.308635 7926 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a29a1022-5f54-49a2-99f6-d19eb2773890-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 20:58:09.308712 master-0 kubenswrapper[7926]: I0216 20:58:09.308685 7926 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a29a1022-5f54-49a2-99f6-d19eb2773890-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 20:58:09.311573 master-0 kubenswrapper[7926]: I0216 20:58:09.311522 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a29a1022-5f54-49a2-99f6-d19eb2773890-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a29a1022-5f54-49a2-99f6-d19eb2773890" (UID: "a29a1022-5f54-49a2-99f6-d19eb2773890"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:58:09.410268 master-0 kubenswrapper[7926]: I0216 20:58:09.410135 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a29a1022-5f54-49a2-99f6-d19eb2773890-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 20:58:09.874787 master-0 kubenswrapper[7926]: I0216 20:58:09.874601 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_a29a1022-5f54-49a2-99f6-d19eb2773890/installer/0.log" Feb 16 20:58:09.874787 master-0 kubenswrapper[7926]: I0216 20:58:09.874773 7926 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Feb 16 20:58:09.875048 master-0 kubenswrapper[7926]: I0216 20:58:09.874796 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"a29a1022-5f54-49a2-99f6-d19eb2773890","Type":"ContainerDied","Data":"8b81ed40814b01f51e71b5774683179011cff5033132cba8aaa86719247d405f"} Feb 16 20:58:09.875048 master-0 kubenswrapper[7926]: I0216 20:58:09.874880 7926 scope.go:117] "RemoveContainer" containerID="3827d0c6e638278acbaf186f1e2b6637d86efe902a1f4b0978ddea9297c39ba8" Feb 16 20:58:17.963826 master-0 kubenswrapper[7926]: E0216 20:58:17.963730 7926 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 16 20:58:17.964533 master-0 kubenswrapper[7926]: I0216 20:58:17.964494 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 16 20:58:19.053953 master-0 kubenswrapper[7926]: I0216 20:58:19.053896 7926 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="fc88dd28d8567cb614f787ef77e43ceb61a79e3dffda24d95403e277882bb247" exitCode=1 Feb 16 20:58:19.054848 master-0 kubenswrapper[7926]: I0216 20:58:19.053971 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"fc88dd28d8567cb614f787ef77e43ceb61a79e3dffda24d95403e277882bb247"} Feb 16 20:58:19.054848 master-0 kubenswrapper[7926]: I0216 20:58:19.054634 7926 scope.go:117] "RemoveContainer" containerID="fc88dd28d8567cb614f787ef77e43ceb61a79e3dffda24d95403e277882bb247" Feb 16 20:58:19.843106 master-0 kubenswrapper[7926]: E0216 20:58:19.843011 7926 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 20:58:20.061810 master-0 kubenswrapper[7926]: I0216 20:58:20.061759 7926 generic.go:334] "Generic (PLEG): container finished" podID="3d416d98-ee7c-4481-9721-861ccd91685d" containerID="8bbcb4e0fb94b168b2c18c0ad45486fda3e89c4340348d1ee5d8cea24b562c67" exitCode=0 Feb 16 20:58:20.062249 master-0 kubenswrapper[7926]: I0216 20:58:20.061816 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"3d416d98-ee7c-4481-9721-861ccd91685d","Type":"ContainerDied","Data":"8bbcb4e0fb94b168b2c18c0ad45486fda3e89c4340348d1ee5d8cea24b562c67"} Feb 16 20:58:20.416452 master-0 kubenswrapper[7926]: E0216 20:58:20.416318 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:10Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:10Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:10Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:58:10Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 20:58:21.069361 master-0 kubenswrapper[7926]: I0216 20:58:21.069263 7926 generic.go:334] "Generic (PLEG): container finished" podID="9460ca0802075a8a6a10d7b3e6052c4d" containerID="f06b93dc1f7853f1547eea454f40e687d56a498fbbe7a281e785547401b0538b" exitCode=1 Feb 16 20:58:21.070114 master-0 kubenswrapper[7926]: I0216 20:58:21.069378 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerDied","Data":"f06b93dc1f7853f1547eea454f40e687d56a498fbbe7a281e785547401b0538b"} Feb 16 20:58:21.070746 master-0 kubenswrapper[7926]: I0216 20:58:21.070593 7926 scope.go:117] "RemoveContainer" containerID="f06b93dc1f7853f1547eea454f40e687d56a498fbbe7a281e785547401b0538b" Feb 16 20:58:21.668343 master-0 kubenswrapper[7926]: I0216 20:58:21.668289 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 20:58:22.076531 master-0 kubenswrapper[7926]: I0216 20:58:22.076474 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-k42w9_695549c8-d1fc-429d-9c9f-0a5915dc6074/openshift-controller-manager-operator/1.log" Feb 16 20:58:22.076531 master-0 kubenswrapper[7926]: I0216 20:58:22.076540 7926 generic.go:334] "Generic (PLEG): container finished" podID="695549c8-d1fc-429d-9c9f-0a5915dc6074" containerID="df4705117bc30301536972bb1ddb323a9cf1860379e92028207e9c158a991276" exitCode=1 Feb 16 20:58:22.077726 master-0 kubenswrapper[7926]: I0216 20:58:22.076577 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" 
event={"ID":"695549c8-d1fc-429d-9c9f-0a5915dc6074","Type":"ContainerDied","Data":"df4705117bc30301536972bb1ddb323a9cf1860379e92028207e9c158a991276"} Feb 16 20:58:22.077726 master-0 kubenswrapper[7926]: I0216 20:58:22.077104 7926 scope.go:117] "RemoveContainer" containerID="df4705117bc30301536972bb1ddb323a9cf1860379e92028207e9c158a991276" Feb 16 20:58:25.980385 master-0 kubenswrapper[7926]: I0216 20:58:25.980305 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:58:26.710789 master-0 kubenswrapper[7926]: I0216 20:58:26.710710 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:58:26.859425 master-0 kubenswrapper[7926]: I0216 20:58:26.859360 7926 patch_prober.go:28] interesting pod/authentication-operator-755d954778-8gnq5 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Feb 16 20:58:26.859692 master-0 kubenswrapper[7926]: I0216 20:58:26.859434 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Feb 16 20:58:26.988236 master-0 kubenswrapper[7926]: I0216 20:58:26.988092 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:58:28.228302 master-0 kubenswrapper[7926]: I0216 20:58:28.228191 7926 scope.go:117] "RemoveContainer" containerID="3bfeaa29dd18a9c052679918402bc8ad83eaec394fa47c6b58ac63f5cfd4bce4" Feb 16 20:58:28.294960 
master-0 kubenswrapper[7926]: W0216 20:58:28.293808 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod150fb1ff_8a9c_4360_8e41_cfbfb854d8bd.slice/crio-7c58d0ea6f77f570c6d69fca131a630124b55850297eb43a85d3d771ea9026d8 WatchSource:0}: Error finding container 7c58d0ea6f77f570c6d69fca131a630124b55850297eb43a85d3d771ea9026d8: Status 404 returned error can't find the container with id 7c58d0ea6f77f570c6d69fca131a630124b55850297eb43a85d3d771ea9026d8 Feb 16 20:58:28.318555 master-0 kubenswrapper[7926]: I0216 20:58:28.318516 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 16 20:58:28.365793 master-0 kubenswrapper[7926]: W0216 20:58:28.365380 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod401699cb53e7098157e808a83125b0e4.slice/crio-ee04407e73e43026553a6f7105bb915c9e7fa837f4a29869ed832ae43f4f1c8a WatchSource:0}: Error finding container ee04407e73e43026553a6f7105bb915c9e7fa837f4a29869ed832ae43f4f1c8a: Status 404 returned error can't find the container with id ee04407e73e43026553a6f7105bb915c9e7fa837f4a29869ed832ae43f4f1c8a Feb 16 20:58:28.483526 master-0 kubenswrapper[7926]: I0216 20:58:28.483346 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d416d98-ee7c-4481-9721-861ccd91685d-kube-api-access\") pod \"3d416d98-ee7c-4481-9721-861ccd91685d\" (UID: \"3d416d98-ee7c-4481-9721-861ccd91685d\") " Feb 16 20:58:28.485562 master-0 kubenswrapper[7926]: I0216 20:58:28.483397 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d416d98-ee7c-4481-9721-861ccd91685d-kubelet-dir\") pod \"3d416d98-ee7c-4481-9721-861ccd91685d\" (UID: \"3d416d98-ee7c-4481-9721-861ccd91685d\") 
" Feb 16 20:58:28.485747 master-0 kubenswrapper[7926]: I0216 20:58:28.485631 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d416d98-ee7c-4481-9721-861ccd91685d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3d416d98-ee7c-4481-9721-861ccd91685d" (UID: "3d416d98-ee7c-4481-9721-861ccd91685d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:58:28.485747 master-0 kubenswrapper[7926]: I0216 20:58:28.485687 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d416d98-ee7c-4481-9721-861ccd91685d-var-lock\") pod \"3d416d98-ee7c-4481-9721-861ccd91685d\" (UID: \"3d416d98-ee7c-4481-9721-861ccd91685d\") " Feb 16 20:58:28.486688 master-0 kubenswrapper[7926]: I0216 20:58:28.485996 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d416d98-ee7c-4481-9721-861ccd91685d-var-lock" (OuterVolumeSpecName: "var-lock") pod "3d416d98-ee7c-4481-9721-861ccd91685d" (UID: "3d416d98-ee7c-4481-9721-861ccd91685d"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:58:28.486688 master-0 kubenswrapper[7926]: I0216 20:58:28.486333 7926 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d416d98-ee7c-4481-9721-861ccd91685d-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 20:58:28.486688 master-0 kubenswrapper[7926]: I0216 20:58:28.486348 7926 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d416d98-ee7c-4481-9721-861ccd91685d-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 20:58:28.496249 master-0 kubenswrapper[7926]: I0216 20:58:28.496158 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d416d98-ee7c-4481-9721-861ccd91685d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3d416d98-ee7c-4481-9721-861ccd91685d" (UID: "3d416d98-ee7c-4481-9721-861ccd91685d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:58:28.590388 master-0 kubenswrapper[7926]: I0216 20:58:28.590275 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d416d98-ee7c-4481-9721-861ccd91685d-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 20:58:29.122522 master-0 kubenswrapper[7926]: I0216 20:58:29.122468 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf" event={"ID":"03a5021d-8a5c-4011-a9f9-c5eb38d5f236","Type":"ContainerStarted","Data":"79d00e7b83c00540b1c5d773a69fad9f225b26adf1e1722c924d805403fdfa8f"} Feb 16 20:58:29.124205 master-0 kubenswrapper[7926]: I0216 20:58:29.124172 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m" event={"ID":"aa2e9bbc-3962-45f5-a7cc-2dc059409e70","Type":"ContainerStarted","Data":"a339e5c4723737e030c5a03c8395cedd263d3d5213cb12208bfe3004bbd0ef5e"} Feb 16 20:58:29.125475 master-0 kubenswrapper[7926]: I0216 20:58:29.125418 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl" event={"ID":"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd","Type":"ContainerStarted","Data":"7c58d0ea6f77f570c6d69fca131a630124b55850297eb43a85d3d771ea9026d8"} Feb 16 20:58:29.126364 master-0 kubenswrapper[7926]: I0216 20:58:29.126330 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"3d416d98-ee7c-4481-9721-861ccd91685d","Type":"ContainerDied","Data":"363e6d9151e8f74d699facea1b9fd8436a80e76af370ce89bfd959fd35f30873"} Feb 16 20:58:29.126364 master-0 kubenswrapper[7926]: I0216 20:58:29.126356 7926 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="363e6d9151e8f74d699facea1b9fd8436a80e76af370ce89bfd959fd35f30873" Feb 16 20:58:29.126462 master-0 kubenswrapper[7926]: I0216 20:58:29.126364 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 16 20:58:29.127773 master-0 kubenswrapper[7926]: I0216 20:58:29.127738 7926 generic.go:334] "Generic (PLEG): container finished" podID="97ec2c8c-e32c-4d18-ad78-0ef1f19557af" containerID="928561e2066beacfece2f741a5e8cac2ed26ae90d2d530ef0652836b9a124791" exitCode=0 Feb 16 20:58:29.127851 master-0 kubenswrapper[7926]: I0216 20:58:29.127799 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xv645" event={"ID":"97ec2c8c-e32c-4d18-ad78-0ef1f19557af","Type":"ContainerDied","Data":"928561e2066beacfece2f741a5e8cac2ed26ae90d2d530ef0652836b9a124791"} Feb 16 20:58:29.129995 master-0 kubenswrapper[7926]: I0216 20:58:29.129947 7926 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="1f09bc4164b16ef8a6fca51ee723083d342d68f035a16887f27e064b58ed2ed8" exitCode=0 Feb 16 20:58:29.130096 master-0 kubenswrapper[7926]: I0216 20:58:29.130024 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerDied","Data":"1f09bc4164b16ef8a6fca51ee723083d342d68f035a16887f27e064b58ed2ed8"} Feb 16 20:58:29.130096 master-0 kubenswrapper[7926]: I0216 20:58:29.130049 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerStarted","Data":"ee04407e73e43026553a6f7105bb915c9e7fa837f4a29869ed832ae43f4f1c8a"} Feb 16 20:58:29.134803 master-0 kubenswrapper[7926]: I0216 20:58:29.134760 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" 
event={"ID":"1d7d0416-5f50-42bd-826b-92eecf9adcec","Type":"ContainerStarted","Data":"2805492f11ff17f7e51a6fba30471dee89ec93e40bd6ce6db4b158be70c75964"} Feb 16 20:58:29.136988 master-0 kubenswrapper[7926]: I0216 20:58:29.136944 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" event={"ID":"4db59450-da78-4879-ada8-ca3fc49fb7a7","Type":"ContainerStarted","Data":"c01a97aeea491e06b4f6bd168a545331d557799591733b3afb1c1070b9661f2a"} Feb 16 20:58:29.137603 master-0 kubenswrapper[7926]: I0216 20:58:29.137568 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" Feb 16 20:58:29.145881 master-0 kubenswrapper[7926]: I0216 20:58:29.145375 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"31e55b139c998e23cbf2bc02e2f79638ed2388ee42133c4387d01234b192dc1a"} Feb 16 20:58:29.166810 master-0 kubenswrapper[7926]: I0216 20:58:29.166234 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" event={"ID":"8b648d9e-a892-4951-b0e2-fed6b16273d4","Type":"ContainerStarted","Data":"9c3555dd069a7df80fae789b4b23ce84596b7c133210eeb7b11b618ce5d733b4"} Feb 16 20:58:29.166810 master-0 kubenswrapper[7926]: I0216 20:58:29.166287 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" event={"ID":"8b648d9e-a892-4951-b0e2-fed6b16273d4","Type":"ContainerStarted","Data":"ae31b292d6ba5f8d78f8793a9865c571a66292e65886b99ff37b242383c1ffb8"} Feb 16 20:58:29.173021 master-0 kubenswrapper[7926]: I0216 20:58:29.172940 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl" event={"ID":"55095f4f-cac0-456c-9ccc-45869392408c","Type":"ContainerStarted","Data":"d226aa39dd648190d8ac3bff9e2c7d5ebce52835f391db09e2359a199061478a"} Feb 16 20:58:29.173021 master-0 kubenswrapper[7926]: I0216 20:58:29.172991 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl" event={"ID":"55095f4f-cac0-456c-9ccc-45869392408c","Type":"ContainerStarted","Data":"bed0c408affb572fccef4fee0aeb682072b214b567b0eac51edbbb5af21c22d5"} Feb 16 20:58:29.175140 master-0 kubenswrapper[7926]: I0216 20:58:29.175095 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhh2p" event={"ID":"9566b108-44e1-4d9e-8984-4c396dc4408c","Type":"ContainerStarted","Data":"c2ff8942463d287b82bf327999961ebd9e5c05160f4d3f6df586170d3bfafe1a"} Feb 16 20:58:29.180119 master-0 kubenswrapper[7926]: I0216 20:58:29.180059 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-k42w9_695549c8-d1fc-429d-9c9f-0a5915dc6074/openshift-controller-manager-operator/1.log" Feb 16 20:58:29.180340 master-0 kubenswrapper[7926]: I0216 20:58:29.180199 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" event={"ID":"695549c8-d1fc-429d-9c9f-0a5915dc6074","Type":"ContainerStarted","Data":"da2d8128d877c8e59ec552f44d9719195718721aa40536dc7418200005684242"} Feb 16 20:58:29.183683 master-0 kubenswrapper[7926]: I0216 20:58:29.183659 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w2lj6" event={"ID":"d4a6dcba-776f-48ba-b824-90ed5ae3abee","Type":"ContainerStarted","Data":"806a95b325fa1585a7662fae05cbeab3f68c901ce5359848ec6a1a0e2738986b"} Feb 16 20:58:29.198477 master-0 
kubenswrapper[7926]: I0216 20:58:29.198398 7926 generic.go:334] "Generic (PLEG): container finished" podID="03593410-baa5-4edb-9d73-242a74f82987" containerID="b19589dbb6d4f7d3e5399c99620d53a3620f890047844d988256937f57f518e8" exitCode=0 Feb 16 20:58:29.198682 master-0 kubenswrapper[7926]: I0216 20:58:29.198599 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8vtc" event={"ID":"03593410-baa5-4edb-9d73-242a74f82987","Type":"ContainerDied","Data":"b19589dbb6d4f7d3e5399c99620d53a3620f890047844d988256937f57f518e8"} Feb 16 20:58:29.204272 master-0 kubenswrapper[7926]: I0216 20:58:29.202430 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" event={"ID":"e9615af2-cad5-4705-9c2f-6f3c97026100","Type":"ContainerStarted","Data":"dd23c2441236e3bdedd04adcd70f26ba2f2b37ed96fb0998ec94c3bbdca5b7da"} Feb 16 20:58:29.206241 master-0 kubenswrapper[7926]: I0216 20:58:29.205543 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerStarted","Data":"a4951420ea2a6ae5237e8e58e639f3add1c70cf81012c329517f161ec6dde67e"} Feb 16 20:58:29.207281 master-0 kubenswrapper[7926]: I0216 20:58:29.207244 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" event={"ID":"ff193060-a272-4e4e-990a-83ac410f523d","Type":"ContainerStarted","Data":"9bb864e89f3ac9ffa49c4c67ddca01cba021221f4cf7bc201c305a5969704be4"} Feb 16 20:58:29.207281 master-0 kubenswrapper[7926]: I0216 20:58:29.207283 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" event={"ID":"ff193060-a272-4e4e-990a-83ac410f523d","Type":"ContainerStarted","Data":"f5d1b2f95d0f407ab1fdd5eb9fe9deae1b8e8d536d017cfe9a03861815d4f96a"} Feb 16 20:58:29.532768 master-0 
kubenswrapper[7926]: I0216 20:58:29.528846 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xv645" Feb 16 20:58:29.532768 master-0 kubenswrapper[7926]: I0216 20:58:29.532128 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w2lj6" Feb 16 20:58:29.623190 master-0 kubenswrapper[7926]: I0216 20:58:29.623141 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4a6dcba-776f-48ba-b824-90ed5ae3abee-catalog-content\") pod \"d4a6dcba-776f-48ba-b824-90ed5ae3abee\" (UID: \"d4a6dcba-776f-48ba-b824-90ed5ae3abee\") " Feb 16 20:58:29.623415 master-0 kubenswrapper[7926]: I0216 20:58:29.623258 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ec2c8c-e32c-4d18-ad78-0ef1f19557af-utilities\") pod \"97ec2c8c-e32c-4d18-ad78-0ef1f19557af\" (UID: \"97ec2c8c-e32c-4d18-ad78-0ef1f19557af\") " Feb 16 20:58:29.623415 master-0 kubenswrapper[7926]: I0216 20:58:29.623312 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4a6dcba-776f-48ba-b824-90ed5ae3abee-utilities\") pod \"d4a6dcba-776f-48ba-b824-90ed5ae3abee\" (UID: \"d4a6dcba-776f-48ba-b824-90ed5ae3abee\") " Feb 16 20:58:29.623415 master-0 kubenswrapper[7926]: I0216 20:58:29.623339 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrtld\" (UniqueName: \"kubernetes.io/projected/97ec2c8c-e32c-4d18-ad78-0ef1f19557af-kube-api-access-hrtld\") pod \"97ec2c8c-e32c-4d18-ad78-0ef1f19557af\" (UID: \"97ec2c8c-e32c-4d18-ad78-0ef1f19557af\") " Feb 16 20:58:29.623415 master-0 kubenswrapper[7926]: I0216 20:58:29.623368 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97ec2c8c-e32c-4d18-ad78-0ef1f19557af-catalog-content\") pod \"97ec2c8c-e32c-4d18-ad78-0ef1f19557af\" (UID: \"97ec2c8c-e32c-4d18-ad78-0ef1f19557af\") " Feb 16 20:58:29.623415 master-0 kubenswrapper[7926]: I0216 20:58:29.623406 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9l69\" (UniqueName: \"kubernetes.io/projected/d4a6dcba-776f-48ba-b824-90ed5ae3abee-kube-api-access-l9l69\") pod \"d4a6dcba-776f-48ba-b824-90ed5ae3abee\" (UID: \"d4a6dcba-776f-48ba-b824-90ed5ae3abee\") " Feb 16 20:58:29.624247 master-0 kubenswrapper[7926]: I0216 20:58:29.624203 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97ec2c8c-e32c-4d18-ad78-0ef1f19557af-utilities" (OuterVolumeSpecName: "utilities") pod "97ec2c8c-e32c-4d18-ad78-0ef1f19557af" (UID: "97ec2c8c-e32c-4d18-ad78-0ef1f19557af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:58:29.624505 master-0 kubenswrapper[7926]: I0216 20:58:29.624472 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4a6dcba-776f-48ba-b824-90ed5ae3abee-utilities" (OuterVolumeSpecName: "utilities") pod "d4a6dcba-776f-48ba-b824-90ed5ae3abee" (UID: "d4a6dcba-776f-48ba-b824-90ed5ae3abee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:58:29.662081 master-0 kubenswrapper[7926]: I0216 20:58:29.662027 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4a6dcba-776f-48ba-b824-90ed5ae3abee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4a6dcba-776f-48ba-b824-90ed5ae3abee" (UID: "d4a6dcba-776f-48ba-b824-90ed5ae3abee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:58:29.678693 master-0 kubenswrapper[7926]: I0216 20:58:29.678607 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97ec2c8c-e32c-4d18-ad78-0ef1f19557af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97ec2c8c-e32c-4d18-ad78-0ef1f19557af" (UID: "97ec2c8c-e32c-4d18-ad78-0ef1f19557af"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 20:58:29.844539 master-0 kubenswrapper[7926]: E0216 20:58:29.844450 7926 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io master-0)" Feb 16 20:58:30.418385 master-0 kubenswrapper[7926]: E0216 20:58:30.418340 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes master-0)" Feb 16 20:58:30.493621 master-0 kubenswrapper[7926]: I0216 20:58:30.493530 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4a6dcba-776f-48ba-b824-90ed5ae3abee-kube-api-access-l9l69" (OuterVolumeSpecName: "kube-api-access-l9l69") pod "d4a6dcba-776f-48ba-b824-90ed5ae3abee" (UID: "d4a6dcba-776f-48ba-b824-90ed5ae3abee"). InnerVolumeSpecName "kube-api-access-l9l69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:58:30.494081 master-0 kubenswrapper[7926]: I0216 20:58:30.494024 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97ec2c8c-e32c-4d18-ad78-0ef1f19557af-kube-api-access-hrtld" (OuterVolumeSpecName: "kube-api-access-hrtld") pod "97ec2c8c-e32c-4d18-ad78-0ef1f19557af" (UID: "97ec2c8c-e32c-4d18-ad78-0ef1f19557af"). InnerVolumeSpecName "kube-api-access-hrtld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 20:58:30.494475 master-0 kubenswrapper[7926]: I0216 20:58:30.494413 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 20:58:30.494564 master-0 kubenswrapper[7926]: I0216 20:58:30.494476 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 20:58:30.499466 master-0 kubenswrapper[7926]: I0216 20:58:30.499375 7926 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4a6dcba-776f-48ba-b824-90ed5ae3abee-catalog-content\") on node \"master-0\" DevicePath \"\"" Feb 16 20:58:30.499664 master-0 kubenswrapper[7926]: I0216 20:58:30.499632 7926 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ec2c8c-e32c-4d18-ad78-0ef1f19557af-utilities\") on node \"master-0\" DevicePath \"\"" Feb 16 20:58:30.499914 master-0 kubenswrapper[7926]: I0216 20:58:30.499744 7926 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4a6dcba-776f-48ba-b824-90ed5ae3abee-utilities\") on node \"master-0\" DevicePath \"\"" Feb 16 20:58:30.499914 master-0 kubenswrapper[7926]: I0216 20:58:30.499764 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrtld\" (UniqueName: 
\"kubernetes.io/projected/97ec2c8c-e32c-4d18-ad78-0ef1f19557af-kube-api-access-hrtld\") on node \"master-0\" DevicePath \"\"" Feb 16 20:58:30.499914 master-0 kubenswrapper[7926]: I0216 20:58:30.499799 7926 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97ec2c8c-e32c-4d18-ad78-0ef1f19557af-catalog-content\") on node \"master-0\" DevicePath \"\"" Feb 16 20:58:30.499914 master-0 kubenswrapper[7926]: I0216 20:58:30.499815 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9l69\" (UniqueName: \"kubernetes.io/projected/d4a6dcba-776f-48ba-b824-90ed5ae3abee-kube-api-access-l9l69\") on node \"master-0\" DevicePath \"\"" Feb 16 20:58:30.506945 master-0 kubenswrapper[7926]: I0216 20:58:30.506898 7926 generic.go:334] "Generic (PLEG): container finished" podID="9566b108-44e1-4d9e-8984-4c396dc4408c" containerID="c2ff8942463d287b82bf327999961ebd9e5c05160f4d3f6df586170d3bfafe1a" exitCode=0 Feb 16 20:58:30.507009 master-0 kubenswrapper[7926]: I0216 20:58:30.506959 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhh2p" event={"ID":"9566b108-44e1-4d9e-8984-4c396dc4408c","Type":"ContainerDied","Data":"c2ff8942463d287b82bf327999961ebd9e5c05160f4d3f6df586170d3bfafe1a"} Feb 16 20:58:30.518339 master-0 kubenswrapper[7926]: I0216 20:58:30.518291 7926 generic.go:334] "Generic (PLEG): container finished" podID="d4a6dcba-776f-48ba-b824-90ed5ae3abee" containerID="806a95b325fa1585a7662fae05cbeab3f68c901ce5359848ec6a1a0e2738986b" exitCode=0 Feb 16 20:58:30.518426 master-0 kubenswrapper[7926]: I0216 20:58:30.518359 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w2lj6" event={"ID":"d4a6dcba-776f-48ba-b824-90ed5ae3abee","Type":"ContainerDied","Data":"806a95b325fa1585a7662fae05cbeab3f68c901ce5359848ec6a1a0e2738986b"} Feb 16 20:58:30.518426 master-0 kubenswrapper[7926]: I0216 20:58:30.518389 7926 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w2lj6" event={"ID":"d4a6dcba-776f-48ba-b824-90ed5ae3abee","Type":"ContainerDied","Data":"098908342bebd0f7f9ce0402f1dd0bea51ed6e67a6ee85624a16f82f857d60f9"} Feb 16 20:58:30.518426 master-0 kubenswrapper[7926]: I0216 20:58:30.518406 7926 scope.go:117] "RemoveContainer" containerID="806a95b325fa1585a7662fae05cbeab3f68c901ce5359848ec6a1a0e2738986b" Feb 16 20:58:30.518518 master-0 kubenswrapper[7926]: I0216 20:58:30.518513 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w2lj6" Feb 16 20:58:30.529903 master-0 kubenswrapper[7926]: I0216 20:58:30.529868 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xv645" event={"ID":"97ec2c8c-e32c-4d18-ad78-0ef1f19557af","Type":"ContainerDied","Data":"2079e16eb1d12b11a9c3315a75882203ebe24ec85035afd7621338cd504578d4"} Feb 16 20:58:30.529953 master-0 kubenswrapper[7926]: I0216 20:58:30.529939 7926 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xv645" Feb 16 20:58:30.551253 master-0 kubenswrapper[7926]: I0216 20:58:30.551193 7926 scope.go:117] "RemoveContainer" containerID="46a258c72aa6c608e1111ee27c8210db148e57a72d2483481ebdbb919bab9811" Feb 16 20:58:30.583398 master-0 kubenswrapper[7926]: I0216 20:58:30.583354 7926 scope.go:117] "RemoveContainer" containerID="806a95b325fa1585a7662fae05cbeab3f68c901ce5359848ec6a1a0e2738986b" Feb 16 20:58:30.585522 master-0 kubenswrapper[7926]: E0216 20:58:30.585480 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"806a95b325fa1585a7662fae05cbeab3f68c901ce5359848ec6a1a0e2738986b\": container with ID starting with 806a95b325fa1585a7662fae05cbeab3f68c901ce5359848ec6a1a0e2738986b not found: ID does not exist" containerID="806a95b325fa1585a7662fae05cbeab3f68c901ce5359848ec6a1a0e2738986b" Feb 16 20:58:30.585653 master-0 kubenswrapper[7926]: I0216 20:58:30.585527 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"806a95b325fa1585a7662fae05cbeab3f68c901ce5359848ec6a1a0e2738986b"} err="failed to get container status \"806a95b325fa1585a7662fae05cbeab3f68c901ce5359848ec6a1a0e2738986b\": rpc error: code = NotFound desc = could not find container \"806a95b325fa1585a7662fae05cbeab3f68c901ce5359848ec6a1a0e2738986b\": container with ID starting with 806a95b325fa1585a7662fae05cbeab3f68c901ce5359848ec6a1a0e2738986b not found: ID does not exist" Feb 16 20:58:30.585653 master-0 kubenswrapper[7926]: I0216 20:58:30.585568 7926 scope.go:117] "RemoveContainer" containerID="46a258c72aa6c608e1111ee27c8210db148e57a72d2483481ebdbb919bab9811" Feb 16 20:58:30.585952 master-0 kubenswrapper[7926]: E0216 20:58:30.585920 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"46a258c72aa6c608e1111ee27c8210db148e57a72d2483481ebdbb919bab9811\": container with ID starting with 46a258c72aa6c608e1111ee27c8210db148e57a72d2483481ebdbb919bab9811 not found: ID does not exist" containerID="46a258c72aa6c608e1111ee27c8210db148e57a72d2483481ebdbb919bab9811" Feb 16 20:58:30.585952 master-0 kubenswrapper[7926]: I0216 20:58:30.585950 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a258c72aa6c608e1111ee27c8210db148e57a72d2483481ebdbb919bab9811"} err="failed to get container status \"46a258c72aa6c608e1111ee27c8210db148e57a72d2483481ebdbb919bab9811\": rpc error: code = NotFound desc = could not find container \"46a258c72aa6c608e1111ee27c8210db148e57a72d2483481ebdbb919bab9811\": container with ID starting with 46a258c72aa6c608e1111ee27c8210db148e57a72d2483481ebdbb919bab9811 not found: ID does not exist" Feb 16 20:58:30.586097 master-0 kubenswrapper[7926]: I0216 20:58:30.585967 7926 scope.go:117] "RemoveContainer" containerID="928561e2066beacfece2f741a5e8cac2ed26ae90d2d530ef0652836b9a124791" Feb 16 20:58:30.619342 master-0 kubenswrapper[7926]: E0216 20:58:30.619285 7926 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4a6dcba_776f_48ba_b824_90ed5ae3abee.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4a6dcba_776f_48ba_b824_90ed5ae3abee.slice/crio-098908342bebd0f7f9ce0402f1dd0bea51ed6e67a6ee85624a16f82f857d60f9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97ec2c8c_e32c_4d18_ad78_0ef1f19557af.slice/crio-2079e16eb1d12b11a9c3315a75882203ebe24ec85035afd7621338cd504578d4\": RecentStats: unable to find data in memory cache]" Feb 16 20:58:30.629899 master-0 kubenswrapper[7926]: I0216 20:58:30.629847 7926 scope.go:117] "RemoveContainer" 
containerID="1291ada8598d43ef0cbbde81989f1e8de61f7c3c643ca6fbf77da577e15fdf5b" Feb 16 20:58:31.530733 master-0 kubenswrapper[7926]: I0216 20:58:31.530561 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 20:58:31.530733 master-0 kubenswrapper[7926]: I0216 20:58:31.530691 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 20:58:31.539859 master-0 kubenswrapper[7926]: I0216 20:58:31.539752 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl" event={"ID":"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd","Type":"ContainerStarted","Data":"fe99a5fcebfb8da3c4941f2384bbff5b1b23f59e244ba0b79737c6bbbe01661d"} Feb 16 20:58:32.543406 master-0 kubenswrapper[7926]: I0216 20:58:32.543264 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 20:58:32.543406 master-0 kubenswrapper[7926]: I0216 20:58:32.543353 7926 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 20:58:32.550162 master-0 kubenswrapper[7926]: I0216 20:58:32.550119 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl" event={"ID":"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd","Type":"ContainerStarted","Data":"bf1e17f36d332fea126084189dbe19783227f37d7a7652d030ac4a9bc53d3a65"} Feb 16 20:58:32.550282 master-0 kubenswrapper[7926]: I0216 20:58:32.550168 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl" event={"ID":"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd","Type":"ContainerStarted","Data":"22a66dda733f6d85ca94e09c8b616582df1ddc12912925a970247eeedc52dd16"} Feb 16 20:58:32.551547 master-0 kubenswrapper[7926]: I0216 20:58:32.551498 7926 generic.go:334] "Generic (PLEG): container finished" podID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerID="a3ef8c2f17e0843dbc7265db7f67c564c2c97d41bf1c253c3466338241e2b204" exitCode=0 Feb 16 20:58:35.053519 master-0 kubenswrapper[7926]: I0216 20:58:35.053472 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_400a178a4d5e9a88ba5bbbd1da2ad15e/etcdctl/0.log" Feb 16 20:58:35.054030 master-0 kubenswrapper[7926]: I0216 20:58:35.053621 7926 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 20:58:35.213158 master-0 kubenswrapper[7926]: I0216 20:58:35.213043 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"400a178a4d5e9a88ba5bbbd1da2ad15e\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " Feb 16 20:58:35.213158 master-0 kubenswrapper[7926]: I0216 20:58:35.213108 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"400a178a4d5e9a88ba5bbbd1da2ad15e\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " Feb 16 20:58:35.213361 master-0 kubenswrapper[7926]: I0216 20:58:35.213158 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir" (OuterVolumeSpecName: "data-dir") pod "400a178a4d5e9a88ba5bbbd1da2ad15e" (UID: "400a178a4d5e9a88ba5bbbd1da2ad15e"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:58:35.213361 master-0 kubenswrapper[7926]: I0216 20:58:35.213279 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs" (OuterVolumeSpecName: "certs") pod "400a178a4d5e9a88ba5bbbd1da2ad15e" (UID: "400a178a4d5e9a88ba5bbbd1da2ad15e"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 20:58:35.213431 master-0 kubenswrapper[7926]: I0216 20:58:35.213359 7926 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 20:58:35.213431 master-0 kubenswrapper[7926]: I0216 20:58:35.213376 7926 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 20:58:35.567220 master-0 kubenswrapper[7926]: I0216 20:58:35.567151 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_400a178a4d5e9a88ba5bbbd1da2ad15e/etcdctl/0.log" Feb 16 20:58:35.567220 master-0 kubenswrapper[7926]: I0216 20:58:35.567241 7926 generic.go:334] "Generic (PLEG): container finished" podID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerID="fea56a548bb1b40870646931b3ee24bfa53d974b5b14be8ecc57115395d0831e" exitCode=137 Feb 16 20:58:35.567751 master-0 kubenswrapper[7926]: I0216 20:58:35.567290 7926 scope.go:117] "RemoveContainer" containerID="a3ef8c2f17e0843dbc7265db7f67c564c2c97d41bf1c253c3466338241e2b204" Feb 16 20:58:35.567751 master-0 kubenswrapper[7926]: I0216 20:58:35.567353 7926 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 20:58:35.580590 master-0 kubenswrapper[7926]: I0216 20:58:35.580526 7926 scope.go:117] "RemoveContainer" containerID="fea56a548bb1b40870646931b3ee24bfa53d974b5b14be8ecc57115395d0831e" Feb 16 20:58:35.595835 master-0 kubenswrapper[7926]: I0216 20:58:35.595793 7926 scope.go:117] "RemoveContainer" containerID="a3ef8c2f17e0843dbc7265db7f67c564c2c97d41bf1c253c3466338241e2b204" Feb 16 20:58:35.596301 master-0 kubenswrapper[7926]: E0216 20:58:35.596238 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3ef8c2f17e0843dbc7265db7f67c564c2c97d41bf1c253c3466338241e2b204\": container with ID starting with a3ef8c2f17e0843dbc7265db7f67c564c2c97d41bf1c253c3466338241e2b204 not found: ID does not exist" containerID="a3ef8c2f17e0843dbc7265db7f67c564c2c97d41bf1c253c3466338241e2b204" Feb 16 20:58:35.596301 master-0 kubenswrapper[7926]: I0216 20:58:35.596281 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3ef8c2f17e0843dbc7265db7f67c564c2c97d41bf1c253c3466338241e2b204"} err="failed to get container status \"a3ef8c2f17e0843dbc7265db7f67c564c2c97d41bf1c253c3466338241e2b204\": rpc error: code = NotFound desc = could not find container \"a3ef8c2f17e0843dbc7265db7f67c564c2c97d41bf1c253c3466338241e2b204\": container with ID starting with a3ef8c2f17e0843dbc7265db7f67c564c2c97d41bf1c253c3466338241e2b204 not found: ID does not exist" Feb 16 20:58:35.596301 master-0 kubenswrapper[7926]: I0216 20:58:35.596304 7926 scope.go:117] "RemoveContainer" containerID="fea56a548bb1b40870646931b3ee24bfa53d974b5b14be8ecc57115395d0831e" Feb 16 20:58:35.597307 master-0 kubenswrapper[7926]: E0216 20:58:35.597261 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fea56a548bb1b40870646931b3ee24bfa53d974b5b14be8ecc57115395d0831e\": 
container with ID starting with fea56a548bb1b40870646931b3ee24bfa53d974b5b14be8ecc57115395d0831e not found: ID does not exist" containerID="fea56a548bb1b40870646931b3ee24bfa53d974b5b14be8ecc57115395d0831e" Feb 16 20:58:35.597495 master-0 kubenswrapper[7926]: I0216 20:58:35.597298 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fea56a548bb1b40870646931b3ee24bfa53d974b5b14be8ecc57115395d0831e"} err="failed to get container status \"fea56a548bb1b40870646931b3ee24bfa53d974b5b14be8ecc57115395d0831e\": rpc error: code = NotFound desc = could not find container \"fea56a548bb1b40870646931b3ee24bfa53d974b5b14be8ecc57115395d0831e\": container with ID starting with fea56a548bb1b40870646931b3ee24bfa53d974b5b14be8ecc57115395d0831e not found: ID does not exist" Feb 16 20:58:35.980132 master-0 kubenswrapper[7926]: I0216 20:58:35.979993 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:58:36.098550 master-0 kubenswrapper[7926]: I0216 20:58:36.098484 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 20:58:36.099168 master-0 kubenswrapper[7926]: I0216 20:58:36.098552 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 20:58:36.711146 master-0 kubenswrapper[7926]: I0216 20:58:36.711079 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 20:58:36.744914 master-0 kubenswrapper[7926]: I0216 20:58:36.744867 7926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" path="/var/lib/kubelet/pods/400a178a4d5e9a88ba5bbbd1da2ad15e/volumes" Feb 16 20:58:36.745304 master-0 kubenswrapper[7926]: I0216 20:58:36.745281 7926 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 16 20:58:36.859429 master-0 kubenswrapper[7926]: I0216 20:58:36.859376 7926 patch_prober.go:28] interesting pod/authentication-operator-755d954778-8gnq5 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Feb 16 20:58:36.859429 master-0 kubenswrapper[7926]: I0216 20:58:36.859427 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Feb 16 20:58:37.581422 master-0 kubenswrapper[7926]: I0216 20:58:37.581374 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_b09d3c16-18e3-45b3-9d39-949d2464b300/installer/0.log" Feb 16 20:58:37.581422 master-0 kubenswrapper[7926]: I0216 20:58:37.581421 7926 generic.go:334] "Generic (PLEG): container finished" podID="b09d3c16-18e3-45b3-9d39-949d2464b300" containerID="ab3f1bdaa87534b4aa1ea4a058dea3457c695cfe1da23ed41ae2ee089315bd08" exitCode=1 Feb 16 20:58:38.094987 master-0 kubenswrapper[7926]: I0216 20:58:38.094907 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf 
container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 20:58:38.095239 master-0 kubenswrapper[7926]: I0216 20:58:38.094998 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 20:58:38.163824 master-0 kubenswrapper[7926]: I0216 20:58:38.163741 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 20:58:38.163824 master-0 kubenswrapper[7926]: I0216 20:58:38.163813 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 20:58:38.976380 master-0 kubenswrapper[7926]: E0216 20:58:38.976179 7926 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.1894d5ab3d886886 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:58:04.911921286 +0000 UTC m=+56.546821586,LastTimestamp:2026-02-16 20:58:04.911921286 +0000 UTC m=+56.546821586,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:58:39.098866 master-0 kubenswrapper[7926]: I0216 20:58:39.098786 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 20:58:39.099063 master-0 kubenswrapper[7926]: I0216 20:58:39.098882 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 20:58:39.711173 master-0 kubenswrapper[7926]: I0216 20:58:39.711101 7926 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 20:58:39.845402 master-0 kubenswrapper[7926]: E0216 20:58:39.845311 7926 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 20:58:40.418865 master-0 kubenswrapper[7926]: E0216 20:58:40.418773 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 20:58:41.164080 master-0 kubenswrapper[7926]: I0216 20:58:41.164000 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 20:58:41.164333 master-0 kubenswrapper[7926]: I0216 20:58:41.164080 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 20:58:42.100119 master-0 kubenswrapper[7926]: I0216 20:58:42.100058 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 20:58:42.100628 master-0 kubenswrapper[7926]: I0216 20:58:42.100166 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" 
containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 20:58:42.135822 master-0 kubenswrapper[7926]: E0216 20:58:42.135753 7926 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 16 20:58:42.615914 master-0 kubenswrapper[7926]: I0216 20:58:42.615244 7926 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="2c898903534a5f988f1749dcd6c1e5b9207da73639c9cd5e05f502774c7b05c3" exitCode=0 Feb 16 20:58:43.622666 master-0 kubenswrapper[7926]: I0216 20:58:43.622601 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-n4hfs_1b61063e-775e-421d-bf73-a6ef134293a0/network-operator/0.log" Feb 16 20:58:43.623174 master-0 kubenswrapper[7926]: I0216 20:58:43.622714 7926 generic.go:334] "Generic (PLEG): container finished" podID="1b61063e-775e-421d-bf73-a6ef134293a0" containerID="22ac853b44d567411363f432db892ab502ff1733ca2ac03896be62f2c9a7c4fc" exitCode=255 Feb 16 20:58:44.164804 master-0 kubenswrapper[7926]: I0216 20:58:44.164711 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 20:58:44.165066 master-0 kubenswrapper[7926]: I0216 20:58:44.164809 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 
10.128.0.19:8443: connect: connection refused" Feb 16 20:58:45.098837 master-0 kubenswrapper[7926]: I0216 20:58:45.098776 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 20:58:45.099301 master-0 kubenswrapper[7926]: I0216 20:58:45.098858 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 20:58:46.643396 master-0 kubenswrapper[7926]: I0216 20:58:46.643293 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965/installer/0.log" Feb 16 20:58:46.643396 master-0 kubenswrapper[7926]: I0216 20:58:46.643355 7926 generic.go:334] "Generic (PLEG): container finished" podID="9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965" containerID="5f4f1f7bf4711de84107b1c6040a91b2b71847aa5f151a70149a5a43fdbb16fc" exitCode=1 Feb 16 20:58:46.859383 master-0 kubenswrapper[7926]: I0216 20:58:46.859321 7926 patch_prober.go:28] interesting pod/authentication-operator-755d954778-8gnq5 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Feb 16 20:58:46.859591 master-0 kubenswrapper[7926]: I0216 20:58:46.859389 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" 
podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Feb 16 20:58:47.176108 master-0 kubenswrapper[7926]: I0216 20:58:47.176032 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dhh2p" podUID="9566b108-44e1-4d9e-8984-4c396dc4408c" containerName="registry-server" probeResult="failure" output=< Feb 16 20:58:47.176108 master-0 kubenswrapper[7926]: timeout: failed to connect service ":50051" within 1s Feb 16 20:58:47.176108 master-0 kubenswrapper[7926]: > Feb 16 20:58:48.094282 master-0 kubenswrapper[7926]: I0216 20:58:48.094183 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 20:58:48.094282 master-0 kubenswrapper[7926]: I0216 20:58:48.094273 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 20:58:48.099394 master-0 kubenswrapper[7926]: I0216 20:58:48.099315 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 20:58:48.099625 
master-0 kubenswrapper[7926]: I0216 20:58:48.099410 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 20:58:49.711680 master-0 kubenswrapper[7926]: I0216 20:58:49.711550 7926 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 20:58:49.847081 master-0 kubenswrapper[7926]: E0216 20:58:49.846721 7926 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 20:58:50.419913 master-0 kubenswrapper[7926]: E0216 20:58:50.419860 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 20:58:51.099432 master-0 kubenswrapper[7926]: I0216 20:58:51.099323 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 20:58:51.100072 master-0 kubenswrapper[7926]: I0216 20:58:51.099434 
7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 20:58:51.673848 master-0 kubenswrapper[7926]: I0216 20:58:51.673750 7926 generic.go:334] "Generic (PLEG): container finished" podID="7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e" containerID="8f381e0ba80bb61f122cb6f8dc6fbf0f4de7cc56a19bdf606299e77668a6c669" exitCode=0 Feb 16 20:58:54.099468 master-0 kubenswrapper[7926]: I0216 20:58:54.099380 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 20:58:54.099468 master-0 kubenswrapper[7926]: I0216 20:58:54.099442 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 20:58:55.696352 master-0 kubenswrapper[7926]: I0216 20:58:55.696255 7926 generic.go:334] "Generic (PLEG): container finished" podID="2ab0a907-7abe-4808-ba21-bdda1506eae2" containerID="0e76905998b63e1ca06bb636f257a337f36ba01b7d03a406ab7d6fa3bdb3b545" exitCode=0 Feb 16 20:58:57.098553 master-0 kubenswrapper[7926]: I0216 20:58:57.098489 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get 
\"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 20:58:57.099180 master-0 kubenswrapper[7926]: I0216 20:58:57.098556 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 20:58:58.094497 master-0 kubenswrapper[7926]: I0216 20:58:58.094413 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 20:58:58.094766 master-0 kubenswrapper[7926]: I0216 20:58:58.094528 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 20:58:59.713927 master-0 kubenswrapper[7926]: I0216 20:58:59.713833 7926 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 20:58:59.721512 master-0 kubenswrapper[7926]: I0216 20:58:59.721439 7926 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-749ccd9c56-wzsnf_4db59450-da78-4879-ada8-ca3fc49fb7a7/route-controller-manager/0.log" Feb 16 20:58:59.721512 master-0 kubenswrapper[7926]: I0216 20:58:59.721486 7926 generic.go:334] "Generic (PLEG): container finished" podID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerID="c01a97aeea491e06b4f6bd168a545331d557799591733b3afb1c1070b9661f2a" exitCode=255 Feb 16 20:58:59.723425 master-0 kubenswrapper[7926]: I0216 20:58:59.723393 7926 generic.go:334] "Generic (PLEG): container finished" podID="27c20f63-9bfb-4703-94d5-0c65475e08d1" containerID="58d545a4271a615d484834ce5f2e4aae18f89163dd820abd13282ebc492d6372" exitCode=0 Feb 16 20:58:59.847375 master-0 kubenswrapper[7926]: E0216 20:58:59.847231 7926 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 20:58:59.847375 master-0 kubenswrapper[7926]: I0216 20:58:59.847306 7926 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 16 20:59:00.099121 master-0 kubenswrapper[7926]: I0216 20:59:00.099005 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 20:59:00.099121 master-0 kubenswrapper[7926]: I0216 20:59:00.099077 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 20:59:00.421460 master-0 kubenswrapper[7926]: E0216 20:59:00.421335 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 20:59:00.421460 master-0 kubenswrapper[7926]: E0216 20:59:00.421385 7926 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 20:59:03.099293 master-0 kubenswrapper[7926]: I0216 20:59:03.099208 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 20:59:03.100006 master-0 kubenswrapper[7926]: I0216 20:59:03.099296 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 20:59:05.753077 master-0 kubenswrapper[7926]: I0216 20:59:05.752981 7926 generic.go:334] "Generic (PLEG): container finished" podID="c7333319-3fe6-4b3f-b600-6b6df49fcaff" containerID="a773bd017f0bba4a3a74bfe52982d094692dcc11d0231ea1c51b561373a69c1c" exitCode=0 Feb 16 20:59:05.755187 master-0 kubenswrapper[7926]: I0216 20:59:05.755114 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-tpj6f_88f19cea-60ed-4977-a906-75deec51fc3d/approver/0.log" Feb 16 20:59:05.755662 master-0 
kubenswrapper[7926]: I0216 20:59:05.755583 7926 generic.go:334] "Generic (PLEG): container finished" podID="88f19cea-60ed-4977-a906-75deec51fc3d" containerID="d0734d0596c43a54e8c5763783b157c38da058f6ee7d80add1702898fd0efe5d" exitCode=1 Feb 16 20:59:05.757126 master-0 kubenswrapper[7926]: I0216 20:59:05.757092 7926 generic.go:334] "Generic (PLEG): container finished" podID="0b02b740-5698-4e9a-90fe-2873bd0b0958" containerID="6c789ad424d6da26da31c06317afc3ff04d13db41b3d9ada1b99dd43bd4685c9" exitCode=0 Feb 16 20:59:06.099156 master-0 kubenswrapper[7926]: I0216 20:59:06.098989 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 20:59:06.099397 master-0 kubenswrapper[7926]: I0216 20:59:06.099175 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 20:59:07.094229 master-0 kubenswrapper[7926]: I0216 20:59:07.094156 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" start-of-body= Feb 16 20:59:07.095145 master-0 kubenswrapper[7926]: I0216 20:59:07.094177 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get 
\"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" start-of-body= Feb 16 20:59:07.095145 master-0 kubenswrapper[7926]: I0216 20:59:07.094871 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" Feb 16 20:59:07.095145 master-0 kubenswrapper[7926]: I0216 20:59:07.094933 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" Feb 16 20:59:08.755791 master-0 kubenswrapper[7926]: I0216 20:59:08.755691 7926 status_manager.go:851] "Failed to get status for pod" podUID="8b648d9e-a892-4951-b0e2-fed6b16273d4" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods cluster-baremetal-operator-7bc947fc7d-xwptz)" Feb 16 20:59:09.098645 master-0 kubenswrapper[7926]: I0216 20:59:09.098571 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 20:59:09.098913 master-0 kubenswrapper[7926]: I0216 20:59:09.098702 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" 
podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 20:59:09.848143 master-0 kubenswrapper[7926]: E0216 20:59:09.847982 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Feb 16 20:59:10.574642 master-0 kubenswrapper[7926]: E0216 20:59:10.574586 7926 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 20:59:10.574642 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-j5kwc_openshift-marketplace_ce229d27-837d-4a98-80fc-d56877ae39b8_0(b21794e8578650e5840dfe901ab7f00c118460ba0369d53e66ccd3d5c076e951): error adding pod openshift-marketplace_community-operators-j5kwc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b21794e8578650e5840dfe901ab7f00c118460ba0369d53e66ccd3d5c076e951" Netns:"/var/run/netns/19342e54-e358-4dd5-8f26-04f4fba71b37" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-j5kwc;K8S_POD_INFRA_CONTAINER_ID=b21794e8578650e5840dfe901ab7f00c118460ba0369d53e66ccd3d5c076e951;K8S_POD_UID=ce229d27-837d-4a98-80fc-d56877ae39b8" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-j5kwc] networking: Multus: [openshift-marketplace/community-operators-j5kwc/ce229d27-837d-4a98-80fc-d56877ae39b8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-j5kwc in out of cluster comm: SetNetworkStatus: failed to update 
the pod community-operators-j5kwc in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j5kwc?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 20:59:10.574642 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 20:59:10.574642 master-0 kubenswrapper[7926]: > Feb 16 20:59:10.574909 master-0 kubenswrapper[7926]: E0216 20:59:10.574689 7926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 20:59:10.574909 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-j5kwc_openshift-marketplace_ce229d27-837d-4a98-80fc-d56877ae39b8_0(b21794e8578650e5840dfe901ab7f00c118460ba0369d53e66ccd3d5c076e951): error adding pod openshift-marketplace_community-operators-j5kwc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b21794e8578650e5840dfe901ab7f00c118460ba0369d53e66ccd3d5c076e951" Netns:"/var/run/netns/19342e54-e358-4dd5-8f26-04f4fba71b37" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-j5kwc;K8S_POD_INFRA_CONTAINER_ID=b21794e8578650e5840dfe901ab7f00c118460ba0369d53e66ccd3d5c076e951;K8S_POD_UID=ce229d27-837d-4a98-80fc-d56877ae39b8" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-j5kwc] networking: Multus: 
[openshift-marketplace/community-operators-j5kwc/ce229d27-837d-4a98-80fc-d56877ae39b8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-j5kwc in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-j5kwc in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j5kwc?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 20:59:10.574909 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 20:59:10.574909 master-0 kubenswrapper[7926]: > pod="openshift-marketplace/community-operators-j5kwc" Feb 16 20:59:10.574909 master-0 kubenswrapper[7926]: E0216 20:59:10.574736 7926 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 20:59:10.574909 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-j5kwc_openshift-marketplace_ce229d27-837d-4a98-80fc-d56877ae39b8_0(b21794e8578650e5840dfe901ab7f00c118460ba0369d53e66ccd3d5c076e951): error adding pod openshift-marketplace_community-operators-j5kwc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b21794e8578650e5840dfe901ab7f00c118460ba0369d53e66ccd3d5c076e951" Netns:"/var/run/netns/19342e54-e358-4dd5-8f26-04f4fba71b37" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-j5kwc;K8S_POD_INFRA_CONTAINER_ID=b21794e8578650e5840dfe901ab7f00c118460ba0369d53e66ccd3d5c076e951;K8S_POD_UID=ce229d27-837d-4a98-80fc-d56877ae39b8" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-j5kwc] networking: Multus: [openshift-marketplace/community-operators-j5kwc/ce229d27-837d-4a98-80fc-d56877ae39b8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-j5kwc in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-j5kwc in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j5kwc?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 20:59:10.574909 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 20:59:10.574909 master-0 kubenswrapper[7926]: > pod="openshift-marketplace/community-operators-j5kwc" Feb 16 20:59:10.574909 master-0 kubenswrapper[7926]: E0216 20:59:10.574800 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"community-operators-j5kwc_openshift-marketplace(ce229d27-837d-4a98-80fc-d56877ae39b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"community-operators-j5kwc_openshift-marketplace(ce229d27-837d-4a98-80fc-d56877ae39b8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_community-operators-j5kwc_openshift-marketplace_ce229d27-837d-4a98-80fc-d56877ae39b8_0(b21794e8578650e5840dfe901ab7f00c118460ba0369d53e66ccd3d5c076e951): error adding pod openshift-marketplace_community-operators-j5kwc to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"b21794e8578650e5840dfe901ab7f00c118460ba0369d53e66ccd3d5c076e951\\\" Netns:\\\"/var/run/netns/19342e54-e358-4dd5-8f26-04f4fba71b37\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-j5kwc;K8S_POD_INFRA_CONTAINER_ID=b21794e8578650e5840dfe901ab7f00c118460ba0369d53e66ccd3d5c076e951;K8S_POD_UID=ce229d27-837d-4a98-80fc-d56877ae39b8\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/community-operators-j5kwc] networking: Multus: [openshift-marketplace/community-operators-j5kwc/ce229d27-837d-4a98-80fc-d56877ae39b8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-j5kwc in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-j5kwc in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j5kwc?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" 
pod="openshift-marketplace/community-operators-j5kwc" podUID="ce229d27-837d-4a98-80fc-d56877ae39b8" Feb 16 20:59:10.747913 master-0 kubenswrapper[7926]: E0216 20:59:10.747857 7926 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 20:59:10.748151 master-0 kubenswrapper[7926]: E0216 20:59:10.748045 7926 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.01s" Feb 16 20:59:10.748151 master-0 kubenswrapper[7926]: I0216 20:59:10.748069 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b8vtc" Feb 16 20:59:10.748151 master-0 kubenswrapper[7926]: I0216 20:59:10.748118 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 20:59:10.748926 master-0 kubenswrapper[7926]: I0216 20:59:10.748885 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 20:59:10.749001 master-0 kubenswrapper[7926]: I0216 20:59:10.748934 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 20:59:10.749310 master-0 kubenswrapper[7926]: I0216 20:59:10.749261 7926 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" 
containerStatusID={"Type":"cri-o","ID":"f3d4628d5b5ba7e58abaf9e10ff02fc0ec3dcdc6373a3be533d5aa05366f0112"} pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Feb 16 20:59:10.749364 master-0 kubenswrapper[7926]: I0216 20:59:10.749333 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" containerID="cri-o://f3d4628d5b5ba7e58abaf9e10ff02fc0ec3dcdc6373a3be533d5aa05366f0112" gracePeriod=30 Feb 16 20:59:10.755088 master-0 kubenswrapper[7926]: I0216 20:59:10.755036 7926 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 16 20:59:10.799410 master-0 kubenswrapper[7926]: I0216 20:59:10.799361 7926 generic.go:334] "Generic (PLEG): container finished" podID="59237aa6-6250-4619-8ee5-abae59f04b57" containerID="f3d4628d5b5ba7e58abaf9e10ff02fc0ec3dcdc6373a3be533d5aa05366f0112" exitCode=0 Feb 16 20:59:10.802115 master-0 kubenswrapper[7926]: I0216 20:59:10.802077 7926 generic.go:334] "Generic (PLEG): container finished" podID="6b6be6de-6fcc-4f57-b163-fe8f970a01a4" containerID="75d7b146641140c312956826b413c80f7862cac93292ebbdd2b6b13f8e1b06a3" exitCode=0 Feb 16 20:59:10.803349 master-0 kubenswrapper[7926]: I0216 20:59:10.803317 7926 generic.go:334] "Generic (PLEG): container finished" podID="e7adbe32-b8b9-438e-a2e3-f93146a97424" containerID="34f0b2189e90cc7801c4026c4ab900cc1fc9f5ac2f006e83f5fec81671df191f" exitCode=0 Feb 16 20:59:10.803418 master-0 kubenswrapper[7926]: I0216 20:59:10.803400 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j5kwc" Feb 16 20:59:10.803844 master-0 kubenswrapper[7926]: I0216 20:59:10.803821 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j5kwc" Feb 16 20:59:12.099385 master-0 kubenswrapper[7926]: I0216 20:59:12.099295 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 20:59:12.099385 master-0 kubenswrapper[7926]: I0216 20:59:12.099374 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 20:59:12.978168 master-0 kubenswrapper[7926]: E0216 20:59:12.978023 7926 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{insights-operator-cb4f7b4cf-h8f7q.1894d5abcf6b7062 openshift-insights 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-insights,Name:insights-operator-cb4f7b4cf-h8f7q,UID:e9615af2-cad5-4705-9c2f-6f3c97026100,APIVersion:v1,ResourceVersion:8131,FieldPath:spec.containers{insights-operator},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6ab8803bac3ebada13e90d9dd6208301b981488277cdeb847c25ff8002f5a30\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:58:07.35949629 +0000 UTC 
m=+58.994396590,LastTimestamp:2026-02-16 20:58:07.35949629 +0000 UTC m=+58.994396590,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 20:59:15.099488 master-0 kubenswrapper[7926]: I0216 20:59:15.099406 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 20:59:15.100285 master-0 kubenswrapper[7926]: I0216 20:59:15.099496 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 20:59:17.094865 master-0 kubenswrapper[7926]: I0216 20:59:17.094630 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" start-of-body= Feb 16 20:59:17.094865 master-0 kubenswrapper[7926]: I0216 20:59:17.094733 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" Feb 16 20:59:17.094865 master-0 kubenswrapper[7926]: I0216 20:59:17.094733 7926 patch_prober.go:28] interesting 
pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" start-of-body= Feb 16 20:59:17.094865 master-0 kubenswrapper[7926]: I0216 20:59:17.094804 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" Feb 16 20:59:17.353497 master-0 kubenswrapper[7926]: E0216 20:59:17.353441 7926 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 20:59:17.353497 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-78d4b6b677-npmx4_openshift-operator-lifecycle-manager_319dc882-e1f5-40f9-99f4-2bae028337e5_0(a2bfde703fc059984b6dd18b3d7bbcdde4a356b76599a80d79a4e894e5ea2432): error adding pod openshift-operator-lifecycle-manager_packageserver-78d4b6b677-npmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a2bfde703fc059984b6dd18b3d7bbcdde4a356b76599a80d79a4e894e5ea2432" Netns:"/var/run/netns/15f4adda-761d-4dea-a261-539075462cc6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-78d4b6b677-npmx4;K8S_POD_INFRA_CONTAINER_ID=a2bfde703fc059984b6dd18b3d7bbcdde4a356b76599a80d79a4e894e5ea2432;K8S_POD_UID=319dc882-e1f5-40f9-99f4-2bae028337e5" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4] networking: Multus: 
[openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4/319dc882-e1f5-40f9-99f4-2bae028337e5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-78d4b6b677-npmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 20:59:17.353497 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 20:59:17.353497 master-0 kubenswrapper[7926]: > Feb 16 20:59:17.353743 master-0 kubenswrapper[7926]: E0216 20:59:17.353517 7926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 20:59:17.353743 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-78d4b6b677-npmx4_openshift-operator-lifecycle-manager_319dc882-e1f5-40f9-99f4-2bae028337e5_0(a2bfde703fc059984b6dd18b3d7bbcdde4a356b76599a80d79a4e894e5ea2432): error adding pod openshift-operator-lifecycle-manager_packageserver-78d4b6b677-npmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a2bfde703fc059984b6dd18b3d7bbcdde4a356b76599a80d79a4e894e5ea2432" Netns:"/var/run/netns/15f4adda-761d-4dea-a261-539075462cc6" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-78d4b6b677-npmx4;K8S_POD_INFRA_CONTAINER_ID=a2bfde703fc059984b6dd18b3d7bbcdde4a356b76599a80d79a4e894e5ea2432;K8S_POD_UID=319dc882-e1f5-40f9-99f4-2bae028337e5" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4/319dc882-e1f5-40f9-99f4-2bae028337e5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-78d4b6b677-npmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 20:59:17.353743 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 20:59:17.353743 master-0 kubenswrapper[7926]: > pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" Feb 16 20:59:17.353743 master-0 kubenswrapper[7926]: E0216 20:59:17.353538 7926 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 20:59:17.353743 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_packageserver-78d4b6b677-npmx4_openshift-operator-lifecycle-manager_319dc882-e1f5-40f9-99f4-2bae028337e5_0(a2bfde703fc059984b6dd18b3d7bbcdde4a356b76599a80d79a4e894e5ea2432): error adding pod openshift-operator-lifecycle-manager_packageserver-78d4b6b677-npmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a2bfde703fc059984b6dd18b3d7bbcdde4a356b76599a80d79a4e894e5ea2432" Netns:"/var/run/netns/15f4adda-761d-4dea-a261-539075462cc6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-78d4b6b677-npmx4;K8S_POD_INFRA_CONTAINER_ID=a2bfde703fc059984b6dd18b3d7bbcdde4a356b76599a80d79a4e894e5ea2432;K8S_POD_UID=319dc882-e1f5-40f9-99f4-2bae028337e5" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4/319dc882-e1f5-40f9-99f4-2bae028337e5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-78d4b6b677-npmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 20:59:17.353743 master-0 kubenswrapper[7926]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 20:59:17.353743 master-0 kubenswrapper[7926]: > pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" Feb 16 20:59:17.353743 master-0 kubenswrapper[7926]: E0216 20:59:17.353605 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"packageserver-78d4b6b677-npmx4_openshift-operator-lifecycle-manager(319dc882-e1f5-40f9-99f4-2bae028337e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"packageserver-78d4b6b677-npmx4_openshift-operator-lifecycle-manager(319dc882-e1f5-40f9-99f4-2bae028337e5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-78d4b6b677-npmx4_openshift-operator-lifecycle-manager_319dc882-e1f5-40f9-99f4-2bae028337e5_0(a2bfde703fc059984b6dd18b3d7bbcdde4a356b76599a80d79a4e894e5ea2432): error adding pod openshift-operator-lifecycle-manager_packageserver-78d4b6b677-npmx4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"a2bfde703fc059984b6dd18b3d7bbcdde4a356b76599a80d79a4e894e5ea2432\\\" Netns:\\\"/var/run/netns/15f4adda-761d-4dea-a261-539075462cc6\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-78d4b6b677-npmx4;K8S_POD_INFRA_CONTAINER_ID=a2bfde703fc059984b6dd18b3d7bbcdde4a356b76599a80d79a4e894e5ea2432;K8S_POD_UID=319dc882-e1f5-40f9-99f4-2bae028337e5\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4/319dc882-e1f5-40f9-99f4-2bae028337e5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-78d4b6b677-npmx4?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" Feb 16 20:59:17.846399 master-0 kubenswrapper[7926]: I0216 20:59:17.846325 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" Feb 16 20:59:17.847391 master-0 kubenswrapper[7926]: I0216 20:59:17.847300 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4"
Feb 16 20:59:18.099157 master-0 kubenswrapper[7926]: I0216 20:59:18.098976 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body=
Feb 16 20:59:18.099157 master-0 kubenswrapper[7926]: I0216 20:59:18.099059 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Feb 16 20:59:20.050358 master-0 kubenswrapper[7926]: E0216 20:59:20.050238 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
Feb 16 20:59:20.744467 master-0 kubenswrapper[7926]: E0216 20:59:20.744213 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:59:10Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:59:10Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:59:10Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T20:59:10Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:3e2f869b1c4f98a628b2e54c1516a0d0c09c760c91e0e1a940cb76149217661b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:97930d07a108f20287bd5ceb046a5ab125604b2e3564077db9f7d7c077cc5852\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1701129928},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\"],\\\"sizeBytes\\\":1631983282},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:0b4dc203ac00318362470f07842ed97dc1c724d32fa07c1613f15fcf4bf54ec8\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cc6c845176bbdca205e7c9628ea993ed70da3b2516bac35d68d9f52059fad674\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1234421961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181\\\"],\\\"sizeBytes\\\":1232696860},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:06dcb25b4ae74ef159663cc2318f84e4665c7889b38ed62940259e5edd2b576f\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:a81101fb2bf3c75acf3e62bf09b19b67bccbde0faf09bd379a491f5eadb8afc1\\\",\\\"registry.redhat.io/redhat/community-opera
tor-index:v4.18\\\"],\\\"sizeBytes\\\":1213098166},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:28df36269fc553eb1adba5566d6dfc258a1a74063c4cfe8b5bdd3f202591cf56\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:7fa59a55753e6c646b3b56a1a7080a5d70767fb964f1857c411fdf4e05ad4c71\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1201887930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42\\\"],\\\"sizeBytes\\\":987280724},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\"],\\\"sizeBytes\\\":938665460},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc\\\"],\\\"sizeBytes\\\":913084961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1faa2081a881db884a86bdfe33fcb6a6af1d14c3e9ee5c44dfe4b09045684e13\\\"],\\\"sizeBytes\\\":875178413},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072\\\"],\\\"sizeBytes\\\":870929735},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c\\\"],\\\"sizeBytes\\\":857432360},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07093043bca0089b3c56d9e5331e68f549541e5661e2a39a260aa534dc9528bd\\\"],\\\"sizeBytes\\\":767663184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e30865ea7d55b76cb925c7d26c650f0bc70fd9a02d7d59d0fe1a3024426229ad\\\"],\\\"sizeBytes\\\":682673937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7\\\"],\\\"sizeBytes\\\":677894171},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55\\\"],\\\"sizeBytes\\\":672642165},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e\\\"],\\\"sizeBytes\\\":616473928},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95\\\"],\\\"sizeBytes\\\":584205881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954\\\"],\\\"sizeBytes\\\":576983707},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee\\\"],\\\"sizeBytes\\\":553036394},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471\\\"],\\\"sizeBytes\\\":552251951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861\\\"],\\\"sizeBytes\\\":543577525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\\\"],\\\"sizeBytes\\\":524042902},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d\\\"],\\\"sizeBytes\\\":523760203},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399\\\"],\\\"sizeBytes\\\":513211213},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc\\\"],\\\"sizeBytes\\\":512819769},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972
e5d6553facd5a65a49\\\"],\\\"sizeBytes\\\":509806416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a90d19460fbc705172df7759a3da394930623c6b6974620b79ffa07bab53c51f\\\"],\\\"sizeBytes\\\":508404525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963\\\"],\\\"sizeBytes\\\":508050651},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5\\\"],\\\"sizeBytes\\\":507103881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3\\\"],\\\"sizeBytes\\\":506056636},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1\\\"],\\\"sizeBytes\\\":505990615},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39\\\"],\\\"sizeBytes\\\":503717987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e\\\"],\\\"sizeBytes\\\":503374574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88\\\"],\\\"sizeBytes\\\":502798848},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1\\\"],\\\"sizeBytes\\\":501305896},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a\\\"],\\\"sizeBytes\\\":501222351},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192\\\"],\\\"sizeBytes\\\":500175306},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\\\"],\\\"sizeBytes\\\":500068323},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6ab8803bac3ebada13e90d9dd6208301b981488277cdeb847c25ff8002f5a30\\\"],\\\"sizeBytes\\\":499489508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144\\\"],\\\"sizeBytes\\\":499445182},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b\\\"],\\\"sizeBytes\\\":490819380},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b\\\"],\\\"sizeBytes\\\":489891070},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38\\\"],\\\"sizeBytes\\\":481921522},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041\\\"],\\\"sizeBytes\\\":479280723},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ea13b0cbfe9be0d3d7ea80d50e512af6a453921a553c7c79b566530142b611b\\\"],\\\"sizeBytes\\\":479006001},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b8fb1f11df51c131f5be8ddfc1b1c95ac13481f58d2dcd5a465a4a8341c0f49\\\"],\\\"sizeBytes\\\":465648392},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb\\\"],\\\"sizeBytes\\\":465507019},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1c8b9784a60860a08bd47935f0767b7b7f8f36c5c0adb7623a31b82c01d4c09\\\"],\\\"sizeBytes\\\":463090242},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e7ac69aff2f28f6b3cbdb166c7dac7a3490167bcd670cd7057bdde1e1e7684d\\\
"],\\\"sizeBytes\\\":462065055}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": the server was unable to return a response in the time allotted, but may still be processing the request (patch nodes master-0)" Feb 16 20:59:21.098191 master-0 kubenswrapper[7926]: I0216 20:59:21.098111 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 20:59:21.098191 master-0 kubenswrapper[7926]: I0216 20:59:21.098163 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 20:59:24.098712 master-0 kubenswrapper[7926]: I0216 20:59:24.098633 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 20:59:24.099171 master-0 kubenswrapper[7926]: I0216 20:59:24.098733 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" 
containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Feb 16 20:59:27.094002 master-0 kubenswrapper[7926]: I0216 20:59:27.093923 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" start-of-body=
Feb 16 20:59:27.094939 master-0 kubenswrapper[7926]: I0216 20:59:27.094011 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" start-of-body=
Feb 16 20:59:27.094939 master-0 kubenswrapper[7926]: I0216 20:59:27.094088 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused"
Feb 16 20:59:27.094939 master-0 kubenswrapper[7926]: I0216 20:59:27.094005 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused"
Feb 16 20:59:27.098870 master-0 kubenswrapper[7926]: I0216 20:59:27.098831 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96
container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body=
Feb 16 20:59:27.099243 master-0 kubenswrapper[7926]: I0216 20:59:27.099054 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Feb 16 20:59:28.908741 master-0 kubenswrapper[7926]: I0216 20:59:28.908689 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/0.log"
Feb 16 20:59:28.908741 master-0 kubenswrapper[7926]: I0216 20:59:28.908743 7926 generic.go:334] "Generic (PLEG): container finished" podID="8b648d9e-a892-4951-b0e2-fed6b16273d4" containerID="ae31b292d6ba5f8d78f8793a9865c571a66292e65886b99ff37b242383c1ffb8" exitCode=1
Feb 16 20:59:28.985933 master-0 kubenswrapper[7926]: E0216 20:59:28.985713 7926 log.go:32] "RunPodSandbox from runtime service failed" err=<
Feb 16 20:59:28.985933 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-sn2nh_openshift-marketplace_f275e79f-923c-4d3a-8ed4-084a122ddcf4_0(a7232cbedbfef1186588a2c034be4f6ea3d49eea6d086029187f59185e852ea3): error adding pod openshift-marketplace_redhat-marketplace-sn2nh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a7232cbedbfef1186588a2c034be4f6ea3d49eea6d086029187f59185e852ea3" Netns:"/var/run/netns/259dba6e-6b00-46be-ba0c-a43361e7e48c" IfName:"eth0"
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sn2nh;K8S_POD_INFRA_CONTAINER_ID=a7232cbedbfef1186588a2c034be4f6ea3d49eea6d086029187f59185e852ea3;K8S_POD_UID=f275e79f-923c-4d3a-8ed4-084a122ddcf4" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sn2nh] networking: Multus: [openshift-marketplace/redhat-marketplace-sn2nh/f275e79f-923c-4d3a-8ed4-084a122ddcf4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sn2nh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 20:59:28.985933 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 20:59:28.985933 master-0 kubenswrapper[7926]: > Feb 16 20:59:28.985933 master-0 kubenswrapper[7926]: E0216 20:59:28.985784 7926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 20:59:28.985933 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-sn2nh_openshift-marketplace_f275e79f-923c-4d3a-8ed4-084a122ddcf4_0(a7232cbedbfef1186588a2c034be4f6ea3d49eea6d086029187f59185e852ea3): error adding pod openshift-marketplace_redhat-marketplace-sn2nh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd 
(shim): CNI request failed with status 400: 'ContainerID:"a7232cbedbfef1186588a2c034be4f6ea3d49eea6d086029187f59185e852ea3" Netns:"/var/run/netns/259dba6e-6b00-46be-ba0c-a43361e7e48c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sn2nh;K8S_POD_INFRA_CONTAINER_ID=a7232cbedbfef1186588a2c034be4f6ea3d49eea6d086029187f59185e852ea3;K8S_POD_UID=f275e79f-923c-4d3a-8ed4-084a122ddcf4" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sn2nh] networking: Multus: [openshift-marketplace/redhat-marketplace-sn2nh/f275e79f-923c-4d3a-8ed4-084a122ddcf4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sn2nh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 20:59:28.985933 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 20:59:28.985933 master-0 kubenswrapper[7926]: > pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 20:59:28.985933 master-0 kubenswrapper[7926]: E0216 20:59:28.985807 7926 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 20:59:28.985933 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_redhat-marketplace-sn2nh_openshift-marketplace_f275e79f-923c-4d3a-8ed4-084a122ddcf4_0(a7232cbedbfef1186588a2c034be4f6ea3d49eea6d086029187f59185e852ea3): error adding pod openshift-marketplace_redhat-marketplace-sn2nh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a7232cbedbfef1186588a2c034be4f6ea3d49eea6d086029187f59185e852ea3" Netns:"/var/run/netns/259dba6e-6b00-46be-ba0c-a43361e7e48c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sn2nh;K8S_POD_INFRA_CONTAINER_ID=a7232cbedbfef1186588a2c034be4f6ea3d49eea6d086029187f59185e852ea3;K8S_POD_UID=f275e79f-923c-4d3a-8ed4-084a122ddcf4" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sn2nh] networking: Multus: [openshift-marketplace/redhat-marketplace-sn2nh/f275e79f-923c-4d3a-8ed4-084a122ddcf4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sn2nh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 20:59:28.985933 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 20:59:28.985933 master-0 kubenswrapper[7926]: > pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 20:59:28.985933 
master-0 kubenswrapper[7926]: E0216 20:59:28.985876 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"redhat-marketplace-sn2nh_openshift-marketplace(f275e79f-923c-4d3a-8ed4-084a122ddcf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"redhat-marketplace-sn2nh_openshift-marketplace(f275e79f-923c-4d3a-8ed4-084a122ddcf4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-sn2nh_openshift-marketplace_f275e79f-923c-4d3a-8ed4-084a122ddcf4_0(a7232cbedbfef1186588a2c034be4f6ea3d49eea6d086029187f59185e852ea3): error adding pod openshift-marketplace_redhat-marketplace-sn2nh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"a7232cbedbfef1186588a2c034be4f6ea3d49eea6d086029187f59185e852ea3\\\" Netns:\\\"/var/run/netns/259dba6e-6b00-46be-ba0c-a43361e7e48c\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sn2nh;K8S_POD_INFRA_CONTAINER_ID=a7232cbedbfef1186588a2c034be4f6ea3d49eea6d086029187f59185e852ea3;K8S_POD_UID=f275e79f-923c-4d3a-8ed4-084a122ddcf4\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sn2nh] networking: Multus: [openshift-marketplace/redhat-marketplace-sn2nh/f275e79f-923c-4d3a-8ed4-084a122ddcf4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sn2nh?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/redhat-marketplace-sn2nh" podUID="f275e79f-923c-4d3a-8ed4-084a122ddcf4" Feb 16 20:59:29.010878 master-0 kubenswrapper[7926]: E0216 20:59:29.010573 7926 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 20:59:29.010878 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api_ba294358-051a-4f09-b182-710d3d6778c5_0(2ea08fae7f0fe005631de4aa1d290d78a9b2eafa8d3effd7d1490e1aeb811190): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2ea08fae7f0fe005631de4aa1d290d78a9b2eafa8d3effd7d1490e1aeb811190" Netns:"/var/run/netns/fa83b52f-64f2-4d3b-b725-49e7a507dc56" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-27jwb;K8S_POD_INFRA_CONTAINER_ID=2ea08fae7f0fe005631de4aa1d290d78a9b2eafa8d3effd7d1490e1aeb811190;K8S_POD_UID=ba294358-051a-4f09-b182-710d3d6778c5" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb/ba294358-051a-4f09-b182-710d3d6778c5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in 
out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-27jwb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 16 20:59:29.010878 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 20:59:29.010878 master-0 kubenswrapper[7926]: >
Feb 16 20:59:29.010878 master-0 kubenswrapper[7926]: E0216 20:59:29.010666 7926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Feb 16 20:59:29.010878 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api_ba294358-051a-4f09-b182-710d3d6778c5_0(2ea08fae7f0fe005631de4aa1d290d78a9b2eafa8d3effd7d1490e1aeb811190): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2ea08fae7f0fe005631de4aa1d290d78a9b2eafa8d3effd7d1490e1aeb811190" Netns:"/var/run/netns/fa83b52f-64f2-4d3b-b725-49e7a507dc56" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-27jwb;K8S_POD_INFRA_CONTAINER_ID=2ea08fae7f0fe005631de4aa1d290d78a9b2eafa8d3effd7d1490e1aeb811190;K8S_POD_UID=ba294358-051a-4f09-b182-710d3d6778c5" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb/ba294358-051a-4f09-b182-710d3d6778c5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-27jwb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 16 20:59:29.010878 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 20:59:29.010878 master-0 kubenswrapper[7926]: > pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb"
Feb 16 20:59:29.010878 master-0 kubenswrapper[7926]: E0216 20:59:29.010696 7926 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Feb 16 20:59:29.010878 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api_ba294358-051a-4f09-b182-710d3d6778c5_0(2ea08fae7f0fe005631de4aa1d290d78a9b2eafa8d3effd7d1490e1aeb811190): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2ea08fae7f0fe005631de4aa1d290d78a9b2eafa8d3effd7d1490e1aeb811190" Netns:"/var/run/netns/fa83b52f-64f2-4d3b-b725-49e7a507dc56" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-27jwb;K8S_POD_INFRA_CONTAINER_ID=2ea08fae7f0fe005631de4aa1d290d78a9b2eafa8d3effd7d1490e1aeb811190;K8S_POD_UID=ba294358-051a-4f09-b182-710d3d6778c5" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb/ba294358-051a-4f09-b182-710d3d6778c5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-27jwb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 16 20:59:29.010878 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 20:59:29.010878 master-0 kubenswrapper[7926]: > pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb"
Feb 16 20:59:29.010878 master-0 kubenswrapper[7926]: E0216 20:59:29.010772 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api(ba294358-051a-4f09-b182-710d3d6778c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api(ba294358-051a-4f09-b182-710d3d6778c5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api_ba294358-051a-4f09-b182-710d3d6778c5_0(2ea08fae7f0fe005631de4aa1d290d78a9b2eafa8d3effd7d1490e1aeb811190): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"2ea08fae7f0fe005631de4aa1d290d78a9b2eafa8d3effd7d1490e1aeb811190\\\" Netns:\\\"/var/run/netns/fa83b52f-64f2-4d3b-b725-49e7a507dc56\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-27jwb;K8S_POD_INFRA_CONTAINER_ID=2ea08fae7f0fe005631de4aa1d290d78a9b2eafa8d3effd7d1490e1aeb811190;K8S_POD_UID=ba294358-051a-4f09-b182-710d3d6778c5\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb/ba294358-051a-4f09-b182-710d3d6778c5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-27jwb?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" podUID="ba294358-051a-4f09-b182-710d3d6778c5"
Feb 16 20:59:29.915098 master-0 kubenswrapper[7926]: I0216 20:59:29.915029 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sn2nh"
Feb 16 20:59:29.915098 master-0 kubenswrapper[7926]: I0216 20:59:29.915078 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb"
Feb 16 20:59:29.915753 master-0 kubenswrapper[7926]: I0216 20:59:29.915715 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sn2nh"
Feb 16 20:59:29.916064 master-0 kubenswrapper[7926]: I0216 20:59:29.916021 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb"
Feb 16 20:59:30.098540 master-0 kubenswrapper[7926]: I0216 20:59:30.098480 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body=
Feb 16 20:59:30.098670 master-0 kubenswrapper[7926]: I0216 20:59:30.098561 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Feb 16 20:59:30.451671 master-0 kubenswrapper[7926]: E0216 20:59:30.451583 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="800ms"
Feb 16 20:59:30.745228 master-0 kubenswrapper[7926]: E0216 20:59:30.745058 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 20:59:30.922517 master-0 kubenswrapper[7926]: I0216 20:59:30.922425 7926 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="31e55b139c998e23cbf2bc02e2f79638ed2388ee42133c4387d01234b192dc1a" exitCode=1
Feb 16 20:59:33.099955 master-0 kubenswrapper[7926]: I0216 20:59:33.098822 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body=
Feb 16 20:59:33.101266 master-0 kubenswrapper[7926]: I0216 20:59:33.100060 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Feb 16 20:59:34.944357 master-0 kubenswrapper[7926]: I0216 20:59:34.944253 7926 generic.go:334] "Generic (PLEG): container finished" podID="e9615af2-cad5-4705-9c2f-6f3c97026100" containerID="dd23c2441236e3bdedd04adcd70f26ba2f2b37ed96fb0998ec94c3bbdca5b7da" exitCode=0
Feb 16 20:59:36.099253 master-0 kubenswrapper[7926]: I0216 20:59:36.099178 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body=
Feb 16 20:59:36.099782 master-0 kubenswrapper[7926]: I0216 20:59:36.099271 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Feb 16 20:59:37.093850 master-0 kubenswrapper[7926]: I0216 20:59:37.093783 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" start-of-body=
Feb 16 20:59:37.094071 master-0 kubenswrapper[7926]: I0216 20:59:37.093850 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused"
Feb 16 20:59:39.099088 master-0 kubenswrapper[7926]: I0216 20:59:39.098992 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body=
Feb 16 20:59:39.099088 master-0 kubenswrapper[7926]: I0216 20:59:39.099079 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Feb 16 20:59:40.745780 master-0 kubenswrapper[7926]: E0216 20:59:40.745733 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 20:59:41.252736 master-0 kubenswrapper[7926]: E0216 20:59:41.252639 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="1.6s"
Feb 16 20:59:42.098984 master-0 kubenswrapper[7926]: I0216 20:59:42.098783 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body=
Feb 16 20:59:42.098984 master-0 kubenswrapper[7926]: I0216 20:59:42.098877 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Feb 16 20:59:44.760684 master-0 kubenswrapper[7926]: E0216 20:59:44.760577 7926 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0"
Feb 16 20:59:44.761980 master-0 kubenswrapper[7926]: E0216 20:59:44.760922 7926 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.013s"
Feb 16 20:59:44.761980 master-0 kubenswrapper[7926]: I0216 20:59:44.761081 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b8vtc"
Feb 16 20:59:44.763210 master-0 kubenswrapper[7926]: I0216 20:59:44.763132 7926 scope.go:117] "RemoveContainer" containerID="8f381e0ba80bb61f122cb6f8dc6fbf0f4de7cc56a19bdf606299e77668a6c669"
Feb 16 20:59:44.763377 master-0 kubenswrapper[7926]: I0216 20:59:44.763328 7926 scope.go:117] "RemoveContainer" containerID="6c789ad424d6da26da31c06317afc3ff04d13db41b3d9ada1b99dd43bd4685c9"
Feb 16 20:59:44.763478 master-0 kubenswrapper[7926]: I0216 20:59:44.763425 7926 scope.go:117] "RemoveContainer" containerID="a773bd017f0bba4a3a74bfe52982d094692dcc11d0231ea1c51b561373a69c1c"
Feb 16 20:59:44.765153 master-0 kubenswrapper[7926]: I0216 20:59:44.764369 7926 scope.go:117] "RemoveContainer" containerID="22ac853b44d567411363f432db892ab502ff1733ca2ac03896be62f2c9a7c4fc"
Feb 16 20:59:44.766095 master-0 kubenswrapper[7926]: I0216 20:59:44.765917 7926 scope.go:117] "RemoveContainer" containerID="ae31b292d6ba5f8d78f8793a9865c571a66292e65886b99ff37b242383c1ffb8"
Feb 16 20:59:44.766338 master-0 kubenswrapper[7926]: I0216 20:59:44.766252 7926 scope.go:117] "RemoveContainer" containerID="d0734d0596c43a54e8c5763783b157c38da058f6ee7d80add1702898fd0efe5d"
Feb 16 20:59:44.768918 master-0 kubenswrapper[7926]: I0216 20:59:44.768873 7926 scope.go:117] "RemoveContainer" containerID="58d545a4271a615d484834ce5f2e4aae18f89163dd820abd13282ebc492d6372"
Feb 16 20:59:44.777199 master-0 kubenswrapper[7926]: I0216 20:59:44.776910 7926 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Feb 16 20:59:45.098381 master-0 kubenswrapper[7926]: I0216 20:59:45.098334 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body=
Feb 16 20:59:45.098539 master-0 kubenswrapper[7926]: I0216 20:59:45.098396 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Feb 16 20:59:45.310309 master-0 kubenswrapper[7926]: I0216 20:59:45.310264 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965/installer/0.log"
Feb 16 20:59:45.310425 master-0 kubenswrapper[7926]: I0216 20:59:45.310335 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Feb 16 20:59:45.313744 master-0 kubenswrapper[7926]: I0216 20:59:45.313711 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_b09d3c16-18e3-45b3-9d39-949d2464b300/installer/0.log"
Feb 16 20:59:45.313810 master-0 kubenswrapper[7926]: I0216 20:59:45.313772 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 16 20:59:45.328989 master-0 kubenswrapper[7926]: I0216 20:59:45.328938 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b09d3c16-18e3-45b3-9d39-949d2464b300-kubelet-dir\") pod \"b09d3c16-18e3-45b3-9d39-949d2464b300\" (UID: \"b09d3c16-18e3-45b3-9d39-949d2464b300\") "
Feb 16 20:59:45.329097 master-0 kubenswrapper[7926]: I0216 20:59:45.329017 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b09d3c16-18e3-45b3-9d39-949d2464b300-kube-api-access\") pod \"b09d3c16-18e3-45b3-9d39-949d2464b300\" (UID: \"b09d3c16-18e3-45b3-9d39-949d2464b300\") "
Feb 16 20:59:45.329097 master-0 kubenswrapper[7926]: I0216 20:59:45.329059 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965-kubelet-dir\") pod \"9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965\" (UID: \"9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965\") "
Feb 16 20:59:45.329097 master-0 kubenswrapper[7926]: I0216 20:59:45.329053 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b09d3c16-18e3-45b3-9d39-949d2464b300-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b09d3c16-18e3-45b3-9d39-949d2464b300" (UID: "b09d3c16-18e3-45b3-9d39-949d2464b300"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 20:59:45.329213 master-0 kubenswrapper[7926]: I0216 20:59:45.329123 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b09d3c16-18e3-45b3-9d39-949d2464b300-var-lock\") pod \"b09d3c16-18e3-45b3-9d39-949d2464b300\" (UID: \"b09d3c16-18e3-45b3-9d39-949d2464b300\") "
Feb 16 20:59:45.329213 master-0 kubenswrapper[7926]: I0216 20:59:45.329152 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965-var-lock\") pod \"9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965\" (UID: \"9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965\") "
Feb 16 20:59:45.329213 master-0 kubenswrapper[7926]: I0216 20:59:45.329181 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965-kube-api-access\") pod \"9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965\" (UID: \"9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965\") "
Feb 16 20:59:45.329302 master-0 kubenswrapper[7926]: I0216 20:59:45.329213 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965" (UID: "9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 20:59:45.329302 master-0 kubenswrapper[7926]: I0216 20:59:45.329231 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b09d3c16-18e3-45b3-9d39-949d2464b300-var-lock" (OuterVolumeSpecName: "var-lock") pod "b09d3c16-18e3-45b3-9d39-949d2464b300" (UID: "b09d3c16-18e3-45b3-9d39-949d2464b300"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 20:59:45.329302 master-0 kubenswrapper[7926]: I0216 20:59:45.329265 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965-var-lock" (OuterVolumeSpecName: "var-lock") pod "9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965" (UID: "9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 20:59:45.329430 master-0 kubenswrapper[7926]: I0216 20:59:45.329404 7926 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b09d3c16-18e3-45b3-9d39-949d2464b300-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 20:59:45.329430 master-0 kubenswrapper[7926]: I0216 20:59:45.329422 7926 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 20:59:45.329430 master-0 kubenswrapper[7926]: I0216 20:59:45.329432 7926 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b09d3c16-18e3-45b3-9d39-949d2464b300-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 16 20:59:45.329522 master-0 kubenswrapper[7926]: I0216 20:59:45.329442 7926 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 16 20:59:45.331602 master-0 kubenswrapper[7926]: I0216 20:59:45.331574 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b09d3c16-18e3-45b3-9d39-949d2464b300-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b09d3c16-18e3-45b3-9d39-949d2464b300" (UID: "b09d3c16-18e3-45b3-9d39-949d2464b300"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 20:59:45.331675 master-0 kubenswrapper[7926]: I0216 20:59:45.331664 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965" (UID: "9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 20:59:45.430231 master-0 kubenswrapper[7926]: I0216 20:59:45.430132 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b09d3c16-18e3-45b3-9d39-949d2464b300-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 16 20:59:45.430231 master-0 kubenswrapper[7926]: I0216 20:59:45.430173 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 16 20:59:46.009339 master-0 kubenswrapper[7926]: I0216 20:59:46.009283 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965/installer/0.log"
Feb 16 20:59:46.009866 master-0 kubenswrapper[7926]: I0216 20:59:46.009480 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Feb 16 20:59:46.011794 master-0 kubenswrapper[7926]: I0216 20:59:46.011768 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-n4hfs_1b61063e-775e-421d-bf73-a6ef134293a0/network-operator/0.log"
Feb 16 20:59:46.019505 master-0 kubenswrapper[7926]: I0216 20:59:46.019465 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/0.log"
Feb 16 20:59:46.024074 master-0 kubenswrapper[7926]: I0216 20:59:46.024048 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-tpj6f_88f19cea-60ed-4977-a906-75deec51fc3d/approver/0.log"
Feb 16 20:59:46.026520 master-0 kubenswrapper[7926]: I0216 20:59:46.026481 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_b09d3c16-18e3-45b3-9d39-949d2464b300/installer/0.log"
Feb 16 20:59:46.026702 master-0 kubenswrapper[7926]: I0216 20:59:46.026642 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 16 20:59:46.980566 master-0 kubenswrapper[7926]: E0216 20:59:46.980396 7926 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{cluster-storage-operator-75b869db96-g4w5m.1894d5abd008b039 openshift-cluster-storage-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-cluster-storage-operator,Name:cluster-storage-operator-75b869db96-g4w5m,UID:aa2e9bbc-3962-45f5-a7cc-2dc059409e70,APIVersion:v1,ResourceVersion:8194,FieldPath:spec.containers{cluster-storage-operator},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a90d19460fbc705172df7759a3da394930623c6b6974620b79ffa07bab53c51f\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:58:07.369801785 +0000 UTC m=+59.004702085,LastTimestamp:2026-02-16 20:58:07.369801785 +0000 UTC m=+59.004702085,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 20:59:47.094096 master-0 kubenswrapper[7926]: I0216 20:59:47.093990 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" start-of-body=
Feb 16 20:59:47.094096 master-0 kubenswrapper[7926]: I0216 20:59:47.094060 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused"
Feb 16 20:59:48.099538 master-0 kubenswrapper[7926]: I0216 20:59:48.099410 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body=
Feb 16 20:59:48.099538 master-0 kubenswrapper[7926]: I0216 20:59:48.099518 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Feb 16 20:59:50.747727 master-0 kubenswrapper[7926]: E0216 20:59:50.747565 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 20:59:51.075726 master-0 kubenswrapper[7926]: I0216 20:59:51.075472 7926 generic.go:334] "Generic (PLEG): container finished" podID="b28234d1-1d9a-4d9f-9ad1-e3c682bed492" containerID="1fdce62d33ee01800252ab5e608745339a8f0dbc0ccac60559c706daa3409f0f" exitCode=0
Feb 16 20:59:51.098809 master-0 kubenswrapper[7926]: I0216 20:59:51.098590 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body=
Feb 16 20:59:51.099193 master-0 kubenswrapper[7926]: I0216 20:59:51.098876 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Feb 16 20:59:51.659481 master-0 kubenswrapper[7926]: I0216 20:59:51.659364 7926 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-6rmhq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused" start-of-body=
Feb 16 20:59:51.659849 master-0 kubenswrapper[7926]: I0216 20:59:51.659470 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" podUID="b28234d1-1d9a-4d9f-9ad1-e3c682bed492" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused"
Feb 16 20:59:51.659849 master-0 kubenswrapper[7926]: I0216 20:59:51.659495 7926 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-6rmhq container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused" start-of-body=
Feb 16 20:59:51.659849 master-0 kubenswrapper[7926]: I0216 20:59:51.659580 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" podUID="b28234d1-1d9a-4d9f-9ad1-e3c682bed492" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused"
Feb 16 20:59:52.082534 master-0 kubenswrapper[7926]: I0216 20:59:52.082453 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-85c9b89969-qzs2g_1a986ba3-2aea-4133-a05b-f69d4e0d8d3b/manager/0.log"
Feb 16 20:59:52.083131 master-0 kubenswrapper[7926]: I0216 20:59:52.082533 7926 generic.go:334] "Generic (PLEG): container finished" podID="1a986ba3-2aea-4133-a05b-f69d4e0d8d3b" containerID="b1ac78292de0a544c15af274111c4e933c90f41d601dad32fc19d3dacdb54345" exitCode=1
Feb 16 20:59:52.085339 master-0 kubenswrapper[7926]: I0216 20:59:52.085288 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-67bc7c997f-8kdgg_e8194cdc-3133-49e2-9579-a747c0bf2b16/manager/0.log"
Feb 16 20:59:52.085863 master-0 kubenswrapper[7926]: I0216 20:59:52.085812 7926 generic.go:334] "Generic (PLEG): container finished" podID="e8194cdc-3133-49e2-9579-a747c0bf2b16" containerID="a76963335874f22d97778041d73ee6a0a7e3ffd325f9fb8a457626be3c8e5238" exitCode=1
Feb 16 20:59:52.855161 master-0 kubenswrapper[7926]: E0216 20:59:52.855039 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Feb 16 20:59:54.099398 master-0 kubenswrapper[7926]: I0216 20:59:54.099299 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body=
Feb 16 20:59:54.100535 master-0 kubenswrapper[7926]: I0216 20:59:54.100483 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Feb 16 20:59:56.115217 master-0 kubenswrapper[7926]: I0216 20:59:56.115114 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/0.log"
Feb 16 20:59:56.115217 master-0 kubenswrapper[7926]: I0216 20:59:56.115176 7926 generic.go:334] "Generic (PLEG): container finished" podID="b1ac9776-54c4-46ce-b898-01c8cf35e593" containerID="6604687382d89a09dac220e4bde6c4ee9334bbf7429cff3764175c9050a1853c" exitCode=1
Feb 16 20:59:56.834157 master-0 kubenswrapper[7926]: I0216 20:59:56.834078 7926 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-8kdgg container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" start-of-body=
Feb 16 20:59:56.834157 master-0 kubenswrapper[7926]: I0216 20:59:56.834139 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" podUID="e8194cdc-3133-49e2-9579-a747c0bf2b16" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused"
Feb 16 20:59:56.834437 master-0 kubenswrapper[7926]: I0216 20:59:56.834090 7926 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-8kdgg container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.42:8081/healthz\": dial tcp 10.128.0.42:8081: connect: connection refused" start-of-body=
Feb 16 20:59:56.834437 master-0 kubenswrapper[7926]: I0216 20:59:56.834189 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" podUID="e8194cdc-3133-49e2-9579-a747c0bf2b16" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.42:8081/healthz\": dial tcp 10.128.0.42:8081: connect: connection refused"
Feb 16 20:59:56.917325 master-0 kubenswrapper[7926]: I0216 20:59:56.917253 7926 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-qzs2g container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body=
Feb 16 20:59:56.917579 master-0 kubenswrapper[7926]: I0216 20:59:56.917332 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" podUID="1a986ba3-2aea-4133-a05b-f69d4e0d8d3b" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused"
Feb 16 20:59:56.917579 master-0 kubenswrapper[7926]: I0216 20:59:56.917423 7926 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-qzs2g container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body=
Feb 16 20:59:56.917579 master-0 kubenswrapper[7926]: I0216 20:59:56.917508 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" podUID="1a986ba3-2aea-4133-a05b-f69d4e0d8d3b" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused"
Feb 16 20:59:57.094727 master-0 kubenswrapper[7926]: I0216 20:59:57.094508 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" start-of-body=
Feb 16 20:59:57.094727 master-0 kubenswrapper[7926]: I0216 20:59:57.094614 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused"
Feb 16 20:59:57.098629 master-0 kubenswrapper[7926]: I0216 20:59:57.098541 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body=
Feb 16 20:59:57.098842 master-0 kubenswrapper[7926]: I0216 20:59:57.098640 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Feb 16 20:59:59.140218 master-0 kubenswrapper[7926]: I0216 20:59:59.140099 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-k42w9_695549c8-d1fc-429d-9c9f-0a5915dc6074/openshift-controller-manager-operator/2.log"
Feb 16 20:59:59.141539 master-0 kubenswrapper[7926]: I0216 20:59:59.141471 7926 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-k42w9_695549c8-d1fc-429d-9c9f-0a5915dc6074/openshift-controller-manager-operator/1.log" Feb 16 20:59:59.141679 master-0 kubenswrapper[7926]: I0216 20:59:59.141550 7926 generic.go:334] "Generic (PLEG): container finished" podID="695549c8-d1fc-429d-9c9f-0a5915dc6074" containerID="da2d8128d877c8e59ec552f44d9719195718721aa40536dc7418200005684242" exitCode=255 Feb 16 21:00:00.098690 master-0 kubenswrapper[7926]: I0216 21:00:00.098581 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:00:00.098971 master-0 kubenswrapper[7926]: I0216 21:00:00.098748 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:00:00.153083 master-0 kubenswrapper[7926]: I0216 21:00:00.152954 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-75b869db96-g4w5m_aa2e9bbc-3962-45f5-a7cc-2dc059409e70/cluster-storage-operator/0.log" Feb 16 21:00:00.153083 master-0 kubenswrapper[7926]: I0216 21:00:00.153056 7926 generic.go:334] "Generic (PLEG): container finished" podID="aa2e9bbc-3962-45f5-a7cc-2dc059409e70" containerID="a339e5c4723737e030c5a03c8395cedd263d3d5213cb12208bfe3004bbd0ef5e" exitCode=255 Feb 16 21:00:00.748404 master-0 kubenswrapper[7926]: E0216 21:00:00.748311 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node 
\"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:00:00.748404 master-0 kubenswrapper[7926]: E0216 21:00:00.748385 7926 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 21:00:01.159852 master-0 kubenswrapper[7926]: I0216 21:00:01.159804 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/0.log" Feb 16 21:00:01.159852 master-0 kubenswrapper[7926]: I0216 21:00:01.159868 7926 generic.go:334] "Generic (PLEG): container finished" podID="cef33294-81fb-41a2-811d-2565f94514d1" containerID="2b191efabecfa6e89d563189d25950b732d83b54240d68732d9bfb22ddbb8e4f" exitCode=1 Feb 16 21:00:01.659174 master-0 kubenswrapper[7926]: I0216 21:00:01.659089 7926 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-6rmhq container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused" start-of-body= Feb 16 21:00:01.660695 master-0 kubenswrapper[7926]: I0216 21:00:01.659098 7926 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-6rmhq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused" start-of-body= Feb 16 21:00:01.660695 master-0 kubenswrapper[7926]: I0216 21:00:01.659233 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" podUID="b28234d1-1d9a-4d9f-9ad1-e3c682bed492" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: 
connect: connection refused" Feb 16 21:00:01.660695 master-0 kubenswrapper[7926]: I0216 21:00:01.659715 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" podUID="b28234d1-1d9a-4d9f-9ad1-e3c682bed492" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused" Feb 16 21:00:03.100103 master-0 kubenswrapper[7926]: I0216 21:00:03.099991 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:00:03.101000 master-0 kubenswrapper[7926]: I0216 21:00:03.100169 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:00:06.056841 master-0 kubenswrapper[7926]: E0216 21:00:06.056591 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Feb 16 21:00:06.099980 master-0 kubenswrapper[7926]: I0216 21:00:06.099889 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= 
Feb 16 21:00:06.100253 master-0 kubenswrapper[7926]: I0216 21:00:06.099989 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:00:06.834937 master-0 kubenswrapper[7926]: I0216 21:00:06.834835 7926 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-8kdgg container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" start-of-body= Feb 16 21:00:06.835225 master-0 kubenswrapper[7926]: I0216 21:00:06.834963 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" podUID="e8194cdc-3133-49e2-9579-a747c0bf2b16" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" Feb 16 21:00:06.917603 master-0 kubenswrapper[7926]: I0216 21:00:06.917495 7926 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-qzs2g container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Feb 16 21:00:06.918024 master-0 kubenswrapper[7926]: I0216 21:00:06.917978 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" podUID="1a986ba3-2aea-4133-a05b-f69d4e0d8d3b" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 16 
21:00:07.094769 master-0 kubenswrapper[7926]: I0216 21:00:07.094570 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" start-of-body= Feb 16 21:00:07.094769 master-0 kubenswrapper[7926]: I0216 21:00:07.094746 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" Feb 16 21:00:08.757506 master-0 kubenswrapper[7926]: I0216 21:00:08.757441 7926 status_manager.go:851] "Failed to get status for pod" podUID="55095f4f-cac0-456c-9ccc-45869392408c" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods cluster-samples-operator-f8cbff74c-d7lfl)" Feb 16 21:00:09.099752 master-0 kubenswrapper[7926]: I0216 21:00:09.099544 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:00:09.099752 master-0 kubenswrapper[7926]: I0216 21:00:09.099696 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 
10.128.0.19:8443: connect: connection refused" Feb 16 21:00:11.446940 master-0 kubenswrapper[7926]: E0216 21:00:11.446857 7926 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 21:00:11.446940 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-j5kwc_openshift-marketplace_ce229d27-837d-4a98-80fc-d56877ae39b8_0(3aacc76867ad2245029ec31bff219998c128682108f927f600a867722d3d165b): error adding pod openshift-marketplace_community-operators-j5kwc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3aacc76867ad2245029ec31bff219998c128682108f927f600a867722d3d165b" Netns:"/var/run/netns/060b9cce-a866-49a4-bdbd-2f72938bfca0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-j5kwc;K8S_POD_INFRA_CONTAINER_ID=3aacc76867ad2245029ec31bff219998c128682108f927f600a867722d3d165b;K8S_POD_UID=ce229d27-837d-4a98-80fc-d56877ae39b8" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-j5kwc] networking: Multus: [openshift-marketplace/community-operators-j5kwc/ce229d27-837d-4a98-80fc-d56877ae39b8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-j5kwc in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-j5kwc in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j5kwc?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:00:11.446940 master-0 kubenswrapper[7926]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:00:11.446940 master-0 kubenswrapper[7926]: > Feb 16 21:00:11.447420 master-0 kubenswrapper[7926]: E0216 21:00:11.446985 7926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 21:00:11.447420 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-j5kwc_openshift-marketplace_ce229d27-837d-4a98-80fc-d56877ae39b8_0(3aacc76867ad2245029ec31bff219998c128682108f927f600a867722d3d165b): error adding pod openshift-marketplace_community-operators-j5kwc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3aacc76867ad2245029ec31bff219998c128682108f927f600a867722d3d165b" Netns:"/var/run/netns/060b9cce-a866-49a4-bdbd-2f72938bfca0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-j5kwc;K8S_POD_INFRA_CONTAINER_ID=3aacc76867ad2245029ec31bff219998c128682108f927f600a867722d3d165b;K8S_POD_UID=ce229d27-837d-4a98-80fc-d56877ae39b8" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-j5kwc] networking: Multus: [openshift-marketplace/community-operators-j5kwc/ce229d27-837d-4a98-80fc-d56877ae39b8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-j5kwc in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-j5kwc in out of cluster comm: status update failed for pod /: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j5kwc?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:00:11.447420 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:00:11.447420 master-0 kubenswrapper[7926]: > pod="openshift-marketplace/community-operators-j5kwc" Feb 16 21:00:11.447420 master-0 kubenswrapper[7926]: E0216 21:00:11.447023 7926 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 21:00:11.447420 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-j5kwc_openshift-marketplace_ce229d27-837d-4a98-80fc-d56877ae39b8_0(3aacc76867ad2245029ec31bff219998c128682108f927f600a867722d3d165b): error adding pod openshift-marketplace_community-operators-j5kwc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3aacc76867ad2245029ec31bff219998c128682108f927f600a867722d3d165b" Netns:"/var/run/netns/060b9cce-a866-49a4-bdbd-2f72938bfca0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-j5kwc;K8S_POD_INFRA_CONTAINER_ID=3aacc76867ad2245029ec31bff219998c128682108f927f600a867722d3d165b;K8S_POD_UID=ce229d27-837d-4a98-80fc-d56877ae39b8" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-j5kwc] networking: Multus: [openshift-marketplace/community-operators-j5kwc/ce229d27-837d-4a98-80fc-d56877ae39b8]: error 
setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-j5kwc in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-j5kwc in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j5kwc?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:00:11.447420 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:00:11.447420 master-0 kubenswrapper[7926]: > pod="openshift-marketplace/community-operators-j5kwc" Feb 16 21:00:11.447420 master-0 kubenswrapper[7926]: E0216 21:00:11.447124 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"community-operators-j5kwc_openshift-marketplace(ce229d27-837d-4a98-80fc-d56877ae39b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"community-operators-j5kwc_openshift-marketplace(ce229d27-837d-4a98-80fc-d56877ae39b8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-j5kwc_openshift-marketplace_ce229d27-837d-4a98-80fc-d56877ae39b8_0(3aacc76867ad2245029ec31bff219998c128682108f927f600a867722d3d165b): error adding pod openshift-marketplace_community-operators-j5kwc to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"3aacc76867ad2245029ec31bff219998c128682108f927f600a867722d3d165b\\\" 
Netns:\\\"/var/run/netns/060b9cce-a866-49a4-bdbd-2f72938bfca0\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-j5kwc;K8S_POD_INFRA_CONTAINER_ID=3aacc76867ad2245029ec31bff219998c128682108f927f600a867722d3d165b;K8S_POD_UID=ce229d27-837d-4a98-80fc-d56877ae39b8\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/community-operators-j5kwc] networking: Multus: [openshift-marketplace/community-operators-j5kwc/ce229d27-837d-4a98-80fc-d56877ae39b8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-j5kwc in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-j5kwc in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j5kwc?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/community-operators-j5kwc" podUID="ce229d27-837d-4a98-80fc-d56877ae39b8" Feb 16 21:00:11.659794 master-0 kubenswrapper[7926]: I0216 21:00:11.659588 7926 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-6rmhq container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused" start-of-body= Feb 16 21:00:11.660031 
master-0 kubenswrapper[7926]: I0216 21:00:11.659788 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" podUID="b28234d1-1d9a-4d9f-9ad1-e3c682bed492" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused" Feb 16 21:00:11.662639 master-0 kubenswrapper[7926]: I0216 21:00:11.662599 7926 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-6rmhq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused" start-of-body= Feb 16 21:00:11.662791 master-0 kubenswrapper[7926]: I0216 21:00:11.662760 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" podUID="b28234d1-1d9a-4d9f-9ad1-e3c682bed492" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused" Feb 16 21:00:12.233399 master-0 kubenswrapper[7926]: I0216 21:00:12.233332 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j5kwc" Feb 16 21:00:12.235292 master-0 kubenswrapper[7926]: I0216 21:00:12.235205 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j5kwc" Feb 16 21:00:13.099545 master-0 kubenswrapper[7926]: I0216 21:00:13.099377 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:00:13.100187 master-0 kubenswrapper[7926]: I0216 21:00:13.099551 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:00:16.102916 master-0 kubenswrapper[7926]: I0216 21:00:16.102774 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:00:16.102916 master-0 kubenswrapper[7926]: I0216 21:00:16.102910 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:00:16.835453 master-0 kubenswrapper[7926]: I0216 21:00:16.835315 7926 patch_prober.go:28] 
interesting pod/catalogd-controller-manager-67bc7c997f-8kdgg container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" start-of-body= Feb 16 21:00:16.835453 master-0 kubenswrapper[7926]: I0216 21:00:16.835315 7926 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-8kdgg container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.42:8081/healthz\": dial tcp 10.128.0.42:8081: connect: connection refused" start-of-body= Feb 16 21:00:16.835453 master-0 kubenswrapper[7926]: I0216 21:00:16.835426 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" podUID="e8194cdc-3133-49e2-9579-a747c0bf2b16" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" Feb 16 21:00:16.836032 master-0 kubenswrapper[7926]: I0216 21:00:16.835479 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" podUID="e8194cdc-3133-49e2-9579-a747c0bf2b16" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.42:8081/healthz\": dial tcp 10.128.0.42:8081: connect: connection refused" Feb 16 21:00:16.917894 master-0 kubenswrapper[7926]: I0216 21:00:16.917786 7926 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-qzs2g container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Feb 16 21:00:16.917894 master-0 kubenswrapper[7926]: I0216 21:00:16.917804 7926 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-qzs2g container/manager 
namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Feb 16 21:00:16.917894 master-0 kubenswrapper[7926]: I0216 21:00:16.917892 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" podUID="1a986ba3-2aea-4133-a05b-f69d4e0d8d3b" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 16 21:00:16.918293 master-0 kubenswrapper[7926]: I0216 21:00:16.917962 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" podUID="1a986ba3-2aea-4133-a05b-f69d4e0d8d3b" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 16 21:00:17.094726 master-0 kubenswrapper[7926]: I0216 21:00:17.094527 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" start-of-body= Feb 16 21:00:17.094946 master-0 kubenswrapper[7926]: I0216 21:00:17.094703 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" Feb 16 21:00:18.544280 master-0 kubenswrapper[7926]: E0216 21:00:18.544160 7926 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 
21:00:18.544280 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-78d4b6b677-npmx4_openshift-operator-lifecycle-manager_319dc882-e1f5-40f9-99f4-2bae028337e5_0(52cca691f67d0d082671dad0dab6ebb77bf536eb7470afd44f2253d4a32a6833): error adding pod openshift-operator-lifecycle-manager_packageserver-78d4b6b677-npmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"52cca691f67d0d082671dad0dab6ebb77bf536eb7470afd44f2253d4a32a6833" Netns:"/var/run/netns/12efe8d7-d340-47f0-8330-fd6898846acb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-78d4b6b677-npmx4;K8S_POD_INFRA_CONTAINER_ID=52cca691f67d0d082671dad0dab6ebb77bf536eb7470afd44f2253d4a32a6833;K8S_POD_UID=319dc882-e1f5-40f9-99f4-2bae028337e5" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4/319dc882-e1f5-40f9-99f4-2bae028337e5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-78d4b6b677-npmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:00:18.544280 master-0 kubenswrapper[7926]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:00:18.544280 master-0 kubenswrapper[7926]: > Feb 16 21:00:18.545587 master-0 kubenswrapper[7926]: E0216 21:00:18.544294 7926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 21:00:18.545587 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-78d4b6b677-npmx4_openshift-operator-lifecycle-manager_319dc882-e1f5-40f9-99f4-2bae028337e5_0(52cca691f67d0d082671dad0dab6ebb77bf536eb7470afd44f2253d4a32a6833): error adding pod openshift-operator-lifecycle-manager_packageserver-78d4b6b677-npmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"52cca691f67d0d082671dad0dab6ebb77bf536eb7470afd44f2253d4a32a6833" Netns:"/var/run/netns/12efe8d7-d340-47f0-8330-fd6898846acb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-78d4b6b677-npmx4;K8S_POD_INFRA_CONTAINER_ID=52cca691f67d0d082671dad0dab6ebb77bf536eb7470afd44f2253d4a32a6833;K8S_POD_UID=319dc882-e1f5-40f9-99f4-2bae028337e5" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4/319dc882-e1f5-40f9-99f4-2bae028337e5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-78d4b6b677-npmx4 in out of 
cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-78d4b6b677-npmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:00:18.545587 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:00:18.545587 master-0 kubenswrapper[7926]: > pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" Feb 16 21:00:18.545587 master-0 kubenswrapper[7926]: E0216 21:00:18.544333 7926 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 21:00:18.545587 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-78d4b6b677-npmx4_openshift-operator-lifecycle-manager_319dc882-e1f5-40f9-99f4-2bae028337e5_0(52cca691f67d0d082671dad0dab6ebb77bf536eb7470afd44f2253d4a32a6833): error adding pod openshift-operator-lifecycle-manager_packageserver-78d4b6b677-npmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"52cca691f67d0d082671dad0dab6ebb77bf536eb7470afd44f2253d4a32a6833" Netns:"/var/run/netns/12efe8d7-d340-47f0-8330-fd6898846acb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-78d4b6b677-npmx4;K8S_POD_INFRA_CONTAINER_ID=52cca691f67d0d082671dad0dab6ebb77bf536eb7470afd44f2253d4a32a6833;K8S_POD_UID=319dc882-e1f5-40f9-99f4-2bae028337e5" Path:"" ERRORED: error configuring pod 
[openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4/319dc882-e1f5-40f9-99f4-2bae028337e5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-78d4b6b677-npmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:00:18.545587 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:00:18.545587 master-0 kubenswrapper[7926]: > pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" Feb 16 21:00:18.545587 master-0 kubenswrapper[7926]: E0216 21:00:18.544447 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"packageserver-78d4b6b677-npmx4_openshift-operator-lifecycle-manager(319dc882-e1f5-40f9-99f4-2bae028337e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"packageserver-78d4b6b677-npmx4_openshift-operator-lifecycle-manager(319dc882-e1f5-40f9-99f4-2bae028337e5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-78d4b6b677-npmx4_openshift-operator-lifecycle-manager_319dc882-e1f5-40f9-99f4-2bae028337e5_0(52cca691f67d0d082671dad0dab6ebb77bf536eb7470afd44f2253d4a32a6833): error adding pod 
openshift-operator-lifecycle-manager_packageserver-78d4b6b677-npmx4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"52cca691f67d0d082671dad0dab6ebb77bf536eb7470afd44f2253d4a32a6833\\\" Netns:\\\"/var/run/netns/12efe8d7-d340-47f0-8330-fd6898846acb\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-78d4b6b677-npmx4;K8S_POD_INFRA_CONTAINER_ID=52cca691f67d0d082671dad0dab6ebb77bf536eb7470afd44f2253d4a32a6833;K8S_POD_UID=319dc882-e1f5-40f9-99f4-2bae028337e5\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4/319dc882-e1f5-40f9-99f4-2bae028337e5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-78d4b6b677-npmx4?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" 
podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" Feb 16 21:00:18.780307 master-0 kubenswrapper[7926]: E0216 21:00:18.780208 7926 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 21:00:18.780595 master-0 kubenswrapper[7926]: E0216 21:00:18.780450 7926 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.019s" Feb 16 21:00:18.789448 master-0 kubenswrapper[7926]: I0216 21:00:18.789302 7926 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 16 21:00:19.100171 master-0 kubenswrapper[7926]: I0216 21:00:19.099902 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:00:19.100171 master-0 kubenswrapper[7926]: I0216 21:00:19.100029 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:00:19.280323 master-0 kubenswrapper[7926]: I0216 21:00:19.280234 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" Feb 16 21:00:19.281111 master-0 kubenswrapper[7926]: I0216 21:00:19.281068 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" Feb 16 21:00:20.933294 master-0 kubenswrapper[7926]: E0216 21:00:20.932625 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:00:10Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:00:10Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:00:10Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:00:10Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:3e2f869b1c4f98a628b2e54c1516a0d0c09c760c91e0e1a940cb76149217661b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:97930d07a108f20287bd5ceb046a5ab125604b2e3564077db9f7d7c077cc5852\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1701129928},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\"],\\\"sizeBytes\\\":1631983282},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:0b4dc203ac00318362470f07842ed97dc1c724d32fa07c1613f15fcf4bf54ec8\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cc6c845176bbdca205e7c9628ea993ed70da3b2516bac35d68d9f52059fad674\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1234421961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181\\\"],\\\"sizeBytes\\\":1232696860},{\\\"names\\\":[\\\"registry.redhat.io/redhat/c
ommunity-operator-index@sha256:06dcb25b4ae74ef159663cc2318f84e4665c7889b38ed62940259e5edd2b576f\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:a81101fb2bf3c75acf3e62bf09b19b67bccbde0faf09bd379a491f5eadb8afc1\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1213098166},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:28df36269fc553eb1adba5566d6dfc258a1a74063c4cfe8b5bdd3f202591cf56\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:7fa59a55753e6c646b3b56a1a7080a5d70767fb964f1857c411fdf4e05ad4c71\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1201887930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42\\\"],\\\"sizeBytes\\\":987280724},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\"],\\\"sizeBytes\\\":938665460},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc\\\"],\\\"sizeBytes\\\":913084961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1faa2081a881db884a86bdfe33fcb6a6af1d14c3e9ee5c44dfe4b09045684e13\\\"],\\\"sizeBytes\\\":875178413},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072\\\"],\\\"sizeBytes\\\":870929735},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c\\\"],\\\"sizeBytes\\\":857432360},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07093043bca0089b3c56d9e5331e68f549541e5661e2a39a260aa534dc9528bd\\\"],\\\"sizeBytes\\\":767663184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e30865ea7d55b7
6cb925c7d26c650f0bc70fd9a02d7d59d0fe1a3024426229ad\\\"],\\\"sizeBytes\\\":682673937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7\\\"],\\\"sizeBytes\\\":677894171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55\\\"],\\\"sizeBytes\\\":672642165},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e\\\"],\\\"sizeBytes\\\":616473928},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95\\\"],\\\"sizeBytes\\\":584205881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954\\\"],\\\"sizeBytes\\\":576983707},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee\\\"],\\\"sizeBytes\\\":553036394},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471\\\"],\\\"sizeBytes\\\":552251951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861\\\"],\\\"sizeBytes\\\":543577525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\\\"],\\\"sizeBytes\\\":524042902},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d\\\"],\\\"sizeBytes\\\":523760203},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399\\\"],\\\"sizeBytes\\\":513211213},{\\\"names\\
\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc\\\"],\\\"sizeBytes\\\":512819769},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\"],\\\"sizeBytes\\\":509806416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a90d19460fbc705172df7759a3da394930623c6b6974620b79ffa07bab53c51f\\\"],\\\"sizeBytes\\\":508404525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963\\\"],\\\"sizeBytes\\\":508050651},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5\\\"],\\\"sizeBytes\\\":507103881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3\\\"],\\\"sizeBytes\\\":506056636},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1\\\"],\\\"sizeBytes\\\":505990615},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39\\\"],\\\"sizeBytes\\\":503717987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e\\\"],\\\"sizeBytes\\\":503374574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88\\\"],\\\"sizeBytes\\\":502798848},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1\\\"],\\\"sizeBytes\\\":501305896},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc8485
6eb116bd3597cc8e042e9f0a\\\"],\\\"sizeBytes\\\":501222351},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192\\\"],\\\"sizeBytes\\\":500175306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\\\"],\\\"sizeBytes\\\":500068323},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6ab8803bac3ebada13e90d9dd6208301b981488277cdeb847c25ff8002f5a30\\\"],\\\"sizeBytes\\\":499489508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144\\\"],\\\"sizeBytes\\\":499445182},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b\\\"],\\\"sizeBytes\\\":490819380},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b\\\"],\\\"sizeBytes\\\":489891070},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38\\\"],\\\"sizeBytes\\\":481921522},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041\\\"],\\\"sizeBytes\\\":479280723},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ea13b0cbfe9be0d3d7ea80d50e512af6a453921a553c7c79b566530142b611b\\\"],\\\"sizeBytes\\\":479006001},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b8fb1f11df51c131f5be8ddfc1b1c95ac13481f58d2dcd5a465a4a8341c0f49\\\"],\\\"sizeBytes\\\":465648392},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb\\\"],\\\"sizeBytes\\\":465507019},{\\\"names\\\":[\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:e1c8b9784a60860a08bd47935f0767b7b7f8f36c5c0adb7623a31b82c01d4c09\\\"],\\\"sizeBytes\\\":463090242},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e7ac69aff2f28f6b3cbdb166c7dac7a3490167bcd670cd7057bdde1e1e7684d\\\"],\\\"sizeBytes\\\":462065055}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:00:20.983956 master-0 kubenswrapper[7926]: E0216 21:00:20.983783 7926 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{machine-config-operator-84976bb859-jwh5s.1894d5ac01e59545 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-operator-84976bb859-jwh5s,UID:ff193060-a272-4e4e-990a-83ac410f523d,APIVersion:v1,ResourceVersion:8196,FieldPath:spec.containers{machine-config-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:58:08.206361925 +0000 UTC m=+59.841262225,LastTimestamp:2026-02-16 20:58:08.206361925 +0000 UTC m=+59.841262225,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 21:00:21.305892 master-0 kubenswrapper[7926]: I0216 21:00:21.305805 7926 generic.go:334] "Generic (PLEG): container finished" podID="70d217a9-86b7-47b9-a7da-9ac920b9c7c2" containerID="e960726eec7f4c030bcd77b5c00f9a27240da71756776e4b20d66b6c394494f7" exitCode=0 Feb 16 21:00:21.660543 master-0 
kubenswrapper[7926]: I0216 21:00:21.659132 7926 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-6rmhq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused" start-of-body= Feb 16 21:00:21.660543 master-0 kubenswrapper[7926]: I0216 21:00:21.659261 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" podUID="b28234d1-1d9a-4d9f-9ad1-e3c682bed492" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused" Feb 16 21:00:22.099585 master-0 kubenswrapper[7926]: I0216 21:00:22.099392 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:00:22.099585 master-0 kubenswrapper[7926]: I0216 21:00:22.099533 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:00:22.457832 master-0 kubenswrapper[7926]: E0216 21:00:22.457544 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting 
headers)" interval="7s" Feb 16 21:00:25.099625 master-0 kubenswrapper[7926]: I0216 21:00:25.099505 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:00:25.099625 master-0 kubenswrapper[7926]: I0216 21:00:25.099604 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:00:26.835037 master-0 kubenswrapper[7926]: I0216 21:00:26.834893 7926 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-8kdgg container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" start-of-body= Feb 16 21:00:26.836121 master-0 kubenswrapper[7926]: I0216 21:00:26.835034 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" podUID="e8194cdc-3133-49e2-9579-a747c0bf2b16" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" Feb 16 21:00:26.918254 master-0 kubenswrapper[7926]: I0216 21:00:26.918117 7926 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-qzs2g container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": 
dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Feb 16 21:00:26.918608 master-0 kubenswrapper[7926]: I0216 21:00:26.918288 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" podUID="1a986ba3-2aea-4133-a05b-f69d4e0d8d3b" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 16 21:00:27.094716 master-0 kubenswrapper[7926]: I0216 21:00:27.094477 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" start-of-body= Feb 16 21:00:27.094716 master-0 kubenswrapper[7926]: I0216 21:00:27.094598 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" Feb 16 21:00:28.099639 master-0 kubenswrapper[7926]: I0216 21:00:28.099470 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:00:28.101181 master-0 kubenswrapper[7926]: I0216 21:00:28.099640 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" 
containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:00:30.625577 master-0 kubenswrapper[7926]: E0216 21:00:30.625501 7926 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 21:00:30.625577 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-sn2nh_openshift-marketplace_f275e79f-923c-4d3a-8ed4-084a122ddcf4_0(09478beb31e1e909784a70dcdbbf4206c6bc9b2ef42dbea66247494f01377d91): error adding pod openshift-marketplace_redhat-marketplace-sn2nh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"09478beb31e1e909784a70dcdbbf4206c6bc9b2ef42dbea66247494f01377d91" Netns:"/var/run/netns/3ca6f385-5fed-4657-b678-9f83530065c4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sn2nh;K8S_POD_INFRA_CONTAINER_ID=09478beb31e1e909784a70dcdbbf4206c6bc9b2ef42dbea66247494f01377d91;K8S_POD_UID=f275e79f-923c-4d3a-8ed4-084a122ddcf4" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sn2nh] networking: Multus: [openshift-marketplace/redhat-marketplace-sn2nh/f275e79f-923c-4d3a-8ed4-084a122ddcf4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sn2nh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:00:30.625577 master-0 kubenswrapper[7926]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:00:30.625577 master-0 kubenswrapper[7926]: > Feb 16 21:00:30.626058 master-0 kubenswrapper[7926]: E0216 21:00:30.625602 7926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 21:00:30.626058 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-sn2nh_openshift-marketplace_f275e79f-923c-4d3a-8ed4-084a122ddcf4_0(09478beb31e1e909784a70dcdbbf4206c6bc9b2ef42dbea66247494f01377d91): error adding pod openshift-marketplace_redhat-marketplace-sn2nh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"09478beb31e1e909784a70dcdbbf4206c6bc9b2ef42dbea66247494f01377d91" Netns:"/var/run/netns/3ca6f385-5fed-4657-b678-9f83530065c4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sn2nh;K8S_POD_INFRA_CONTAINER_ID=09478beb31e1e909784a70dcdbbf4206c6bc9b2ef42dbea66247494f01377d91;K8S_POD_UID=f275e79f-923c-4d3a-8ed4-084a122ddcf4" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sn2nh] networking: Multus: [openshift-marketplace/redhat-marketplace-sn2nh/f275e79f-923c-4d3a-8ed4-084a122ddcf4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: status update failed for pod /: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sn2nh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:00:30.626058 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:00:30.626058 master-0 kubenswrapper[7926]: > pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 21:00:30.626058 master-0 kubenswrapper[7926]: E0216 21:00:30.625631 7926 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 21:00:30.626058 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-sn2nh_openshift-marketplace_f275e79f-923c-4d3a-8ed4-084a122ddcf4_0(09478beb31e1e909784a70dcdbbf4206c6bc9b2ef42dbea66247494f01377d91): error adding pod openshift-marketplace_redhat-marketplace-sn2nh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"09478beb31e1e909784a70dcdbbf4206c6bc9b2ef42dbea66247494f01377d91" Netns:"/var/run/netns/3ca6f385-5fed-4657-b678-9f83530065c4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sn2nh;K8S_POD_INFRA_CONTAINER_ID=09478beb31e1e909784a70dcdbbf4206c6bc9b2ef42dbea66247494f01377d91;K8S_POD_UID=f275e79f-923c-4d3a-8ed4-084a122ddcf4" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sn2nh] networking: Multus: [openshift-marketplace/redhat-marketplace-sn2nh/f275e79f-923c-4d3a-8ed4-084a122ddcf4]: error setting the 
networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sn2nh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:00:30.626058 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:00:30.626058 master-0 kubenswrapper[7926]: > pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 21:00:30.626058 master-0 kubenswrapper[7926]: E0216 21:00:30.625794 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"redhat-marketplace-sn2nh_openshift-marketplace(f275e79f-923c-4d3a-8ed4-084a122ddcf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"redhat-marketplace-sn2nh_openshift-marketplace(f275e79f-923c-4d3a-8ed4-084a122ddcf4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-sn2nh_openshift-marketplace_f275e79f-923c-4d3a-8ed4-084a122ddcf4_0(09478beb31e1e909784a70dcdbbf4206c6bc9b2ef42dbea66247494f01377d91): error adding pod openshift-marketplace_redhat-marketplace-sn2nh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"09478beb31e1e909784a70dcdbbf4206c6bc9b2ef42dbea66247494f01377d91\\\" 
Netns:\\\"/var/run/netns/3ca6f385-5fed-4657-b678-9f83530065c4\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sn2nh;K8S_POD_INFRA_CONTAINER_ID=09478beb31e1e909784a70dcdbbf4206c6bc9b2ef42dbea66247494f01377d91;K8S_POD_UID=f275e79f-923c-4d3a-8ed4-084a122ddcf4\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sn2nh] networking: Multus: [openshift-marketplace/redhat-marketplace-sn2nh/f275e79f-923c-4d3a-8ed4-084a122ddcf4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sn2nh?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/redhat-marketplace-sn2nh" podUID="f275e79f-923c-4d3a-8ed4-084a122ddcf4" Feb 16 21:00:30.733088 master-0 kubenswrapper[7926]: E0216 21:00:30.733047 7926 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 21:00:30.733088 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api_ba294358-051a-4f09-b182-710d3d6778c5_0(401e080120bb7e6bb9cb0590c2933b1d35142b51b9a53c449d90c9b6be9e20d0): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"401e080120bb7e6bb9cb0590c2933b1d35142b51b9a53c449d90c9b6be9e20d0" Netns:"/var/run/netns/576a436e-cf10-4a8d-ae28-cfcd61d89dd3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-27jwb;K8S_POD_INFRA_CONTAINER_ID=401e080120bb7e6bb9cb0590c2933b1d35142b51b9a53c449d90c9b6be9e20d0;K8S_POD_UID=ba294358-051a-4f09-b182-710d3d6778c5" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb/ba294358-051a-4f09-b182-710d3d6778c5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-27jwb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:00:30.733088 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:00:30.733088 master-0 
kubenswrapper[7926]: > Feb 16 21:00:30.733258 master-0 kubenswrapper[7926]: E0216 21:00:30.733118 7926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 21:00:30.733258 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api_ba294358-051a-4f09-b182-710d3d6778c5_0(401e080120bb7e6bb9cb0590c2933b1d35142b51b9a53c449d90c9b6be9e20d0): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"401e080120bb7e6bb9cb0590c2933b1d35142b51b9a53c449d90c9b6be9e20d0" Netns:"/var/run/netns/576a436e-cf10-4a8d-ae28-cfcd61d89dd3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-27jwb;K8S_POD_INFRA_CONTAINER_ID=401e080120bb7e6bb9cb0590c2933b1d35142b51b9a53c449d90c9b6be9e20d0;K8S_POD_UID=ba294358-051a-4f09-b182-710d3d6778c5" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb/ba294358-051a-4f09-b182-710d3d6778c5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-27jwb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:00:30.733258 master-0 kubenswrapper[7926]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:00:30.733258 master-0 kubenswrapper[7926]: > pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 21:00:30.733258 master-0 kubenswrapper[7926]: E0216 21:00:30.733138 7926 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 21:00:30.733258 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api_ba294358-051a-4f09-b182-710d3d6778c5_0(401e080120bb7e6bb9cb0590c2933b1d35142b51b9a53c449d90c9b6be9e20d0): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"401e080120bb7e6bb9cb0590c2933b1d35142b51b9a53c449d90c9b6be9e20d0" Netns:"/var/run/netns/576a436e-cf10-4a8d-ae28-cfcd61d89dd3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-27jwb;K8S_POD_INFRA_CONTAINER_ID=401e080120bb7e6bb9cb0590c2933b1d35142b51b9a53c449d90c9b6be9e20d0;K8S_POD_UID=ba294358-051a-4f09-b182-710d3d6778c5" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb/ba294358-051a-4f09-b182-710d3d6778c5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: SetNetworkStatus: failed to update the pod 
machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-27jwb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:00:30.733258 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:00:30.733258 master-0 kubenswrapper[7926]: > pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 21:00:30.733258 master-0 kubenswrapper[7926]: E0216 21:00:30.733199 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api(ba294358-051a-4f09-b182-710d3d6778c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api(ba294358-051a-4f09-b182-710d3d6778c5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api_ba294358-051a-4f09-b182-710d3d6778c5_0(401e080120bb7e6bb9cb0590c2933b1d35142b51b9a53c449d90c9b6be9e20d0): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"401e080120bb7e6bb9cb0590c2933b1d35142b51b9a53c449d90c9b6be9e20d0\\\" Netns:\\\"/var/run/netns/576a436e-cf10-4a8d-ae28-cfcd61d89dd3\\\" IfName:\\\"eth0\\\" 
Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-27jwb;K8S_POD_INFRA_CONTAINER_ID=401e080120bb7e6bb9cb0590c2933b1d35142b51b9a53c449d90c9b6be9e20d0;K8S_POD_UID=ba294358-051a-4f09-b182-710d3d6778c5\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb/ba294358-051a-4f09-b182-710d3d6778c5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-27jwb?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" podUID="ba294358-051a-4f09-b182-710d3d6778c5" Feb 16 21:00:30.934001 master-0 kubenswrapper[7926]: E0216 21:00:30.933842 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:00:31.099537 master-0 kubenswrapper[7926]: 
I0216 21:00:31.099455 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:00:31.099791 master-0 kubenswrapper[7926]: I0216 21:00:31.099543 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:00:31.366521 master-0 kubenswrapper[7926]: I0216 21:00:31.366449 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 21:00:31.366829 master-0 kubenswrapper[7926]: I0216 21:00:31.366551 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 21:00:31.366918 master-0 kubenswrapper[7926]: I0216 21:00:31.366903 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 21:00:31.367315 master-0 kubenswrapper[7926]: I0216 21:00:31.367266 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 21:00:31.659935 master-0 kubenswrapper[7926]: I0216 21:00:31.659711 7926 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-6rmhq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused" start-of-body= Feb 16 21:00:31.659935 master-0 kubenswrapper[7926]: I0216 21:00:31.659857 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" podUID="b28234d1-1d9a-4d9f-9ad1-e3c682bed492" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused" Feb 16 21:00:34.098970 master-0 kubenswrapper[7926]: I0216 21:00:34.098815 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:00:34.100761 master-0 kubenswrapper[7926]: I0216 21:00:34.098957 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:00:36.834929 master-0 kubenswrapper[7926]: I0216 21:00:36.834786 7926 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-8kdgg container/manager namespace/openshift-catalogd: Readiness probe status=failure 
output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" start-of-body= Feb 16 21:00:36.834929 master-0 kubenswrapper[7926]: I0216 21:00:36.834849 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" podUID="e8194cdc-3133-49e2-9579-a747c0bf2b16" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" Feb 16 21:00:36.835536 master-0 kubenswrapper[7926]: I0216 21:00:36.835215 7926 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-8kdgg container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.42:8081/healthz\": dial tcp 10.128.0.42:8081: connect: connection refused" start-of-body= Feb 16 21:00:36.835536 master-0 kubenswrapper[7926]: I0216 21:00:36.835291 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" podUID="e8194cdc-3133-49e2-9579-a747c0bf2b16" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.42:8081/healthz\": dial tcp 10.128.0.42:8081: connect: connection refused" Feb 16 21:00:36.916474 master-0 kubenswrapper[7926]: I0216 21:00:36.916405 7926 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-qzs2g container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Feb 16 21:00:36.916725 master-0 kubenswrapper[7926]: I0216 21:00:36.916459 7926 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-qzs2g container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: 
connect: connection refused" start-of-body= Feb 16 21:00:36.916725 master-0 kubenswrapper[7926]: I0216 21:00:36.916482 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" podUID="1a986ba3-2aea-4133-a05b-f69d4e0d8d3b" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 16 21:00:36.916725 master-0 kubenswrapper[7926]: I0216 21:00:36.916529 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" podUID="1a986ba3-2aea-4133-a05b-f69d4e0d8d3b" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 16 21:00:37.094277 master-0 kubenswrapper[7926]: I0216 21:00:37.094124 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" start-of-body= Feb 16 21:00:37.094277 master-0 kubenswrapper[7926]: I0216 21:00:37.094181 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" Feb 16 21:00:37.099595 master-0 kubenswrapper[7926]: I0216 21:00:37.099526 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get 
\"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:00:37.099701 master-0 kubenswrapper[7926]: I0216 21:00:37.099636 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:00:39.459247 master-0 kubenswrapper[7926]: E0216 21:00:39.459142 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 16 21:00:40.098451 master-0 kubenswrapper[7926]: I0216 21:00:40.098332 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:00:40.098847 master-0 kubenswrapper[7926]: I0216 21:00:40.098464 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:00:40.934276 master-0 kubenswrapper[7926]: E0216 21:00:40.934161 7926 
kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:00:41.659624 master-0 kubenswrapper[7926]: I0216 21:00:41.659520 7926 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-6rmhq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused" start-of-body= Feb 16 21:00:41.660859 master-0 kubenswrapper[7926]: I0216 21:00:41.659619 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" podUID="b28234d1-1d9a-4d9f-9ad1-e3c682bed492" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused" Feb 16 21:00:42.098667 master-0 kubenswrapper[7926]: I0216 21:00:42.098577 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:00:42.099392 master-0 kubenswrapper[7926]: I0216 21:00:42.098741 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:00:42.430737 master-0 kubenswrapper[7926]: I0216 21:00:42.430584 7926 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-d8bf84b88-8pqbl_302156cc-9dca-4a66-9e6a-ba2c7e738c92/control-plane-machine-set-operator/0.log" Feb 16 21:00:42.430737 master-0 kubenswrapper[7926]: I0216 21:00:42.430708 7926 generic.go:334] "Generic (PLEG): container finished" podID="302156cc-9dca-4a66-9e6a-ba2c7e738c92" containerID="03d8daaa264d52b607ef3a2e1ee4da18d94e4e7433715288335ef0a92bd90db1" exitCode=1 Feb 16 21:00:42.433753 master-0 kubenswrapper[7926]: I0216 21:00:42.433720 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-xbd96_59237aa6-6250-4619-8ee5-abae59f04b57/openshift-config-operator/2.log" Feb 16 21:00:42.434963 master-0 kubenswrapper[7926]: I0216 21:00:42.434928 7926 generic.go:334] "Generic (PLEG): container finished" podID="59237aa6-6250-4619-8ee5-abae59f04b57" containerID="b4a34c89cb81e9504af7117b89a4c5b290e24d0a5142668851022560c4487a78" exitCode=255 Feb 16 21:00:44.450572 master-0 kubenswrapper[7926]: I0216 21:00:44.450468 7926 generic.go:334] "Generic (PLEG): container finished" podID="484154d0-66c8-4d0e-bf1b-f48d0abfe628" containerID="fd75cc94a5c6af861419130cf9adb9c00eea8b412cbb5bebb25e798a841c1376" exitCode=0 Feb 16 21:00:45.098640 master-0 kubenswrapper[7926]: I0216 21:00:45.098563 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:00:45.098997 master-0 kubenswrapper[7926]: I0216 21:00:45.098681 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:00:45.456285 master-0 kubenswrapper[7926]: I0216 21:00:45.456149 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/1.log" Feb 16 21:00:45.457159 master-0 kubenswrapper[7926]: I0216 21:00:45.457123 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/0.log" Feb 16 21:00:45.457220 master-0 kubenswrapper[7926]: I0216 21:00:45.457191 7926 generic.go:334] "Generic (PLEG): container finished" podID="8b648d9e-a892-4951-b0e2-fed6b16273d4" containerID="ea274ca75c9480032670a52e0f8060808dc2b8ae8a9455bb06740d96dc246ff9" exitCode=1 Feb 16 21:00:46.493405 master-0 kubenswrapper[7926]: I0216 21:00:46.493302 7926 patch_prober.go:28] interesting pod/etcd-operator-67bf55ccdd-8cllz container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" start-of-body= Feb 16 21:00:46.493405 master-0 kubenswrapper[7926]: I0216 21:00:46.493378 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" podUID="70d217a9-86b7-47b9-a7da-9ac920b9c7c2" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" Feb 16 21:00:46.834477 master-0 kubenswrapper[7926]: I0216 21:00:46.834376 7926 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-8kdgg container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: 
connection refused" start-of-body= Feb 16 21:00:46.834844 master-0 kubenswrapper[7926]: I0216 21:00:46.834492 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" podUID="e8194cdc-3133-49e2-9579-a747c0bf2b16" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" Feb 16 21:00:46.918212 master-0 kubenswrapper[7926]: I0216 21:00:46.918081 7926 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-qzs2g container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Feb 16 21:00:46.918480 master-0 kubenswrapper[7926]: I0216 21:00:46.918220 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" podUID="1a986ba3-2aea-4133-a05b-f69d4e0d8d3b" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 16 21:00:47.095002 master-0 kubenswrapper[7926]: I0216 21:00:47.094799 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" start-of-body= Feb 16 21:00:47.095002 master-0 kubenswrapper[7926]: I0216 21:00:47.094893 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": 
dial tcp 10.128.0.54:8443: connect: connection refused" Feb 16 21:00:47.473165 master-0 kubenswrapper[7926]: I0216 21:00:47.473023 7926 generic.go:334] "Generic (PLEG): container finished" podID="57b94ed4-8f0b-4223-bdaf-4316859d8ad3" containerID="03a2959cd7d7099deb65fa1d96597cd3ebf6031635df4c580705d88b4f782bc3" exitCode=0 Feb 16 21:00:48.099056 master-0 kubenswrapper[7926]: I0216 21:00:48.098941 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:00:48.099918 master-0 kubenswrapper[7926]: I0216 21:00:48.099047 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:00:48.483159 master-0 kubenswrapper[7926]: I0216 21:00:48.482984 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-6c46d95f74-2nz2q_c62bb2b4-1469-4e0d-810f-cd6e21ee908a/machine-approver-controller/0.log" Feb 16 21:00:48.483951 master-0 kubenswrapper[7926]: I0216 21:00:48.483850 7926 generic.go:334] "Generic (PLEG): container finished" podID="c62bb2b4-1469-4e0d-810f-cd6e21ee908a" containerID="f620d164d8f2ed90825e926c6ef1b62a164af6f143a6bcf2e3725b1b1b8889f4" exitCode=255 Feb 16 21:00:50.934826 master-0 kubenswrapper[7926]: E0216 21:00:50.934675 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)" Feb 16 21:00:51.098792 master-0 kubenswrapper[7926]: I0216 21:00:51.098624 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:00:51.098792 master-0 kubenswrapper[7926]: I0216 21:00:51.098748 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:00:51.659565 master-0 kubenswrapper[7926]: I0216 21:00:51.659481 7926 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-6rmhq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused" start-of-body= Feb 16 21:00:51.659565 master-0 kubenswrapper[7926]: I0216 21:00:51.659554 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" podUID="b28234d1-1d9a-4d9f-9ad1-e3c682bed492" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused" Feb 16 21:00:52.793346 master-0 kubenswrapper[7926]: E0216 21:00:52.793255 7926 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 21:00:52.794205 master-0 kubenswrapper[7926]: E0216 21:00:52.793521 7926 kubelet.go:2526] "Housekeeping took 
longer than expected" err="housekeeping took too long" expected="1s" actual="34.013s" Feb 16 21:00:52.794205 master-0 kubenswrapper[7926]: I0216 21:00:52.793603 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b8vtc" Feb 16 21:00:52.794205 master-0 kubenswrapper[7926]: I0216 21:00:52.793642 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dhh2p" Feb 16 21:00:52.794205 master-0 kubenswrapper[7926]: I0216 21:00:52.793706 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dhh2p" Feb 16 21:00:52.794205 master-0 kubenswrapper[7926]: I0216 21:00:52.793754 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b8vtc" Feb 16 21:00:52.811455 master-0 kubenswrapper[7926]: I0216 21:00:52.811352 7926 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 16 21:00:53.732490 master-0 kubenswrapper[7926]: I0216 21:00:53.732429 7926 patch_prober.go:28] interesting pod/controller-manager-7c6548b89f-s8dv7 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: connection refused" start-of-body= Feb 16 21:00:53.732773 master-0 kubenswrapper[7926]: I0216 21:00:53.732499 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" podUID="57b94ed4-8f0b-4223-bdaf-4316859d8ad3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: connection refused" Feb 16 21:00:53.732773 master-0 kubenswrapper[7926]: I0216 21:00:53.732574 7926 patch_prober.go:28] interesting 
pod/controller-manager-7c6548b89f-s8dv7 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: connection refused" start-of-body= Feb 16 21:00:53.732773 master-0 kubenswrapper[7926]: I0216 21:00:53.732609 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" podUID="57b94ed4-8f0b-4223-bdaf-4316859d8ad3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: connection refused" Feb 16 21:00:54.099483 master-0 kubenswrapper[7926]: I0216 21:00:54.099353 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:00:54.099483 master-0 kubenswrapper[7926]: I0216 21:00:54.099479 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:00:54.987254 master-0 kubenswrapper[7926]: E0216 21:00:54.987050 7926 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{cluster-samples-operator-f8cbff74c-d7lfl.1894d5adf4847e3b openshift-cluster-samples-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-cluster-samples-operator,Name:cluster-samples-operator-f8cbff74c-d7lfl,UID:55095f4f-cac0-456c-9ccc-45869392408c,APIVersion:v1,ResourceVersion:7781,FieldPath:spec.containers{cluster-samples-operator},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e446723bbab96c4e4662ff058d5eccba72d0c36d26c7b8b3f07183fa49d3ab9\" in 15.917s (15.917s including waiting). Image size: 450350026 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:58:16.571829819 +0000 UTC m=+68.206730119,LastTimestamp:2026-02-16 20:58:16.571829819 +0000 UTC m=+68.206730119,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 21:00:56.460916 master-0 kubenswrapper[7926]: E0216 21:00:56.460747 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 16 21:00:56.834529 master-0 kubenswrapper[7926]: I0216 21:00:56.834411 7926 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-8kdgg container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" start-of-body= Feb 16 21:00:56.834773 master-0 kubenswrapper[7926]: I0216 21:00:56.834548 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" podUID="e8194cdc-3133-49e2-9579-a747c0bf2b16" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" Feb 16 21:00:56.916834 
master-0 kubenswrapper[7926]: I0216 21:00:56.916764 7926 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-qzs2g container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Feb 16 21:00:56.917047 master-0 kubenswrapper[7926]: I0216 21:00:56.916843 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" podUID="1a986ba3-2aea-4133-a05b-f69d4e0d8d3b" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 16 21:00:57.094709 master-0 kubenswrapper[7926]: I0216 21:00:57.094574 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" start-of-body= Feb 16 21:00:57.094870 master-0 kubenswrapper[7926]: I0216 21:00:57.094692 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" Feb 16 21:00:57.099993 master-0 kubenswrapper[7926]: I0216 21:00:57.099776 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 
21:00:57.099993 master-0 kubenswrapper[7926]: I0216 21:00:57.099873 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:01:00.099500 master-0 kubenswrapper[7926]: I0216 21:01:00.099389 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:01:00.100756 master-0 kubenswrapper[7926]: I0216 21:01:00.099508 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:01:00.935879 master-0 kubenswrapper[7926]: E0216 21:01:00.935712 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:01:00.935879 master-0 kubenswrapper[7926]: E0216 21:01:00.935760 7926 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 21:01:01.659608 master-0 kubenswrapper[7926]: I0216 21:01:01.659561 7926 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-6rmhq container/marketplace-operator namespace/openshift-marketplace: Readiness probe 
status=failure output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused" start-of-body= Feb 16 21:01:01.660743 master-0 kubenswrapper[7926]: I0216 21:01:01.660708 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" podUID="b28234d1-1d9a-4d9f-9ad1-e3c682bed492" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused" Feb 16 21:01:03.098539 master-0 kubenswrapper[7926]: I0216 21:01:03.098436 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:01:03.099136 master-0 kubenswrapper[7926]: I0216 21:01:03.098593 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:01:03.732198 master-0 kubenswrapper[7926]: I0216 21:01:03.732076 7926 patch_prober.go:28] interesting pod/controller-manager-7c6548b89f-s8dv7 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: connection refused" start-of-body= Feb 16 21:01:03.732198 master-0 kubenswrapper[7926]: I0216 21:01:03.732144 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" podUID="57b94ed4-8f0b-4223-bdaf-4316859d8ad3" 
containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: connection refused" Feb 16 21:01:03.732198 master-0 kubenswrapper[7926]: I0216 21:01:03.732152 7926 patch_prober.go:28] interesting pod/controller-manager-7c6548b89f-s8dv7 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: connection refused" start-of-body= Feb 16 21:01:03.732922 master-0 kubenswrapper[7926]: I0216 21:01:03.732286 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" podUID="57b94ed4-8f0b-4223-bdaf-4316859d8ad3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: connection refused" Feb 16 21:01:06.099305 master-0 kubenswrapper[7926]: I0216 21:01:06.099211 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:01:06.099858 master-0 kubenswrapper[7926]: I0216 21:01:06.099298 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:01:06.834678 master-0 kubenswrapper[7926]: I0216 21:01:06.834574 7926 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-8kdgg container/manager namespace/openshift-catalogd: Readiness probe status=failure 
output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" start-of-body= Feb 16 21:01:06.835141 master-0 kubenswrapper[7926]: I0216 21:01:06.834700 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" podUID="e8194cdc-3133-49e2-9579-a747c0bf2b16" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" Feb 16 21:01:06.917063 master-0 kubenswrapper[7926]: I0216 21:01:06.916923 7926 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-qzs2g container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Feb 16 21:01:06.917063 master-0 kubenswrapper[7926]: I0216 21:01:06.917016 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" podUID="1a986ba3-2aea-4133-a05b-f69d4e0d8d3b" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 16 21:01:07.094279 master-0 kubenswrapper[7926]: I0216 21:01:07.094108 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" start-of-body= Feb 16 21:01:07.094279 master-0 kubenswrapper[7926]: I0216 21:01:07.094191 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" 
containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" Feb 16 21:01:08.759865 master-0 kubenswrapper[7926]: I0216 21:01:08.759758 7926 status_manager.go:851] "Failed to get status for pod" podUID="ff193060-a272-4e4e-990a-83ac410f523d" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods machine-config-operator-84976bb859-jwh5s)" Feb 16 21:01:09.099359 master-0 kubenswrapper[7926]: I0216 21:01:09.099247 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:01:09.099628 master-0 kubenswrapper[7926]: I0216 21:01:09.099598 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:01:11.659295 master-0 kubenswrapper[7926]: I0216 21:01:11.659200 7926 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-6rmhq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused" start-of-body= Feb 16 21:01:11.659295 master-0 kubenswrapper[7926]: I0216 21:01:11.659284 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" 
podUID="b28234d1-1d9a-4d9f-9ad1-e3c682bed492" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused" Feb 16 21:01:12.099008 master-0 kubenswrapper[7926]: I0216 21:01:12.098912 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:01:12.099008 master-0 kubenswrapper[7926]: I0216 21:01:12.099002 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:01:12.959799 master-0 kubenswrapper[7926]: E0216 21:01:12.959750 7926 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 21:01:12.959799 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-j5kwc_openshift-marketplace_ce229d27-837d-4a98-80fc-d56877ae39b8_0(a2e33a31b62cf9649be5af92c2a383283d21f0f3bb930189d239f88b2b93dc12): error adding pod openshift-marketplace_community-operators-j5kwc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a2e33a31b62cf9649be5af92c2a383283d21f0f3bb930189d239f88b2b93dc12" Netns:"/var/run/netns/070fdb23-dd10-4ba3-906b-5e8108bea483" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-j5kwc;K8S_POD_INFRA_CONTAINER_ID=a2e33a31b62cf9649be5af92c2a383283d21f0f3bb930189d239f88b2b93dc12;K8S_POD_UID=ce229d27-837d-4a98-80fc-d56877ae39b8" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-j5kwc] networking: Multus: [openshift-marketplace/community-operators-j5kwc/ce229d27-837d-4a98-80fc-d56877ae39b8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-j5kwc in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-j5kwc in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j5kwc?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:01:12.959799 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:01:12.959799 master-0 kubenswrapper[7926]: > Feb 16 21:01:12.960291 master-0 kubenswrapper[7926]: E0216 21:01:12.960274 7926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 21:01:12.960291 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-j5kwc_openshift-marketplace_ce229d27-837d-4a98-80fc-d56877ae39b8_0(a2e33a31b62cf9649be5af92c2a383283d21f0f3bb930189d239f88b2b93dc12): error adding pod openshift-marketplace_community-operators-j5kwc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): 
CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a2e33a31b62cf9649be5af92c2a383283d21f0f3bb930189d239f88b2b93dc12" Netns:"/var/run/netns/070fdb23-dd10-4ba3-906b-5e8108bea483" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-j5kwc;K8S_POD_INFRA_CONTAINER_ID=a2e33a31b62cf9649be5af92c2a383283d21f0f3bb930189d239f88b2b93dc12;K8S_POD_UID=ce229d27-837d-4a98-80fc-d56877ae39b8" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-j5kwc] networking: Multus: [openshift-marketplace/community-operators-j5kwc/ce229d27-837d-4a98-80fc-d56877ae39b8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-j5kwc in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-j5kwc in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j5kwc?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:01:12.960291 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:01:12.960291 master-0 kubenswrapper[7926]: > pod="openshift-marketplace/community-operators-j5kwc" Feb 16 21:01:12.960440 master-0 kubenswrapper[7926]: E0216 21:01:12.960424 7926 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 21:01:12.960440 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_community-operators-j5kwc_openshift-marketplace_ce229d27-837d-4a98-80fc-d56877ae39b8_0(a2e33a31b62cf9649be5af92c2a383283d21f0f3bb930189d239f88b2b93dc12): error adding pod openshift-marketplace_community-operators-j5kwc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a2e33a31b62cf9649be5af92c2a383283d21f0f3bb930189d239f88b2b93dc12" Netns:"/var/run/netns/070fdb23-dd10-4ba3-906b-5e8108bea483" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-j5kwc;K8S_POD_INFRA_CONTAINER_ID=a2e33a31b62cf9649be5af92c2a383283d21f0f3bb930189d239f88b2b93dc12;K8S_POD_UID=ce229d27-837d-4a98-80fc-d56877ae39b8" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-j5kwc] networking: Multus: [openshift-marketplace/community-operators-j5kwc/ce229d27-837d-4a98-80fc-d56877ae39b8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-j5kwc in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-j5kwc in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j5kwc?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:01:12.960440 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:01:12.960440 master-0 kubenswrapper[7926]: > pod="openshift-marketplace/community-operators-j5kwc" Feb 16 
21:01:12.960667 master-0 kubenswrapper[7926]: E0216 21:01:12.960614 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"community-operators-j5kwc_openshift-marketplace(ce229d27-837d-4a98-80fc-d56877ae39b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"community-operators-j5kwc_openshift-marketplace(ce229d27-837d-4a98-80fc-d56877ae39b8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-j5kwc_openshift-marketplace_ce229d27-837d-4a98-80fc-d56877ae39b8_0(a2e33a31b62cf9649be5af92c2a383283d21f0f3bb930189d239f88b2b93dc12): error adding pod openshift-marketplace_community-operators-j5kwc to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"a2e33a31b62cf9649be5af92c2a383283d21f0f3bb930189d239f88b2b93dc12\\\" Netns:\\\"/var/run/netns/070fdb23-dd10-4ba3-906b-5e8108bea483\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-j5kwc;K8S_POD_INFRA_CONTAINER_ID=a2e33a31b62cf9649be5af92c2a383283d21f0f3bb930189d239f88b2b93dc12;K8S_POD_UID=ce229d27-837d-4a98-80fc-d56877ae39b8\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/community-operators-j5kwc] networking: Multus: [openshift-marketplace/community-operators-j5kwc/ce229d27-837d-4a98-80fc-d56877ae39b8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-j5kwc in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-j5kwc in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j5kwc?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/community-operators-j5kwc" podUID="ce229d27-837d-4a98-80fc-d56877ae39b8" Feb 16 21:01:13.462335 master-0 kubenswrapper[7926]: E0216 21:01:13.461960 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 16 21:01:13.732771 master-0 kubenswrapper[7926]: I0216 21:01:13.732578 7926 patch_prober.go:28] interesting pod/controller-manager-7c6548b89f-s8dv7 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: connection refused" start-of-body= Feb 16 21:01:13.732771 master-0 kubenswrapper[7926]: I0216 21:01:13.732724 7926 patch_prober.go:28] interesting pod/controller-manager-7c6548b89f-s8dv7 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: connection refused" start-of-body= Feb 16 21:01:13.733048 master-0 kubenswrapper[7926]: I0216 21:01:13.732724 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" podUID="57b94ed4-8f0b-4223-bdaf-4316859d8ad3" containerName="controller-manager" probeResult="failure" 
output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: connection refused" Feb 16 21:01:13.733048 master-0 kubenswrapper[7926]: I0216 21:01:13.732822 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" podUID="57b94ed4-8f0b-4223-bdaf-4316859d8ad3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: connection refused" Feb 16 21:01:15.099226 master-0 kubenswrapper[7926]: I0216 21:01:15.099146 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:01:15.100255 master-0 kubenswrapper[7926]: I0216 21:01:15.099847 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:01:15.647255 master-0 kubenswrapper[7926]: I0216 21:01:15.647135 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-cl5ld_0b02b740-5698-4e9a-90fe-2873bd0b0958/kube-apiserver-operator/1.log" Feb 16 21:01:15.647875 master-0 kubenswrapper[7926]: I0216 21:01:15.647819 7926 generic.go:334] "Generic (PLEG): container finished" podID="0b02b740-5698-4e9a-90fe-2873bd0b0958" containerID="796cedcccf27a70c4b1fc5e0f9d34776e57cab5bcbac808a8a55396fa052ee09" exitCode=255 Feb 16 21:01:15.650094 master-0 kubenswrapper[7926]: I0216 21:01:15.650032 7926 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-7p9ft_7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/kube-controller-manager-operator/2.log" Feb 16 21:01:15.650687 master-0 kubenswrapper[7926]: I0216 21:01:15.650596 7926 generic.go:334] "Generic (PLEG): container finished" podID="7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e" containerID="b91768f3b3b77b8f39dbc687f48f7d020363ab1760dd10d66f66b996778bf8dc" exitCode=255 Feb 16 21:01:15.653136 master-0 kubenswrapper[7926]: I0216 21:01:15.653085 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-8gnq5_27c20f63-9bfb-4703-94d5-0c65475e08d1/authentication-operator/2.log" Feb 16 21:01:15.653818 master-0 kubenswrapper[7926]: I0216 21:01:15.653756 7926 generic.go:334] "Generic (PLEG): container finished" podID="27c20f63-9bfb-4703-94d5-0c65475e08d1" containerID="4765f14761690375464a0e714d58564cbd8daae8b93a35914f1d74b0169d6221" exitCode=255 Feb 16 21:01:15.655983 master-0 kubenswrapper[7926]: I0216 21:01:15.655932 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-56v4p_c7333319-3fe6-4b3f-b600-6b6df49fcaff/kube-storage-version-migrator-operator/2.log" Feb 16 21:01:15.656731 master-0 kubenswrapper[7926]: I0216 21:01:15.656683 7926 generic.go:334] "Generic (PLEG): container finished" podID="c7333319-3fe6-4b3f-b600-6b6df49fcaff" containerID="47b2c5bac29b78fe7840fe916226c42b6c6d9d0126d96d3a74bd63abd7b0a9ac" exitCode=255 Feb 16 21:01:16.667383 master-0 kubenswrapper[7926]: I0216 21:01:16.667235 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-n4hfs_1b61063e-775e-421d-bf73-a6ef134293a0/network-operator/1.log" Feb 16 21:01:16.668704 master-0 kubenswrapper[7926]: I0216 21:01:16.668623 7926 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-n4hfs_1b61063e-775e-421d-bf73-a6ef134293a0/network-operator/0.log" Feb 16 21:01:16.668816 master-0 kubenswrapper[7926]: I0216 21:01:16.668730 7926 generic.go:334] "Generic (PLEG): container finished" podID="1b61063e-775e-421d-bf73-a6ef134293a0" containerID="335a1a7f7a9fe31928e784a1b8c27628b0095f9bd1bb4c356dc580de874df2a9" exitCode=255 Feb 16 21:01:16.834809 master-0 kubenswrapper[7926]: I0216 21:01:16.834624 7926 patch_prober.go:28] interesting pod/catalogd-controller-manager-67bc7c997f-8kdgg container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" start-of-body= Feb 16 21:01:16.834809 master-0 kubenswrapper[7926]: I0216 21:01:16.834795 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" podUID="e8194cdc-3133-49e2-9579-a747c0bf2b16" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" Feb 16 21:01:16.917726 master-0 kubenswrapper[7926]: I0216 21:01:16.917387 7926 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-qzs2g container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Feb 16 21:01:16.917726 master-0 kubenswrapper[7926]: I0216 21:01:16.917505 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" podUID="1a986ba3-2aea-4133-a05b-f69d4e0d8d3b" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Feb 16 21:01:17.094424 
master-0 kubenswrapper[7926]: I0216 21:01:17.094289 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" start-of-body= Feb 16 21:01:17.094858 master-0 kubenswrapper[7926]: I0216 21:01:17.094416 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" Feb 16 21:01:18.098577 master-0 kubenswrapper[7926]: I0216 21:01:18.098394 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:01:18.098577 master-0 kubenswrapper[7926]: I0216 21:01:18.098479 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:01:20.056525 master-0 kubenswrapper[7926]: E0216 21:01:20.056454 7926 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 21:01:20.056525 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_packageserver-78d4b6b677-npmx4_openshift-operator-lifecycle-manager_319dc882-e1f5-40f9-99f4-2bae028337e5_0(810e7b23c87f0b52ffe134543668db5cdf13630f25d221830dba8e2ed8de4cce): error adding pod openshift-operator-lifecycle-manager_packageserver-78d4b6b677-npmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"810e7b23c87f0b52ffe134543668db5cdf13630f25d221830dba8e2ed8de4cce" Netns:"/var/run/netns/c324e400-c9f8-42d7-92d7-2dc198b86bea" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-78d4b6b677-npmx4;K8S_POD_INFRA_CONTAINER_ID=810e7b23c87f0b52ffe134543668db5cdf13630f25d221830dba8e2ed8de4cce;K8S_POD_UID=319dc882-e1f5-40f9-99f4-2bae028337e5" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4/319dc882-e1f5-40f9-99f4-2bae028337e5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-78d4b6b677-npmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:01:20.056525 master-0 kubenswrapper[7926]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:01:20.056525 master-0 kubenswrapper[7926]: > Feb 16 21:01:20.057374 master-0 kubenswrapper[7926]: E0216 21:01:20.056553 7926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 21:01:20.057374 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-78d4b6b677-npmx4_openshift-operator-lifecycle-manager_319dc882-e1f5-40f9-99f4-2bae028337e5_0(810e7b23c87f0b52ffe134543668db5cdf13630f25d221830dba8e2ed8de4cce): error adding pod openshift-operator-lifecycle-manager_packageserver-78d4b6b677-npmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"810e7b23c87f0b52ffe134543668db5cdf13630f25d221830dba8e2ed8de4cce" Netns:"/var/run/netns/c324e400-c9f8-42d7-92d7-2dc198b86bea" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-78d4b6b677-npmx4;K8S_POD_INFRA_CONTAINER_ID=810e7b23c87f0b52ffe134543668db5cdf13630f25d221830dba8e2ed8de4cce;K8S_POD_UID=319dc882-e1f5-40f9-99f4-2bae028337e5" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4/319dc882-e1f5-40f9-99f4-2bae028337e5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-78d4b6b677-npmx4 in out of 
cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-78d4b6b677-npmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:01:20.057374 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:01:20.057374 master-0 kubenswrapper[7926]: > pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" Feb 16 21:01:20.057374 master-0 kubenswrapper[7926]: E0216 21:01:20.056578 7926 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 21:01:20.057374 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-78d4b6b677-npmx4_openshift-operator-lifecycle-manager_319dc882-e1f5-40f9-99f4-2bae028337e5_0(810e7b23c87f0b52ffe134543668db5cdf13630f25d221830dba8e2ed8de4cce): error adding pod openshift-operator-lifecycle-manager_packageserver-78d4b6b677-npmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"810e7b23c87f0b52ffe134543668db5cdf13630f25d221830dba8e2ed8de4cce" Netns:"/var/run/netns/c324e400-c9f8-42d7-92d7-2dc198b86bea" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-78d4b6b677-npmx4;K8S_POD_INFRA_CONTAINER_ID=810e7b23c87f0b52ffe134543668db5cdf13630f25d221830dba8e2ed8de4cce;K8S_POD_UID=319dc882-e1f5-40f9-99f4-2bae028337e5" Path:"" ERRORED: error configuring pod 
[openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4/319dc882-e1f5-40f9-99f4-2bae028337e5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-78d4b6b677-npmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:01:20.057374 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:01:20.057374 master-0 kubenswrapper[7926]: > pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" Feb 16 21:01:20.057374 master-0 kubenswrapper[7926]: E0216 21:01:20.056671 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"packageserver-78d4b6b677-npmx4_openshift-operator-lifecycle-manager(319dc882-e1f5-40f9-99f4-2bae028337e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"packageserver-78d4b6b677-npmx4_openshift-operator-lifecycle-manager(319dc882-e1f5-40f9-99f4-2bae028337e5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-78d4b6b677-npmx4_openshift-operator-lifecycle-manager_319dc882-e1f5-40f9-99f4-2bae028337e5_0(810e7b23c87f0b52ffe134543668db5cdf13630f25d221830dba8e2ed8de4cce): error adding pod 
openshift-operator-lifecycle-manager_packageserver-78d4b6b677-npmx4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"810e7b23c87f0b52ffe134543668db5cdf13630f25d221830dba8e2ed8de4cce\\\" Netns:\\\"/var/run/netns/c324e400-c9f8-42d7-92d7-2dc198b86bea\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-78d4b6b677-npmx4;K8S_POD_INFRA_CONTAINER_ID=810e7b23c87f0b52ffe134543668db5cdf13630f25d221830dba8e2ed8de4cce;K8S_POD_UID=319dc882-e1f5-40f9-99f4-2bae028337e5\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4/319dc882-e1f5-40f9-99f4-2bae028337e5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-78d4b6b677-npmx4?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" 
podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" Feb 16 21:01:20.948672 master-0 kubenswrapper[7926]: E0216 21:01:20.948116 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:01:10Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:01:10Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:01:10Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:01:10Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:3e2f869b1c4f98a628b2e54c1516a0d0c09c760c91e0e1a940cb76149217661b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:97930d07a108f20287bd5ceb046a5ab125604b2e3564077db9f7d7c077cc5852\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1701129928},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\"],\\\"sizeBytes\\\":1631983282},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:0b4dc203ac00318362470f07842ed97dc1c724d32fa07c1613f15fcf4bf54ec8\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cc6c845176bbdca205e7c9628ea993ed70da3b2516bac35d68d9f52059fad674\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1234421961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181\\\"],\\\"sizeBytes\\\":1232696860},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:06dcb25b4ae74ef159663c
c2318f84e4665c7889b38ed62940259e5edd2b576f\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:a81101fb2bf3c75acf3e62bf09b19b67bccbde0faf09bd379a491f5eadb8afc1\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1213098166},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:28df36269fc553eb1adba5566d6dfc258a1a74063c4cfe8b5bdd3f202591cf56\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:7fa59a55753e6c646b3b56a1a7080a5d70767fb964f1857c411fdf4e05ad4c71\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1201887930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42\\\"],\\\"sizeBytes\\\":987280724},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\"],\\\"sizeBytes\\\":938665460},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc\\\"],\\\"sizeBytes\\\":913084961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1faa2081a881db884a86bdfe33fcb6a6af1d14c3e9ee5c44dfe4b09045684e13\\\"],\\\"sizeBytes\\\":875178413},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072\\\"],\\\"sizeBytes\\\":870929735},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c\\\"],\\\"sizeBytes\\\":857432360},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07093043bca0089b3c56d9e5331e68f549541e5661e2a39a260aa534dc9528bd\\\"],\\\"sizeBytes\\\":767663184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e30865ea7d55b76cb925c7d26c650f0bc70fd9a02d7d59d0fe1a3024426229ad\\\
"],\\\"sizeBytes\\\":682673937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7\\\"],\\\"sizeBytes\\\":677894171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55\\\"],\\\"sizeBytes\\\":672642165},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e\\\"],\\\"sizeBytes\\\":616473928},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95\\\"],\\\"sizeBytes\\\":584205881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954\\\"],\\\"sizeBytes\\\":576983707},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee\\\"],\\\"sizeBytes\\\":553036394},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471\\\"],\\\"sizeBytes\\\":552251951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861\\\"],\\\"sizeBytes\\\":543577525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\\\"],\\\"sizeBytes\\\":524042902},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d\\\"],\\\"sizeBytes\\\":523760203},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399\\\"],\\\"sizeBytes\\\":513211213},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha
256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc\\\"],\\\"sizeBytes\\\":512819769},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\"],\\\"sizeBytes\\\":509806416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a90d19460fbc705172df7759a3da394930623c6b6974620b79ffa07bab53c51f\\\"],\\\"sizeBytes\\\":508404525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963\\\"],\\\"sizeBytes\\\":508050651},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5\\\"],\\\"sizeBytes\\\":507103881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3\\\"],\\\"sizeBytes\\\":506056636},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1\\\"],\\\"sizeBytes\\\":505990615},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39\\\"],\\\"sizeBytes\\\":503717987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e\\\"],\\\"sizeBytes\\\":503374574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88\\\"],\\\"sizeBytes\\\":502798848},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1\\\"],\\\"sizeBytes\\\":501305896},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a\\\"],\\\"sizeBytes\\\":50122
2351},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192\\\"],\\\"sizeBytes\\\":500175306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\\\"],\\\"sizeBytes\\\":500068323},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6ab8803bac3ebada13e90d9dd6208301b981488277cdeb847c25ff8002f5a30\\\"],\\\"sizeBytes\\\":499489508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144\\\"],\\\"sizeBytes\\\":499445182},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b\\\"],\\\"sizeBytes\\\":490819380},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b\\\"],\\\"sizeBytes\\\":489891070},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38\\\"],\\\"sizeBytes\\\":481921522},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041\\\"],\\\"sizeBytes\\\":479280723},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ea13b0cbfe9be0d3d7ea80d50e512af6a453921a553c7c79b566530142b611b\\\"],\\\"sizeBytes\\\":479006001},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b8fb1f11df51c131f5be8ddfc1b1c95ac13481f58d2dcd5a465a4a8341c0f49\\\"],\\\"sizeBytes\\\":465648392},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb\\\"],\\\"sizeBytes\\\":465507019},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1c8b9784a60860a0
8bd47935f0767b7b7f8f36c5c0adb7623a31b82c01d4c09\\\"],\\\"sizeBytes\\\":463090242},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e7ac69aff2f28f6b3cbdb166c7dac7a3490167bcd670cd7057bdde1e1e7684d\\\"],\\\"sizeBytes\\\":462065055}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:01:21.099165 master-0 kubenswrapper[7926]: I0216 21:01:21.099072 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:01:21.100043 master-0 kubenswrapper[7926]: I0216 21:01:21.099166 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:01:21.660300 master-0 kubenswrapper[7926]: I0216 21:01:21.660108 7926 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-6rmhq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused" start-of-body= Feb 16 
21:01:21.661782 master-0 kubenswrapper[7926]: I0216 21:01:21.661717 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" podUID="b28234d1-1d9a-4d9f-9ad1-e3c682bed492" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.5:8080/healthz\": dial tcp 10.128.0.5:8080: connect: connection refused" Feb 16 21:01:23.732196 master-0 kubenswrapper[7926]: I0216 21:01:23.732109 7926 patch_prober.go:28] interesting pod/controller-manager-7c6548b89f-s8dv7 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: connection refused" start-of-body= Feb 16 21:01:23.732196 master-0 kubenswrapper[7926]: I0216 21:01:23.732189 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" podUID="57b94ed4-8f0b-4223-bdaf-4316859d8ad3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: connection refused" Feb 16 21:01:24.098616 master-0 kubenswrapper[7926]: I0216 21:01:24.098525 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:01:24.098897 master-0 kubenswrapper[7926]: I0216 21:01:24.098623 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 
21:01:26.814982 master-0 kubenswrapper[7926]: E0216 21:01:26.814869 7926 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 21:01:26.816067 master-0 kubenswrapper[7926]: E0216 21:01:26.815102 7926 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.021s" Feb 16 21:01:26.816067 master-0 kubenswrapper[7926]: I0216 21:01:26.815126 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"b09d3c16-18e3-45b3-9d39-949d2464b300","Type":"ContainerDied","Data":"ab3f1bdaa87534b4aa1ea4a058dea3457c695cfe1da23ed41ae2ee089315bd08"} Feb 16 21:01:26.816067 master-0 kubenswrapper[7926]: I0216 21:01:26.815179 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 21:01:26.816067 master-0 kubenswrapper[7926]: I0216 21:01:26.815211 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dhh2p" Feb 16 21:01:26.816067 master-0 kubenswrapper[7926]: I0216 21:01:26.815238 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 21:01:26.816067 master-0 kubenswrapper[7926]: I0216 21:01:26.815249 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:01:26.816067 master-0 kubenswrapper[7926]: I0216 21:01:26.815703 7926 scope.go:117] "RemoveContainer" containerID="1fdce62d33ee01800252ab5e608745339a8f0dbc0ccac60559c706daa3409f0f" Feb 16 21:01:26.816067 master-0 kubenswrapper[7926]: I0216 21:01:26.815999 7926 scope.go:117] "RemoveContainer" 
containerID="b1ac78292de0a544c15af274111c4e933c90f41d601dad32fc19d3dacdb54345" Feb 16 21:01:26.817467 master-0 kubenswrapper[7926]: I0216 21:01:26.816635 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8vtc" event={"ID":"03593410-baa5-4edb-9d73-242a74f82987","Type":"ContainerStarted","Data":"8e0e50669492b5f9ec136f40683d2f5428911200fadad457035b839b19231f7d"} Feb 16 21:01:26.817467 master-0 kubenswrapper[7926]: I0216 21:01:26.816954 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhh2p" event={"ID":"9566b108-44e1-4d9e-8984-4c396dc4408c","Type":"ContainerStarted","Data":"17cb30ab353a8c5e6ca279c7628b3d05fccf6b6666e6fe10a816ce650b15966b"} Feb 16 21:01:26.818000 master-0 kubenswrapper[7926]: I0216 21:01:26.817770 7926 scope.go:117] "RemoveContainer" containerID="31e55b139c998e23cbf2bc02e2f79638ed2388ee42133c4387d01234b192dc1a" Feb 16 21:01:26.818972 master-0 kubenswrapper[7926]: I0216 21:01:26.818937 7926 scope.go:117] "RemoveContainer" containerID="e960726eec7f4c030bcd77b5c00f9a27240da71756776e4b20d66b6c394494f7" Feb 16 21:01:26.827674 master-0 kubenswrapper[7926]: I0216 21:01:26.827605 7926 scope.go:117] "RemoveContainer" containerID="ea274ca75c9480032670a52e0f8060808dc2b8ae8a9455bb06740d96dc246ff9" Feb 16 21:01:26.827801 master-0 kubenswrapper[7926]: I0216 21:01:26.827725 7926 scope.go:117] "RemoveContainer" containerID="f620d164d8f2ed90825e926c6ef1b62a164af6f143a6bcf2e3725b1b1b8889f4" Feb 16 21:01:26.828464 master-0 kubenswrapper[7926]: I0216 21:01:26.828431 7926 scope.go:117] "RemoveContainer" containerID="a339e5c4723737e030c5a03c8395cedd263d3d5213cb12208bfe3004bbd0ef5e" Feb 16 21:01:26.828883 master-0 kubenswrapper[7926]: I0216 21:01:26.828842 7926 scope.go:117] "RemoveContainer" containerID="03a2959cd7d7099deb65fa1d96597cd3ebf6031635df4c580705d88b4f782bc3" Feb 16 21:01:26.829074 master-0 kubenswrapper[7926]: I0216 21:01:26.829015 7926 scope.go:117] 
"RemoveContainer" containerID="b91768f3b3b77b8f39dbc687f48f7d020363ab1760dd10d66f66b996778bf8dc" Feb 16 21:01:26.830767 master-0 kubenswrapper[7926]: I0216 21:01:26.829696 7926 scope.go:117] "RemoveContainer" containerID="a76963335874f22d97778041d73ee6a0a7e3ffd325f9fb8a457626be3c8e5238" Feb 16 21:01:26.830767 master-0 kubenswrapper[7926]: I0216 21:01:26.829812 7926 scope.go:117] "RemoveContainer" containerID="34f0b2189e90cc7801c4026c4ab900cc1fc9f5ac2f006e83f5fec81671df191f" Feb 16 21:01:26.830767 master-0 kubenswrapper[7926]: I0216 21:01:26.830029 7926 scope.go:117] "RemoveContainer" containerID="dd23c2441236e3bdedd04adcd70f26ba2f2b37ed96fb0998ec94c3bbdca5b7da" Feb 16 21:01:26.830767 master-0 kubenswrapper[7926]: I0216 21:01:26.830739 7926 scope.go:117] "RemoveContainer" containerID="c01a97aeea491e06b4f6bd168a545331d557799591733b3afb1c1070b9661f2a" Feb 16 21:01:26.833546 master-0 kubenswrapper[7926]: I0216 21:01:26.831331 7926 scope.go:117] "RemoveContainer" containerID="796cedcccf27a70c4b1fc5e0f9d34776e57cab5bcbac808a8a55396fa052ee09" Feb 16 21:01:26.833546 master-0 kubenswrapper[7926]: I0216 21:01:26.832493 7926 scope.go:117] "RemoveContainer" containerID="6604687382d89a09dac220e4bde6c4ee9334bbf7429cff3764175c9050a1853c" Feb 16 21:01:26.833546 master-0 kubenswrapper[7926]: I0216 21:01:26.833133 7926 scope.go:117] "RemoveContainer" containerID="b4a34c89cb81e9504af7117b89a4c5b290e24d0a5142668851022560c4487a78" Feb 16 21:01:26.833546 master-0 kubenswrapper[7926]: I0216 21:01:26.833341 7926 scope.go:117] "RemoveContainer" containerID="03d8daaa264d52b607ef3a2e1ee4da18d94e4e7433715288335ef0a92bd90db1" Feb 16 21:01:26.833546 master-0 kubenswrapper[7926]: I0216 21:01:26.833476 7926 scope.go:117] "RemoveContainer" containerID="47b2c5bac29b78fe7840fe916226c42b6c6d9d0126d96d3a74bd63abd7b0a9ac" Feb 16 21:01:26.835405 master-0 kubenswrapper[7926]: I0216 21:01:26.834112 7926 scope.go:117] "RemoveContainer" 
containerID="2b191efabecfa6e89d563189d25950b732d83b54240d68732d9bfb22ddbb8e4f" Feb 16 21:01:26.835405 master-0 kubenswrapper[7926]: I0216 21:01:26.834965 7926 scope.go:117] "RemoveContainer" containerID="75d7b146641140c312956826b413c80f7862cac93292ebbdd2b6b13f8e1b06a3" Feb 16 21:01:26.835405 master-0 kubenswrapper[7926]: I0216 21:01:26.835162 7926 scope.go:117] "RemoveContainer" containerID="fd75cc94a5c6af861419130cf9adb9c00eea8b412cbb5bebb25e798a841c1376" Feb 16 21:01:26.836331 master-0 kubenswrapper[7926]: I0216 21:01:26.835661 7926 scope.go:117] "RemoveContainer" containerID="da2d8128d877c8e59ec552f44d9719195718721aa40536dc7418200005684242" Feb 16 21:01:26.838108 master-0 kubenswrapper[7926]: I0216 21:01:26.838084 7926 scope.go:117] "RemoveContainer" containerID="335a1a7f7a9fe31928e784a1b8c27628b0095f9bd1bb4c356dc580de874df2a9" Feb 16 21:01:26.838579 master-0 kubenswrapper[7926]: I0216 21:01:26.838558 7926 scope.go:117] "RemoveContainer" containerID="0e76905998b63e1ca06bb636f257a337f36ba01b7d03a406ab7d6fa3bdb3b545" Feb 16 21:01:26.839197 master-0 kubenswrapper[7926]: I0216 21:01:26.839118 7926 scope.go:117] "RemoveContainer" containerID="4765f14761690375464a0e714d58564cbd8daae8b93a35914f1d74b0169d6221" Feb 16 21:01:26.839623 master-0 kubenswrapper[7926]: I0216 21:01:26.839593 7926 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 16 21:01:27.747707 master-0 kubenswrapper[7926]: I0216 21:01:27.747636 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-cl5ld_0b02b740-5698-4e9a-90fe-2873bd0b0958/kube-apiserver-operator/1.log" Feb 16 21:01:27.750978 master-0 kubenswrapper[7926]: I0216 21:01:27.750811 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/0.log" Feb 16 21:01:27.753287 master-0 
kubenswrapper[7926]: I0216 21:01:27.753247 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-67bc7c997f-8kdgg_e8194cdc-3133-49e2-9579-a747c0bf2b16/manager/0.log" Feb 16 21:01:27.755708 master-0 kubenswrapper[7926]: I0216 21:01:27.755687 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-xbd96_59237aa6-6250-4619-8ee5-abae59f04b57/openshift-config-operator/2.log" Feb 16 21:01:27.759091 master-0 kubenswrapper[7926]: I0216 21:01:27.759008 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-d8bf84b88-8pqbl_302156cc-9dca-4a66-9e6a-ba2c7e738c92/control-plane-machine-set-operator/0.log" Feb 16 21:01:27.775719 master-0 kubenswrapper[7926]: I0216 21:01:27.775674 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-n4hfs_1b61063e-775e-421d-bf73-a6ef134293a0/network-operator/1.log" Feb 16 21:01:27.776242 master-0 kubenswrapper[7926]: I0216 21:01:27.776215 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-n4hfs_1b61063e-775e-421d-bf73-a6ef134293a0/network-operator/0.log" Feb 16 21:01:27.777944 master-0 kubenswrapper[7926]: I0216 21:01:27.777914 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-749ccd9c56-wzsnf_4db59450-da78-4879-ada8-ca3fc49fb7a7/route-controller-manager/0.log" Feb 16 21:01:27.780564 master-0 kubenswrapper[7926]: I0216 21:01:27.780531 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-85c9b89969-qzs2g_1a986ba3-2aea-4133-a05b-f69d4e0d8d3b/manager/0.log" Feb 16 21:01:27.782353 master-0 kubenswrapper[7926]: I0216 21:01:27.782316 7926 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-8gnq5_27c20f63-9bfb-4703-94d5-0c65475e08d1/authentication-operator/2.log" Feb 16 21:01:27.784404 master-0 kubenswrapper[7926]: I0216 21:01:27.784343 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/1.log" Feb 16 21:01:27.785137 master-0 kubenswrapper[7926]: I0216 21:01:27.785108 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/0.log" Feb 16 21:01:27.786993 master-0 kubenswrapper[7926]: I0216 21:01:27.786973 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-56v4p_c7333319-3fe6-4b3f-b600-6b6df49fcaff/kube-storage-version-migrator-operator/2.log" Feb 16 21:01:27.788994 master-0 kubenswrapper[7926]: I0216 21:01:27.788962 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-6c46d95f74-2nz2q_c62bb2b4-1469-4e0d-810f-cd6e21ee908a/machine-approver-controller/0.log" Feb 16 21:01:27.791086 master-0 kubenswrapper[7926]: I0216 21:01:27.791063 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-k42w9_695549c8-d1fc-429d-9c9f-0a5915dc6074/openshift-controller-manager-operator/2.log" Feb 16 21:01:27.791879 master-0 kubenswrapper[7926]: I0216 21:01:27.791863 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-k42w9_695549c8-d1fc-429d-9c9f-0a5915dc6074/openshift-controller-manager-operator/1.log" Feb 16 21:01:27.795920 master-0 
kubenswrapper[7926]: I0216 21:01:27.795886 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/0.log" Feb 16 21:01:27.797417 master-0 kubenswrapper[7926]: I0216 21:01:27.797394 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-7p9ft_7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/kube-controller-manager-operator/2.log" Feb 16 21:01:27.799443 master-0 kubenswrapper[7926]: I0216 21:01:27.799419 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-75b869db96-g4w5m_aa2e9bbc-3962-45f5-a7cc-2dc059409e70/cluster-storage-operator/0.log" Feb 16 21:01:28.991385 master-0 kubenswrapper[7926]: E0216 21:01:28.991138 7926 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{route-controller-manager-749ccd9c56-wzsnf.1894d5adf6fb0cda openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-749ccd9c56-wzsnf,UID:4db59450-da78-4879-ada8-ca3fc49fb7a7,APIVersion:v1,ResourceVersion:7671,FieldPath:spec.containers{route-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38\" in 17.81s (17.81s including waiting). 
Image size: 481921522 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:58:16.61315401 +0000 UTC m=+68.248054310,LastTimestamp:2026-02-16 20:58:16.61315401 +0000 UTC m=+68.248054310,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 21:01:30.464310 master-0 kubenswrapper[7926]: E0216 21:01:30.464208 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io master-0)" interval="7s" Feb 16 21:01:30.949198 master-0 kubenswrapper[7926]: E0216 21:01:30.949111 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:01:32.143162 master-0 kubenswrapper[7926]: E0216 21:01:32.143091 7926 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 21:01:32.143162 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-sn2nh_openshift-marketplace_f275e79f-923c-4d3a-8ed4-084a122ddcf4_0(a227ea755bdf9cb1c108c11b8f7bc606537cbd5806d667d40747e366dcf137df): error adding pod openshift-marketplace_redhat-marketplace-sn2nh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a227ea755bdf9cb1c108c11b8f7bc606537cbd5806d667d40747e366dcf137df" Netns:"/var/run/netns/04db2c0b-db75-4b54-aa5b-d772d9084ede" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sn2nh;K8S_POD_INFRA_CONTAINER_ID=a227ea755bdf9cb1c108c11b8f7bc606537cbd5806d667d40747e366dcf137df;K8S_POD_UID=f275e79f-923c-4d3a-8ed4-084a122ddcf4" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sn2nh] networking: Multus: [openshift-marketplace/redhat-marketplace-sn2nh/f275e79f-923c-4d3a-8ed4-084a122ddcf4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sn2nh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:01:32.143162 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:01:32.143162 master-0 kubenswrapper[7926]: > Feb 16 21:01:32.143894 master-0 kubenswrapper[7926]: E0216 21:01:32.143180 7926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 21:01:32.143894 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-sn2nh_openshift-marketplace_f275e79f-923c-4d3a-8ed4-084a122ddcf4_0(a227ea755bdf9cb1c108c11b8f7bc606537cbd5806d667d40747e366dcf137df): error adding pod openshift-marketplace_redhat-marketplace-sn2nh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd 
(shim): CNI request failed with status 400: 'ContainerID:"a227ea755bdf9cb1c108c11b8f7bc606537cbd5806d667d40747e366dcf137df" Netns:"/var/run/netns/04db2c0b-db75-4b54-aa5b-d772d9084ede" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sn2nh;K8S_POD_INFRA_CONTAINER_ID=a227ea755bdf9cb1c108c11b8f7bc606537cbd5806d667d40747e366dcf137df;K8S_POD_UID=f275e79f-923c-4d3a-8ed4-084a122ddcf4" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sn2nh] networking: Multus: [openshift-marketplace/redhat-marketplace-sn2nh/f275e79f-923c-4d3a-8ed4-084a122ddcf4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sn2nh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:01:32.143894 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:01:32.143894 master-0 kubenswrapper[7926]: > pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 21:01:32.143894 master-0 kubenswrapper[7926]: E0216 21:01:32.143215 7926 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 21:01:32.143894 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_redhat-marketplace-sn2nh_openshift-marketplace_f275e79f-923c-4d3a-8ed4-084a122ddcf4_0(a227ea755bdf9cb1c108c11b8f7bc606537cbd5806d667d40747e366dcf137df): error adding pod openshift-marketplace_redhat-marketplace-sn2nh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a227ea755bdf9cb1c108c11b8f7bc606537cbd5806d667d40747e366dcf137df" Netns:"/var/run/netns/04db2c0b-db75-4b54-aa5b-d772d9084ede" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sn2nh;K8S_POD_INFRA_CONTAINER_ID=a227ea755bdf9cb1c108c11b8f7bc606537cbd5806d667d40747e366dcf137df;K8S_POD_UID=f275e79f-923c-4d3a-8ed4-084a122ddcf4" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sn2nh] networking: Multus: [openshift-marketplace/redhat-marketplace-sn2nh/f275e79f-923c-4d3a-8ed4-084a122ddcf4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sn2nh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:01:32.143894 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:01:32.143894 master-0 kubenswrapper[7926]: > pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 21:01:32.143894 
master-0 kubenswrapper[7926]: E0216 21:01:32.143294 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"redhat-marketplace-sn2nh_openshift-marketplace(f275e79f-923c-4d3a-8ed4-084a122ddcf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"redhat-marketplace-sn2nh_openshift-marketplace(f275e79f-923c-4d3a-8ed4-084a122ddcf4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-sn2nh_openshift-marketplace_f275e79f-923c-4d3a-8ed4-084a122ddcf4_0(a227ea755bdf9cb1c108c11b8f7bc606537cbd5806d667d40747e366dcf137df): error adding pod openshift-marketplace_redhat-marketplace-sn2nh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"a227ea755bdf9cb1c108c11b8f7bc606537cbd5806d667d40747e366dcf137df\\\" Netns:\\\"/var/run/netns/04db2c0b-db75-4b54-aa5b-d772d9084ede\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sn2nh;K8S_POD_INFRA_CONTAINER_ID=a227ea755bdf9cb1c108c11b8f7bc606537cbd5806d667d40747e366dcf137df;K8S_POD_UID=f275e79f-923c-4d3a-8ed4-084a122ddcf4\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sn2nh] networking: Multus: [openshift-marketplace/redhat-marketplace-sn2nh/f275e79f-923c-4d3a-8ed4-084a122ddcf4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sn2nh?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/redhat-marketplace-sn2nh" podUID="f275e79f-923c-4d3a-8ed4-084a122ddcf4" Feb 16 21:01:32.158027 master-0 kubenswrapper[7926]: E0216 21:01:32.157945 7926 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 21:01:32.158027 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api_ba294358-051a-4f09-b182-710d3d6778c5_0(16c8f55fdb667148773fbcb9e5873521ffb7d7797e9168cf0473cb64c1e9dcd1): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"16c8f55fdb667148773fbcb9e5873521ffb7d7797e9168cf0473cb64c1e9dcd1" Netns:"/var/run/netns/d4bdedfa-6587-46e6-a26e-14849ab87001" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-27jwb;K8S_POD_INFRA_CONTAINER_ID=16c8f55fdb667148773fbcb9e5873521ffb7d7797e9168cf0473cb64c1e9dcd1;K8S_POD_UID=ba294358-051a-4f09-b182-710d3d6778c5" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb/ba294358-051a-4f09-b182-710d3d6778c5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in 
out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-27jwb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:01:32.158027 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:01:32.158027 master-0 kubenswrapper[7926]: > Feb 16 21:01:32.158405 master-0 kubenswrapper[7926]: E0216 21:01:32.158043 7926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 21:01:32.158405 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api_ba294358-051a-4f09-b182-710d3d6778c5_0(16c8f55fdb667148773fbcb9e5873521ffb7d7797e9168cf0473cb64c1e9dcd1): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"16c8f55fdb667148773fbcb9e5873521ffb7d7797e9168cf0473cb64c1e9dcd1" Netns:"/var/run/netns/d4bdedfa-6587-46e6-a26e-14849ab87001" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-27jwb;K8S_POD_INFRA_CONTAINER_ID=16c8f55fdb667148773fbcb9e5873521ffb7d7797e9168cf0473cb64c1e9dcd1;K8S_POD_UID=ba294358-051a-4f09-b182-710d3d6778c5" Path:"" ERRORED: error configuring pod 
[openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb/ba294358-051a-4f09-b182-710d3d6778c5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-27jwb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:01:32.158405 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:01:32.158405 master-0 kubenswrapper[7926]: > pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 21:01:32.158405 master-0 kubenswrapper[7926]: E0216 21:01:32.158065 7926 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 21:01:32.158405 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api_ba294358-051a-4f09-b182-710d3d6778c5_0(16c8f55fdb667148773fbcb9e5873521ffb7d7797e9168cf0473cb64c1e9dcd1): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"16c8f55fdb667148773fbcb9e5873521ffb7d7797e9168cf0473cb64c1e9dcd1" 
Netns:"/var/run/netns/d4bdedfa-6587-46e6-a26e-14849ab87001" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-27jwb;K8S_POD_INFRA_CONTAINER_ID=16c8f55fdb667148773fbcb9e5873521ffb7d7797e9168cf0473cb64c1e9dcd1;K8S_POD_UID=ba294358-051a-4f09-b182-710d3d6778c5" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb/ba294358-051a-4f09-b182-710d3d6778c5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-27jwb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:01:32.158405 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:01:32.158405 master-0 kubenswrapper[7926]: > pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 21:01:32.158405 master-0 kubenswrapper[7926]: E0216 21:01:32.158135 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api(ba294358-051a-4f09-b182-710d3d6778c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api(ba294358-051a-4f09-b182-710d3d6778c5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api_ba294358-051a-4f09-b182-710d3d6778c5_0(16c8f55fdb667148773fbcb9e5873521ffb7d7797e9168cf0473cb64c1e9dcd1): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"16c8f55fdb667148773fbcb9e5873521ffb7d7797e9168cf0473cb64c1e9dcd1\\\" Netns:\\\"/var/run/netns/d4bdedfa-6587-46e6-a26e-14849ab87001\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-27jwb;K8S_POD_INFRA_CONTAINER_ID=16c8f55fdb667148773fbcb9e5873521ffb7d7797e9168cf0473cb64c1e9dcd1;K8S_POD_UID=ba294358-051a-4f09-b182-710d3d6778c5\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb/ba294358-051a-4f09-b182-710d3d6778c5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-27jwb?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" podUID="ba294358-051a-4f09-b182-710d3d6778c5" Feb 16 21:01:32.853557 master-0 kubenswrapper[7926]: I0216 21:01:32.853420 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 21:01:32.854157 master-0 kubenswrapper[7926]: I0216 21:01:32.853430 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 21:01:32.854157 master-0 kubenswrapper[7926]: I0216 21:01:32.853957 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 21:01:32.855109 master-0 kubenswrapper[7926]: I0216 21:01:32.854398 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 21:01:39.711083 master-0 kubenswrapper[7926]: I0216 21:01:39.710541 7926 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:01:39.838242 master-0 kubenswrapper[7926]: E0216 21:01:39.838165 7926 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 16 21:01:40.905709 master-0 kubenswrapper[7926]: I0216 21:01:40.905633 7926 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="3066c42f5ef5c95f3661c05c7da3598358a0986a6a070d0d54c575cd6a3f75f0" exitCode=0 Feb 16 21:01:40.950326 master-0 kubenswrapper[7926]: E0216 21:01:40.950231 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:01:47.466535 master-0 kubenswrapper[7926]: E0216 21:01:47.466074 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 16 21:01:49.710410 master-0 kubenswrapper[7926]: I0216 21:01:49.710249 7926 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:01:50.951120 master-0 kubenswrapper[7926]: E0216 21:01:50.951022 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:01:58.026755 master-0 kubenswrapper[7926]: I0216 21:01:58.026706 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/1.log" Feb 16 21:01:58.027437 master-0 kubenswrapper[7926]: I0216 21:01:58.027306 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/0.log" Feb 16 21:01:58.027437 master-0 kubenswrapper[7926]: I0216 21:01:58.027359 7926 generic.go:334] "Generic (PLEG): container finished" podID="b1ac9776-54c4-46ce-b898-01c8cf35e593" containerID="0471cbeac2299e0d9e3ce431cd7a2e4e9d02003bf2fa34b26aead6cb07fac336" exitCode=1 Feb 16 21:01:58.029332 master-0 kubenswrapper[7926]: I0216 21:01:58.029301 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-749ccd9c56-wzsnf_4db59450-da78-4879-ada8-ca3fc49fb7a7/route-controller-manager/1.log" Feb 16 21:01:58.029852 master-0 kubenswrapper[7926]: I0216 21:01:58.029829 7926 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-749ccd9c56-wzsnf_4db59450-da78-4879-ada8-ca3fc49fb7a7/route-controller-manager/0.log" Feb 16 21:01:58.029921 master-0 kubenswrapper[7926]: I0216 21:01:58.029864 7926 generic.go:334] "Generic (PLEG): container finished" podID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerID="d1bc5bc3b429e39609506c1bed3cc8e8c06f4002e3b95ecbfe86ba10e124ab93" exitCode=255 Feb 16 21:01:59.711133 master-0 kubenswrapper[7926]: I0216 21:01:59.711019 7926 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:02:00.843265 master-0 kubenswrapper[7926]: E0216 21:02:00.843162 7926 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 21:02:00.844111 master-0 kubenswrapper[7926]: E0216 21:02:00.843388 7926 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.026s" Feb 16 21:02:00.844111 master-0 kubenswrapper[7926]: I0216 21:02:00.843422 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 21:02:00.852426 master-0 kubenswrapper[7926]: I0216 21:02:00.852359 7926 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 16 21:02:00.952149 master-0 kubenswrapper[7926]: E0216 21:02:00.952097 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:02:00.952149 master-0 kubenswrapper[7926]: E0216 21:02:00.952146 7926 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 21:02:02.994988 master-0 kubenswrapper[7926]: E0216 21:02:02.994809 7926 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1894d5aebe09a251 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:58:19.952775761 +0000 UTC m=+71.587676101,LastTimestamp:2026-02-16 20:58:19.952775761 +0000 UTC m=+71.587676101,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 21:02:04.468153 master-0 kubenswrapper[7926]: E0216 21:02:04.468038 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 16 21:02:08.753191 master-0 kubenswrapper[7926]: I0216 21:02:08.753084 7926 kubelet.go:1505] "Image garbage collection succeeded" Feb 16 
21:02:08.761280 master-0 kubenswrapper[7926]: I0216 21:02:08.761207 7926 status_manager.go:851] "Failed to get status for pod" podUID="aa2e9bbc-3962-45f5-a7cc-2dc059409e70" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods cluster-storage-operator-75b869db96-g4w5m)" Feb 16 21:02:20.971046 master-0 kubenswrapper[7926]: E0216 21:02:20.970762 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:02:10Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:02:10Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:02:10Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:02:10Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:3e2f869b1c4f98a628b2e54c1516a0d0c09c760c91e0e1a940cb76149217661b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:97930d07a108f20287bd5ceb046a5ab125604b2e3564077db9f7d7c077cc5852\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1701129928},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\"],\\\"sizeBytes\\\":1631983282},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:0b4dc203ac00318362470f07842ed97dc1c724d32fa07c1613f15fcf4bf54ec8\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cc6c845176bbdca205e7c9628ea993ed70da3b2516bac35d68d9f52059fad674\
\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1234421961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181\\\"],\\\"sizeBytes\\\":1232696860},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:06dcb25b4ae74ef159663cc2318f84e4665c7889b38ed62940259e5edd2b576f\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:a81101fb2bf3c75acf3e62bf09b19b67bccbde0faf09bd379a491f5eadb8afc1\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1213098166},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:28df36269fc553eb1adba5566d6dfc258a1a74063c4cfe8b5bdd3f202591cf56\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:7fa59a55753e6c646b3b56a1a7080a5d70767fb964f1857c411fdf4e05ad4c71\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1201887930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42\\\"],\\\"sizeBytes\\\":987280724},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\"],\\\"sizeBytes\\\":938665460},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc\\\"],\\\"sizeBytes\\\":913084961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1faa2081a881db884a86bdfe33fcb6a6af1d14c3e9ee5c44dfe4b09045684e13\\\"],\\\"sizeBytes\\\":875178413},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072\\\"],\\\"sizeBytes\\\":870929735},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351ca
aa4adaebc0d4983c3601e8a2c\\\"],\\\"sizeBytes\\\":857432360},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07093043bca0089b3c56d9e5331e68f549541e5661e2a39a260aa534dc9528bd\\\"],\\\"sizeBytes\\\":767663184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e30865ea7d55b76cb925c7d26c650f0bc70fd9a02d7d59d0fe1a3024426229ad\\\"],\\\"sizeBytes\\\":682673937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7\\\"],\\\"sizeBytes\\\":677894171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55\\\"],\\\"sizeBytes\\\":672642165},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e\\\"],\\\"sizeBytes\\\":616473928},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95\\\"],\\\"sizeBytes\\\":584205881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954\\\"],\\\"sizeBytes\\\":576983707},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee\\\"],\\\"sizeBytes\\\":553036394},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471\\\"],\\\"sizeBytes\\\":552251951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861\\\"],\\\"sizeBytes\\\":543577525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\\\"],\\\"sizeBytes\\\":524042902},{\\\"names\\\":[\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d\\\"],\\\"sizeBytes\\\":523760203},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399\\\"],\\\"sizeBytes\\\":513211213},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc\\\"],\\\"sizeBytes\\\":512819769},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\"],\\\"sizeBytes\\\":509806416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a90d19460fbc705172df7759a3da394930623c6b6974620b79ffa07bab53c51f\\\"],\\\"sizeBytes\\\":508404525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963\\\"],\\\"sizeBytes\\\":508050651},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5\\\"],\\\"sizeBytes\\\":507103881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3\\\"],\\\"sizeBytes\\\":506056636},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1\\\"],\\\"sizeBytes\\\":505990615},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39\\\"],\\\"sizeBytes\\\":503717987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e\\\"],\\\"sizeBytes\\\":503374574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88\
\\"],\\\"sizeBytes\\\":502798848},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1\\\"],\\\"sizeBytes\\\":501305896},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a\\\"],\\\"sizeBytes\\\":501222351},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192\\\"],\\\"sizeBytes\\\":500175306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\\\"],\\\"sizeBytes\\\":500068323},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6ab8803bac3ebada13e90d9dd6208301b981488277cdeb847c25ff8002f5a30\\\"],\\\"sizeBytes\\\":499489508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144\\\"],\\\"sizeBytes\\\":499445182},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b\\\"],\\\"sizeBytes\\\":490819380},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b\\\"],\\\"sizeBytes\\\":489891070},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38\\\"],\\\"sizeBytes\\\":481921522},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041\\\"],\\\"sizeBytes\\\":479280723},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ea13b0cbfe9be0d3d7ea80d50e512af6a453921a553c7c79b566530142b611b\\\"],\\\"sizeBytes\\\":479006001},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:7b8fb1f11df51c131f5be8ddfc1b1c95ac13481f58d2dcd5a465a4a8341c0f49\\\"],\\\"sizeBytes\\\":465648392},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb\\\"],\\\"sizeBytes\\\":465507019},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1c8b9784a60860a08bd47935f0767b7b7f8f36c5c0adb7623a31b82c01d4c09\\\"],\\\"sizeBytes\\\":463090242},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e7ac69aff2f28f6b3cbdb166c7dac7a3490167bcd670cd7057bdde1e1e7684d\\\"],\\\"sizeBytes\\\":462065055}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": context deadline exceeded" Feb 16 21:02:21.470119 master-0 kubenswrapper[7926]: E0216 21:02:21.469916 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 16 21:02:28.224397 master-0 kubenswrapper[7926]: I0216 21:02:28.224335 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/2.log" Feb 16 21:02:28.224977 master-0 kubenswrapper[7926]: I0216 21:02:28.224835 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/1.log" Feb 16 21:02:28.226013 master-0 kubenswrapper[7926]: I0216 21:02:28.225946 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/0.log" Feb 16 21:02:28.226112 
master-0 kubenswrapper[7926]: I0216 21:02:28.226057 7926 generic.go:334] "Generic (PLEG): container finished" podID="8b648d9e-a892-4951-b0e2-fed6b16273d4" containerID="85337e79dc5b98043d14ed182cca1ddb76f517beb26b734efc337c20a18b289f" exitCode=1 Feb 16 21:02:28.452173 master-0 kubenswrapper[7926]: I0216 21:02:28.452096 7926 scope.go:117] "RemoveContainer" containerID="ffb676f67b4284795ed9016656d43ca3b8d0c5d83ea808c4b84c0f1bccf3bdd0" Feb 16 21:02:29.234729 master-0 kubenswrapper[7926]: I0216 21:02:29.234547 7926 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="0cc0798e5012d359ad3d59e34898cddf8ad150cc9f48b65f4d686bb956001a13" exitCode=1 Feb 16 21:02:30.972495 master-0 kubenswrapper[7926]: E0216 21:02:30.972355 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:02:33.585914 master-0 kubenswrapper[7926]: E0216 21:02:33.585822 7926 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 21:02:33.585914 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-sn2nh_openshift-marketplace_f275e79f-923c-4d3a-8ed4-084a122ddcf4_0(73dd973d37769b42a2817f6b4b5d0f345b32ef290392308f2f66f85326b09a3e): error adding pod openshift-marketplace_redhat-marketplace-sn2nh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"73dd973d37769b42a2817f6b4b5d0f345b32ef290392308f2f66f85326b09a3e" Netns:"/var/run/netns/ea533844-88ca-4b4b-a942-7d9a08ccc30b" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sn2nh;K8S_POD_INFRA_CONTAINER_ID=73dd973d37769b42a2817f6b4b5d0f345b32ef290392308f2f66f85326b09a3e;K8S_POD_UID=f275e79f-923c-4d3a-8ed4-084a122ddcf4" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sn2nh] networking: Multus: [openshift-marketplace/redhat-marketplace-sn2nh/f275e79f-923c-4d3a-8ed4-084a122ddcf4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sn2nh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:02:33.585914 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:02:33.585914 master-0 kubenswrapper[7926]: > Feb 16 21:02:33.586453 master-0 kubenswrapper[7926]: E0216 21:02:33.585945 7926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 21:02:33.586453 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-sn2nh_openshift-marketplace_f275e79f-923c-4d3a-8ed4-084a122ddcf4_0(73dd973d37769b42a2817f6b4b5d0f345b32ef290392308f2f66f85326b09a3e): error adding pod openshift-marketplace_redhat-marketplace-sn2nh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd 
(shim): CNI request failed with status 400: 'ContainerID:"73dd973d37769b42a2817f6b4b5d0f345b32ef290392308f2f66f85326b09a3e" Netns:"/var/run/netns/ea533844-88ca-4b4b-a942-7d9a08ccc30b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sn2nh;K8S_POD_INFRA_CONTAINER_ID=73dd973d37769b42a2817f6b4b5d0f345b32ef290392308f2f66f85326b09a3e;K8S_POD_UID=f275e79f-923c-4d3a-8ed4-084a122ddcf4" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sn2nh] networking: Multus: [openshift-marketplace/redhat-marketplace-sn2nh/f275e79f-923c-4d3a-8ed4-084a122ddcf4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sn2nh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:02:33.586453 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:02:33.586453 master-0 kubenswrapper[7926]: > pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 21:02:33.586453 master-0 kubenswrapper[7926]: E0216 21:02:33.585988 7926 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 21:02:33.586453 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_redhat-marketplace-sn2nh_openshift-marketplace_f275e79f-923c-4d3a-8ed4-084a122ddcf4_0(73dd973d37769b42a2817f6b4b5d0f345b32ef290392308f2f66f85326b09a3e): error adding pod openshift-marketplace_redhat-marketplace-sn2nh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"73dd973d37769b42a2817f6b4b5d0f345b32ef290392308f2f66f85326b09a3e" Netns:"/var/run/netns/ea533844-88ca-4b4b-a942-7d9a08ccc30b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sn2nh;K8S_POD_INFRA_CONTAINER_ID=73dd973d37769b42a2817f6b4b5d0f345b32ef290392308f2f66f85326b09a3e;K8S_POD_UID=f275e79f-923c-4d3a-8ed4-084a122ddcf4" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sn2nh] networking: Multus: [openshift-marketplace/redhat-marketplace-sn2nh/f275e79f-923c-4d3a-8ed4-084a122ddcf4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sn2nh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:02:33.586453 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:02:33.586453 master-0 kubenswrapper[7926]: > pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 21:02:33.586453 
master-0 kubenswrapper[7926]: E0216 21:02:33.586108 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"redhat-marketplace-sn2nh_openshift-marketplace(f275e79f-923c-4d3a-8ed4-084a122ddcf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"redhat-marketplace-sn2nh_openshift-marketplace(f275e79f-923c-4d3a-8ed4-084a122ddcf4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-sn2nh_openshift-marketplace_f275e79f-923c-4d3a-8ed4-084a122ddcf4_0(73dd973d37769b42a2817f6b4b5d0f345b32ef290392308f2f66f85326b09a3e): error adding pod openshift-marketplace_redhat-marketplace-sn2nh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"73dd973d37769b42a2817f6b4b5d0f345b32ef290392308f2f66f85326b09a3e\\\" Netns:\\\"/var/run/netns/ea533844-88ca-4b4b-a942-7d9a08ccc30b\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sn2nh;K8S_POD_INFRA_CONTAINER_ID=73dd973d37769b42a2817f6b4b5d0f345b32ef290392308f2f66f85326b09a3e;K8S_POD_UID=f275e79f-923c-4d3a-8ed4-084a122ddcf4\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sn2nh] networking: Multus: [openshift-marketplace/redhat-marketplace-sn2nh/f275e79f-923c-4d3a-8ed4-084a122ddcf4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sn2nh?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/redhat-marketplace-sn2nh" podUID="f275e79f-923c-4d3a-8ed4-084a122ddcf4" Feb 16 21:02:33.606928 master-0 kubenswrapper[7926]: E0216 21:02:33.606822 7926 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 21:02:33.606928 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api_ba294358-051a-4f09-b182-710d3d6778c5_0(0635c9bdd3ba6fe3a3fc6f165d6449517b4a9d55061936375067ee85f5cdd8d8): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0635c9bdd3ba6fe3a3fc6f165d6449517b4a9d55061936375067ee85f5cdd8d8" Netns:"/var/run/netns/39e5bfe6-235d-4d80-b791-a6cd1b76c21e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-27jwb;K8S_POD_INFRA_CONTAINER_ID=0635c9bdd3ba6fe3a3fc6f165d6449517b4a9d55061936375067ee85f5cdd8d8;K8S_POD_UID=ba294358-051a-4f09-b182-710d3d6778c5" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb/ba294358-051a-4f09-b182-710d3d6778c5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in 
out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-27jwb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:02:33.606928 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:02:33.606928 master-0 kubenswrapper[7926]: > Feb 16 21:02:33.607113 master-0 kubenswrapper[7926]: E0216 21:02:33.606956 7926 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 21:02:33.607113 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api_ba294358-051a-4f09-b182-710d3d6778c5_0(0635c9bdd3ba6fe3a3fc6f165d6449517b4a9d55061936375067ee85f5cdd8d8): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0635c9bdd3ba6fe3a3fc6f165d6449517b4a9d55061936375067ee85f5cdd8d8" Netns:"/var/run/netns/39e5bfe6-235d-4d80-b791-a6cd1b76c21e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-27jwb;K8S_POD_INFRA_CONTAINER_ID=0635c9bdd3ba6fe3a3fc6f165d6449517b4a9d55061936375067ee85f5cdd8d8;K8S_POD_UID=ba294358-051a-4f09-b182-710d3d6778c5" Path:"" ERRORED: error configuring pod 
[openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb/ba294358-051a-4f09-b182-710d3d6778c5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-27jwb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:02:33.607113 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:02:33.607113 master-0 kubenswrapper[7926]: > pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 21:02:33.607113 master-0 kubenswrapper[7926]: E0216 21:02:33.606993 7926 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 21:02:33.607113 master-0 kubenswrapper[7926]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api_ba294358-051a-4f09-b182-710d3d6778c5_0(0635c9bdd3ba6fe3a3fc6f165d6449517b4a9d55061936375067ee85f5cdd8d8): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0635c9bdd3ba6fe3a3fc6f165d6449517b4a9d55061936375067ee85f5cdd8d8" 
Netns:"/var/run/netns/39e5bfe6-235d-4d80-b791-a6cd1b76c21e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-27jwb;K8S_POD_INFRA_CONTAINER_ID=0635c9bdd3ba6fe3a3fc6f165d6449517b4a9d55061936375067ee85f5cdd8d8;K8S_POD_UID=ba294358-051a-4f09-b182-710d3d6778c5" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb/ba294358-051a-4f09-b182-710d3d6778c5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-27jwb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 16 21:02:33.607113 master-0 kubenswrapper[7926]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 21:02:33.607113 master-0 kubenswrapper[7926]: > pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 21:02:33.607320 master-0 kubenswrapper[7926]: E0216 21:02:33.607096 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api(ba294358-051a-4f09-b182-710d3d6778c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api(ba294358-051a-4f09-b182-710d3d6778c5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api_ba294358-051a-4f09-b182-710d3d6778c5_0(0635c9bdd3ba6fe3a3fc6f165d6449517b4a9d55061936375067ee85f5cdd8d8): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"0635c9bdd3ba6fe3a3fc6f165d6449517b4a9d55061936375067ee85f5cdd8d8\\\" Netns:\\\"/var/run/netns/39e5bfe6-235d-4d80-b791-a6cd1b76c21e\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-27jwb;K8S_POD_INFRA_CONTAINER_ID=0635c9bdd3ba6fe3a3fc6f165d6449517b4a9d55061936375067ee85f5cdd8d8;K8S_POD_UID=ba294358-051a-4f09-b182-710d3d6778c5\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb/ba294358-051a-4f09-b182-710d3d6778c5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-27jwb?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" podUID="ba294358-051a-4f09-b182-710d3d6778c5" Feb 16 21:02:34.855807 master-0 kubenswrapper[7926]: E0216 21:02:34.855739 7926 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 21:02:34.856705 master-0 kubenswrapper[7926]: E0216 21:02:34.856010 7926 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.013s" Feb 16 21:02:34.856705 master-0 kubenswrapper[7926]: I0216 21:02:34.856046 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 21:02:34.856705 master-0 kubenswrapper[7926]: I0216 21:02:34.856095 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dhh2p" Feb 16 21:02:34.856705 master-0 kubenswrapper[7926]: I0216 21:02:34.856128 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 21:02:34.856705 master-0 kubenswrapper[7926]: I0216 21:02:34.856156 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:02:34.856705 master-0 kubenswrapper[7926]: I0216 21:02:34.856177 7926 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:02:34.857259 master-0 kubenswrapper[7926]: I0216 21:02:34.857004 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j5kwc" Feb 16 21:02:34.857259 master-0 kubenswrapper[7926]: I0216 21:02:34.857091 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" Feb 16 21:02:34.858293 master-0 kubenswrapper[7926]: I0216 21:02:34.858233 7926 scope.go:117] "RemoveContainer" containerID="d1bc5bc3b429e39609506c1bed3cc8e8c06f4002e3b95ecbfe86ba10e124ab93" Feb 16 21:02:34.858490 master-0 kubenswrapper[7926]: I0216 21:02:34.858360 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" Feb 16 21:02:34.859086 master-0 kubenswrapper[7926]: I0216 21:02:34.859033 7926 scope.go:117] "RemoveContainer" containerID="0cc0798e5012d359ad3d59e34898cddf8ad150cc9f48b65f4d686bb956001a13" Feb 16 21:02:34.859409 master-0 kubenswrapper[7926]: E0216 21:02:34.859363 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:02:34.859409 master-0 kubenswrapper[7926]: I0216 21:02:34.859376 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j5kwc" Feb 16 21:02:34.860892 master-0 kubenswrapper[7926]: I0216 21:02:34.859719 7926 scope.go:117] "RemoveContainer" containerID="0471cbeac2299e0d9e3ce431cd7a2e4e9d02003bf2fa34b26aead6cb07fac336" Feb 16 21:02:34.860892 master-0 kubenswrapper[7926]: I0216 21:02:34.860104 7926 scope.go:117] "RemoveContainer" containerID="85337e79dc5b98043d14ed182cca1ddb76f517beb26b734efc337c20a18b289f" Feb 16 21:02:34.860892 master-0 kubenswrapper[7926]: E0216 21:02:34.860546 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-7bc947fc7d-xwptz_openshift-machine-api(8b648d9e-a892-4951-b0e2-fed6b16273d4)\"" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" podUID="8b648d9e-a892-4951-b0e2-fed6b16273d4" Feb 16 21:02:34.873444 master-0 kubenswrapper[7926]: I0216 21:02:34.873390 7926 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 16 21:02:35.282546 master-0 kubenswrapper[7926]: I0216 21:02:35.282477 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-749ccd9c56-wzsnf_4db59450-da78-4879-ada8-ca3fc49fb7a7/route-controller-manager/1.log" Feb 16 21:02:35.283158 master-0 kubenswrapper[7926]: I0216 21:02:35.283115 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-749ccd9c56-wzsnf_4db59450-da78-4879-ada8-ca3fc49fb7a7/route-controller-manager/0.log" Feb 16 21:02:35.286049 master-0 kubenswrapper[7926]: I0216 21:02:35.286004 7926 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/1.log" Feb 16 21:02:35.286682 master-0 kubenswrapper[7926]: I0216 21:02:35.286618 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/0.log" Feb 16 21:02:35.857074 master-0 kubenswrapper[7926]: I0216 21:02:35.856999 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:02:35.857960 master-0 kubenswrapper[7926]: I0216 21:02:35.857806 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:02:36.164217 master-0 kubenswrapper[7926]: I0216 21:02:36.164060 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:02:36.164217 master-0 kubenswrapper[7926]: I0216 21:02:36.164153 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" 
podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:02:36.858489 master-0 kubenswrapper[7926]: I0216 21:02:36.858374 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:02:36.858489 master-0 kubenswrapper[7926]: I0216 21:02:36.858488 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:02:36.998519 master-0 kubenswrapper[7926]: E0216 21:02:36.998271 7926 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1894d5af6553ccfd kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:9460ca0802075a8a6a10d7b3e6052c4d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" already present on 
machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:58:22.759431421 +0000 UTC m=+74.394331751,LastTimestamp:2026-02-16 20:58:22.759431421 +0000 UTC m=+74.394331751,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 21:02:37.859025 master-0 kubenswrapper[7926]: I0216 21:02:37.858889 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:02:37.860434 master-0 kubenswrapper[7926]: I0216 21:02:37.859075 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:02:38.471177 master-0 kubenswrapper[7926]: E0216 21:02:38.471019 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 16 21:02:39.165378 master-0 kubenswrapper[7926]: I0216 21:02:39.165041 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:02:39.165378 master-0 kubenswrapper[7926]: I0216 21:02:39.165136 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:02:40.099200 master-0 kubenswrapper[7926]: I0216 21:02:40.099126 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:02:40.099452 master-0 kubenswrapper[7926]: I0216 21:02:40.099201 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:02:40.973028 master-0 kubenswrapper[7926]: E0216 21:02:40.972960 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:02:42.164567 master-0 kubenswrapper[7926]: I0216 21:02:42.164499 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator 
namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:02:42.164567 master-0 kubenswrapper[7926]: I0216 21:02:42.164559 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:02:43.098322 master-0 kubenswrapper[7926]: I0216 21:02:43.098221 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:02:43.098322 master-0 kubenswrapper[7926]: I0216 21:02:43.098296 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:02:46.099320 master-0 kubenswrapper[7926]: I0216 21:02:46.099205 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:02:46.099320 master-0 kubenswrapper[7926]: I0216 21:02:46.099322 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:02:47.379376 master-0 kubenswrapper[7926]: I0216 21:02:47.379287 7926 generic.go:334] "Generic (PLEG): container finished" podID="4085413c-9af1-4d2a-ba0f-33b42025cb7f" containerID="ada24a94e3cdaddc38a62024529752b29e1359c42e86c75ebaa514d784cc3fe9" exitCode=0 Feb 16 21:02:49.099815 master-0 kubenswrapper[7926]: I0216 21:02:49.099621 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:02:49.099815 master-0 kubenswrapper[7926]: I0216 21:02:49.099769 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:02:50.397458 master-0 kubenswrapper[7926]: I0216 21:02:50.397399 7926 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="7e0471aa80085ed85cb40c9b3c8ab6f80ea1655f1734a052a840a434c72c54f4" exitCode=0 Feb 16 21:02:50.973578 
master-0 kubenswrapper[7926]: E0216 21:02:50.973382 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:02:51.406049 master-0 kubenswrapper[7926]: I0216 21:02:51.405951 7926 generic.go:334] "Generic (PLEG): container finished" podID="99ab949e-bd0d-45a7-95d1-8381d9f1f5f3" containerID="0c4056212013eaff1f5d405532bbe8e1791cff62d95615157652d9167450664a" exitCode=0 Feb 16 21:02:52.098992 master-0 kubenswrapper[7926]: I0216 21:02:52.098860 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:02:52.099283 master-0 kubenswrapper[7926]: I0216 21:02:52.099010 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:02:53.908203 master-0 kubenswrapper[7926]: I0216 21:02:53.908118 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" Feb 16 21:02:55.098981 master-0 kubenswrapper[7926]: I0216 21:02:55.098881 7926 patch_prober.go:28] 
interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:02:55.099537 master-0 kubenswrapper[7926]: I0216 21:02:55.098998 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:02:55.473402 master-0 kubenswrapper[7926]: E0216 21:02:55.473186 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 16 21:02:57.735849 master-0 kubenswrapper[7926]: I0216 21:02:57.735627 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" Feb 16 21:02:57.976623 master-0 kubenswrapper[7926]: I0216 21:02:57.976576 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": read tcp 10.128.0.2:36354->10.128.0.19:8443: read: connection reset by peer" start-of-body= Feb 16 
21:02:57.976789 master-0 kubenswrapper[7926]: I0216 21:02:57.976633 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": read tcp 10.128.0.2:36354->10.128.0.19:8443: read: connection reset by peer" Feb 16 21:02:58.449875 master-0 kubenswrapper[7926]: I0216 21:02:58.449814 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-8cllz_70d217a9-86b7-47b9-a7da-9ac920b9c7c2/etcd-operator/2.log" Feb 16 21:02:58.450797 master-0 kubenswrapper[7926]: I0216 21:02:58.450723 7926 generic.go:334] "Generic (PLEG): container finished" podID="70d217a9-86b7-47b9-a7da-9ac920b9c7c2" containerID="6b4aa228ac152077a166b064e9b5bf093a0844f95733cd091a0e3bf8ac6b0c9d" exitCode=255 Feb 16 21:02:58.453443 master-0 kubenswrapper[7926]: I0216 21:02:58.453408 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-56v4p_c7333319-3fe6-4b3f-b600-6b6df49fcaff/kube-storage-version-migrator-operator/3.log" Feb 16 21:02:58.454092 master-0 kubenswrapper[7926]: I0216 21:02:58.454061 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-56v4p_c7333319-3fe6-4b3f-b600-6b6df49fcaff/kube-storage-version-migrator-operator/2.log" Feb 16 21:02:58.454620 master-0 kubenswrapper[7926]: I0216 21:02:58.454577 7926 generic.go:334] "Generic (PLEG): container finished" podID="c7333319-3fe6-4b3f-b600-6b6df49fcaff" containerID="121dab1fc95eacb58da984bcdc1166fb24200dd1db3a8ef3613a520edb17c265" exitCode=255 Feb 16 21:02:58.456715 master-0 kubenswrapper[7926]: I0216 21:02:58.456685 7926 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-tvzdw_6b6be6de-6fcc-4f57-b163-fe8f970a01a4/openshift-apiserver-operator/2.log" Feb 16 21:02:58.457372 master-0 kubenswrapper[7926]: I0216 21:02:58.457312 7926 generic.go:334] "Generic (PLEG): container finished" podID="6b6be6de-6fcc-4f57-b163-fe8f970a01a4" containerID="fe90aa9198533517faa6871ececff317856fe5ccb78abe5de0ace1b89b25d9f3" exitCode=255 Feb 16 21:02:58.459999 master-0 kubenswrapper[7926]: I0216 21:02:58.459904 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-k42w9_695549c8-d1fc-429d-9c9f-0a5915dc6074/openshift-controller-manager-operator/3.log" Feb 16 21:02:58.461066 master-0 kubenswrapper[7926]: I0216 21:02:58.460897 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-k42w9_695549c8-d1fc-429d-9c9f-0a5915dc6074/openshift-controller-manager-operator/2.log" Feb 16 21:02:58.462119 master-0 kubenswrapper[7926]: I0216 21:02:58.462056 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-k42w9_695549c8-d1fc-429d-9c9f-0a5915dc6074/openshift-controller-manager-operator/1.log" Feb 16 21:02:58.462265 master-0 kubenswrapper[7926]: I0216 21:02:58.462122 7926 generic.go:334] "Generic (PLEG): container finished" podID="695549c8-d1fc-429d-9c9f-0a5915dc6074" containerID="5652867e32787e74c02e3d9d28965d504ee7ff6f2fcb9263e330c08c917ac73f" exitCode=255 Feb 16 21:02:58.465261 master-0 kubenswrapper[7926]: I0216 21:02:58.465225 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-7p9ft_7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/kube-controller-manager-operator/3.log" Feb 16 21:02:58.466173 master-0 
kubenswrapper[7926]: I0216 21:02:58.466146 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-7p9ft_7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/kube-controller-manager-operator/2.log"
Feb 16 21:02:58.466968 master-0 kubenswrapper[7926]: I0216 21:02:58.466926 7926 generic.go:334] "Generic (PLEG): container finished" podID="7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e" containerID="63ebdf0c0200865a719bef6bf6aea428a6aed5c1b2a14851e05503627b70b2a7" exitCode=255
Feb 16 21:02:58.469948 master-0 kubenswrapper[7926]: I0216 21:02:58.469909 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-75b869db96-g4w5m_aa2e9bbc-3962-45f5-a7cc-2dc059409e70/cluster-storage-operator/1.log"
Feb 16 21:02:58.470964 master-0 kubenswrapper[7926]: I0216 21:02:58.470737 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-75b869db96-g4w5m_aa2e9bbc-3962-45f5-a7cc-2dc059409e70/cluster-storage-operator/0.log"
Feb 16 21:02:58.470964 master-0 kubenswrapper[7926]: I0216 21:02:58.470800 7926 generic.go:334] "Generic (PLEG): container finished" podID="aa2e9bbc-3962-45f5-a7cc-2dc059409e70" containerID="d95fdd7082b515ac47df4c4e5100db16158ab71c4fe74d4f5e87ded21ddfd407" exitCode=255
Feb 16 21:02:58.474951 master-0 kubenswrapper[7926]: I0216 21:02:58.474895 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-8gnq5_27c20f63-9bfb-4703-94d5-0c65475e08d1/authentication-operator/3.log"
Feb 16 21:02:58.475836 master-0 kubenswrapper[7926]: I0216 21:02:58.475747 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-8gnq5_27c20f63-9bfb-4703-94d5-0c65475e08d1/authentication-operator/2.log"
Feb 16 21:02:58.476582 master-0 kubenswrapper[7926]: I0216 21:02:58.476503 7926 generic.go:334] "Generic (PLEG): container finished" podID="27c20f63-9bfb-4703-94d5-0c65475e08d1" containerID="42d2b8ae4604c72ca108f769893f6589ee95474077ff8dd9cf87399459c2ec53" exitCode=255
Feb 16 21:02:58.479736 master-0 kubenswrapper[7926]: I0216 21:02:58.479682 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-q5vjl_2ab0a907-7abe-4808-ba21-bdda1506eae2/service-ca-operator/2.log"
Feb 16 21:02:58.481134 master-0 kubenswrapper[7926]: I0216 21:02:58.480361 7926 generic.go:334] "Generic (PLEG): container finished" podID="2ab0a907-7abe-4808-ba21-bdda1506eae2" containerID="a4e5e42cc4ff83859a8656b165ef7357fe4b7dff02702e6e7921002edc0c6d8d" exitCode=255
Feb 16 21:02:58.482817 master-0 kubenswrapper[7926]: I0216 21:02:58.482755 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-n4hfs_1b61063e-775e-421d-bf73-a6ef134293a0/network-operator/2.log"
Feb 16 21:02:58.483602 master-0 kubenswrapper[7926]: I0216 21:02:58.483518 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-n4hfs_1b61063e-775e-421d-bf73-a6ef134293a0/network-operator/1.log"
Feb 16 21:02:58.484529 master-0 kubenswrapper[7926]: I0216 21:02:58.484497 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-n4hfs_1b61063e-775e-421d-bf73-a6ef134293a0/network-operator/0.log"
Feb 16 21:02:58.484529 master-0 kubenswrapper[7926]: I0216 21:02:58.484530 7926 generic.go:334] "Generic (PLEG): container finished" podID="1b61063e-775e-421d-bf73-a6ef134293a0" containerID="c9124f9d5e41db03a56db8d08da400aa35fdd671c20974a9991273c405896bc3" exitCode=255
Feb 16 21:02:58.486756 master-0 kubenswrapper[7926]: I0216 21:02:58.486721 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-xzww8_e7adbe32-b8b9-438e-a2e3-f93146a97424/kube-scheduler-operator-container/2.log"
Feb 16 21:02:58.487461 master-0 kubenswrapper[7926]: I0216 21:02:58.487413 7926 generic.go:334] "Generic (PLEG): container finished" podID="e7adbe32-b8b9-438e-a2e3-f93146a97424" containerID="b14701382aa95b48c51ea29fa658b5538f88b2a7a4c18fcdfc110d59ae2c79fe" exitCode=255
Feb 16 21:02:58.489286 master-0 kubenswrapper[7926]: I0216 21:02:58.489199 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-xbd96_59237aa6-6250-4619-8ee5-abae59f04b57/openshift-config-operator/3.log"
Feb 16 21:02:58.490159 master-0 kubenswrapper[7926]: I0216 21:02:58.490101 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-xbd96_59237aa6-6250-4619-8ee5-abae59f04b57/openshift-config-operator/2.log"
Feb 16 21:02:58.491968 master-0 kubenswrapper[7926]: I0216 21:02:58.491840 7926 generic.go:334] "Generic (PLEG): container finished" podID="59237aa6-6250-4619-8ee5-abae59f04b57" containerID="17079b6bb35f03cd05daf5c195f411f2535030b49cc220f1d1c122f18282a8c6" exitCode=255
Feb 16 21:02:58.498161 master-0 kubenswrapper[7926]: I0216 21:02:58.498131 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-ff6c9b66-kh4d4_2506c282-0b37-4ece-8a0c-885d0b7f7901/cluster-node-tuning-operator/0.log"
Feb 16 21:02:58.498255 master-0 kubenswrapper[7926]: I0216 21:02:58.498168 7926 generic.go:334] "Generic (PLEG): container finished" podID="2506c282-0b37-4ece-8a0c-885d0b7f7901" containerID="24435a7f63a96b1a49a7d14efbc7fac8f5f69a776a662db4bff0a9f0d5933f6b" exitCode=1
Feb 16 21:02:58.501619 master-0 kubenswrapper[7926]: I0216 21:02:58.501587 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-cl5ld_0b02b740-5698-4e9a-90fe-2873bd0b0958/kube-apiserver-operator/2.log"
Feb 16 21:02:58.502260 master-0 kubenswrapper[7926]: I0216 21:02:58.502226 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-cl5ld_0b02b740-5698-4e9a-90fe-2873bd0b0958/kube-apiserver-operator/1.log"
Feb 16 21:02:58.502741 master-0 kubenswrapper[7926]: I0216 21:02:58.502712 7926 generic.go:334] "Generic (PLEG): container finished" podID="0b02b740-5698-4e9a-90fe-2873bd0b0958" containerID="467db04b7bff5a3b4be9912b3821541f7f7357f38d787b4e261ea72ceb3d15af" exitCode=255
Feb 16 21:03:00.099405 master-0 kubenswrapper[7926]: I0216 21:03:00.099326 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body=
Feb 16 21:03:00.099405 master-0 kubenswrapper[7926]: I0216 21:03:00.099394 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Feb 16 21:03:00.518203 master-0 kubenswrapper[7926]: I0216 21:03:00.518149 7926 generic.go:334] "Generic (PLEG): container finished" podID="5e062e07-8076-444c-b476-4eb2848e9613" containerID="8d6fd2d30a1b00edfb997113793ad55fbf5dca8c4b949fed22018dbb444c09ad" exitCode=0
Feb 16 21:03:00.974856 master-0 kubenswrapper[7926]: E0216 21:03:00.974793 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 21:03:00.975227 master-0 kubenswrapper[7926]: E0216 21:03:00.975195 7926 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 16 21:03:03.099047 master-0 kubenswrapper[7926]: I0216 21:03:03.098966 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body=
Feb 16 21:03:03.099047 master-0 kubenswrapper[7926]: I0216 21:03:03.099039 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Feb 16 21:03:03.908370 master-0 kubenswrapper[7926]: I0216 21:03:03.908265 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused"
Feb 16 21:03:04.545490 master-0 kubenswrapper[7926]: I0216 21:03:04.545407 7926 generic.go:334] "Generic (PLEG): container finished" podID="9e0227bc-63f5-48be-95dc-1323a2b2e327" containerID="a7330b931340d1be5dba0fd54e8b246009c00f6e813142a46ee5264b4ff67461" exitCode=0
Feb 16 21:03:05.553591 master-0 kubenswrapper[7926]: I0216 21:03:05.553516 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/2.log"
Feb 16 21:03:05.554387 master-0 kubenswrapper[7926]: I0216 21:03:05.554205 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/1.log"
Feb 16 21:03:05.554947 master-0 kubenswrapper[7926]: I0216 21:03:05.554898 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/0.log"
Feb 16 21:03:05.555052 master-0 kubenswrapper[7926]: I0216 21:03:05.554964 7926 generic.go:334] "Generic (PLEG): container finished" podID="b1ac9776-54c4-46ce-b898-01c8cf35e593" containerID="065597b5437e593f0a8e56b505329babf0faf4f1f2e62294ff4f61a62c0f9e9c" exitCode=1
Feb 16 21:03:06.098744 master-0 kubenswrapper[7926]: I0216 21:03:06.098577 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body=
Feb 16 21:03:06.098744 master-0 kubenswrapper[7926]: I0216 21:03:06.098733 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Feb 16 21:03:06.561675 master-0 kubenswrapper[7926]: I0216 21:03:06.561620 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-749ccd9c56-wzsnf_4db59450-da78-4879-ada8-ca3fc49fb7a7/route-controller-manager/2.log"
Feb 16 21:03:06.562220 master-0 kubenswrapper[7926]: I0216 21:03:06.562192 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-749ccd9c56-wzsnf_4db59450-da78-4879-ada8-ca3fc49fb7a7/route-controller-manager/1.log"
Feb 16 21:03:06.562619 master-0 kubenswrapper[7926]: I0216 21:03:06.562590 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-749ccd9c56-wzsnf_4db59450-da78-4879-ada8-ca3fc49fb7a7/route-controller-manager/0.log"
Feb 16 21:03:06.562698 master-0 kubenswrapper[7926]: I0216 21:03:06.562622 7926 generic.go:334] "Generic (PLEG): container finished" podID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerID="bc0c280e8d6f945eb33fad59cb0d8a4aedc8f5ca975f567efb9b9400f3b825d3" exitCode=255
Feb 16 21:03:07.735740 master-0 kubenswrapper[7926]: I0216 21:03:07.735607 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused"
Feb 16 21:03:08.762543 master-0 kubenswrapper[7926]: I0216 21:03:08.762442 7926 status_manager.go:851] "Failed to get status for pod" podUID="e9615af2-cad5-4705-9c2f-6f3c97026100" pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods insights-operator-cb4f7b4cf-h8f7q)"
Feb 16 21:03:08.876454 master-0 kubenswrapper[7926]: E0216 21:03:08.876391 7926 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0"
Feb 16 21:03:08.876634 master-0 kubenswrapper[7926]: E0216 21:03:08.876602 7926 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.02s"
Feb 16 21:03:08.877326 master-0 kubenswrapper[7926]: I0216 21:03:08.877256 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerDied","Data":"2c898903534a5f988f1749dcd6c1e5b9207da73639c9cd5e05f502774c7b05c3"}
Feb 16 21:03:08.877326 master-0 kubenswrapper[7926]: I0216 21:03:08.877328 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5"
Feb 16 21:03:08.877559 master-0 kubenswrapper[7926]: I0216 21:03:08.877350 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf"
Feb 16 21:03:08.877559 master-0 kubenswrapper[7926]: I0216 21:03:08.877328 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb"
Feb 16 21:03:08.877559 master-0 kubenswrapper[7926]: I0216 21:03:08.877483 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sn2nh"
Feb 16 21:03:08.877983 master-0 kubenswrapper[7926]: I0216 21:03:08.877952 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb"
Feb 16 21:03:08.879496 master-0 kubenswrapper[7926]: I0216 21:03:08.878959 7926 scope.go:117] "RemoveContainer" containerID="85337e79dc5b98043d14ed182cca1ddb76f517beb26b734efc337c20a18b289f"
Feb 16 21:03:08.879947 master-0 kubenswrapper[7926]: I0216 21:03:08.879921 7926 scope.go:117] "RemoveContainer" containerID="0c4056212013eaff1f5d405532bbe8e1791cff62d95615157652d9167450664a"
Feb 16 21:03:08.880055 master-0 kubenswrapper[7926]: I0216 21:03:08.880018 7926 scope.go:117] "RemoveContainer" containerID="8d6fd2d30a1b00edfb997113793ad55fbf5dca8c4b949fed22018dbb444c09ad"
Feb 16 21:03:08.880254 master-0 kubenswrapper[7926]: I0216 21:03:08.880215 7926 scope.go:117] "RemoveContainer" containerID="42d2b8ae4604c72ca108f769893f6589ee95474077ff8dd9cf87399459c2ec53"
Feb 16 21:03:08.880332 master-0 kubenswrapper[7926]: E0216 21:03:08.880291 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-olm-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-olm-operator pod=cluster-olm-operator-55b69c6c48-pdjn4_openshift-cluster-olm-operator(5e062e07-8076-444c-b476-4eb2848e9613)\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" podUID="5e062e07-8076-444c-b476-4eb2848e9613"
Feb 16 21:03:08.880441 master-0 kubenswrapper[7926]: E0216 21:03:08.880410 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=authentication-operator pod=authentication-operator-755d954778-8gnq5_openshift-authentication-operator(27c20f63-9bfb-4703-94d5-0c65475e08d1)\"" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1"
Feb 16 21:03:08.880715 master-0 kubenswrapper[7926]: I0216 21:03:08.880679 7926 scope.go:117] "RemoveContainer" containerID="a7330b931340d1be5dba0fd54e8b246009c00f6e813142a46ee5264b4ff67461"
Feb 16 21:03:08.881062 master-0 kubenswrapper[7926]: I0216 21:03:08.881026 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sn2nh"
Feb 16 21:03:08.881443 master-0 kubenswrapper[7926]: I0216 21:03:08.881396 7926 scope.go:117] "RemoveContainer" containerID="121dab1fc95eacb58da984bcdc1166fb24200dd1db3a8ef3613a520edb17c265"
Feb 16 21:03:08.881766 master-0 kubenswrapper[7926]: I0216 21:03:08.881606 7926 scope.go:117] "RemoveContainer" containerID="d95fdd7082b515ac47df4c4e5100db16158ab71c4fe74d4f5e87ded21ddfd407"
Feb 16 21:03:08.881766 master-0 kubenswrapper[7926]: E0216 21:03:08.881681 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-storage-version-migrator-operator pod=kube-storage-version-migrator-operator-cd5474998-56v4p_openshift-kube-storage-version-migrator-operator(c7333319-3fe6-4b3f-b600-6b6df49fcaff)\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" podUID="c7333319-3fe6-4b3f-b600-6b6df49fcaff"
Feb 16 21:03:08.881902 master-0 kubenswrapper[7926]: I0216 21:03:08.881835 7926 scope.go:117] "RemoveContainer" containerID="0cc0798e5012d359ad3d59e34898cddf8ad150cc9f48b65f4d686bb956001a13"
Feb 16 21:03:08.881902 master-0 kubenswrapper[7926]: I0216 21:03:08.881870 7926 scope.go:117] "RemoveContainer" containerID="7e0471aa80085ed85cb40c9b3c8ab6f80ea1655f1734a052a840a434c72c54f4"
Feb 16 21:03:08.882118 master-0 kubenswrapper[7926]: I0216 21:03:08.882074 7926 scope.go:117] "RemoveContainer" containerID="6b4aa228ac152077a166b064e9b5bf093a0844f95733cd091a0e3bf8ac6b0c9d"
Feb 16 21:03:08.883184 master-0 kubenswrapper[7926]: I0216 21:03:08.883142 7926 scope.go:117] "RemoveContainer" containerID="fe90aa9198533517faa6871ececff317856fe5ccb78abe5de0ace1b89b25d9f3"
Feb 16 21:03:08.883446 master-0 kubenswrapper[7926]: I0216 21:03:08.883401 7926 scope.go:117] "RemoveContainer" containerID="a4e5e42cc4ff83859a8656b165ef7357fe4b7dff02702e6e7921002edc0c6d8d"
Feb 16 21:03:08.883535 master-0 kubenswrapper[7926]: I0216 21:03:08.883480 7926 scope.go:117] "RemoveContainer" containerID="467db04b7bff5a3b4be9912b3821541f7f7357f38d787b4e261ea72ceb3d15af"
Feb 16 21:03:08.883814 master-0 kubenswrapper[7926]: E0216 21:03:08.883685 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-54984b6678-cl5ld_openshift-kube-apiserver-operator(0b02b740-5698-4e9a-90fe-2873bd0b0958)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" podUID="0b02b740-5698-4e9a-90fe-2873bd0b0958"
Feb 16 21:03:08.885158 master-0 kubenswrapper[7926]: I0216 21:03:08.885081 7926 scope.go:117] "RemoveContainer" containerID="bc0c280e8d6f945eb33fad59cb0d8a4aedc8f5ca975f567efb9b9400f3b825d3"
Feb 16 21:03:08.885416 master-0 kubenswrapper[7926]: E0216 21:03:08.885371 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=route-controller-manager pod=route-controller-manager-749ccd9c56-wzsnf_openshift-route-controller-manager(4db59450-da78-4879-ada8-ca3fc49fb7a7)\"" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7"
Feb 16 21:03:08.889790 master-0 kubenswrapper[7926]: I0216 21:03:08.889639 7926 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Feb 16 21:03:09.099532 master-0 kubenswrapper[7926]: I0216 21:03:09.099479 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body=
Feb 16 21:03:09.099722 master-0 kubenswrapper[7926]: I0216 21:03:09.099534 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Feb 16 21:03:09.587107 master-0 kubenswrapper[7926]: I0216 21:03:09.586976 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-8cllz_70d217a9-86b7-47b9-a7da-9ac920b9c7c2/etcd-operator/2.log"
Feb 16 21:03:09.589760 master-0 kubenswrapper[7926]: I0216 21:03:09.589729 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-q5vjl_2ab0a907-7abe-4808-ba21-bdda1506eae2/service-ca-operator/2.log"
Feb 16 21:03:09.592553 master-0 kubenswrapper[7926]: I0216 21:03:09.592508 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-tvzdw_6b6be6de-6fcc-4f57-b163-fe8f970a01a4/openshift-apiserver-operator/2.log"
Feb 16 21:03:09.595127 master-0 kubenswrapper[7926]: I0216 21:03:09.595095 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-75b869db96-g4w5m_aa2e9bbc-3962-45f5-a7cc-2dc059409e70/cluster-storage-operator/1.log"
Feb 16 21:03:09.595782 master-0 kubenswrapper[7926]: I0216 21:03:09.595738 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-75b869db96-g4w5m_aa2e9bbc-3962-45f5-a7cc-2dc059409e70/cluster-storage-operator/0.log"
Feb 16 21:03:09.615726 master-0 kubenswrapper[7926]: I0216 21:03:09.615690 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/2.log"
Feb 16 21:03:09.616346 master-0 kubenswrapper[7926]: I0216 21:03:09.616310 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/1.log"
Feb 16 21:03:09.617197 master-0 kubenswrapper[7926]: I0216 21:03:09.617152 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/0.log"
Feb 16 21:03:09.617682 master-0 kubenswrapper[7926]: I0216 21:03:09.617627 7926 scope.go:117] "RemoveContainer" containerID="42d2b8ae4604c72ca108f769893f6589ee95474077ff8dd9cf87399459c2ec53"
Feb 16 21:03:09.617854 master-0 kubenswrapper[7926]: E0216 21:03:09.617822 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=authentication-operator pod=authentication-operator-755d954778-8gnq5_openshift-authentication-operator(27c20f63-9bfb-4703-94d5-0c65475e08d1)\"" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1"
Feb 16 21:03:11.002834 master-0 kubenswrapper[7926]: E0216 21:03:11.002532 7926 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{openshift-controller-manager-operator-5f5f84757d-k42w9.1894d59e7e14b4d6 openshift-controller-manager-operator 3863 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager-operator,Name:openshift-controller-manager-operator-5f5f84757d-k42w9,UID:695549c8-d1fc-429d-9c9f-0a5915dc6074,APIVersion:v1,ResourceVersion:3733,FieldPath:spec.containers{openshift-controller-manager-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:57:10 +0000 UTC,LastTimestamp:2026-02-16 20:58:22.759477372 +0000 UTC m=+74.394377672,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 21:03:12.099295 master-0 kubenswrapper[7926]: I0216 21:03:12.099203 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body=
Feb 16 21:03:12.099295 master-0 kubenswrapper[7926]: I0216 21:03:12.099280 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Feb 16 21:03:12.474564 master-0 kubenswrapper[7926]: E0216 21:03:12.474368 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 16 21:03:15.099335 master-0 kubenswrapper[7926]: I0216 21:03:15.099232 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body=
Feb 16 21:03:15.099335 master-0 kubenswrapper[7926]: I0216 21:03:15.099307 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Feb 16 21:03:18.099564 master-0 kubenswrapper[7926]: I0216 21:03:18.099470 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body=
Feb 16 21:03:18.100139 master-0 kubenswrapper[7926]: I0216 21:03:18.099574 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused"
Feb 16 21:03:21.030071 master-0 kubenswrapper[7926]: E0216 21:03:21.029860 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:03:11Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:03:11Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:03:11Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:03:11Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:3e2f869b1c4f98a628b2e54c1516a0d0c09c760c91e0e1a940cb76149217661b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:97930d07a108f20287bd5ceb046a5ab125604b2e3564077db9f7d7c077cc5852\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1701129928},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\"],\\\"sizeBytes\\\":1631983282},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:0b4dc203ac00318362470f07842ed97dc1c724d32fa07c1613f15fcf4bf54ec8\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cc6c845176bbdca205e7c9628ea993ed70da3b2516bac35d68d9f52059fad674\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1234421961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181\\\"],\\\"sizeBytes\\\":1232696860},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:06dcb25b4ae74ef159663cc2318f84e4665c7889b38ed62940259e5edd2b576f\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:a81101fb2bf3c75acf3e62bf09b19b67bccbde0faf09bd379a491f5eadb8afc1\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1213098166},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:28df36269fc553eb1adba5566d6dfc258a1a74063c4cfe8b5bdd3f202591cf56\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:7fa59a55753e6c646b3b56a1a7080a5d70767fb964f1857c411fdf4e05ad4c71\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1201887930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42\\\"],\\\"sizeBytes\\\":987280724},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\"],\\\"sizeBytes\\\":938665460},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc\\\"],\\\"sizeBytes\\\":913084961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1faa2081a881db884a86bdfe33fcb6a6af1d14c3e9ee5c44dfe4b09045684e13\\\"],\\\"sizeBytes\\\":875178413},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072\\\"],\\\"sizeBytes\\\":870929735},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c\\\"],\\\"sizeBytes\\\":857432360},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07093043bca0089b3c56d9e5331e68f549541e5661e2a39a260aa534dc9528bd\\\"],\\\"sizeBytes\\\":767663184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e30865ea7d55b76cb925c7d26c650f0bc70fd9a02d7d59d0fe1a3024426229ad\\\"],\\\"sizeBytes\\\":682673937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7\\\"],\\\"sizeBytes\\\":677894171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55\\\"],\\\"sizeBytes\\\":672642165},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e\\\"],\\\"sizeBytes\\\":616473928},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95\\\"],\\\"sizeBytes\\\":584205881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954\\\"],\\\"sizeBytes\\\":576983707},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee\\\"],\\\"sizeBytes\\\":553036394},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471\\\"],\\\"sizeBytes\\\":552251951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861\\\"],\\\"sizeBytes\\\":543577525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\\\"],\\\"sizeBytes\\\":524042902},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d\\\"],\\\"sizeBytes\\\":523760203},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399\\\"],\\\"sizeBytes\\\":513211213},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc\\\"],\\\"sizeBytes\\\":512819769},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\"],\\\"sizeBytes\\\":509806416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a90d19460fbc705172df7759a3da394930623c6b6974620b79ffa07bab53c51f\\\"],\\\"sizeBytes\\\":508404525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963\\\"],\\\"sizeBytes\\\":508050651},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5\\\"],\\\"sizeBytes\\\":507103881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3\\\"],\\\"sizeBytes\\\":506056636},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1\\\"],\\\"sizeBytes\\\":505990615},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39\\\"],\\\"sizeBytes\\\":503717987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e\\\"],\\\"sizeBytes\\\":503374574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88\\\"],\\\"sizeBytes\\\":502798848},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1\\\"],\\\"sizeBytes\\\":501305896},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a\\\"],\\\"sizeBytes\\\":501222351},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192\\\"],\\\"sizeBytes\\\":500175306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\\\"],\\\"sizeBytes\\\":500068323},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6ab8803bac3ebada13e90d9dd6208301b981488277cdeb847c25ff8002f5a30\\\"],\\\"sizeBytes\\\":499489508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144\\\"],\\\"sizeBytes\\\":499445182},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b\\\"],\\\"sizeBytes\\\":490819380},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b\\\"],\\\"sizeBytes\\\":489891070},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38\\\"],\\\"sizeBytes\\\":481921522},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041\\\"],\\\"sizeBytes\\\":479280723},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ea13b0cbfe9be0d3d7ea80d50e512af6a453921a553c7c79b566530142b611b\\\"],\\\"sizeBytes\\\":479006001},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b8fb1f11df51c131f5be8ddfc1b1c95ac13481f58d2dcd5a465a4a8341c0f49\\\"],\\\"sizeBytes\\\":465648392},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb\\\"],\\\"sizeBytes\\\":465507019},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1c8b9784a60860a08bd47935f0767b7b7f8f36c5c0adb7623a31b82c01d4c09\\\"],\\\"sizeBytes\\\":463090242},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e7ac69aff2f28f6b3cbdb166c7dac7a3490167bcd670cd7057bdd
e1e1e7684d\\\"],\\\"sizeBytes\\\":462065055}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:03:21.099632 master-0 kubenswrapper[7926]: I0216 21:03:21.099544 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:03:21.099900 master-0 kubenswrapper[7926]: I0216 21:03:21.099718 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:03:21.888279 master-0 kubenswrapper[7926]: E0216 21:03:21.888210 7926 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 16 21:03:24.099597 master-0 kubenswrapper[7926]: I0216 21:03:24.099502 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:03:24.099597 master-0 kubenswrapper[7926]: I0216 21:03:24.099588 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" 
probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:03:26.761526 master-0 kubenswrapper[7926]: I0216 21:03:26.760945 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-9m94g_4b035e85-b2b0-4dee-bb86-3465fc4b98a8/package-server-manager/0.log" Feb 16 21:03:26.762709 master-0 kubenswrapper[7926]: I0216 21:03:26.762582 7926 generic.go:334] "Generic (PLEG): container finished" podID="4b035e85-b2b0-4dee-bb86-3465fc4b98a8" containerID="95cb75164641c9de6a0109a60c606bf650f57a11a7796ffdbcb05ca7aa385e4c" exitCode=1 Feb 16 21:03:27.099307 master-0 kubenswrapper[7926]: I0216 21:03:27.099111 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:03:27.099307 master-0 kubenswrapper[7926]: I0216 21:03:27.099173 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:03:27.776446 master-0 kubenswrapper[7926]: I0216 21:03:27.776357 7926 generic.go:334] "Generic (PLEG): container finished" podID="e9615af2-cad5-4705-9c2f-6f3c97026100" containerID="43a48a6592fa00c02a3165bc38965569bd23dac45b30b2fdc517303872a72e62" exitCode=0 Feb 16 21:03:28.501991 master-0 kubenswrapper[7926]: I0216 21:03:28.501847 7926 scope.go:117] "RemoveContainer" containerID="ee117aab23c2955afe2d46ebc740378a94898d9f452c30c51846fd6b5013569e" Feb 16 21:03:28.787317 master-0 kubenswrapper[7926]: 
I0216 21:03:28.787141 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/1.log" Feb 16 21:03:28.788997 master-0 kubenswrapper[7926]: I0216 21:03:28.788937 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/0.log" Feb 16 21:03:28.789114 master-0 kubenswrapper[7926]: I0216 21:03:28.789021 7926 generic.go:334] "Generic (PLEG): container finished" podID="cef33294-81fb-41a2-811d-2565f94514d1" containerID="5b1674388d3a0d8fb07d284207cc23840a32ef17ddc0f1ef774d2188e32d3e84" exitCode=1 Feb 16 21:03:29.476989 master-0 kubenswrapper[7926]: E0216 21:03:29.476810 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 16 21:03:30.099039 master-0 kubenswrapper[7926]: I0216 21:03:30.098957 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:03:30.099879 master-0 kubenswrapper[7926]: I0216 21:03:30.099036 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:03:31.030614 master-0 kubenswrapper[7926]: E0216 21:03:31.030542 7926 
kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:03:31.659919 master-0 kubenswrapper[7926]: I0216 21:03:31.659786 7926 patch_prober.go:28] interesting pod/package-server-manager-5c696dbdcd-9m94g container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" start-of-body= Feb 16 21:03:31.660641 master-0 kubenswrapper[7926]: I0216 21:03:31.659916 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" podUID="4b035e85-b2b0-4dee-bb86-3465fc4b98a8" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" Feb 16 21:03:31.660641 master-0 kubenswrapper[7926]: I0216 21:03:31.659960 7926 patch_prober.go:28] interesting pod/package-server-manager-5c696dbdcd-9m94g container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" start-of-body= Feb 16 21:03:31.660641 master-0 kubenswrapper[7926]: I0216 21:03:31.660027 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" podUID="4b035e85-b2b0-4dee-bb86-3465fc4b98a8" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.20:8080/healthz\": dial tcp 10.128.0.20:8080: connect: connection refused" Feb 16 21:03:33.099149 master-0 kubenswrapper[7926]: I0216 21:03:33.099053 7926 patch_prober.go:28] 
interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:03:33.099149 master-0 kubenswrapper[7926]: I0216 21:03:33.099117 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:03:33.836851 master-0 kubenswrapper[7926]: E0216 21:03:33.836791 7926 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="24.959s" Feb 16 21:03:33.837059 master-0 kubenswrapper[7926]: I0216 21:03:33.836867 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:03:33.837059 master-0 kubenswrapper[7926]: I0216 21:03:33.837006 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 21:03:33.837059 master-0 kubenswrapper[7926]: I0216 21:03:33.837037 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 21:03:33.837702 master-0 kubenswrapper[7926]: I0216 21:03:33.837056 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" event={"ID":"1b61063e-775e-421d-bf73-a6ef134293a0","Type":"ContainerDied","Data":"22ac853b44d567411363f432db892ab502ff1733ca2ac03896be62f2c9a7c4fc"} Feb 16 21:03:33.839671 master-0 kubenswrapper[7926]: I0216 21:03:33.837773 7926 scope.go:117] 
"RemoveContainer" containerID="121dab1fc95eacb58da984bcdc1166fb24200dd1db3a8ef3613a520edb17c265" Feb 16 21:03:33.839671 master-0 kubenswrapper[7926]: I0216 21:03:33.839248 7926 scope.go:117] "RemoveContainer" containerID="467db04b7bff5a3b4be9912b3821541f7f7357f38d787b4e261ea72ceb3d15af" Feb 16 21:03:33.839671 master-0 kubenswrapper[7926]: I0216 21:03:33.839534 7926 scope.go:117] "RemoveContainer" containerID="8d6fd2d30a1b00edfb997113793ad55fbf5dca8c4b949fed22018dbb444c09ad" Feb 16 21:03:33.840325 master-0 kubenswrapper[7926]: I0216 21:03:33.840146 7926 scope.go:117] "RemoveContainer" containerID="bc0c280e8d6f945eb33fad59cb0d8a4aedc8f5ca975f567efb9b9400f3b825d3" Feb 16 21:03:33.841014 master-0 kubenswrapper[7926]: I0216 21:03:33.840633 7926 scope.go:117] "RemoveContainer" containerID="42d2b8ae4604c72ca108f769893f6589ee95474077ff8dd9cf87399459c2ec53" Feb 16 21:03:33.842946 master-0 kubenswrapper[7926]: I0216 21:03:33.842923 7926 scope.go:117] "RemoveContainer" containerID="c9124f9d5e41db03a56db8d08da400aa35fdd671c20974a9991273c405896bc3" Feb 16 21:03:33.844784 master-0 kubenswrapper[7926]: I0216 21:03:33.844718 7926 scope.go:117] "RemoveContainer" containerID="335a1a7f7a9fe31928e784a1b8c27628b0095f9bd1bb4c356dc580de874df2a9" Feb 16 21:03:33.853392 master-0 kubenswrapper[7926]: I0216 21:03:33.853331 7926 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 16 21:03:33.872475 master-0 kubenswrapper[7926]: I0216 21:03:33.872431 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 21:03:33.872826 master-0 kubenswrapper[7926]: W0216 21:03:33.872701 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod319dc882_e1f5_40f9_99f4_2bae028337e5.slice/crio-a5c8e6b51575e43d26e0817313f1ec460f29cff6ceb6629a7a5e2f186f585513 
WatchSource:0}: Error finding container a5c8e6b51575e43d26e0817313f1ec460f29cff6ceb6629a7a5e2f186f585513: Status 404 returned error can't find the container with id a5c8e6b51575e43d26e0817313f1ec460f29cff6ceb6629a7a5e2f186f585513 Feb 16 21:03:33.872939 master-0 kubenswrapper[7926]: I0216 21:03:33.872784 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 21:03:33.873006 master-0 kubenswrapper[7926]: I0216 21:03:33.872960 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" Feb 16 21:03:33.873132 master-0 kubenswrapper[7926]: I0216 21:03:33.873088 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 21:03:33.873235 master-0 kubenswrapper[7926]: I0216 21:03:33.873148 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965","Type":"ContainerDied","Data":"5f4f1f7bf4711de84107b1c6040a91b2b71847aa5f151a70149a5a43fdbb16fc"} Feb 16 21:03:33.873311 master-0 kubenswrapper[7926]: I0216 21:03:33.873257 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 16 21:03:33.873367 master-0 kubenswrapper[7926]: I0216 21:03:33.873316 7926 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="c9a5c8b5-008f-47c7-b18e-4ee1cd779655" Feb 16 21:03:33.873441 master-0 kubenswrapper[7926]: I0216 21:03:33.873378 7926 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" containerID="cri-o://03a2959cd7d7099deb65fa1d96597cd3ebf6031635df4c580705d88b4f782bc3" Feb 16 21:03:33.873509 master-0 kubenswrapper[7926]: 
I0216 21:03:33.873425 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" Feb 16 21:03:33.873509 master-0 kubenswrapper[7926]: I0216 21:03:33.873466 7926 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 16 21:03:33.873509 master-0 kubenswrapper[7926]: I0216 21:03:33.873482 7926 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="c9a5c8b5-008f-47c7-b18e-4ee1cd779655" Feb 16 21:03:33.873719 master-0 kubenswrapper[7926]: I0216 21:03:33.873531 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" event={"ID":"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e","Type":"ContainerDied","Data":"8f381e0ba80bb61f122cb6f8dc6fbf0f4de7cc56a19bdf606299e77668a6c669"} Feb 16 21:03:33.873719 master-0 kubenswrapper[7926]: I0216 21:03:33.873621 7926 status_manager.go:317] "Container readiness changed for unknown container" pod="kube-system/bootstrap-kube-controller-manager-master-0" containerID="cri-o://0cc0798e5012d359ad3d59e34898cddf8ad150cc9f48b65f4d686bb956001a13" Feb 16 21:03:33.873719 master-0 kubenswrapper[7926]: I0216 21:03:33.873643 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:03:33.873971 master-0 kubenswrapper[7926]: I0216 21:03:33.873764 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4"] Feb 16 21:03:33.873971 master-0 kubenswrapper[7926]: I0216 21:03:33.873790 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb"] Feb 16 21:03:33.874170 master-0 kubenswrapper[7926]: I0216 21:03:33.874136 7926 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 21:03:33.874305 master-0 kubenswrapper[7926]: I0216 21:03:33.874275 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sn2nh"] Feb 16 21:03:33.874549 master-0 kubenswrapper[7926]: I0216 21:03:33.874510 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" event={"ID":"2ab0a907-7abe-4808-ba21-bdda1506eae2","Type":"ContainerDied","Data":"0e76905998b63e1ca06bb636f257a337f36ba01b7d03a406ab7d6fa3bdb3b545"} Feb 16 21:03:33.874717 master-0 kubenswrapper[7926]: I0216 21:03:33.874642 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 21:03:33.874788 master-0 kubenswrapper[7926]: I0216 21:03:33.874722 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" event={"ID":"4db59450-da78-4879-ada8-ca3fc49fb7a7","Type":"ContainerDied","Data":"c01a97aeea491e06b4f6bd168a545331d557799591733b3afb1c1070b9661f2a"} Feb 16 21:03:33.874788 master-0 kubenswrapper[7926]: I0216 21:03:33.874754 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 21:03:33.874879 master-0 kubenswrapper[7926]: I0216 21:03:33.874815 7926 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" containerID="cri-o://d1bc5bc3b429e39609506c1bed3cc8e8c06f4002e3b95ecbfe86ba10e124ab93" Feb 16 21:03:33.876273 master-0 kubenswrapper[7926]: I0216 21:03:33.874831 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" Feb 16 21:03:33.876360 master-0 kubenswrapper[7926]: I0216 21:03:33.876322 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j5kwc"] Feb 16 21:03:33.876424 master-0 kubenswrapper[7926]: I0216 21:03:33.876396 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" event={"ID":"27c20f63-9bfb-4703-94d5-0c65475e08d1","Type":"ContainerDied","Data":"58d545a4271a615d484834ce5f2e4aae18f89163dd820abd13282ebc492d6372"} Feb 16 21:03:33.876470 master-0 kubenswrapper[7926]: I0216 21:03:33.876458 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" Feb 16 21:03:33.876519 master-0 kubenswrapper[7926]: I0216 21:03:33.876478 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" event={"ID":"c7333319-3fe6-4b3f-b600-6b6df49fcaff","Type":"ContainerDied","Data":"a773bd017f0bba4a3a74bfe52982d094692dcc11d0231ea1c51b561373a69c1c"} Feb 16 21:03:33.876519 master-0 kubenswrapper[7926]: I0216 21:03:33.876504 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-tpj6f" event={"ID":"88f19cea-60ed-4977-a906-75deec51fc3d","Type":"ContainerDied","Data":"d0734d0596c43a54e8c5763783b157c38da058f6ee7d80add1702898fd0efe5d"} Feb 16 21:03:33.876671 master-0 kubenswrapper[7926]: I0216 21:03:33.876526 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 21:03:33.876671 master-0 kubenswrapper[7926]: I0216 21:03:33.876545 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" event={"ID":"0b02b740-5698-4e9a-90fe-2873bd0b0958","Type":"ContainerDied","Data":"6c789ad424d6da26da31c06317afc3ff04d13db41b3d9ada1b99dd43bd4685c9"} Feb 16 21:03:33.876671 master-0 kubenswrapper[7926]: I0216 21:03:33.876567 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:03:33.876671 master-0 kubenswrapper[7926]: I0216 21:03:33.876591 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" event={"ID":"59237aa6-6250-4619-8ee5-abae59f04b57","Type":"ContainerDied","Data":"f3d4628d5b5ba7e58abaf9e10ff02fc0ec3dcdc6373a3be533d5aa05366f0112"} Feb 16 21:03:33.876671 master-0 kubenswrapper[7926]: I0216 21:03:33.876611 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw" event={"ID":"6b6be6de-6fcc-4f57-b163-fe8f970a01a4","Type":"ContainerDied","Data":"75d7b146641140c312956826b413c80f7862cac93292ebbdd2b6b13f8e1b06a3"} Feb 16 21:03:33.876671 master-0 kubenswrapper[7926]: I0216 21:03:33.876676 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" Feb 16 21:03:33.877010 master-0 kubenswrapper[7926]: I0216 21:03:33.876702 7926 status_manager.go:317] "Container readiness changed for unknown container" pod="kube-system/bootstrap-kube-controller-manager-master-0" containerID="cri-o://7e0471aa80085ed85cb40c9b3c8ab6f80ea1655f1734a052a840a434c72c54f4" Feb 16 21:03:33.877010 master-0 kubenswrapper[7926]: I0216 21:03:33.876717 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:03:33.877010 master-0 kubenswrapper[7926]: I0216 21:03:33.876734 7926 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 21:03:33.877010 master-0 kubenswrapper[7926]: I0216 21:03:33.876791 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 21:03:33.877010 master-0 kubenswrapper[7926]: I0216 21:03:33.876849 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 21:03:33.877010 master-0 kubenswrapper[7926]: I0216 21:03:33.876874 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 21:03:33.877010 master-0 kubenswrapper[7926]: I0216 21:03:33.876901 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" Feb 16 21:03:33.877010 master-0 kubenswrapper[7926]: I0216 21:03:33.876931 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" event={"ID":"e7adbe32-b8b9-438e-a2e3-f93146a97424","Type":"ContainerDied","Data":"34f0b2189e90cc7801c4026c4ab900cc1fc9f5ac2f006e83f5fec81671df191f"} Feb 16 21:03:33.877010 master-0 kubenswrapper[7926]: I0216 21:03:33.876964 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:03:33.877010 master-0 kubenswrapper[7926]: I0216 21:03:33.876991 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" event={"ID":"59237aa6-6250-4619-8ee5-abae59f04b57","Type":"ContainerStarted","Data":"b4a34c89cb81e9504af7117b89a4c5b290e24d0a5142668851022560c4487a78"} Feb 16 21:03:33.877010 master-0 kubenswrapper[7926]: I0216 
21:03:33.874996 7926 scope.go:117] "RemoveContainer" containerID="63ebdf0c0200865a719bef6bf6aea428a6aed5c1b2a14851e05503627b70b2a7" Feb 16 21:03:33.877581 master-0 kubenswrapper[7926]: I0216 21:03:33.877473 7926 scope.go:117] "RemoveContainer" containerID="b14701382aa95b48c51ea29fa658b5538f88b2a7a4c18fcdfc110d59ae2c79fe" Feb 16 21:03:33.877733 master-0 kubenswrapper[7926]: I0216 21:03:33.877021 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" Feb 16 21:03:33.877883 master-0 kubenswrapper[7926]: I0216 21:03:33.877861 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 21:03:33.878025 master-0 kubenswrapper[7926]: I0216 21:03:33.877991 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" event={"ID":"8b648d9e-a892-4951-b0e2-fed6b16273d4","Type":"ContainerDied","Data":"ae31b292d6ba5f8d78f8793a9865c571a66292e65886b99ff37b242383c1ffb8"} Feb 16 21:03:33.878155 master-0 kubenswrapper[7926]: I0216 21:03:33.878129 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"31e55b139c998e23cbf2bc02e2f79638ed2388ee42133c4387d01234b192dc1a"} Feb 16 21:03:33.878291 master-0 kubenswrapper[7926]: I0216 21:03:33.878267 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" event={"ID":"e9615af2-cad5-4705-9c2f-6f3c97026100","Type":"ContainerDied","Data":"dd23c2441236e3bdedd04adcd70f26ba2f2b37ed96fb0998ec94c3bbdca5b7da"} Feb 16 21:03:33.878435 master-0 kubenswrapper[7926]: I0216 21:03:33.877760 7926 scope.go:117] "RemoveContainer" 
containerID="17079b6bb35f03cd05daf5c195f411f2535030b49cc220f1d1c122f18282a8c6" Feb 16 21:03:33.878603 master-0 kubenswrapper[7926]: I0216 21:03:33.878405 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:03:33.878723 master-0 kubenswrapper[7926]: I0216 21:03:33.878612 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965","Type":"ContainerDied","Data":"b0c2e1a17593c2d9cad62fca4b76d1bcb53b42211c4063cb3d0e8c42005672a2"} Feb 16 21:03:33.878723 master-0 kubenswrapper[7926]: I0216 21:03:33.878640 7926 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0c2e1a17593c2d9cad62fca4b76d1bcb53b42211c4063cb3d0e8c42005672a2" Feb 16 21:03:33.878723 master-0 kubenswrapper[7926]: I0216 21:03:33.878684 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" Feb 16 21:03:33.878723 master-0 kubenswrapper[7926]: I0216 21:03:33.878703 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" event={"ID":"1b61063e-775e-421d-bf73-a6ef134293a0","Type":"ContainerStarted","Data":"335a1a7f7a9fe31928e784a1b8c27628b0095f9bd1bb4c356dc580de874df2a9"} Feb 16 21:03:33.878723 master-0 kubenswrapper[7926]: I0216 21:03:33.878723 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" event={"ID":"0b02b740-5698-4e9a-90fe-2873bd0b0958","Type":"ContainerStarted","Data":"796cedcccf27a70c4b1fc5e0f9d34776e57cab5bcbac808a8a55396fa052ee09"} Feb 16 21:03:33.879012 master-0 kubenswrapper[7926]: I0216 21:03:33.878745 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" event={"ID":"27c20f63-9bfb-4703-94d5-0c65475e08d1","Type":"ContainerStarted","Data":"4765f14761690375464a0e714d58564cbd8daae8b93a35914f1d74b0169d6221"} Feb 16 21:03:33.879012 master-0 kubenswrapper[7926]: I0216 21:03:33.878764 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" event={"ID":"8b648d9e-a892-4951-b0e2-fed6b16273d4","Type":"ContainerStarted","Data":"ea274ca75c9480032670a52e0f8060808dc2b8ae8a9455bb06740d96dc246ff9"} Feb 16 21:03:33.879012 master-0 kubenswrapper[7926]: I0216 21:03:33.878783 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" event={"ID":"c7333319-3fe6-4b3f-b600-6b6df49fcaff","Type":"ContainerStarted","Data":"47b2c5bac29b78fe7840fe916226c42b6c6d9d0126d96d3a74bd63abd7b0a9ac"} Feb 16 21:03:33.879012 master-0 kubenswrapper[7926]: I0216 21:03:33.878802 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-tpj6f" event={"ID":"88f19cea-60ed-4977-a906-75deec51fc3d","Type":"ContainerStarted","Data":"035e7d01b329ab00b5fb0dd3b6a5b55ee6bd504dee86517456bdcc1b06cd6e19"} Feb 16 21:03:33.879012 master-0 kubenswrapper[7926]: I0216 21:03:33.878832 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 21:03:33.879012 master-0 kubenswrapper[7926]: I0216 21:03:33.878850 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"b09d3c16-18e3-45b3-9d39-949d2464b300","Type":"ContainerDied","Data":"a1a7ba08e2cc5089762afc7ce295fbadf271a58f2006a34cf3be8f3b16ca4e70"} Feb 16 21:03:33.879012 master-0 kubenswrapper[7926]: I0216 21:03:33.878871 7926 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1a7ba08e2cc5089762afc7ce295fbadf271a58f2006a34cf3be8f3b16ca4e70" Feb 16 21:03:33.879012 master-0 kubenswrapper[7926]: I0216 21:03:33.878891 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" event={"ID":"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e","Type":"ContainerStarted","Data":"b91768f3b3b77b8f39dbc687f48f7d020363ab1760dd10d66f66b996778bf8dc"} Feb 16 21:03:33.879012 master-0 kubenswrapper[7926]: I0216 21:03:33.878909 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" event={"ID":"b28234d1-1d9a-4d9f-9ad1-e3c682bed492","Type":"ContainerDied","Data":"1fdce62d33ee01800252ab5e608745339a8f0dbc0ccac60559c706daa3409f0f"} Feb 16 21:03:33.879012 master-0 kubenswrapper[7926]: I0216 21:03:33.878929 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" event={"ID":"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b","Type":"ContainerDied","Data":"b1ac78292de0a544c15af274111c4e933c90f41d601dad32fc19d3dacdb54345"} Feb 16 21:03:33.879012 master-0 kubenswrapper[7926]: I0216 21:03:33.878951 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" event={"ID":"e8194cdc-3133-49e2-9579-a747c0bf2b16","Type":"ContainerDied","Data":"a76963335874f22d97778041d73ee6a0a7e3ffd325f9fb8a457626be3c8e5238"} Feb 16 21:03:33.879012 master-0 kubenswrapper[7926]: I0216 21:03:33.878970 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" event={"ID":"b1ac9776-54c4-46ce-b898-01c8cf35e593","Type":"ContainerDied","Data":"6604687382d89a09dac220e4bde6c4ee9334bbf7429cff3764175c9050a1853c"} Feb 16 21:03:33.879012 master-0 
kubenswrapper[7926]: I0216 21:03:33.878994 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" event={"ID":"695549c8-d1fc-429d-9c9f-0a5915dc6074","Type":"ContainerDied","Data":"da2d8128d877c8e59ec552f44d9719195718721aa40536dc7418200005684242"} Feb 16 21:03:33.879012 master-0 kubenswrapper[7926]: I0216 21:03:33.879017 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m" event={"ID":"aa2e9bbc-3962-45f5-a7cc-2dc059409e70","Type":"ContainerDied","Data":"a339e5c4723737e030c5a03c8395cedd263d3d5213cb12208bfe3004bbd0ef5e"} Feb 16 21:03:33.879012 master-0 kubenswrapper[7926]: I0216 21:03:33.879039 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" event={"ID":"cef33294-81fb-41a2-811d-2565f94514d1","Type":"ContainerDied","Data":"2b191efabecfa6e89d563189d25950b732d83b54240d68732d9bfb22ddbb8e4f"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879061 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" event={"ID":"70d217a9-86b7-47b9-a7da-9ac920b9c7c2","Type":"ContainerDied","Data":"e960726eec7f4c030bcd77b5c00f9a27240da71756776e4b20d66b6c394494f7"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879082 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl" event={"ID":"302156cc-9dca-4a66-9e6a-ba2c7e738c92","Type":"ContainerDied","Data":"03d8daaa264d52b607ef3a2e1ee4da18d94e4e7433715288335ef0a92bd90db1"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879103 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" 
event={"ID":"59237aa6-6250-4619-8ee5-abae59f04b57","Type":"ContainerDied","Data":"b4a34c89cb81e9504af7117b89a4c5b290e24d0a5142668851022560c4487a78"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879125 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd" event={"ID":"484154d0-66c8-4d0e-bf1b-f48d0abfe628","Type":"ContainerDied","Data":"fd75cc94a5c6af861419130cf9adb9c00eea8b412cbb5bebb25e798a841c1376"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879146 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" event={"ID":"8b648d9e-a892-4951-b0e2-fed6b16273d4","Type":"ContainerDied","Data":"ea274ca75c9480032670a52e0f8060808dc2b8ae8a9455bb06740d96dc246ff9"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879166 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" event={"ID":"57b94ed4-8f0b-4223-bdaf-4316859d8ad3","Type":"ContainerDied","Data":"03a2959cd7d7099deb65fa1d96597cd3ebf6031635df4c580705d88b4f782bc3"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879186 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" event={"ID":"c62bb2b4-1469-4e0d-810f-cd6e21ee908a","Type":"ContainerDied","Data":"f620d164d8f2ed90825e926c6ef1b62a164af6f143a6bcf2e3725b1b1b8889f4"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879205 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" event={"ID":"0b02b740-5698-4e9a-90fe-2873bd0b0958","Type":"ContainerDied","Data":"796cedcccf27a70c4b1fc5e0f9d34776e57cab5bcbac808a8a55396fa052ee09"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879225 7926 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" event={"ID":"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e","Type":"ContainerDied","Data":"b91768f3b3b77b8f39dbc687f48f7d020363ab1760dd10d66f66b996778bf8dc"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879248 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" event={"ID":"27c20f63-9bfb-4703-94d5-0c65475e08d1","Type":"ContainerDied","Data":"4765f14761690375464a0e714d58564cbd8daae8b93a35914f1d74b0169d6221"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879268 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" event={"ID":"c7333319-3fe6-4b3f-b600-6b6df49fcaff","Type":"ContainerDied","Data":"47b2c5bac29b78fe7840fe916226c42b6c6d9d0126d96d3a74bd63abd7b0a9ac"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879287 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" event={"ID":"1b61063e-775e-421d-bf73-a6ef134293a0","Type":"ContainerDied","Data":"335a1a7f7a9fe31928e784a1b8c27628b0095f9bd1bb4c356dc580de874df2a9"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879308 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd" event={"ID":"484154d0-66c8-4d0e-bf1b-f48d0abfe628","Type":"ContainerStarted","Data":"784108aeefea86df821b8787cc4aa96e0a0d0b443e8ed52de36e36ad7f22bb5e"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879326 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" 
event={"ID":"b28234d1-1d9a-4d9f-9ad1-e3c682bed492","Type":"ContainerStarted","Data":"4255d701755ee16eefc4f64ff2a1d87789d35c023038a0daf9f7cd0b69fb26a7"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879347 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" event={"ID":"0b02b740-5698-4e9a-90fe-2873bd0b0958","Type":"ContainerStarted","Data":"467db04b7bff5a3b4be9912b3821541f7f7357f38d787b4e261ea72ceb3d15af"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879371 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" event={"ID":"b1ac9776-54c4-46ce-b898-01c8cf35e593","Type":"ContainerStarted","Data":"0471cbeac2299e0d9e3ce431cd7a2e4e9d02003bf2fa34b26aead6cb07fac336"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879398 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" event={"ID":"e8194cdc-3133-49e2-9579-a747c0bf2b16","Type":"ContainerStarted","Data":"4f5444c17822db01691b9d03f3dd6a819e814eea7a63f23ec45ece42ea5fba62"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879421 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" event={"ID":"59237aa6-6250-4619-8ee5-abae59f04b57","Type":"ContainerStarted","Data":"17079b6bb35f03cd05daf5c195f411f2535030b49cc220f1d1c122f18282a8c6"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879447 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl" event={"ID":"302156cc-9dca-4a66-9e6a-ba2c7e738c92","Type":"ContainerStarted","Data":"cf5bd07d44ef1049857af620840ed7780e94db377ae50a689034fcd0589dd325"} Feb 16 21:03:33.879877 master-0 
kubenswrapper[7926]: I0216 21:03:33.879472 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" event={"ID":"57b94ed4-8f0b-4223-bdaf-4316859d8ad3","Type":"ContainerStarted","Data":"d68a6c7f7b51e7d79b8bb7156985004605d699d7600ac79943f3f38a1fcadff0"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879494 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" event={"ID":"70d217a9-86b7-47b9-a7da-9ac920b9c7c2","Type":"ContainerStarted","Data":"6b4aa228ac152077a166b064e9b5bf093a0844f95733cd091a0e3bf8ac6b0c9d"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: W0216 21:03:33.879510 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce229d27_837d_4a98_80fc_d56877ae39b8.slice/crio-03ed4454e9c6237b864a1dab6c209256c79b0a72cb535e51a70e7b99d3f0689e WatchSource:0}: Error finding container 03ed4454e9c6237b864a1dab6c209256c79b0a72cb535e51a70e7b99d3f0689e: Status 404 returned error can't find the container with id 03ed4454e9c6237b864a1dab6c209256c79b0a72cb535e51a70e7b99d3f0689e Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879517 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" event={"ID":"2ab0a907-7abe-4808-ba21-bdda1506eae2","Type":"ContainerStarted","Data":"a4e5e42cc4ff83859a8656b165ef7357fe4b7dff02702e6e7921002edc0c6d8d"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879562 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" event={"ID":"e9615af2-cad5-4705-9c2f-6f3c97026100","Type":"ContainerStarted","Data":"43a48a6592fa00c02a3165bc38965569bd23dac45b30b2fdc517303872a72e62"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879588 7926 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw" event={"ID":"6b6be6de-6fcc-4f57-b163-fe8f970a01a4","Type":"ContainerStarted","Data":"fe90aa9198533517faa6871ececff317856fe5ccb78abe5de0ace1b89b25d9f3"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879603 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" event={"ID":"1b61063e-775e-421d-bf73-a6ef134293a0","Type":"ContainerStarted","Data":"c9124f9d5e41db03a56db8d08da400aa35fdd671c20974a9991273c405896bc3"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879616 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" event={"ID":"4db59450-da78-4879-ada8-ca3fc49fb7a7","Type":"ContainerStarted","Data":"d1bc5bc3b429e39609506c1bed3cc8e8c06f4002e3b95ecbfe86ba10e124ab93"} Feb 16 21:03:33.879877 master-0 kubenswrapper[7926]: I0216 21:03:33.879378 7926 scope.go:117] "RemoveContainer" containerID="065597b5437e593f0a8e56b505329babf0faf4f1f2e62294ff4f61a62c0f9e9c" Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.879995 7926 scope.go:117] "RemoveContainer" containerID="5b1674388d3a0d8fb07d284207cc23840a32ef17ddc0f1ef774d2188e32d3e84" Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.879630 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" event={"ID":"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b","Type":"ContainerStarted","Data":"073bfd97b3802cf7e422558b7f0d96ac1c7a887d6a785fb5000fa99850a0b06e"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880042 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" 
event={"ID":"27c20f63-9bfb-4703-94d5-0c65475e08d1","Type":"ContainerStarted","Data":"42d2b8ae4604c72ca108f769893f6589ee95474077ff8dd9cf87399459c2ec53"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880075 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" event={"ID":"8b648d9e-a892-4951-b0e2-fed6b16273d4","Type":"ContainerStarted","Data":"85337e79dc5b98043d14ed182cca1ddb76f517beb26b734efc337c20a18b289f"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880098 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" event={"ID":"c7333319-3fe6-4b3f-b600-6b6df49fcaff","Type":"ContainerStarted","Data":"121dab1fc95eacb58da984bcdc1166fb24200dd1db3a8ef3613a520edb17c265"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880122 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" event={"ID":"c62bb2b4-1469-4e0d-810f-cd6e21ee908a","Type":"ContainerStarted","Data":"d2a5fc042d08a574ca3280124a277e09811f14400ef340b3621ad88c29f24482"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880141 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" event={"ID":"695549c8-d1fc-429d-9c9f-0a5915dc6074","Type":"ContainerStarted","Data":"5652867e32787e74c02e3d9d28965d504ee7ff6f2fcb9263e330c08c917ac73f"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880158 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" 
event={"ID":"e7adbe32-b8b9-438e-a2e3-f93146a97424","Type":"ContainerStarted","Data":"b14701382aa95b48c51ea29fa658b5538f88b2a7a4c18fcdfc110d59ae2c79fe"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880179 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" event={"ID":"cef33294-81fb-41a2-811d-2565f94514d1","Type":"ContainerStarted","Data":"5b1674388d3a0d8fb07d284207cc23840a32ef17ddc0f1ef774d2188e32d3e84"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880197 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" event={"ID":"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e","Type":"ContainerStarted","Data":"63ebdf0c0200865a719bef6bf6aea428a6aed5c1b2a14851e05503627b70b2a7"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880215 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m" event={"ID":"aa2e9bbc-3962-45f5-a7cc-2dc059409e70","Type":"ContainerStarted","Data":"d95fdd7082b515ac47df4c4e5100db16158ab71c4fe74d4f5e87ded21ddfd407"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880235 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"0cc0798e5012d359ad3d59e34898cddf8ad150cc9f48b65f4d686bb956001a13"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880304 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerDied","Data":"3066c42f5ef5c95f3661c05c7da3598358a0986a6a070d0d54c575cd6a3f75f0"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880328 7926 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" event={"ID":"b1ac9776-54c4-46ce-b898-01c8cf35e593","Type":"ContainerDied","Data":"0471cbeac2299e0d9e3ce431cd7a2e4e9d02003bf2fa34b26aead6cb07fac336"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880351 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" event={"ID":"4db59450-da78-4879-ada8-ca3fc49fb7a7","Type":"ContainerDied","Data":"d1bc5bc3b429e39609506c1bed3cc8e8c06f4002e3b95ecbfe86ba10e124ab93"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880372 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" event={"ID":"8b648d9e-a892-4951-b0e2-fed6b16273d4","Type":"ContainerDied","Data":"85337e79dc5b98043d14ed182cca1ddb76f517beb26b734efc337c20a18b289f"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880396 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"0cc0798e5012d359ad3d59e34898cddf8ad150cc9f48b65f4d686bb956001a13"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880418 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" event={"ID":"4db59450-da78-4879-ada8-ca3fc49fb7a7","Type":"ContainerStarted","Data":"bc0c280e8d6f945eb33fad59cb0d8a4aedc8f5ca975f567efb9b9400f3b825d3"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880438 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" 
event={"ID":"b1ac9776-54c4-46ce-b898-01c8cf35e593","Type":"ContainerStarted","Data":"065597b5437e593f0a8e56b505329babf0faf4f1f2e62294ff4f61a62c0f9e9c"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880456 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv" event={"ID":"4085413c-9af1-4d2a-ba0f-33b42025cb7f","Type":"ContainerDied","Data":"ada24a94e3cdaddc38a62024529752b29e1359c42e86c75ebaa514d784cc3fe9"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880478 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"7e0471aa80085ed85cb40c9b3c8ab6f80ea1655f1734a052a840a434c72c54f4"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880496 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" event={"ID":"99ab949e-bd0d-45a7-95d1-8381d9f1f5f3","Type":"ContainerDied","Data":"0c4056212013eaff1f5d405532bbe8e1791cff62d95615157652d9167450664a"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880516 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" event={"ID":"70d217a9-86b7-47b9-a7da-9ac920b9c7c2","Type":"ContainerDied","Data":"6b4aa228ac152077a166b064e9b5bf093a0844f95733cd091a0e3bf8ac6b0c9d"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880537 7926 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e960726eec7f4c030bcd77b5c00f9a27240da71756776e4b20d66b6c394494f7"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880565 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" event={"ID":"c7333319-3fe6-4b3f-b600-6b6df49fcaff","Type":"ContainerDied","Data":"121dab1fc95eacb58da984bcdc1166fb24200dd1db3a8ef3613a520edb17c265"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880586 7926 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47b2c5bac29b78fe7840fe916226c42b6c6d9d0126d96d3a74bd63abd7b0a9ac"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880602 7926 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a773bd017f0bba4a3a74bfe52982d094692dcc11d0231ea1c51b561373a69c1c"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880623 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw" event={"ID":"6b6be6de-6fcc-4f57-b163-fe8f970a01a4","Type":"ContainerDied","Data":"fe90aa9198533517faa6871ececff317856fe5ccb78abe5de0ace1b89b25d9f3"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880644 7926 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"75d7b146641140c312956826b413c80f7862cac93292ebbdd2b6b13f8e1b06a3"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880694 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" event={"ID":"695549c8-d1fc-429d-9c9f-0a5915dc6074","Type":"ContainerDied","Data":"5652867e32787e74c02e3d9d28965d504ee7ff6f2fcb9263e330c08c917ac73f"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880714 7926 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"da2d8128d877c8e59ec552f44d9719195718721aa40536dc7418200005684242"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880728 7926 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df4705117bc30301536972bb1ddb323a9cf1860379e92028207e9c158a991276"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880743 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" event={"ID":"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e","Type":"ContainerDied","Data":"63ebdf0c0200865a719bef6bf6aea428a6aed5c1b2a14851e05503627b70b2a7"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880765 7926 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b91768f3b3b77b8f39dbc687f48f7d020363ab1760dd10d66f66b996778bf8dc"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880785 7926 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8f381e0ba80bb61f122cb6f8dc6fbf0f4de7cc56a19bdf606299e77668a6c669"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.880802 7926 scope.go:117] "RemoveContainer" containerID="5652867e32787e74c02e3d9d28965d504ee7ff6f2fcb9263e330c08c917ac73f" Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: E0216 21:03:33.880202 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: 
I0216 21:03:33.880804 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m" event={"ID":"aa2e9bbc-3962-45f5-a7cc-2dc059409e70","Type":"ContainerDied","Data":"d95fdd7082b515ac47df4c4e5100db16158ab71c4fe74d4f5e87ded21ddfd407"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881064 7926 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a339e5c4723737e030c5a03c8395cedd263d3d5213cb12208bfe3004bbd0ef5e"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881087 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" event={"ID":"27c20f63-9bfb-4703-94d5-0c65475e08d1","Type":"ContainerDied","Data":"42d2b8ae4604c72ca108f769893f6589ee95474077ff8dd9cf87399459c2ec53"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881102 7926 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4765f14761690375464a0e714d58564cbd8daae8b93a35914f1d74b0169d6221"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881112 7926 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"58d545a4271a615d484834ce5f2e4aae18f89163dd820abd13282ebc492d6372"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881123 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" event={"ID":"2ab0a907-7abe-4808-ba21-bdda1506eae2","Type":"ContainerDied","Data":"a4e5e42cc4ff83859a8656b165ef7357fe4b7dff02702e6e7921002edc0c6d8d"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881169 7926 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"0e76905998b63e1ca06bb636f257a337f36ba01b7d03a406ab7d6fa3bdb3b545"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881182 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" event={"ID":"1b61063e-775e-421d-bf73-a6ef134293a0","Type":"ContainerDied","Data":"c9124f9d5e41db03a56db8d08da400aa35fdd671c20974a9991273c405896bc3"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881198 7926 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"335a1a7f7a9fe31928e784a1b8c27628b0095f9bd1bb4c356dc580de874df2a9"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881207 7926 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"22ac853b44d567411363f432db892ab502ff1733ca2ac03896be62f2c9a7c4fc"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881218 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" event={"ID":"e7adbe32-b8b9-438e-a2e3-f93146a97424","Type":"ContainerDied","Data":"b14701382aa95b48c51ea29fa658b5538f88b2a7a4c18fcdfc110d59ae2c79fe"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881231 7926 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34f0b2189e90cc7801c4026c4ab900cc1fc9f5ac2f006e83f5fec81671df191f"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881244 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" event={"ID":"59237aa6-6250-4619-8ee5-abae59f04b57","Type":"ContainerDied","Data":"17079b6bb35f03cd05daf5c195f411f2535030b49cc220f1d1c122f18282a8c6"} Feb 16 21:03:33.881510 master-0 
kubenswrapper[7926]: I0216 21:03:33.881255 7926 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4a34c89cb81e9504af7117b89a4c5b290e24d0a5142668851022560c4487a78"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881264 7926 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3d4628d5b5ba7e58abaf9e10ff02fc0ec3dcdc6373a3be533d5aa05366f0112"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881275 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" event={"ID":"2506c282-0b37-4ece-8a0c-885d0b7f7901","Type":"ContainerDied","Data":"24435a7f63a96b1a49a7d14efbc7fac8f5f69a776a662db4bff0a9f0d5933f6b"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881289 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" event={"ID":"0b02b740-5698-4e9a-90fe-2873bd0b0958","Type":"ContainerDied","Data":"467db04b7bff5a3b4be9912b3821541f7f7357f38d787b4e261ea72ceb3d15af"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881302 7926 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"796cedcccf27a70c4b1fc5e0f9d34776e57cab5bcbac808a8a55396fa052ee09"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881311 7926 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c789ad424d6da26da31c06317afc3ff04d13db41b3d9ada1b99dd43bd4685c9"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881321 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" 
event={"ID":"5e062e07-8076-444c-b476-4eb2848e9613","Type":"ContainerDied","Data":"8d6fd2d30a1b00edfb997113793ad55fbf5dca8c4b949fed22018dbb444c09ad"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881336 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" event={"ID":"9e0227bc-63f5-48be-95dc-1323a2b2e327","Type":"ContainerDied","Data":"a7330b931340d1be5dba0fd54e8b246009c00f6e813142a46ee5264b4ff67461"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881351 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" event={"ID":"b1ac9776-54c4-46ce-b898-01c8cf35e593","Type":"ContainerDied","Data":"065597b5437e593f0a8e56b505329babf0faf4f1f2e62294ff4f61a62c0f9e9c"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881367 7926 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0471cbeac2299e0d9e3ce431cd7a2e4e9d02003bf2fa34b26aead6cb07fac336"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881376 7926 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6604687382d89a09dac220e4bde6c4ee9334bbf7429cff3764175c9050a1853c"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881386 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" event={"ID":"4db59450-da78-4879-ada8-ca3fc49fb7a7","Type":"ContainerDied","Data":"bc0c280e8d6f945eb33fad59cb0d8a4aedc8f5ca975f567efb9b9400f3b825d3"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881400 7926 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"d1bc5bc3b429e39609506c1bed3cc8e8c06f4002e3b95ecbfe86ba10e124ab93"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881410 7926 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c01a97aeea491e06b4f6bd168a545331d557799591733b3afb1c1070b9661f2a"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881420 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" event={"ID":"70d217a9-86b7-47b9-a7da-9ac920b9c7c2","Type":"ContainerStarted","Data":"316bcd2b73e15fab60d8618d92eb77f101f2f53e423adb64b0f374a1f7fcda3a"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881433 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" event={"ID":"2ab0a907-7abe-4808-ba21-bdda1506eae2","Type":"ContainerStarted","Data":"715050d13195531641370ad04c7754b8cef8bb72e0896de25aaafb35a02054c9"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881445 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw" event={"ID":"6b6be6de-6fcc-4f57-b163-fe8f970a01a4","Type":"ContainerStarted","Data":"d0e5f8a907c4851af3bce655e141083b0f633fdfa41c5abacbb48a7df33f9e94"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881458 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m" event={"ID":"aa2e9bbc-3962-45f5-a7cc-2dc059409e70","Type":"ContainerStarted","Data":"86b2625e01e86e20ad843cc517b662e8d0574773dfe24c22fbbf50abc8c0ea7f"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881472 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" 
event={"ID":"99ab949e-bd0d-45a7-95d1-8381d9f1f5f3","Type":"ContainerStarted","Data":"11a0f236b15a97d8bb8db30a3ecfba40559eb738b2fbad78fcc9824a0ec8620e"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881487 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" event={"ID":"9e0227bc-63f5-48be-95dc-1323a2b2e327","Type":"ContainerStarted","Data":"f0f2142d7c75b9cb3d050ab9fd78b4ffcf397bc951f0081263a6ec6726c5bac7"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881500 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"6912b1edffde7a78bbdc396546e5278ae133791109c955eb557d3109fd4abd06"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881514 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"1456824a8c7336f75a4d4627de845c133b21a80d97dbb454f452a64a66ca524f"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881528 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" event={"ID":"8b648d9e-a892-4951-b0e2-fed6b16273d4","Type":"ContainerStarted","Data":"41ef5f9abc41605ba4f43759411cc04f3fe23add167a10d83f8a22bd50eade97"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881543 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerStarted","Data":"dc3bdb2a8bb5b307357d9efc772993cd3c2bd4dc109a42b135a10a430b790809"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881557 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerStarted","Data":"9ce83587f89564053d65e499eb053c5a968bf50fe44edcf704a3f564f2872da4"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881570 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerStarted","Data":"5c6e80046b275f770bc256074b43bbe1b3c4f6774535b0d65b124406c5160f0a"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881582 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerStarted","Data":"6ad2010c95be4c9f2fa28ed52b05973b2b48bc9db8a6e7134941e0ed2ebcaa21"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881594 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"401699cb53e7098157e808a83125b0e4","Type":"ContainerStarted","Data":"23d9477d22a2c28e4a6024fc5b51d1b2e8b1bea2df627714860f39a7a51c3861"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881606 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" event={"ID":"4b035e85-b2b0-4dee-bb86-3465fc4b98a8","Type":"ContainerDied","Data":"95cb75164641c9de6a0109a60c606bf650f57a11a7796ffdbcb05ca7aa385e4c"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881621 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" event={"ID":"e9615af2-cad5-4705-9c2f-6f3c97026100","Type":"ContainerDied","Data":"43a48a6592fa00c02a3165bc38965569bd23dac45b30b2fdc517303872a72e62"} Feb 16 21:03:33.881510 master-0 kubenswrapper[7926]: I0216 21:03:33.881635 7926 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"dd23c2441236e3bdedd04adcd70f26ba2f2b37ed96fb0998ec94c3bbdca5b7da"} Feb 16 21:03:33.887960 master-0 kubenswrapper[7926]: I0216 21:03:33.881663 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" event={"ID":"cef33294-81fb-41a2-811d-2565f94514d1","Type":"ContainerDied","Data":"5b1674388d3a0d8fb07d284207cc23840a32ef17ddc0f1ef774d2188e32d3e84"} Feb 16 21:03:33.887960 master-0 kubenswrapper[7926]: I0216 21:03:33.881676 7926 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b191efabecfa6e89d563189d25950b732d83b54240d68732d9bfb22ddbb8e4f"} Feb 16 21:03:33.887960 master-0 kubenswrapper[7926]: I0216 21:03:33.882189 7926 scope.go:117] "RemoveContainer" containerID="ada24a94e3cdaddc38a62024529752b29e1359c42e86c75ebaa514d784cc3fe9" Feb 16 21:03:33.887960 master-0 kubenswrapper[7926]: I0216 21:03:33.882615 7926 scope.go:117] "RemoveContainer" containerID="24435a7f63a96b1a49a7d14efbc7fac8f5f69a776a662db4bff0a9f0d5933f6b" Feb 16 21:03:33.887960 master-0 kubenswrapper[7926]: I0216 21:03:33.882748 7926 scope.go:117] "RemoveContainer" containerID="95cb75164641c9de6a0109a60c606bf650f57a11a7796ffdbcb05ca7aa385e4c" Feb 16 21:03:33.887960 master-0 kubenswrapper[7926]: W0216 21:03:33.882765 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba294358_051a_4f09_b182_710d3d6778c5.slice/crio-ad196ac4d2e3966bfb26599fb699f9a38a58beb4f2a551485dd0f16fe14d30d3 WatchSource:0}: Error finding container ad196ac4d2e3966bfb26599fb699f9a38a58beb4f2a551485dd0f16fe14d30d3: Status 404 returned error can't find the container with id ad196ac4d2e3966bfb26599fb699f9a38a58beb4f2a551485dd0f16fe14d30d3 Feb 16 21:03:33.887960 master-0 kubenswrapper[7926]: I0216 21:03:33.884115 7926 scope.go:117] "RemoveContainer" 
containerID="43a48a6592fa00c02a3165bc38965569bd23dac45b30b2fdc517303872a72e62" Feb 16 21:03:33.887960 master-0 kubenswrapper[7926]: E0216 21:03:33.884376 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=insights-operator pod=insights-operator-cb4f7b4cf-h8f7q_openshift-insights(e9615af2-cad5-4705-9c2f-6f3c97026100)\"" pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" podUID="e9615af2-cad5-4705-9c2f-6f3c97026100" Feb 16 21:03:33.887960 master-0 kubenswrapper[7926]: W0216 21:03:33.886039 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf275e79f_923c_4d3a_8ed4_084a122ddcf4.slice/crio-8e70ffdd495dcdb270b1f5bf74d98194840c0bb5429461a2cbed334f4538aeec WatchSource:0}: Error finding container 8e70ffdd495dcdb270b1f5bf74d98194840c0bb5429461a2cbed334f4538aeec: Status 404 returned error can't find the container with id 8e70ffdd495dcdb270b1f5bf74d98194840c0bb5429461a2cbed334f4538aeec Feb 16 21:03:33.907876 master-0 kubenswrapper[7926]: I0216 21:03:33.907826 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:03:33.912746 master-0 kubenswrapper[7926]: I0216 21:03:33.912682 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" podStartSLOduration=308.518283666 podStartE2EDuration="5m33.912660488s" podCreationTimestamp="2026-02-16 20:58:00 +0000 UTC" firstStartedPulling="2026-02-16 20:58:02.904436737 +0000 UTC m=+54.539337037" lastFinishedPulling="2026-02-16 20:58:28.298813559 +0000 UTC m=+79.933713859" observedRunningTime="2026-02-16 21:03:33.910495257 +0000 UTC m=+385.545395587" watchObservedRunningTime="2026-02-16 21:03:33.912660488 +0000 UTC m=+385.547560798" Feb 16 
21:03:33.960721 master-0 kubenswrapper[7926]: I0216 21:03:33.960629 7926 scope.go:117] "RemoveContainer" containerID="22ac853b44d567411363f432db892ab502ff1733ca2ac03896be62f2c9a7c4fc" Feb 16 21:03:34.092921 master-0 kubenswrapper[7926]: I0216 21:03:34.092832 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 16 21:03:34.093845 master-0 kubenswrapper[7926]: I0216 21:03:34.093798 7926 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 16 21:03:34.146862 master-0 kubenswrapper[7926]: I0216 21:03:34.146816 7926 scope.go:117] "RemoveContainer" containerID="b91768f3b3b77b8f39dbc687f48f7d020363ab1760dd10d66f66b996778bf8dc" Feb 16 21:03:34.195228 master-0 kubenswrapper[7926]: I0216 21:03:34.195125 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dhh2p" podStartSLOduration=297.003615263 podStartE2EDuration="5m39.195102087s" podCreationTimestamp="2026-02-16 20:57:55 +0000 UTC" firstStartedPulling="2026-02-16 20:57:57.384437582 +0000 UTC m=+49.019337882" lastFinishedPulling="2026-02-16 20:58:39.575924406 +0000 UTC m=+91.210824706" observedRunningTime="2026-02-16 21:03:34.193867623 +0000 UTC m=+385.828767933" watchObservedRunningTime="2026-02-16 21:03:34.195102087 +0000 UTC m=+385.830002397" Feb 16 21:03:34.212804 master-0 kubenswrapper[7926]: I0216 21:03:34.212730 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" podStartSLOduration=307.722859193 podStartE2EDuration="5m35.212702772s" podCreationTimestamp="2026-02-16 20:57:59 +0000 UTC" firstStartedPulling="2026-02-16 20:58:00.712713908 +0000 UTC m=+52.347614208" lastFinishedPulling="2026-02-16 20:58:28.202557477 +0000 UTC m=+79.837457787" observedRunningTime="2026-02-16 21:03:34.210974863 +0000 UTC m=+385.845875163" 
watchObservedRunningTime="2026-02-16 21:03:34.212702772 +0000 UTC m=+385.847603072" Feb 16 21:03:34.245847 master-0 kubenswrapper[7926]: I0216 21:03:34.244004 7926 scope.go:117] "RemoveContainer" containerID="8f381e0ba80bb61f122cb6f8dc6fbf0f4de7cc56a19bdf606299e77668a6c669" Feb 16 21:03:34.304985 master-0 kubenswrapper[7926]: I0216 21:03:34.303109 7926 scope.go:117] "RemoveContainer" containerID="0e76905998b63e1ca06bb636f257a337f36ba01b7d03a406ab7d6fa3bdb3b545" Feb 16 21:03:34.314313 master-0 kubenswrapper[7926]: I0216 21:03:34.314248 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:03:34.352554 master-0 kubenswrapper[7926]: I0216 21:03:34.352460 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl" podStartSLOduration=320.435361786 podStartE2EDuration="5m36.352440704s" podCreationTimestamp="2026-02-16 20:57:58 +0000 UTC" firstStartedPulling="2026-02-16 20:58:00.654731631 +0000 UTC m=+52.289631931" lastFinishedPulling="2026-02-16 20:58:16.571810549 +0000 UTC m=+68.206710849" observedRunningTime="2026-02-16 21:03:34.349087511 +0000 UTC m=+385.983987811" watchObservedRunningTime="2026-02-16 21:03:34.352440704 +0000 UTC m=+385.987341004" Feb 16 21:03:34.370676 master-0 kubenswrapper[7926]: I0216 21:03:34.370567 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" podStartSLOduration=333.370545893 podStartE2EDuration="5m33.370545893s" podCreationTimestamp="2026-02-16 20:58:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:03:34.366474998 +0000 UTC m=+386.001375298" watchObservedRunningTime="2026-02-16 21:03:34.370545893 +0000 UTC m=+386.005446193" Feb 16 21:03:34.444252 
master-0 kubenswrapper[7926]: I0216 21:03:34.444219 7926 scope.go:117] "RemoveContainer" containerID="d1bc5bc3b429e39609506c1bed3cc8e8c06f4002e3b95ecbfe86ba10e124ab93" Feb 16 21:03:34.483692 master-0 kubenswrapper[7926]: I0216 21:03:34.483661 7926 scope.go:117] "RemoveContainer" containerID="c01a97aeea491e06b4f6bd168a545331d557799591733b3afb1c1070b9661f2a" Feb 16 21:03:34.495035 master-0 kubenswrapper[7926]: I0216 21:03:34.494964 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m" podStartSLOduration=312.508810528 podStartE2EDuration="5m33.494948945s" podCreationTimestamp="2026-02-16 20:58:01 +0000 UTC" firstStartedPulling="2026-02-16 20:58:07.369795045 +0000 UTC m=+59.004695345" lastFinishedPulling="2026-02-16 20:58:28.355933462 +0000 UTC m=+79.990833762" observedRunningTime="2026-02-16 21:03:34.493203487 +0000 UTC m=+386.128103807" watchObservedRunningTime="2026-02-16 21:03:34.494948945 +0000 UTC m=+386.129849245" Feb 16 21:03:34.520620 master-0 kubenswrapper[7926]: I0216 21:03:34.520571 7926 scope.go:117] "RemoveContainer" containerID="4765f14761690375464a0e714d58564cbd8daae8b93a35914f1d74b0169d6221" Feb 16 21:03:34.549720 master-0 kubenswrapper[7926]: I0216 21:03:34.549638 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl" podStartSLOduration=328.539030268 podStartE2EDuration="5m31.54961843s" podCreationTimestamp="2026-02-16 20:58:03 +0000 UTC" firstStartedPulling="2026-02-16 20:58:28.295330779 +0000 UTC m=+79.930231079" lastFinishedPulling="2026-02-16 20:58:31.305918941 +0000 UTC m=+82.940819241" observedRunningTime="2026-02-16 21:03:34.548205531 +0000 UTC m=+386.183105841" watchObservedRunningTime="2026-02-16 21:03:34.54961843 +0000 UTC m=+386.184518730" Feb 16 21:03:34.570095 master-0 kubenswrapper[7926]: I0216 21:03:34.570047 
7926 scope.go:117] "RemoveContainer" containerID="58d545a4271a615d484834ce5f2e4aae18f89163dd820abd13282ebc492d6372" Feb 16 21:03:34.586259 master-0 kubenswrapper[7926]: I0216 21:03:34.586155 7926 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:03:34.650078 master-0 kubenswrapper[7926]: I0216 21:03:34.650034 7926 scope.go:117] "RemoveContainer" containerID="47b2c5bac29b78fe7840fe916226c42b6c6d9d0126d96d3a74bd63abd7b0a9ac" Feb 16 21:03:34.672958 master-0 kubenswrapper[7926]: I0216 21:03:34.672898 7926 scope.go:117] "RemoveContainer" containerID="a773bd017f0bba4a3a74bfe52982d094692dcc11d0231ea1c51b561373a69c1c" Feb 16 21:03:34.716551 master-0 kubenswrapper[7926]: I0216 21:03:34.694893 7926 scope.go:117] "RemoveContainer" containerID="796cedcccf27a70c4b1fc5e0f9d34776e57cab5bcbac808a8a55396fa052ee09" Feb 16 21:03:34.736551 master-0 kubenswrapper[7926]: I0216 21:03:34.736505 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xv645"] Feb 16 21:03:34.747522 master-0 kubenswrapper[7926]: I0216 21:03:34.747468 7926 scope.go:117] "RemoveContainer" containerID="6c789ad424d6da26da31c06317afc3ff04d13db41b3d9ada1b99dd43bd4685c9" Feb 16 21:03:34.765883 master-0 kubenswrapper[7926]: I0216 21:03:34.765820 7926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a29a1022-5f54-49a2-99f6-d19eb2773890" path="/var/lib/kubelet/pods/a29a1022-5f54-49a2-99f6-d19eb2773890/volumes" Feb 16 21:03:34.766349 master-0 kubenswrapper[7926]: I0216 21:03:34.766326 7926 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xv645"] Feb 16 21:03:34.801599 master-0 kubenswrapper[7926]: I0216 21:03:34.801564 7926 scope.go:117] "RemoveContainer" containerID="b4a34c89cb81e9504af7117b89a4c5b290e24d0a5142668851022560c4487a78" Feb 16 21:03:34.818737 master-0 kubenswrapper[7926]: I0216 21:03:34.818614 7926 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/certified-operators-b8vtc" podStartSLOduration=297.617447353 podStartE2EDuration="5m42.818597702s" podCreationTimestamp="2026-02-16 20:57:52 +0000 UTC" firstStartedPulling="2026-02-16 20:57:54.350482872 +0000 UTC m=+45.985383172" lastFinishedPulling="2026-02-16 20:58:39.551633221 +0000 UTC m=+91.186533521" observedRunningTime="2026-02-16 21:03:34.81816746 +0000 UTC m=+386.453067760" watchObservedRunningTime="2026-02-16 21:03:34.818597702 +0000 UTC m=+386.453498002" Feb 16 21:03:34.827887 master-0 kubenswrapper[7926]: I0216 21:03:34.827859 7926 scope.go:117] "RemoveContainer" containerID="f3d4628d5b5ba7e58abaf9e10ff02fc0ec3dcdc6373a3be533d5aa05366f0112" Feb 16 21:03:34.849874 master-0 kubenswrapper[7926]: I0216 21:03:34.849820 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w2lj6"] Feb 16 21:03:34.851779 master-0 kubenswrapper[7926]: I0216 21:03:34.851702 7926 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w2lj6"] Feb 16 21:03:34.853859 master-0 kubenswrapper[7926]: I0216 21:03:34.853783 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-n4hfs_1b61063e-775e-421d-bf73-a6ef134293a0/network-operator/2.log" Feb 16 21:03:34.853922 master-0 kubenswrapper[7926]: I0216 21:03:34.853898 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" event={"ID":"1b61063e-775e-421d-bf73-a6ef134293a0","Type":"ContainerStarted","Data":"98437a21e834f809a7d3a2fcc7ab7ac439c7d9370d526734b7d11f63840cb92d"} Feb 16 21:03:34.856989 master-0 kubenswrapper[7926]: I0216 21:03:34.856963 7926 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-k42w9_695549c8-d1fc-429d-9c9f-0a5915dc6074/openshift-controller-manager-operator/3.log" Feb 16 21:03:34.857360 master-0 kubenswrapper[7926]: I0216 21:03:34.857338 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-k42w9_695549c8-d1fc-429d-9c9f-0a5915dc6074/openshift-controller-manager-operator/2.log" Feb 16 21:03:34.857932 master-0 kubenswrapper[7926]: I0216 21:03:34.857896 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-k42w9_695549c8-d1fc-429d-9c9f-0a5915dc6074/openshift-controller-manager-operator/1.log" Feb 16 21:03:34.857985 master-0 kubenswrapper[7926]: I0216 21:03:34.857955 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" event={"ID":"695549c8-d1fc-429d-9c9f-0a5915dc6074","Type":"ContainerStarted","Data":"abce7c467580f27265b653bd89f53e6e0d6413f3687b039b9f58c8dd18d3f0ce"} Feb 16 21:03:34.861305 master-0 kubenswrapper[7926]: I0216 21:03:34.861264 7926 scope.go:117] "RemoveContainer" containerID="75d7b146641140c312956826b413c80f7862cac93292ebbdd2b6b13f8e1b06a3" Feb 16 21:03:34.865136 master-0 kubenswrapper[7926]: I0216 21:03:34.865095 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-ff6c9b66-kh4d4_2506c282-0b37-4ece-8a0c-885d0b7f7901/cluster-node-tuning-operator/0.log" Feb 16 21:03:34.865257 master-0 kubenswrapper[7926]: I0216 21:03:34.865190 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" 
event={"ID":"2506c282-0b37-4ece-8a0c-885d0b7f7901","Type":"ContainerStarted","Data":"c78e5502c7df20a63c6e359691ad6478f7f26c7822d2c31d3780654e26b107fb"} Feb 16 21:03:34.867324 master-0 kubenswrapper[7926]: I0216 21:03:34.867273 7926 generic.go:334] "Generic (PLEG): container finished" podID="ce229d27-837d-4a98-80fc-d56877ae39b8" containerID="4417baf2be8cb2785a3116c10e495e124305a7b9a9021ca81984fe0912c3ccfa" exitCode=0 Feb 16 21:03:34.867424 master-0 kubenswrapper[7926]: I0216 21:03:34.867396 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5kwc" event={"ID":"ce229d27-837d-4a98-80fc-d56877ae39b8","Type":"ContainerDied","Data":"4417baf2be8cb2785a3116c10e495e124305a7b9a9021ca81984fe0912c3ccfa"} Feb 16 21:03:34.867466 master-0 kubenswrapper[7926]: I0216 21:03:34.867437 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5kwc" event={"ID":"ce229d27-837d-4a98-80fc-d56877ae39b8","Type":"ContainerStarted","Data":"03ed4454e9c6237b864a1dab6c209256c79b0a72cb535e51a70e7b99d3f0689e"} Feb 16 21:03:34.872127 master-0 kubenswrapper[7926]: I0216 21:03:34.872093 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-7p9ft_7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/kube-controller-manager-operator/3.log" Feb 16 21:03:34.872230 master-0 kubenswrapper[7926]: I0216 21:03:34.872203 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" event={"ID":"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e","Type":"ContainerStarted","Data":"35ed53f7c30fa9921f8cd975c0172c21b8f110abc5d358e84c90a7ea7b1226a7"} Feb 16 21:03:34.881754 master-0 kubenswrapper[7926]: I0216 21:03:34.881695 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv" 
event={"ID":"4085413c-9af1-4d2a-ba0f-33b42025cb7f","Type":"ContainerStarted","Data":"5bb447e9b562fe2a3fcb45b723cffb38257ea64157f142954fe58414909efdd3"} Feb 16 21:03:34.885495 master-0 kubenswrapper[7926]: I0216 21:03:34.885456 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-9m94g_4b035e85-b2b0-4dee-bb86-3465fc4b98a8/package-server-manager/0.log" Feb 16 21:03:34.886098 master-0 kubenswrapper[7926]: I0216 21:03:34.886059 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" event={"ID":"4b035e85-b2b0-4dee-bb86-3465fc4b98a8","Type":"ContainerStarted","Data":"fa5e5b86ee6d022e914514c6e1b9bc40b0ded23b4d78a78dbc84ca8df5d3a2bd"} Feb 16 21:03:34.886792 master-0 kubenswrapper[7926]: I0216 21:03:34.886753 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 21:03:34.889911 master-0 kubenswrapper[7926]: I0216 21:03:34.889834 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf" podStartSLOduration=309.179348754 podStartE2EDuration="5m36.889812601s" podCreationTimestamp="2026-02-16 20:57:58 +0000 UTC" firstStartedPulling="2026-02-16 20:58:00.655003108 +0000 UTC m=+52.289903398" lastFinishedPulling="2026-02-16 20:58:28.365466945 +0000 UTC m=+80.000367245" observedRunningTime="2026-02-16 21:03:34.888893126 +0000 UTC m=+386.523793426" watchObservedRunningTime="2026-02-16 21:03:34.889812601 +0000 UTC m=+386.524712901" Feb 16 21:03:34.891577 master-0 kubenswrapper[7926]: I0216 21:03:34.891546 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-q5vjl_2ab0a907-7abe-4808-ba21-bdda1506eae2/service-ca-operator/2.log" Feb 16 21:03:34.893946 
master-0 kubenswrapper[7926]: I0216 21:03:34.893916 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" event={"ID":"319dc882-e1f5-40f9-99f4-2bae028337e5","Type":"ContainerStarted","Data":"70c8a58b1f436ad8ca4d491de1284ed96c1d17dc7c8758f9d265ebf6a6d73a38"} Feb 16 21:03:34.894029 master-0 kubenswrapper[7926]: I0216 21:03:34.893956 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" event={"ID":"319dc882-e1f5-40f9-99f4-2bae028337e5","Type":"ContainerStarted","Data":"a5c8e6b51575e43d26e0817313f1ec460f29cff6ceb6629a7a5e2f186f585513"} Feb 16 21:03:34.894167 master-0 kubenswrapper[7926]: I0216 21:03:34.894146 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" Feb 16 21:03:34.897264 master-0 kubenswrapper[7926]: I0216 21:03:34.897231 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" event={"ID":"5e062e07-8076-444c-b476-4eb2848e9613","Type":"ContainerStarted","Data":"b805375f7b42f31b0863c18246ff6bd98c4c77aa1ad1eb2b469a42772d48301d"} Feb 16 21:03:34.899581 master-0 kubenswrapper[7926]: I0216 21:03:34.899534 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-56v4p_c7333319-3fe6-4b3f-b600-6b6df49fcaff/kube-storage-version-migrator-operator/3.log" Feb 16 21:03:34.899752 master-0 kubenswrapper[7926]: I0216 21:03:34.899702 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" event={"ID":"c7333319-3fe6-4b3f-b600-6b6df49fcaff","Type":"ContainerStarted","Data":"08b199e651bbf31337e0e421513ddb4e42db3e1be0a3d07452f74ea9c1f46046"} Feb 16 
21:03:34.903561 master-0 kubenswrapper[7926]: I0216 21:03:34.902900 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" event={"ID":"ba294358-051a-4f09-b182-710d3d6778c5","Type":"ContainerStarted","Data":"aed7b29fd5a17d326bf662963e39c91ff6d183ab7d2ccddb9bff04832a578f45"} Feb 16 21:03:34.903561 master-0 kubenswrapper[7926]: I0216 21:03:34.902946 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" event={"ID":"ba294358-051a-4f09-b182-710d3d6778c5","Type":"ContainerStarted","Data":"ad196ac4d2e3966bfb26599fb699f9a38a58beb4f2a551485dd0f16fe14d30d3"} Feb 16 21:03:34.905692 master-0 kubenswrapper[7926]: I0216 21:03:34.905665 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/2.log" Feb 16 21:03:34.906330 master-0 kubenswrapper[7926]: I0216 21:03:34.906305 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/1.log" Feb 16 21:03:34.907016 master-0 kubenswrapper[7926]: I0216 21:03:34.906963 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/0.log" Feb 16 21:03:34.907111 master-0 kubenswrapper[7926]: I0216 21:03:34.907059 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" event={"ID":"b1ac9776-54c4-46ce-b898-01c8cf35e593","Type":"ContainerStarted","Data":"9ef3c9bb3006ad6560cc5f0bdef3d88ed02120a2aaa21f57602a6395354cc9ab"} Feb 16 21:03:34.909059 master-0 kubenswrapper[7926]: I0216 21:03:34.909029 7926 generic.go:334] "Generic 
(PLEG): container finished" podID="f275e79f-923c-4d3a-8ed4-084a122ddcf4" containerID="a976e4b82843842a71c3126eb2ebdd642e517cc73242b40b185d375d47043cde" exitCode=0 Feb 16 21:03:34.909162 master-0 kubenswrapper[7926]: I0216 21:03:34.909105 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sn2nh" event={"ID":"f275e79f-923c-4d3a-8ed4-084a122ddcf4","Type":"ContainerDied","Data":"a976e4b82843842a71c3126eb2ebdd642e517cc73242b40b185d375d47043cde"} Feb 16 21:03:34.909162 master-0 kubenswrapper[7926]: I0216 21:03:34.909140 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sn2nh" event={"ID":"f275e79f-923c-4d3a-8ed4-084a122ddcf4","Type":"ContainerStarted","Data":"8e70ffdd495dcdb270b1f5bf74d98194840c0bb5429461a2cbed334f4538aeec"} Feb 16 21:03:34.911020 master-0 kubenswrapper[7926]: I0216 21:03:34.910998 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-xzww8_e7adbe32-b8b9-438e-a2e3-f93146a97424/kube-scheduler-operator-container/2.log" Feb 16 21:03:34.911526 master-0 kubenswrapper[7926]: I0216 21:03:34.911404 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" event={"ID":"e7adbe32-b8b9-438e-a2e3-f93146a97424","Type":"ContainerStarted","Data":"6a7d7b13e17869969e9d31d79faa72dfb3a8d8453f67a2323e3dc0a1300a1e65"} Feb 16 21:03:34.913608 master-0 kubenswrapper[7926]: I0216 21:03:34.913584 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-xbd96_59237aa6-6250-4619-8ee5-abae59f04b57/openshift-config-operator/3.log" Feb 16 21:03:34.914043 master-0 kubenswrapper[7926]: I0216 21:03:34.914001 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" 
event={"ID":"59237aa6-6250-4619-8ee5-abae59f04b57","Type":"ContainerStarted","Data":"ac3627020f75f5cd56ecff94b5d8094d6aa1558d6f4f6208d2bc563627046751"} Feb 16 21:03:34.914366 master-0 kubenswrapper[7926]: I0216 21:03:34.914342 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 21:03:34.916944 master-0 kubenswrapper[7926]: I0216 21:03:34.916338 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-749ccd9c56-wzsnf_4db59450-da78-4879-ada8-ca3fc49fb7a7/route-controller-manager/2.log" Feb 16 21:03:34.916944 master-0 kubenswrapper[7926]: I0216 21:03:34.916433 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" event={"ID":"4db59450-da78-4879-ada8-ca3fc49fb7a7","Type":"ContainerStarted","Data":"9b515d5a7a3620fef9281bf66e2c25d3ec90a1c70a0a5cb2470f5419d26f7741"} Feb 16 21:03:34.916944 master-0 kubenswrapper[7926]: I0216 21:03:34.916675 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" Feb 16 21:03:34.919461 master-0 kubenswrapper[7926]: I0216 21:03:34.919434 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-cl5ld_0b02b740-5698-4e9a-90fe-2873bd0b0958/kube-apiserver-operator/2.log" Feb 16 21:03:34.919985 master-0 kubenswrapper[7926]: I0216 21:03:34.919954 7926 scope.go:117] "RemoveContainer" containerID="34f0b2189e90cc7801c4026c4ab900cc1fc9f5ac2f006e83f5fec81671df191f" Feb 16 21:03:34.920045 master-0 kubenswrapper[7926]: I0216 21:03:34.920000 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" 
event={"ID":"0b02b740-5698-4e9a-90fe-2873bd0b0958","Type":"ContainerStarted","Data":"9aebe89f00ace7757c9f12dc1f4359a915f84e8eb395e1cdeae0962c4475a4af"} Feb 16 21:03:34.923479 master-0 kubenswrapper[7926]: I0216 21:03:34.922769 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-8gnq5_27c20f63-9bfb-4703-94d5-0c65475e08d1/authentication-operator/3.log" Feb 16 21:03:34.924493 master-0 kubenswrapper[7926]: I0216 21:03:34.924465 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" event={"ID":"27c20f63-9bfb-4703-94d5-0c65475e08d1","Type":"ContainerStarted","Data":"bae2526e4dde061e6c7a8ef722773dcd93504e4ed1b17f4a15386f5a7579875d"} Feb 16 21:03:34.927075 master-0 kubenswrapper[7926]: I0216 21:03:34.927033 7926 scope.go:117] "RemoveContainer" containerID="5b1674388d3a0d8fb07d284207cc23840a32ef17ddc0f1ef774d2188e32d3e84" Feb 16 21:03:34.927285 master-0 kubenswrapper[7926]: E0216 21:03:34.927252 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:03:34.928262 master-0 kubenswrapper[7926]: I0216 21:03:34.928211 7926 scope.go:117] "RemoveContainer" containerID="43a48a6592fa00c02a3165bc38965569bd23dac45b30b2fdc517303872a72e62" Feb 16 21:03:34.928596 master-0 kubenswrapper[7926]: E0216 21:03:34.928551 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=insights-operator 
pod=insights-operator-cb4f7b4cf-h8f7q_openshift-insights(e9615af2-cad5-4705-9c2f-6f3c97026100)\"" pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" podUID="e9615af2-cad5-4705-9c2f-6f3c97026100" Feb 16 21:03:34.948304 master-0 kubenswrapper[7926]: I0216 21:03:34.948268 7926 scope.go:117] "RemoveContainer" containerID="ea274ca75c9480032670a52e0f8060808dc2b8ae8a9455bb06740d96dc246ff9" Feb 16 21:03:34.981681 master-0 kubenswrapper[7926]: I0216 21:03:34.981611 7926 scope.go:117] "RemoveContainer" containerID="ae31b292d6ba5f8d78f8793a9865c571a66292e65886b99ff37b242383c1ffb8" Feb 16 21:03:34.987184 master-0 kubenswrapper[7926]: I0216 21:03:34.987123 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podStartSLOduration=325.176207105 podStartE2EDuration="5m42.987109243s" podCreationTimestamp="2026-02-16 20:57:52 +0000 UTC" firstStartedPulling="2026-02-16 20:57:58.802194561 +0000 UTC m=+50.437094861" lastFinishedPulling="2026-02-16 20:58:16.613096699 +0000 UTC m=+68.247996999" observedRunningTime="2026-02-16 21:03:34.985627492 +0000 UTC m=+386.620527792" watchObservedRunningTime="2026-02-16 21:03:34.987109243 +0000 UTC m=+386.622009543" Feb 16 21:03:35.018250 master-0 kubenswrapper[7926]: I0216 21:03:35.018191 7926 scope.go:117] "RemoveContainer" containerID="31e55b139c998e23cbf2bc02e2f79638ed2388ee42133c4387d01234b192dc1a" Feb 16 21:03:35.052997 master-0 kubenswrapper[7926]: I0216 21:03:35.052823 7926 scope.go:117] "RemoveContainer" containerID="fc88dd28d8567cb614f787ef77e43ceb61a79e3dffda24d95403e277882bb247" Feb 16 21:03:35.096736 master-0 kubenswrapper[7926]: I0216 21:03:35.096689 7926 scope.go:117] "RemoveContainer" containerID="dd23c2441236e3bdedd04adcd70f26ba2f2b37ed96fb0998ec94c3bbdca5b7da" Feb 16 21:03:35.126332 master-0 kubenswrapper[7926]: I0216 21:03:35.126175 7926 scope.go:117] "RemoveContainer" 
containerID="0471cbeac2299e0d9e3ce431cd7a2e4e9d02003bf2fa34b26aead6cb07fac336" Feb 16 21:03:35.153006 master-0 kubenswrapper[7926]: I0216 21:03:35.152948 7926 scope.go:117] "RemoveContainer" containerID="6604687382d89a09dac220e4bde6c4ee9334bbf7429cff3764175c9050a1853c" Feb 16 21:03:35.187954 master-0 kubenswrapper[7926]: I0216 21:03:35.187922 7926 scope.go:117] "RemoveContainer" containerID="da2d8128d877c8e59ec552f44d9719195718721aa40536dc7418200005684242" Feb 16 21:03:35.215673 master-0 kubenswrapper[7926]: I0216 21:03:35.215596 7926 scope.go:117] "RemoveContainer" containerID="df4705117bc30301536972bb1ddb323a9cf1860379e92028207e9c158a991276" Feb 16 21:03:35.241248 master-0 kubenswrapper[7926]: I0216 21:03:35.241199 7926 scope.go:117] "RemoveContainer" containerID="a339e5c4723737e030c5a03c8395cedd263d3d5213cb12208bfe3004bbd0ef5e" Feb 16 21:03:35.264707 master-0 kubenswrapper[7926]: I0216 21:03:35.264672 7926 scope.go:117] "RemoveContainer" containerID="2b191efabecfa6e89d563189d25950b732d83b54240d68732d9bfb22ddbb8e4f" Feb 16 21:03:35.295720 master-0 kubenswrapper[7926]: I0216 21:03:35.295670 7926 scope.go:117] "RemoveContainer" containerID="e960726eec7f4c030bcd77b5c00f9a27240da71756776e4b20d66b6c394494f7" Feb 16 21:03:35.317108 master-0 kubenswrapper[7926]: I0216 21:03:35.317027 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" podStartSLOduration=332.317002235 podStartE2EDuration="5m32.317002235s" podCreationTimestamp="2026-02-16 20:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:03:35.316006117 +0000 UTC m=+386.950906417" watchObservedRunningTime="2026-02-16 21:03:35.317002235 +0000 UTC m=+386.951902555" Feb 16 21:03:35.320756 master-0 kubenswrapper[7926]: I0216 21:03:35.320717 7926 scope.go:117] "RemoveContainer" 
containerID="b4a34c89cb81e9504af7117b89a4c5b290e24d0a5142668851022560c4487a78" Feb 16 21:03:35.321266 master-0 kubenswrapper[7926]: E0216 21:03:35.321218 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4a34c89cb81e9504af7117b89a4c5b290e24d0a5142668851022560c4487a78\": container with ID starting with b4a34c89cb81e9504af7117b89a4c5b290e24d0a5142668851022560c4487a78 not found: ID does not exist" containerID="b4a34c89cb81e9504af7117b89a4c5b290e24d0a5142668851022560c4487a78" Feb 16 21:03:35.321325 master-0 kubenswrapper[7926]: I0216 21:03:35.321270 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4a34c89cb81e9504af7117b89a4c5b290e24d0a5142668851022560c4487a78"} err="failed to get container status \"b4a34c89cb81e9504af7117b89a4c5b290e24d0a5142668851022560c4487a78\": rpc error: code = NotFound desc = could not find container \"b4a34c89cb81e9504af7117b89a4c5b290e24d0a5142668851022560c4487a78\": container with ID starting with b4a34c89cb81e9504af7117b89a4c5b290e24d0a5142668851022560c4487a78 not found: ID does not exist" Feb 16 21:03:35.321325 master-0 kubenswrapper[7926]: I0216 21:03:35.321297 7926 scope.go:117] "RemoveContainer" containerID="f3d4628d5b5ba7e58abaf9e10ff02fc0ec3dcdc6373a3be533d5aa05366f0112" Feb 16 21:03:35.321702 master-0 kubenswrapper[7926]: E0216 21:03:35.321671 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3d4628d5b5ba7e58abaf9e10ff02fc0ec3dcdc6373a3be533d5aa05366f0112\": container with ID starting with f3d4628d5b5ba7e58abaf9e10ff02fc0ec3dcdc6373a3be533d5aa05366f0112 not found: ID does not exist" containerID="f3d4628d5b5ba7e58abaf9e10ff02fc0ec3dcdc6373a3be533d5aa05366f0112" Feb 16 21:03:35.321753 master-0 kubenswrapper[7926]: I0216 21:03:35.321696 7926 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f3d4628d5b5ba7e58abaf9e10ff02fc0ec3dcdc6373a3be533d5aa05366f0112"} err="failed to get container status \"f3d4628d5b5ba7e58abaf9e10ff02fc0ec3dcdc6373a3be533d5aa05366f0112\": rpc error: code = NotFound desc = could not find container \"f3d4628d5b5ba7e58abaf9e10ff02fc0ec3dcdc6373a3be533d5aa05366f0112\": container with ID starting with f3d4628d5b5ba7e58abaf9e10ff02fc0ec3dcdc6373a3be533d5aa05366f0112 not found: ID does not exist" Feb 16 21:03:35.321753 master-0 kubenswrapper[7926]: I0216 21:03:35.321712 7926 scope.go:117] "RemoveContainer" containerID="ea274ca75c9480032670a52e0f8060808dc2b8ae8a9455bb06740d96dc246ff9" Feb 16 21:03:35.322067 master-0 kubenswrapper[7926]: E0216 21:03:35.321996 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea274ca75c9480032670a52e0f8060808dc2b8ae8a9455bb06740d96dc246ff9\": container with ID starting with ea274ca75c9480032670a52e0f8060808dc2b8ae8a9455bb06740d96dc246ff9 not found: ID does not exist" containerID="ea274ca75c9480032670a52e0f8060808dc2b8ae8a9455bb06740d96dc246ff9" Feb 16 21:03:35.322067 master-0 kubenswrapper[7926]: I0216 21:03:35.322018 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea274ca75c9480032670a52e0f8060808dc2b8ae8a9455bb06740d96dc246ff9"} err="failed to get container status \"ea274ca75c9480032670a52e0f8060808dc2b8ae8a9455bb06740d96dc246ff9\": rpc error: code = NotFound desc = could not find container \"ea274ca75c9480032670a52e0f8060808dc2b8ae8a9455bb06740d96dc246ff9\": container with ID starting with ea274ca75c9480032670a52e0f8060808dc2b8ae8a9455bb06740d96dc246ff9 not found: ID does not exist" Feb 16 21:03:35.322067 master-0 kubenswrapper[7926]: I0216 21:03:35.322033 7926 scope.go:117] "RemoveContainer" containerID="ae31b292d6ba5f8d78f8793a9865c571a66292e65886b99ff37b242383c1ffb8" Feb 16 21:03:35.322332 master-0 kubenswrapper[7926]: E0216 
21:03:35.322284 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae31b292d6ba5f8d78f8793a9865c571a66292e65886b99ff37b242383c1ffb8\": container with ID starting with ae31b292d6ba5f8d78f8793a9865c571a66292e65886b99ff37b242383c1ffb8 not found: ID does not exist" containerID="ae31b292d6ba5f8d78f8793a9865c571a66292e65886b99ff37b242383c1ffb8" Feb 16 21:03:35.322385 master-0 kubenswrapper[7926]: I0216 21:03:35.322335 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae31b292d6ba5f8d78f8793a9865c571a66292e65886b99ff37b242383c1ffb8"} err="failed to get container status \"ae31b292d6ba5f8d78f8793a9865c571a66292e65886b99ff37b242383c1ffb8\": rpc error: code = NotFound desc = could not find container \"ae31b292d6ba5f8d78f8793a9865c571a66292e65886b99ff37b242383c1ffb8\": container with ID starting with ae31b292d6ba5f8d78f8793a9865c571a66292e65886b99ff37b242383c1ffb8 not found: ID does not exist" Feb 16 21:03:35.322385 master-0 kubenswrapper[7926]: I0216 21:03:35.322364 7926 scope.go:117] "RemoveContainer" containerID="796cedcccf27a70c4b1fc5e0f9d34776e57cab5bcbac808a8a55396fa052ee09" Feb 16 21:03:35.322766 master-0 kubenswrapper[7926]: E0216 21:03:35.322689 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"796cedcccf27a70c4b1fc5e0f9d34776e57cab5bcbac808a8a55396fa052ee09\": container with ID starting with 796cedcccf27a70c4b1fc5e0f9d34776e57cab5bcbac808a8a55396fa052ee09 not found: ID does not exist" containerID="796cedcccf27a70c4b1fc5e0f9d34776e57cab5bcbac808a8a55396fa052ee09" Feb 16 21:03:35.322766 master-0 kubenswrapper[7926]: I0216 21:03:35.322733 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"796cedcccf27a70c4b1fc5e0f9d34776e57cab5bcbac808a8a55396fa052ee09"} err="failed to get container status 
\"796cedcccf27a70c4b1fc5e0f9d34776e57cab5bcbac808a8a55396fa052ee09\": rpc error: code = NotFound desc = could not find container \"796cedcccf27a70c4b1fc5e0f9d34776e57cab5bcbac808a8a55396fa052ee09\": container with ID starting with 796cedcccf27a70c4b1fc5e0f9d34776e57cab5bcbac808a8a55396fa052ee09 not found: ID does not exist" Feb 16 21:03:35.322766 master-0 kubenswrapper[7926]: I0216 21:03:35.322760 7926 scope.go:117] "RemoveContainer" containerID="6c789ad424d6da26da31c06317afc3ff04d13db41b3d9ada1b99dd43bd4685c9" Feb 16 21:03:35.323116 master-0 kubenswrapper[7926]: E0216 21:03:35.323077 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c789ad424d6da26da31c06317afc3ff04d13db41b3d9ada1b99dd43bd4685c9\": container with ID starting with 6c789ad424d6da26da31c06317afc3ff04d13db41b3d9ada1b99dd43bd4685c9 not found: ID does not exist" containerID="6c789ad424d6da26da31c06317afc3ff04d13db41b3d9ada1b99dd43bd4685c9" Feb 16 21:03:35.323174 master-0 kubenswrapper[7926]: I0216 21:03:35.323110 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c789ad424d6da26da31c06317afc3ff04d13db41b3d9ada1b99dd43bd4685c9"} err="failed to get container status \"6c789ad424d6da26da31c06317afc3ff04d13db41b3d9ada1b99dd43bd4685c9\": rpc error: code = NotFound desc = could not find container \"6c789ad424d6da26da31c06317afc3ff04d13db41b3d9ada1b99dd43bd4685c9\": container with ID starting with 6c789ad424d6da26da31c06317afc3ff04d13db41b3d9ada1b99dd43bd4685c9 not found: ID does not exist" Feb 16 21:03:35.323174 master-0 kubenswrapper[7926]: I0216 21:03:35.323138 7926 scope.go:117] "RemoveContainer" containerID="b91768f3b3b77b8f39dbc687f48f7d020363ab1760dd10d66f66b996778bf8dc" Feb 16 21:03:35.323830 master-0 kubenswrapper[7926]: E0216 21:03:35.323796 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b91768f3b3b77b8f39dbc687f48f7d020363ab1760dd10d66f66b996778bf8dc\": container with ID starting with b91768f3b3b77b8f39dbc687f48f7d020363ab1760dd10d66f66b996778bf8dc not found: ID does not exist" containerID="b91768f3b3b77b8f39dbc687f48f7d020363ab1760dd10d66f66b996778bf8dc" Feb 16 21:03:35.323830 master-0 kubenswrapper[7926]: I0216 21:03:35.323823 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b91768f3b3b77b8f39dbc687f48f7d020363ab1760dd10d66f66b996778bf8dc"} err="failed to get container status \"b91768f3b3b77b8f39dbc687f48f7d020363ab1760dd10d66f66b996778bf8dc\": rpc error: code = NotFound desc = could not find container \"b91768f3b3b77b8f39dbc687f48f7d020363ab1760dd10d66f66b996778bf8dc\": container with ID starting with b91768f3b3b77b8f39dbc687f48f7d020363ab1760dd10d66f66b996778bf8dc not found: ID does not exist" Feb 16 21:03:35.323948 master-0 kubenswrapper[7926]: I0216 21:03:35.323840 7926 scope.go:117] "RemoveContainer" containerID="8f381e0ba80bb61f122cb6f8dc6fbf0f4de7cc56a19bdf606299e77668a6c669" Feb 16 21:03:35.324152 master-0 kubenswrapper[7926]: E0216 21:03:35.324114 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f381e0ba80bb61f122cb6f8dc6fbf0f4de7cc56a19bdf606299e77668a6c669\": container with ID starting with 8f381e0ba80bb61f122cb6f8dc6fbf0f4de7cc56a19bdf606299e77668a6c669 not found: ID does not exist" containerID="8f381e0ba80bb61f122cb6f8dc6fbf0f4de7cc56a19bdf606299e77668a6c669" Feb 16 21:03:35.324201 master-0 kubenswrapper[7926]: I0216 21:03:35.324150 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f381e0ba80bb61f122cb6f8dc6fbf0f4de7cc56a19bdf606299e77668a6c669"} err="failed to get container status \"8f381e0ba80bb61f122cb6f8dc6fbf0f4de7cc56a19bdf606299e77668a6c669\": rpc error: code = NotFound desc = could not find container 
\"8f381e0ba80bb61f122cb6f8dc6fbf0f4de7cc56a19bdf606299e77668a6c669\": container with ID starting with 8f381e0ba80bb61f122cb6f8dc6fbf0f4de7cc56a19bdf606299e77668a6c669 not found: ID does not exist" Feb 16 21:03:35.324201 master-0 kubenswrapper[7926]: I0216 21:03:35.324178 7926 scope.go:117] "RemoveContainer" containerID="4765f14761690375464a0e714d58564cbd8daae8b93a35914f1d74b0169d6221" Feb 16 21:03:35.324539 master-0 kubenswrapper[7926]: E0216 21:03:35.324504 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4765f14761690375464a0e714d58564cbd8daae8b93a35914f1d74b0169d6221\": container with ID starting with 4765f14761690375464a0e714d58564cbd8daae8b93a35914f1d74b0169d6221 not found: ID does not exist" containerID="4765f14761690375464a0e714d58564cbd8daae8b93a35914f1d74b0169d6221" Feb 16 21:03:35.324539 master-0 kubenswrapper[7926]: I0216 21:03:35.324531 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4765f14761690375464a0e714d58564cbd8daae8b93a35914f1d74b0169d6221"} err="failed to get container status \"4765f14761690375464a0e714d58564cbd8daae8b93a35914f1d74b0169d6221\": rpc error: code = NotFound desc = could not find container \"4765f14761690375464a0e714d58564cbd8daae8b93a35914f1d74b0169d6221\": container with ID starting with 4765f14761690375464a0e714d58564cbd8daae8b93a35914f1d74b0169d6221 not found: ID does not exist" Feb 16 21:03:35.324636 master-0 kubenswrapper[7926]: I0216 21:03:35.324546 7926 scope.go:117] "RemoveContainer" containerID="58d545a4271a615d484834ce5f2e4aae18f89163dd820abd13282ebc492d6372" Feb 16 21:03:35.324933 master-0 kubenswrapper[7926]: E0216 21:03:35.324874 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58d545a4271a615d484834ce5f2e4aae18f89163dd820abd13282ebc492d6372\": container with ID starting with 
58d545a4271a615d484834ce5f2e4aae18f89163dd820abd13282ebc492d6372 not found: ID does not exist" containerID="58d545a4271a615d484834ce5f2e4aae18f89163dd820abd13282ebc492d6372" Feb 16 21:03:35.324979 master-0 kubenswrapper[7926]: I0216 21:03:35.324932 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58d545a4271a615d484834ce5f2e4aae18f89163dd820abd13282ebc492d6372"} err="failed to get container status \"58d545a4271a615d484834ce5f2e4aae18f89163dd820abd13282ebc492d6372\": rpc error: code = NotFound desc = could not find container \"58d545a4271a615d484834ce5f2e4aae18f89163dd820abd13282ebc492d6372\": container with ID starting with 58d545a4271a615d484834ce5f2e4aae18f89163dd820abd13282ebc492d6372 not found: ID does not exist" Feb 16 21:03:35.324979 master-0 kubenswrapper[7926]: I0216 21:03:35.324953 7926 scope.go:117] "RemoveContainer" containerID="47b2c5bac29b78fe7840fe916226c42b6c6d9d0126d96d3a74bd63abd7b0a9ac" Feb 16 21:03:35.325316 master-0 kubenswrapper[7926]: E0216 21:03:35.325286 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47b2c5bac29b78fe7840fe916226c42b6c6d9d0126d96d3a74bd63abd7b0a9ac\": container with ID starting with 47b2c5bac29b78fe7840fe916226c42b6c6d9d0126d96d3a74bd63abd7b0a9ac not found: ID does not exist" containerID="47b2c5bac29b78fe7840fe916226c42b6c6d9d0126d96d3a74bd63abd7b0a9ac" Feb 16 21:03:35.325362 master-0 kubenswrapper[7926]: I0216 21:03:35.325311 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47b2c5bac29b78fe7840fe916226c42b6c6d9d0126d96d3a74bd63abd7b0a9ac"} err="failed to get container status \"47b2c5bac29b78fe7840fe916226c42b6c6d9d0126d96d3a74bd63abd7b0a9ac\": rpc error: code = NotFound desc = could not find container \"47b2c5bac29b78fe7840fe916226c42b6c6d9d0126d96d3a74bd63abd7b0a9ac\": container with ID starting with 
47b2c5bac29b78fe7840fe916226c42b6c6d9d0126d96d3a74bd63abd7b0a9ac not found: ID does not exist" Feb 16 21:03:35.325362 master-0 kubenswrapper[7926]: I0216 21:03:35.325327 7926 scope.go:117] "RemoveContainer" containerID="a773bd017f0bba4a3a74bfe52982d094692dcc11d0231ea1c51b561373a69c1c" Feb 16 21:03:35.326145 master-0 kubenswrapper[7926]: E0216 21:03:35.326102 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a773bd017f0bba4a3a74bfe52982d094692dcc11d0231ea1c51b561373a69c1c\": container with ID starting with a773bd017f0bba4a3a74bfe52982d094692dcc11d0231ea1c51b561373a69c1c not found: ID does not exist" containerID="a773bd017f0bba4a3a74bfe52982d094692dcc11d0231ea1c51b561373a69c1c" Feb 16 21:03:35.326209 master-0 kubenswrapper[7926]: I0216 21:03:35.326144 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a773bd017f0bba4a3a74bfe52982d094692dcc11d0231ea1c51b561373a69c1c"} err="failed to get container status \"a773bd017f0bba4a3a74bfe52982d094692dcc11d0231ea1c51b561373a69c1c\": rpc error: code = NotFound desc = could not find container \"a773bd017f0bba4a3a74bfe52982d094692dcc11d0231ea1c51b561373a69c1c\": container with ID starting with a773bd017f0bba4a3a74bfe52982d094692dcc11d0231ea1c51b561373a69c1c not found: ID does not exist" Feb 16 21:03:35.326209 master-0 kubenswrapper[7926]: I0216 21:03:35.326167 7926 scope.go:117] "RemoveContainer" containerID="335a1a7f7a9fe31928e784a1b8c27628b0095f9bd1bb4c356dc580de874df2a9" Feb 16 21:03:35.326531 master-0 kubenswrapper[7926]: E0216 21:03:35.326495 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"335a1a7f7a9fe31928e784a1b8c27628b0095f9bd1bb4c356dc580de874df2a9\": container with ID starting with 335a1a7f7a9fe31928e784a1b8c27628b0095f9bd1bb4c356dc580de874df2a9 not found: ID does not exist" 
containerID="335a1a7f7a9fe31928e784a1b8c27628b0095f9bd1bb4c356dc580de874df2a9" Feb 16 21:03:35.326587 master-0 kubenswrapper[7926]: I0216 21:03:35.326525 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"335a1a7f7a9fe31928e784a1b8c27628b0095f9bd1bb4c356dc580de874df2a9"} err="failed to get container status \"335a1a7f7a9fe31928e784a1b8c27628b0095f9bd1bb4c356dc580de874df2a9\": rpc error: code = NotFound desc = could not find container \"335a1a7f7a9fe31928e784a1b8c27628b0095f9bd1bb4c356dc580de874df2a9\": container with ID starting with 335a1a7f7a9fe31928e784a1b8c27628b0095f9bd1bb4c356dc580de874df2a9 not found: ID does not exist" Feb 16 21:03:35.326587 master-0 kubenswrapper[7926]: I0216 21:03:35.326544 7926 scope.go:117] "RemoveContainer" containerID="22ac853b44d567411363f432db892ab502ff1733ca2ac03896be62f2c9a7c4fc" Feb 16 21:03:35.327142 master-0 kubenswrapper[7926]: E0216 21:03:35.327098 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22ac853b44d567411363f432db892ab502ff1733ca2ac03896be62f2c9a7c4fc\": container with ID starting with 22ac853b44d567411363f432db892ab502ff1733ca2ac03896be62f2c9a7c4fc not found: ID does not exist" containerID="22ac853b44d567411363f432db892ab502ff1733ca2ac03896be62f2c9a7c4fc" Feb 16 21:03:35.327197 master-0 kubenswrapper[7926]: I0216 21:03:35.327139 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22ac853b44d567411363f432db892ab502ff1733ca2ac03896be62f2c9a7c4fc"} err="failed to get container status \"22ac853b44d567411363f432db892ab502ff1733ca2ac03896be62f2c9a7c4fc\": rpc error: code = NotFound desc = could not find container \"22ac853b44d567411363f432db892ab502ff1733ca2ac03896be62f2c9a7c4fc\": container with ID starting with 22ac853b44d567411363f432db892ab502ff1733ca2ac03896be62f2c9a7c4fc not found: ID does not exist" Feb 16 21:03:35.327197 master-0 
kubenswrapper[7926]: I0216 21:03:35.327163 7926 scope.go:117] "RemoveContainer" containerID="0471cbeac2299e0d9e3ce431cd7a2e4e9d02003bf2fa34b26aead6cb07fac336" Feb 16 21:03:35.327518 master-0 kubenswrapper[7926]: E0216 21:03:35.327475 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0471cbeac2299e0d9e3ce431cd7a2e4e9d02003bf2fa34b26aead6cb07fac336\": container with ID starting with 0471cbeac2299e0d9e3ce431cd7a2e4e9d02003bf2fa34b26aead6cb07fac336 not found: ID does not exist" containerID="0471cbeac2299e0d9e3ce431cd7a2e4e9d02003bf2fa34b26aead6cb07fac336" Feb 16 21:03:35.327564 master-0 kubenswrapper[7926]: I0216 21:03:35.327519 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0471cbeac2299e0d9e3ce431cd7a2e4e9d02003bf2fa34b26aead6cb07fac336"} err="failed to get container status \"0471cbeac2299e0d9e3ce431cd7a2e4e9d02003bf2fa34b26aead6cb07fac336\": rpc error: code = NotFound desc = could not find container \"0471cbeac2299e0d9e3ce431cd7a2e4e9d02003bf2fa34b26aead6cb07fac336\": container with ID starting with 0471cbeac2299e0d9e3ce431cd7a2e4e9d02003bf2fa34b26aead6cb07fac336 not found: ID does not exist" Feb 16 21:03:35.327564 master-0 kubenswrapper[7926]: I0216 21:03:35.327543 7926 scope.go:117] "RemoveContainer" containerID="6604687382d89a09dac220e4bde6c4ee9334bbf7429cff3764175c9050a1853c" Feb 16 21:03:35.327923 master-0 kubenswrapper[7926]: E0216 21:03:35.327878 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6604687382d89a09dac220e4bde6c4ee9334bbf7429cff3764175c9050a1853c\": container with ID starting with 6604687382d89a09dac220e4bde6c4ee9334bbf7429cff3764175c9050a1853c not found: ID does not exist" containerID="6604687382d89a09dac220e4bde6c4ee9334bbf7429cff3764175c9050a1853c" Feb 16 21:03:35.327970 master-0 kubenswrapper[7926]: I0216 21:03:35.327927 7926 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6604687382d89a09dac220e4bde6c4ee9334bbf7429cff3764175c9050a1853c"} err="failed to get container status \"6604687382d89a09dac220e4bde6c4ee9334bbf7429cff3764175c9050a1853c\": rpc error: code = NotFound desc = could not find container \"6604687382d89a09dac220e4bde6c4ee9334bbf7429cff3764175c9050a1853c\": container with ID starting with 6604687382d89a09dac220e4bde6c4ee9334bbf7429cff3764175c9050a1853c not found: ID does not exist" Feb 16 21:03:35.327970 master-0 kubenswrapper[7926]: I0216 21:03:35.327958 7926 scope.go:117] "RemoveContainer" containerID="d1bc5bc3b429e39609506c1bed3cc8e8c06f4002e3b95ecbfe86ba10e124ab93" Feb 16 21:03:35.328434 master-0 kubenswrapper[7926]: E0216 21:03:35.328378 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1bc5bc3b429e39609506c1bed3cc8e8c06f4002e3b95ecbfe86ba10e124ab93\": container with ID starting with d1bc5bc3b429e39609506c1bed3cc8e8c06f4002e3b95ecbfe86ba10e124ab93 not found: ID does not exist" containerID="d1bc5bc3b429e39609506c1bed3cc8e8c06f4002e3b95ecbfe86ba10e124ab93" Feb 16 21:03:35.328490 master-0 kubenswrapper[7926]: I0216 21:03:35.328410 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1bc5bc3b429e39609506c1bed3cc8e8c06f4002e3b95ecbfe86ba10e124ab93"} err="failed to get container status \"d1bc5bc3b429e39609506c1bed3cc8e8c06f4002e3b95ecbfe86ba10e124ab93\": rpc error: code = NotFound desc = could not find container \"d1bc5bc3b429e39609506c1bed3cc8e8c06f4002e3b95ecbfe86ba10e124ab93\": container with ID starting with d1bc5bc3b429e39609506c1bed3cc8e8c06f4002e3b95ecbfe86ba10e124ab93 not found: ID does not exist" Feb 16 21:03:35.328490 master-0 kubenswrapper[7926]: I0216 21:03:35.328462 7926 scope.go:117] "RemoveContainer" containerID="c01a97aeea491e06b4f6bd168a545331d557799591733b3afb1c1070b9661f2a" Feb 16 
21:03:35.328821 master-0 kubenswrapper[7926]: E0216 21:03:35.328788 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c01a97aeea491e06b4f6bd168a545331d557799591733b3afb1c1070b9661f2a\": container with ID starting with c01a97aeea491e06b4f6bd168a545331d557799591733b3afb1c1070b9661f2a not found: ID does not exist" containerID="c01a97aeea491e06b4f6bd168a545331d557799591733b3afb1c1070b9661f2a" Feb 16 21:03:35.328870 master-0 kubenswrapper[7926]: I0216 21:03:35.328815 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c01a97aeea491e06b4f6bd168a545331d557799591733b3afb1c1070b9661f2a"} err="failed to get container status \"c01a97aeea491e06b4f6bd168a545331d557799591733b3afb1c1070b9661f2a\": rpc error: code = NotFound desc = could not find container \"c01a97aeea491e06b4f6bd168a545331d557799591733b3afb1c1070b9661f2a\": container with ID starting with c01a97aeea491e06b4f6bd168a545331d557799591733b3afb1c1070b9661f2a not found: ID does not exist" Feb 16 21:03:35.328870 master-0 kubenswrapper[7926]: I0216 21:03:35.328834 7926 scope.go:117] "RemoveContainer" containerID="ea274ca75c9480032670a52e0f8060808dc2b8ae8a9455bb06740d96dc246ff9" Feb 16 21:03:35.329208 master-0 kubenswrapper[7926]: I0216 21:03:35.329173 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea274ca75c9480032670a52e0f8060808dc2b8ae8a9455bb06740d96dc246ff9"} err="failed to get container status \"ea274ca75c9480032670a52e0f8060808dc2b8ae8a9455bb06740d96dc246ff9\": rpc error: code = NotFound desc = could not find container \"ea274ca75c9480032670a52e0f8060808dc2b8ae8a9455bb06740d96dc246ff9\": container with ID starting with ea274ca75c9480032670a52e0f8060808dc2b8ae8a9455bb06740d96dc246ff9 not found: ID does not exist" Feb 16 21:03:35.329208 master-0 kubenswrapper[7926]: I0216 21:03:35.329199 7926 scope.go:117] "RemoveContainer" 
containerID="ae31b292d6ba5f8d78f8793a9865c571a66292e65886b99ff37b242383c1ffb8" Feb 16 21:03:35.329557 master-0 kubenswrapper[7926]: I0216 21:03:35.329518 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae31b292d6ba5f8d78f8793a9865c571a66292e65886b99ff37b242383c1ffb8"} err="failed to get container status \"ae31b292d6ba5f8d78f8793a9865c571a66292e65886b99ff37b242383c1ffb8\": rpc error: code = NotFound desc = could not find container \"ae31b292d6ba5f8d78f8793a9865c571a66292e65886b99ff37b242383c1ffb8\": container with ID starting with ae31b292d6ba5f8d78f8793a9865c571a66292e65886b99ff37b242383c1ffb8 not found: ID does not exist" Feb 16 21:03:35.329557 master-0 kubenswrapper[7926]: I0216 21:03:35.329546 7926 scope.go:117] "RemoveContainer" containerID="31e55b139c998e23cbf2bc02e2f79638ed2388ee42133c4387d01234b192dc1a" Feb 16 21:03:35.329913 master-0 kubenswrapper[7926]: E0216 21:03:35.329879 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31e55b139c998e23cbf2bc02e2f79638ed2388ee42133c4387d01234b192dc1a\": container with ID starting with 31e55b139c998e23cbf2bc02e2f79638ed2388ee42133c4387d01234b192dc1a not found: ID does not exist" containerID="31e55b139c998e23cbf2bc02e2f79638ed2388ee42133c4387d01234b192dc1a" Feb 16 21:03:35.329960 master-0 kubenswrapper[7926]: I0216 21:03:35.329908 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31e55b139c998e23cbf2bc02e2f79638ed2388ee42133c4387d01234b192dc1a"} err="failed to get container status \"31e55b139c998e23cbf2bc02e2f79638ed2388ee42133c4387d01234b192dc1a\": rpc error: code = NotFound desc = could not find container \"31e55b139c998e23cbf2bc02e2f79638ed2388ee42133c4387d01234b192dc1a\": container with ID starting with 31e55b139c998e23cbf2bc02e2f79638ed2388ee42133c4387d01234b192dc1a not found: ID does not exist" Feb 16 21:03:35.329960 master-0 
kubenswrapper[7926]: I0216 21:03:35.329927 7926 scope.go:117] "RemoveContainer" containerID="fc88dd28d8567cb614f787ef77e43ceb61a79e3dffda24d95403e277882bb247" Feb 16 21:03:35.330291 master-0 kubenswrapper[7926]: E0216 21:03:35.330253 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc88dd28d8567cb614f787ef77e43ceb61a79e3dffda24d95403e277882bb247\": container with ID starting with fc88dd28d8567cb614f787ef77e43ceb61a79e3dffda24d95403e277882bb247 not found: ID does not exist" containerID="fc88dd28d8567cb614f787ef77e43ceb61a79e3dffda24d95403e277882bb247" Feb 16 21:03:35.330336 master-0 kubenswrapper[7926]: I0216 21:03:35.330286 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc88dd28d8567cb614f787ef77e43ceb61a79e3dffda24d95403e277882bb247"} err="failed to get container status \"fc88dd28d8567cb614f787ef77e43ceb61a79e3dffda24d95403e277882bb247\": rpc error: code = NotFound desc = could not find container \"fc88dd28d8567cb614f787ef77e43ceb61a79e3dffda24d95403e277882bb247\": container with ID starting with fc88dd28d8567cb614f787ef77e43ceb61a79e3dffda24d95403e277882bb247 not found: ID does not exist" Feb 16 21:03:35.895153 master-0 kubenswrapper[7926]: I0216 21:03:35.895081 7926 patch_prober.go:28] interesting pod/packageserver-78d4b6b677-npmx4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:03:35.895411 master-0 kubenswrapper[7926]: I0216 21:03:35.895174 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerName="packageserver" probeResult="failure" output="Get 
\"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:03:35.917728 master-0 kubenswrapper[7926]: I0216 21:03:35.917640 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:03:35.918076 master-0 kubenswrapper[7926]: I0216 21:03:35.917757 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:03:35.932669 master-0 kubenswrapper[7926]: I0216 21:03:35.932485 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/2.log" Feb 16 21:03:35.936893 master-0 kubenswrapper[7926]: I0216 21:03:35.936861 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-75b869db96-g4w5m_aa2e9bbc-3962-45f5-a7cc-2dc059409e70/cluster-storage-operator/1.log" Feb 16 21:03:35.943971 master-0 kubenswrapper[7926]: I0216 21:03:35.943937 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-8cllz_70d217a9-86b7-47b9-a7da-9ac920b9c7c2/etcd-operator/2.log" Feb 16 21:03:35.947938 master-0 kubenswrapper[7926]: I0216 21:03:35.947915 7926 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-xzww8_e7adbe32-b8b9-438e-a2e3-f93146a97424/kube-scheduler-operator-container/2.log" Feb 16 21:03:35.951175 master-0 kubenswrapper[7926]: I0216 21:03:35.951131 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-xbd96_59237aa6-6250-4619-8ee5-abae59f04b57/openshift-config-operator/3.log" Feb 16 21:03:35.954286 master-0 kubenswrapper[7926]: I0216 21:03:35.954265 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/1.log" Feb 16 21:03:35.956797 master-0 kubenswrapper[7926]: I0216 21:03:35.956769 7926 scope.go:117] "RemoveContainer" containerID="5b1674388d3a0d8fb07d284207cc23840a32ef17ddc0f1ef774d2188e32d3e84" Feb 16 21:03:35.957133 master-0 kubenswrapper[7926]: E0216 21:03:35.957107 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:03:35.959533 master-0 kubenswrapper[7926]: I0216 21:03:35.959510 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5kwc" event={"ID":"ce229d27-837d-4a98-80fc-d56877ae39b8","Type":"ContainerStarted","Data":"88247333b19116719c02e3337d53469a84d7c4cf04c7843a9226ea683ea58eef"} Feb 16 21:03:35.964599 master-0 kubenswrapper[7926]: I0216 21:03:35.964574 7926 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/2.log" Feb 16 21:03:35.967946 master-0 kubenswrapper[7926]: I0216 21:03:35.967916 7926 generic.go:334] "Generic (PLEG): container finished" podID="f275e79f-923c-4d3a-8ed4-084a122ddcf4" containerID="8e09cadaa280b2142d1e553cf5915c3779b8daaeed82dcb8adbf18accee60298" exitCode=0 Feb 16 21:03:35.968097 master-0 kubenswrapper[7926]: I0216 21:03:35.968076 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sn2nh" event={"ID":"f275e79f-923c-4d3a-8ed4-084a122ddcf4","Type":"ContainerDied","Data":"8e09cadaa280b2142d1e553cf5915c3779b8daaeed82dcb8adbf18accee60298"} Feb 16 21:03:35.971474 master-0 kubenswrapper[7926]: I0216 21:03:35.971126 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-k42w9_695549c8-d1fc-429d-9c9f-0a5915dc6074/openshift-controller-manager-operator/3.log" Feb 16 21:03:35.973549 master-0 kubenswrapper[7926]: I0216 21:03:35.973481 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-tvzdw_6b6be6de-6fcc-4f57-b163-fe8f970a01a4/openshift-apiserver-operator/2.log" Feb 16 21:03:35.980643 master-0 kubenswrapper[7926]: I0216 21:03:35.980542 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:03:36.710782 master-0 kubenswrapper[7926]: I0216 21:03:36.710639 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:03:36.747007 master-0 kubenswrapper[7926]: I0216 21:03:36.746949 7926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97ec2c8c-e32c-4d18-ad78-0ef1f19557af" 
path="/var/lib/kubelet/pods/97ec2c8c-e32c-4d18-ad78-0ef1f19557af/volumes" Feb 16 21:03:36.747619 master-0 kubenswrapper[7926]: I0216 21:03:36.747597 7926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4a6dcba-776f-48ba-b824-90ed5ae3abee" path="/var/lib/kubelet/pods/d4a6dcba-776f-48ba-b824-90ed5ae3abee/volumes" Feb 16 21:03:36.974744 master-0 kubenswrapper[7926]: I0216 21:03:36.974575 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:03:36.974744 master-0 kubenswrapper[7926]: I0216 21:03:36.974699 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:03:36.976238 master-0 kubenswrapper[7926]: I0216 21:03:36.976197 7926 patch_prober.go:28] interesting pod/packageserver-78d4b6b677-npmx4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:03:36.976328 master-0 kubenswrapper[7926]: I0216 21:03:36.976243 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerName="packageserver" probeResult="failure" output="Get 
\"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:03:36.986384 master-0 kubenswrapper[7926]: I0216 21:03:36.986330 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sn2nh" event={"ID":"f275e79f-923c-4d3a-8ed4-084a122ddcf4","Type":"ContainerStarted","Data":"174ef56d9a0731e870098210dbe7db94e7668c7396c38469aae8bfc88af93da5"} Feb 16 21:03:36.991713 master-0 kubenswrapper[7926]: I0216 21:03:36.991059 7926 generic.go:334] "Generic (PLEG): container finished" podID="ce229d27-837d-4a98-80fc-d56877ae39b8" containerID="88247333b19116719c02e3337d53469a84d7c4cf04c7843a9226ea683ea58eef" exitCode=0 Feb 16 21:03:36.991713 master-0 kubenswrapper[7926]: I0216 21:03:36.991159 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5kwc" event={"ID":"ce229d27-837d-4a98-80fc-d56877ae39b8","Type":"ContainerDied","Data":"88247333b19116719c02e3337d53469a84d7c4cf04c7843a9226ea683ea58eef"} Feb 16 21:03:37.011325 master-0 kubenswrapper[7926]: I0216 21:03:37.010578 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sn2nh" podStartSLOduration=331.544464271 podStartE2EDuration="5m33.010471209s" podCreationTimestamp="2026-02-16 20:58:04 +0000 UTC" firstStartedPulling="2026-02-16 21:03:34.919691161 +0000 UTC m=+386.554591461" lastFinishedPulling="2026-02-16 21:03:36.385698099 +0000 UTC m=+388.020598399" observedRunningTime="2026-02-16 21:03:37.008092602 +0000 UTC m=+388.642992902" watchObservedRunningTime="2026-02-16 21:03:37.010471209 +0000 UTC m=+388.645371509" Feb 16 21:03:37.314427 master-0 kubenswrapper[7926]: I0216 21:03:37.314066 7926 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:03:37.966024 master-0 kubenswrapper[7926]: I0216 21:03:37.965610 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Feb 16 21:03:37.966024 master-0 kubenswrapper[7926]: I0216 21:03:37.965691 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Feb 16 21:03:37.987221 master-0 kubenswrapper[7926]: I0216 21:03:37.987077 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Feb 16 21:03:38.002945 master-0 kubenswrapper[7926]: I0216 21:03:38.002239 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5kwc" event={"ID":"ce229d27-837d-4a98-80fc-d56877ae39b8","Type":"ContainerStarted","Data":"c1f8fde78fdcda9989a4c5f1c082c78ebc7c4aa51b02befd18293e11e9bd341a"} Feb 16 21:03:38.027100 master-0 kubenswrapper[7926]: I0216 21:03:38.027023 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j5kwc" podStartSLOduration=332.497525212 podStartE2EDuration="5m35.027009048s" podCreationTimestamp="2026-02-16 20:58:03 +0000 UTC" firstStartedPulling="2026-02-16 21:03:34.869992695 +0000 UTC m=+386.504892995" lastFinishedPulling="2026-02-16 21:03:37.399476511 +0000 UTC m=+389.034376831" observedRunningTime="2026-02-16 21:03:38.023813919 +0000 UTC m=+389.658714259" watchObservedRunningTime="2026-02-16 21:03:38.027009048 +0000 UTC m=+389.661909348" Feb 16 21:03:38.094415 master-0 kubenswrapper[7926]: I0216 21:03:38.094260 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: 
Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:03:38.094415 master-0 kubenswrapper[7926]: I0216 21:03:38.094344 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:03:38.163830 master-0 kubenswrapper[7926]: I0216 21:03:38.163765 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:03:38.164052 master-0 kubenswrapper[7926]: I0216 21:03:38.163836 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:03:39.099162 master-0 kubenswrapper[7926]: I0216 21:03:39.099081 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:03:39.099721 master-0 kubenswrapper[7926]: I0216 21:03:39.099168 7926 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:03:39.711733 master-0 kubenswrapper[7926]: I0216 21:03:39.711682 7926 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:03:40.195970 master-0 kubenswrapper[7926]: I0216 21:03:40.195904 7926 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="6912b1edffde7a78bbdc396546e5278ae133791109c955eb557d3109fd4abd06" exitCode=255 Feb 16 21:03:40.195970 master-0 kubenswrapper[7926]: I0216 21:03:40.195953 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"6912b1edffde7a78bbdc396546e5278ae133791109c955eb557d3109fd4abd06"} Feb 16 21:03:40.197979 master-0 kubenswrapper[7926]: I0216 21:03:40.195992 7926 scope.go:117] "RemoveContainer" containerID="7e0471aa80085ed85cb40c9b3c8ab6f80ea1655f1734a052a840a434c72c54f4" Feb 16 21:03:40.197979 master-0 kubenswrapper[7926]: I0216 21:03:40.197114 7926 scope.go:117] "RemoveContainer" containerID="6912b1edffde7a78bbdc396546e5278ae133791109c955eb557d3109fd4abd06" Feb 16 21:03:40.197979 master-0 kubenswrapper[7926]: E0216 21:03:40.197907 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 10s restarting 
failed container=cluster-policy-controller pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:03:41.032738 master-0 kubenswrapper[7926]: E0216 21:03:41.031498 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:03:41.163795 master-0 kubenswrapper[7926]: I0216 21:03:41.163731 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:03:41.164025 master-0 kubenswrapper[7926]: I0216 21:03:41.163821 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:03:42.098632 master-0 kubenswrapper[7926]: I0216 21:03:42.098550 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:03:42.098632 master-0 kubenswrapper[7926]: I0216 21:03:42.098624 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" 
podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:03:42.209876 master-0 kubenswrapper[7926]: I0216 21:03:42.209772 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" event={"ID":"ba294358-051a-4f09-b182-710d3d6778c5","Type":"ContainerStarted","Data":"c7880afa219acb0ac5e4138682f8fc8b3e3931790fad2a804808d6e2f5933f3f"} Feb 16 21:03:42.237772 master-0 kubenswrapper[7926]: I0216 21:03:42.237617 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" podStartSLOduration=332.347790469 podStartE2EDuration="5m39.23759157s" podCreationTimestamp="2026-02-16 20:58:03 +0000 UTC" firstStartedPulling="2026-02-16 21:03:34.586085834 +0000 UTC m=+386.220986134" lastFinishedPulling="2026-02-16 21:03:41.475886935 +0000 UTC m=+393.110787235" observedRunningTime="2026-02-16 21:03:42.23227759 +0000 UTC m=+393.867177940" watchObservedRunningTime="2026-02-16 21:03:42.23759157 +0000 UTC m=+393.872491900" Feb 16 21:03:42.978958 master-0 kubenswrapper[7926]: I0216 21:03:42.978896 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Feb 16 21:03:43.693377 master-0 kubenswrapper[7926]: I0216 21:03:43.693267 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j5kwc" Feb 16 21:03:43.693377 master-0 kubenswrapper[7926]: I0216 21:03:43.693358 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j5kwc" Feb 16 21:03:43.740588 master-0 kubenswrapper[7926]: I0216 21:03:43.740505 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-j5kwc" Feb 16 21:03:44.163605 master-0 kubenswrapper[7926]: I0216 21:03:44.163503 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:03:44.163605 master-0 kubenswrapper[7926]: I0216 21:03:44.163585 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:03:44.164016 master-0 kubenswrapper[7926]: I0216 21:03:44.163646 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 21:03:44.164515 master-0 kubenswrapper[7926]: I0216 21:03:44.164404 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:03:44.164618 master-0 kubenswrapper[7926]: I0216 21:03:44.164512 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:03:44.164618 master-0 kubenswrapper[7926]: I0216 21:03:44.164452 7926 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"ac3627020f75f5cd56ecff94b5d8094d6aa1558d6f4f6208d2bc563627046751"} pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Feb 16 21:03:44.164618 master-0 kubenswrapper[7926]: I0216 21:03:44.164594 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" containerID="cri-o://ac3627020f75f5cd56ecff94b5d8094d6aa1558d6f4f6208d2bc563627046751" gracePeriod=30 Feb 16 21:03:44.300204 master-0 kubenswrapper[7926]: I0216 21:03:44.300125 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j5kwc" Feb 16 21:03:44.314058 master-0 kubenswrapper[7926]: I0216 21:03:44.313843 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:03:44.314648 master-0 kubenswrapper[7926]: I0216 21:03:44.314616 7926 scope.go:117] "RemoveContainer" containerID="6912b1edffde7a78bbdc396546e5278ae133791109c955eb557d3109fd4abd06" Feb 16 21:03:44.315411 master-0 kubenswrapper[7926]: E0216 21:03:44.314901 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-policy-controller pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:03:44.525904 master-0 kubenswrapper[7926]: I0216 21:03:44.525852 7926 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 21:03:44.525904 master-0 kubenswrapper[7926]: I0216 21:03:44.525910 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 21:03:44.576682 master-0 kubenswrapper[7926]: I0216 21:03:44.576622 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 21:03:44.961717 master-0 kubenswrapper[7926]: I0216 21:03:44.961526 7926 patch_prober.go:28] interesting pod/packageserver-78d4b6b677-npmx4 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:03:44.961717 master-0 kubenswrapper[7926]: I0216 21:03:44.961631 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:03:44.962854 master-0 kubenswrapper[7926]: I0216 21:03:44.961825 7926 patch_prober.go:28] interesting pod/packageserver-78d4b6b677-npmx4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.64:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:03:44.962854 master-0 kubenswrapper[7926]: I0216 21:03:44.961986 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" 
podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.64:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 21:03:45.005771 master-0 kubenswrapper[7926]: E0216 21:03:45.005467 7926 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=< Feb 16 21:03:45.005771 master-0 kubenswrapper[7926]: &Event{ObjectMeta:{authentication-operator-755d954778-8gnq5.1894d5b059b49116 openshift-authentication-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication-operator,Name:authentication-operator-755d954778-8gnq5,UID:27c20f63-9bfb-4703-94d5-0c65475e08d1,APIVersion:v1,ResourceVersion:3727,FieldPath:spec.containers{authentication-operator},},Reason:ProbeError,Message:Liveness probe error: Get "https://10.128.0.15:8443/healthz": dial tcp 10.128.0.15:8443: connect: connection refused Feb 16 21:03:45.005771 master-0 kubenswrapper[7926]: body: Feb 16 21:03:45.005771 master-0 kubenswrapper[7926]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:58:26.859413782 +0000 UTC m=+78.494314082,LastTimestamp:2026-02-16 20:58:26.859413782 +0000 UTC m=+78.494314082,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Feb 16 21:03:45.005771 master-0 kubenswrapper[7926]: > Feb 16 21:03:45.099427 master-0 kubenswrapper[7926]: I0216 21:03:45.099170 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:03:45.099427 
master-0 kubenswrapper[7926]: I0216 21:03:45.099270 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:03:45.243253 master-0 kubenswrapper[7926]: I0216 21:03:45.243141 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-xbd96_59237aa6-6250-4619-8ee5-abae59f04b57/openshift-config-operator/4.log" Feb 16 21:03:45.243944 master-0 kubenswrapper[7926]: I0216 21:03:45.243872 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-xbd96_59237aa6-6250-4619-8ee5-abae59f04b57/openshift-config-operator/3.log" Feb 16 21:03:45.244446 master-0 kubenswrapper[7926]: I0216 21:03:45.244369 7926 generic.go:334] "Generic (PLEG): container finished" podID="59237aa6-6250-4619-8ee5-abae59f04b57" containerID="ac3627020f75f5cd56ecff94b5d8094d6aa1558d6f4f6208d2bc563627046751" exitCode=255 Feb 16 21:03:45.244564 master-0 kubenswrapper[7926]: I0216 21:03:45.244470 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" event={"ID":"59237aa6-6250-4619-8ee5-abae59f04b57","Type":"ContainerDied","Data":"ac3627020f75f5cd56ecff94b5d8094d6aa1558d6f4f6208d2bc563627046751"} Feb 16 21:03:45.244564 master-0 kubenswrapper[7926]: I0216 21:03:45.244543 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" event={"ID":"59237aa6-6250-4619-8ee5-abae59f04b57","Type":"ContainerStarted","Data":"a0f4bf116c475ac57080c53e7f9652de2e9cdcb6db7cccf87a7d3deeef5a1385"} Feb 16 21:03:45.244743 master-0 kubenswrapper[7926]: I0216 
21:03:45.244569 7926 scope.go:117] "RemoveContainer" containerID="17079b6bb35f03cd05daf5c195f411f2535030b49cc220f1d1c122f18282a8c6" Feb 16 21:03:45.245841 master-0 kubenswrapper[7926]: I0216 21:03:45.245756 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:03:45.245988 master-0 kubenswrapper[7926]: I0216 21:03:45.245855 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:03:45.281484 master-0 kubenswrapper[7926]: I0216 21:03:45.281412 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 21:03:46.252552 master-0 kubenswrapper[7926]: I0216 21:03:46.252506 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-xbd96_59237aa6-6250-4619-8ee5-abae59f04b57/openshift-config-operator/4.log" Feb 16 21:03:46.478504 master-0 kubenswrapper[7926]: E0216 21:03:46.478407 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 16 21:03:46.493576 master-0 kubenswrapper[7926]: I0216 21:03:46.493427 7926 patch_prober.go:28] interesting pod/etcd-operator-67bf55ccdd-8cllz container/etcd-operator namespace/openshift-etcd-operator: 
Liveness probe status=failure output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" start-of-body= Feb 16 21:03:46.493576 master-0 kubenswrapper[7926]: I0216 21:03:46.493488 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" podUID="70d217a9-86b7-47b9-a7da-9ac920b9c7c2" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" Feb 16 21:03:46.890978 master-0 kubenswrapper[7926]: E0216 21:03:46.890878 7926 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 16 21:03:47.163757 master-0 kubenswrapper[7926]: I0216 21:03:47.163523 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:03:47.163757 master-0 kubenswrapper[7926]: I0216 21:03:47.163610 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:03:47.734682 master-0 kubenswrapper[7926]: I0216 21:03:47.734546 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:03:47.735626 master-0 kubenswrapper[7926]: I0216 21:03:47.735572 7926 scope.go:117] "RemoveContainer" 
containerID="6912b1edffde7a78bbdc396546e5278ae133791109c955eb557d3109fd4abd06" Feb 16 21:03:47.735958 master-0 kubenswrapper[7926]: E0216 21:03:47.735909 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-policy-controller pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:03:47.738525 master-0 kubenswrapper[7926]: I0216 21:03:47.738460 7926 scope.go:117] "RemoveContainer" containerID="43a48a6592fa00c02a3165bc38965569bd23dac45b30b2fdc517303872a72e62" Feb 16 21:03:48.094108 master-0 kubenswrapper[7926]: I0216 21:03:48.093941 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:03:48.094472 master-0 kubenswrapper[7926]: I0216 21:03:48.094152 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:03:48.098260 master-0 kubenswrapper[7926]: I0216 21:03:48.098203 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 21:03:48.098838 master-0 kubenswrapper[7926]: I0216 21:03:48.098776 7926 patch_prober.go:28] 
interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:03:48.098928 master-0 kubenswrapper[7926]: I0216 21:03:48.098850 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:03:48.099510 master-0 kubenswrapper[7926]: I0216 21:03:48.099447 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:03:48.099586 master-0 kubenswrapper[7926]: I0216 21:03:48.099536 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:03:48.270374 master-0 kubenswrapper[7926]: I0216 21:03:48.270286 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" event={"ID":"e9615af2-cad5-4705-9c2f-6f3c97026100","Type":"ContainerStarted","Data":"731bee714e1ed342758024ac0402e898ea440d14a35645160149416223c075e2"} Feb 16 21:03:49.711203 master-0 kubenswrapper[7926]: I0216 21:03:49.711063 7926 prober.go:107] "Probe failed" probeType="Startup" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:03:50.164069 master-0 kubenswrapper[7926]: I0216 21:03:50.163990 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:03:50.164069 master-0 kubenswrapper[7926]: I0216 21:03:50.164053 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:03:50.738423 master-0 kubenswrapper[7926]: I0216 21:03:50.738318 7926 scope.go:117] "RemoveContainer" containerID="5b1674388d3a0d8fb07d284207cc23840a32ef17ddc0f1ef774d2188e32d3e84" Feb 16 21:03:51.032901 master-0 kubenswrapper[7926]: E0216 21:03:51.032815 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:03:51.099181 master-0 kubenswrapper[7926]: I0216 21:03:51.099093 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 
10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:03:51.099181 master-0 kubenswrapper[7926]: I0216 21:03:51.099153 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:03:51.292528 master-0 kubenswrapper[7926]: I0216 21:03:51.292323 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/1.log" Feb 16 21:03:51.293091 master-0 kubenswrapper[7926]: I0216 21:03:51.293018 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" event={"ID":"cef33294-81fb-41a2-811d-2565f94514d1","Type":"ContainerStarted","Data":"50720d9ad3b3ea70d85acc6454761164cbe913fb0f9ca263fc8b50f0bd5f848c"} Feb 16 21:03:51.319301 master-0 kubenswrapper[7926]: I0216 21:03:51.319178 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" podStartSLOduration=329.371501457 podStartE2EDuration="5m50.319154314s" podCreationTimestamp="2026-02-16 20:58:01 +0000 UTC" firstStartedPulling="2026-02-16 20:58:07.35948771 +0000 UTC m=+58.994388010" lastFinishedPulling="2026-02-16 20:58:28.307140567 +0000 UTC m=+79.942040867" observedRunningTime="2026-02-16 21:03:48.295682041 +0000 UTC m=+399.930582381" watchObservedRunningTime="2026-02-16 21:03:51.319154314 +0000 UTC m=+402.954054644" Feb 16 21:03:53.163430 master-0 kubenswrapper[7926]: I0216 21:03:53.163361 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe 
status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:03:53.163430 master-0 kubenswrapper[7926]: I0216 21:03:53.163421 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:03:53.164097 master-0 kubenswrapper[7926]: I0216 21:03:53.163466 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 21:03:53.164097 master-0 kubenswrapper[7926]: I0216 21:03:53.163946 7926 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"a0f4bf116c475ac57080c53e7f9652de2e9cdcb6db7cccf87a7d3deeef5a1385"} pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Feb 16 21:03:53.164097 master-0 kubenswrapper[7926]: I0216 21:03:53.163976 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" containerID="cri-o://a0f4bf116c475ac57080c53e7f9652de2e9cdcb6db7cccf87a7d3deeef5a1385" gracePeriod=30 Feb 16 21:03:53.164097 master-0 kubenswrapper[7926]: I0216 21:03:53.164005 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: 
connect: connection refused" start-of-body= Feb 16 21:03:53.164217 master-0 kubenswrapper[7926]: I0216 21:03:53.164086 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:03:53.689443 master-0 kubenswrapper[7926]: E0216 21:03:53.689382 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-config-operator pod=openshift-config-operator-7c6bdb986f-xbd96_openshift-config-operator(59237aa6-6250-4619-8ee5-abae59f04b57)\"" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" Feb 16 21:03:54.099328 master-0 kubenswrapper[7926]: I0216 21:03:54.099212 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" start-of-body= Feb 16 21:03:54.099328 master-0 kubenswrapper[7926]: I0216 21:03:54.099284 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": dial tcp 10.128.0.19:8443: connect: connection refused" Feb 16 21:03:54.313765 master-0 kubenswrapper[7926]: I0216 21:03:54.313717 7926 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-xbd96_59237aa6-6250-4619-8ee5-abae59f04b57/openshift-config-operator/5.log" Feb 16 21:03:54.314393 master-0 kubenswrapper[7926]: I0216 21:03:54.314312 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-xbd96_59237aa6-6250-4619-8ee5-abae59f04b57/openshift-config-operator/4.log" Feb 16 21:03:54.314864 master-0 kubenswrapper[7926]: I0216 21:03:54.314832 7926 generic.go:334] "Generic (PLEG): container finished" podID="59237aa6-6250-4619-8ee5-abae59f04b57" containerID="a0f4bf116c475ac57080c53e7f9652de2e9cdcb6db7cccf87a7d3deeef5a1385" exitCode=255 Feb 16 21:03:54.314924 master-0 kubenswrapper[7926]: I0216 21:03:54.314864 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" event={"ID":"59237aa6-6250-4619-8ee5-abae59f04b57","Type":"ContainerDied","Data":"a0f4bf116c475ac57080c53e7f9652de2e9cdcb6db7cccf87a7d3deeef5a1385"} Feb 16 21:03:54.314960 master-0 kubenswrapper[7926]: I0216 21:03:54.314933 7926 scope.go:117] "RemoveContainer" containerID="ac3627020f75f5cd56ecff94b5d8094d6aa1558d6f4f6208d2bc563627046751" Feb 16 21:03:54.316293 master-0 kubenswrapper[7926]: I0216 21:03:54.316256 7926 scope.go:117] "RemoveContainer" containerID="a0f4bf116c475ac57080c53e7f9652de2e9cdcb6db7cccf87a7d3deeef5a1385" Feb 16 21:03:54.316586 master-0 kubenswrapper[7926]: E0216 21:03:54.316549 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-config-operator pod=openshift-config-operator-7c6bdb986f-xbd96_openshift-config-operator(59237aa6-6250-4619-8ee5-abae59f04b57)\"" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" Feb 16 21:03:54.962284 
master-0 kubenswrapper[7926]: I0216 21:03:54.962157 7926 patch_prober.go:28] interesting pod/packageserver-78d4b6b677-npmx4 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:03:54.962284 master-0 kubenswrapper[7926]: I0216 21:03:54.962271 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:03:54.962608 master-0 kubenswrapper[7926]: I0216 21:03:54.962172 7926 patch_prober.go:28] interesting pod/packageserver-78d4b6b677-npmx4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:03:54.962608 master-0 kubenswrapper[7926]: I0216 21:03:54.962446 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:03:55.324778 master-0 kubenswrapper[7926]: I0216 21:03:55.324747 7926 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-xbd96_59237aa6-6250-4619-8ee5-abae59f04b57/openshift-config-operator/5.log" Feb 16 21:03:58.094252 master-0 kubenswrapper[7926]: I0216 21:03:58.094183 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:03:58.095134 master-0 kubenswrapper[7926]: I0216 21:03:58.094982 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:03:59.710947 master-0 kubenswrapper[7926]: I0216 21:03:59.710804 7926 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:03:59.711976 master-0 kubenswrapper[7926]: I0216 21:03:59.710975 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:03:59.711976 master-0 kubenswrapper[7926]: I0216 21:03:59.711927 7926 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" 
containerStatusID={"Type":"cri-o","ID":"1456824a8c7336f75a4d4627de845c133b21a80d97dbb454f452a64a66ca524f"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 16 21:03:59.711976 master-0 kubenswrapper[7926]: I0216 21:03:59.711965 7926 scope.go:117] "RemoveContainer" containerID="6912b1edffde7a78bbdc396546e5278ae133791109c955eb557d3109fd4abd06" Feb 16 21:03:59.712179 master-0 kubenswrapper[7926]: I0216 21:03:59.712022 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" containerID="cri-o://1456824a8c7336f75a4d4627de845c133b21a80d97dbb454f452a64a66ca524f" gracePeriod=30 Feb 16 21:04:00.268239 master-0 kubenswrapper[7926]: E0216 21:04:00.268170 7926 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 16 21:04:00.365451 master-0 kubenswrapper[7926]: I0216 21:04:00.365377 7926 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="1456824a8c7336f75a4d4627de845c133b21a80d97dbb454f452a64a66ca524f" exitCode=2 Feb 16 21:04:00.365451 master-0 kubenswrapper[7926]: I0216 21:04:00.365437 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"1456824a8c7336f75a4d4627de845c133b21a80d97dbb454f452a64a66ca524f"} Feb 16 21:04:00.365700 master-0 kubenswrapper[7926]: I0216 21:04:00.365473 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"9fbb3907b0a8154eba20d3a15a9c76d94a18ad3525cb12a7e4937b8969c5cb0d"} Feb 16 21:04:00.365700 master-0 kubenswrapper[7926]: I0216 21:04:00.365495 7926 scope.go:117] "RemoveContainer" containerID="0cc0798e5012d359ad3d59e34898cddf8ad150cc9f48b65f4d686bb956001a13" Feb 16 21:04:01.034126 master-0 kubenswrapper[7926]: E0216 21:04:01.034010 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:01.034126 master-0 kubenswrapper[7926]: E0216 21:04:01.034078 7926 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 21:04:01.380382 master-0 kubenswrapper[7926]: I0216 21:04:01.380192 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"410a4b2cc5dbfa1f91563527799635fb9640404ccc61ef4e12a61d2df9b84a8b"} Feb 16 21:04:03.480606 master-0 kubenswrapper[7926]: E0216 21:04:03.480385 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 16 21:04:03.907436 master-0 kubenswrapper[7926]: I0216 21:04:03.907344 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:04:04.314515 master-0 kubenswrapper[7926]: I0216 21:04:04.314415 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 
16 21:04:04.962291 master-0 kubenswrapper[7926]: I0216 21:04:04.962128 7926 patch_prober.go:28] interesting pod/packageserver-78d4b6b677-npmx4 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:04:04.962291 master-0 kubenswrapper[7926]: I0216 21:04:04.962227 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:04.962823 master-0 kubenswrapper[7926]: I0216 21:04:04.962297 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" Feb 16 21:04:04.962823 master-0 kubenswrapper[7926]: I0216 21:04:04.962418 7926 patch_prober.go:28] interesting pod/packageserver-78d4b6b677-npmx4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:04:04.962823 master-0 kubenswrapper[7926]: I0216 21:04:04.962525 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 
21:04:04.963189 master-0 kubenswrapper[7926]: I0216 21:04:04.963114 7926 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"70c8a58b1f436ad8ca4d491de1284ed96c1d17dc7c8758f9d265ebf6a6d73a38"} pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" containerMessage="Container packageserver failed liveness probe, will be restarted" Feb 16 21:04:04.963304 master-0 kubenswrapper[7926]: I0216 21:04:04.963192 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerName="packageserver" containerID="cri-o://70c8a58b1f436ad8ca4d491de1284ed96c1d17dc7c8758f9d265ebf6a6d73a38" gracePeriod=30 Feb 16 21:04:05.418304 master-0 kubenswrapper[7926]: I0216 21:04:05.418195 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/3.log" Feb 16 21:04:05.419119 master-0 kubenswrapper[7926]: I0216 21:04:05.419047 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/2.log" Feb 16 21:04:05.419230 master-0 kubenswrapper[7926]: I0216 21:04:05.419147 7926 generic.go:334] "Generic (PLEG): container finished" podID="b1ac9776-54c4-46ce-b898-01c8cf35e593" containerID="9ef3c9bb3006ad6560cc5f0bdef3d88ed02120a2aaa21f57602a6395354cc9ab" exitCode=1 Feb 16 21:04:05.419376 master-0 kubenswrapper[7926]: I0216 21:04:05.419288 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" 
event={"ID":"b1ac9776-54c4-46ce-b898-01c8cf35e593","Type":"ContainerDied","Data":"9ef3c9bb3006ad6560cc5f0bdef3d88ed02120a2aaa21f57602a6395354cc9ab"} Feb 16 21:04:05.419456 master-0 kubenswrapper[7926]: I0216 21:04:05.419417 7926 scope.go:117] "RemoveContainer" containerID="065597b5437e593f0a8e56b505329babf0faf4f1f2e62294ff4f61a62c0f9e9c" Feb 16 21:04:05.422693 master-0 kubenswrapper[7926]: I0216 21:04:05.421623 7926 scope.go:117] "RemoveContainer" containerID="9ef3c9bb3006ad6560cc5f0bdef3d88ed02120a2aaa21f57602a6395354cc9ab" Feb 16 21:04:05.422693 master-0 kubenswrapper[7926]: I0216 21:04:05.421840 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-749ccd9c56-wzsnf_4db59450-da78-4879-ada8-ca3fc49fb7a7/route-controller-manager/3.log" Feb 16 21:04:05.422693 master-0 kubenswrapper[7926]: E0216 21:04:05.422354 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-pc6x9_openshift-cluster-storage-operator(b1ac9776-54c4-46ce-b898-01c8cf35e593)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" podUID="b1ac9776-54c4-46ce-b898-01c8cf35e593" Feb 16 21:04:05.422693 master-0 kubenswrapper[7926]: I0216 21:04:05.422519 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-749ccd9c56-wzsnf_4db59450-da78-4879-ada8-ca3fc49fb7a7/route-controller-manager/2.log" Feb 16 21:04:05.422693 master-0 kubenswrapper[7926]: I0216 21:04:05.422579 7926 generic.go:334] "Generic (PLEG): container finished" podID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerID="9b515d5a7a3620fef9281bf66e2c25d3ec90a1c70a0a5cb2470f5419d26f7741" exitCode=255 Feb 16 21:04:05.422693 master-0 kubenswrapper[7926]: I0216 21:04:05.422631 7926 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" event={"ID":"4db59450-da78-4879-ada8-ca3fc49fb7a7","Type":"ContainerDied","Data":"9b515d5a7a3620fef9281bf66e2c25d3ec90a1c70a0a5cb2470f5419d26f7741"} Feb 16 21:04:05.423665 master-0 kubenswrapper[7926]: I0216 21:04:05.423601 7926 scope.go:117] "RemoveContainer" containerID="9b515d5a7a3620fef9281bf66e2c25d3ec90a1c70a0a5cb2470f5419d26f7741" Feb 16 21:04:05.424046 master-0 kubenswrapper[7926]: E0216 21:04:05.423983 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=route-controller-manager pod=route-controller-manager-749ccd9c56-wzsnf_openshift-route-controller-manager(4db59450-da78-4879-ada8-ca3fc49fb7a7)\"" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" Feb 16 21:04:05.465197 master-0 kubenswrapper[7926]: I0216 21:04:05.465143 7926 scope.go:117] "RemoveContainer" containerID="bc0c280e8d6f945eb33fad59cb0d8a4aedc8f5ca975f567efb9b9400f3b825d3" Feb 16 21:04:05.963400 master-0 kubenswrapper[7926]: I0216 21:04:05.963326 7926 patch_prober.go:28] interesting pod/packageserver-78d4b6b677-npmx4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:04:05.964719 master-0 kubenswrapper[7926]: I0216 21:04:05.963413 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while 
waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:05.979909 master-0 kubenswrapper[7926]: I0216 21:04:05.979833 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:04:06.430920 master-0 kubenswrapper[7926]: I0216 21:04:06.430843 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-749ccd9c56-wzsnf_4db59450-da78-4879-ada8-ca3fc49fb7a7/route-controller-manager/3.log" Feb 16 21:04:06.432755 master-0 kubenswrapper[7926]: I0216 21:04:06.432642 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/3.log" Feb 16 21:04:06.710853 master-0 kubenswrapper[7926]: I0216 21:04:06.710639 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:04:06.861571 master-0 kubenswrapper[7926]: I0216 21:04:06.861493 7926 patch_prober.go:28] interesting pod/authentication-operator-755d954778-8gnq5 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Feb 16 21:04:06.861840 master-0 kubenswrapper[7926]: I0216 21:04:06.861573 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Feb 16 21:04:07.093587 master-0 kubenswrapper[7926]: I0216 21:04:07.093521 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" Feb 16 21:04:07.094285 master-0 kubenswrapper[7926]: I0216 21:04:07.094246 7926 scope.go:117] "RemoveContainer" containerID="9b515d5a7a3620fef9281bf66e2c25d3ec90a1c70a0a5cb2470f5419d26f7741" Feb 16 21:04:07.094693 master-0 kubenswrapper[7926]: E0216 21:04:07.094619 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=route-controller-manager pod=route-controller-manager-749ccd9c56-wzsnf_openshift-route-controller-manager(4db59450-da78-4879-ada8-ca3fc49fb7a7)\"" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" Feb 16 21:04:07.315461 master-0 kubenswrapper[7926]: I0216 21:04:07.315337 7926 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:08.744436 master-0 kubenswrapper[7926]: I0216 21:04:08.744233 7926 scope.go:117] "RemoveContainer" containerID="a0f4bf116c475ac57080c53e7f9652de2e9cdcb6db7cccf87a7d3deeef5a1385" Feb 16 21:04:08.745106 master-0 kubenswrapper[7926]: E0216 21:04:08.744759 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-config-operator pod=openshift-config-operator-7c6bdb986f-xbd96_openshift-config-operator(59237aa6-6250-4619-8ee5-abae59f04b57)\"" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" 
podUID="59237aa6-6250-4619-8ee5-abae59f04b57" Feb 16 21:04:09.456982 master-0 kubenswrapper[7926]: I0216 21:04:09.456899 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/3.log" Feb 16 21:04:09.457737 master-0 kubenswrapper[7926]: I0216 21:04:09.457701 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/2.log" Feb 16 21:04:09.458256 master-0 kubenswrapper[7926]: I0216 21:04:09.458172 7926 generic.go:334] "Generic (PLEG): container finished" podID="8b648d9e-a892-4951-b0e2-fed6b16273d4" containerID="41ef5f9abc41605ba4f43759411cc04f3fe23add167a10d83f8a22bd50eade97" exitCode=1 Feb 16 21:04:09.458340 master-0 kubenswrapper[7926]: I0216 21:04:09.458256 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" event={"ID":"8b648d9e-a892-4951-b0e2-fed6b16273d4","Type":"ContainerDied","Data":"41ef5f9abc41605ba4f43759411cc04f3fe23add167a10d83f8a22bd50eade97"} Feb 16 21:04:09.458440 master-0 kubenswrapper[7926]: I0216 21:04:09.458363 7926 scope.go:117] "RemoveContainer" containerID="85337e79dc5b98043d14ed182cca1ddb76f517beb26b734efc337c20a18b289f" Feb 16 21:04:09.459283 master-0 kubenswrapper[7926]: I0216 21:04:09.459181 7926 scope.go:117] "RemoveContainer" containerID="41ef5f9abc41605ba4f43759411cc04f3fe23add167a10d83f8a22bd50eade97" Feb 16 21:04:09.459537 master-0 kubenswrapper[7926]: E0216 21:04:09.459496 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-baremetal-operator 
pod=cluster-baremetal-operator-7bc947fc7d-xwptz_openshift-machine-api(8b648d9e-a892-4951-b0e2-fed6b16273d4)\"" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" podUID="8b648d9e-a892-4951-b0e2-fed6b16273d4" Feb 16 21:04:09.710832 master-0 kubenswrapper[7926]: I0216 21:04:09.710726 7926 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:10.466189 master-0 kubenswrapper[7926]: I0216 21:04:10.466128 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/3.log" Feb 16 21:04:11.663755 master-0 kubenswrapper[7926]: I0216 21:04:11.663630 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 21:04:14.494293 master-0 kubenswrapper[7926]: I0216 21:04:14.494250 7926 generic.go:334] "Generic (PLEG): container finished" podID="a5d4ac48-aed3-46b9-9b2a-d741121e05b4" containerID="22be26c79a1d2adc3db5f6e113ba92cfcf47f9a286ce35fb6273d18f0ea1545e" exitCode=0 Feb 16 21:04:14.494293 master-0 kubenswrapper[7926]: I0216 21:04:14.494298 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" event={"ID":"a5d4ac48-aed3-46b9-9b2a-d741121e05b4","Type":"ContainerDied","Data":"22be26c79a1d2adc3db5f6e113ba92cfcf47f9a286ce35fb6273d18f0ea1545e"} Feb 16 21:04:14.496012 master-0 kubenswrapper[7926]: I0216 21:04:14.494767 7926 scope.go:117] "RemoveContainer" 
containerID="22be26c79a1d2adc3db5f6e113ba92cfcf47f9a286ce35fb6273d18f0ea1545e" Feb 16 21:04:14.962062 master-0 kubenswrapper[7926]: I0216 21:04:14.961984 7926 patch_prober.go:28] interesting pod/packageserver-78d4b6b677-npmx4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:04:14.962327 master-0 kubenswrapper[7926]: I0216 21:04:14.962065 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:15.503188 master-0 kubenswrapper[7926]: I0216 21:04:15.503111 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" event={"ID":"a5d4ac48-aed3-46b9-9b2a-d741121e05b4","Type":"ContainerStarted","Data":"6343280e0df4085e2272811bcce84fa21c423071562a8310728970f3dd76b136"} Feb 16 21:04:16.860472 master-0 kubenswrapper[7926]: I0216 21:04:16.860357 7926 patch_prober.go:28] interesting pod/authentication-operator-755d954778-8gnq5 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Feb 16 21:04:16.860472 master-0 kubenswrapper[7926]: I0216 21:04:16.860450 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1" 
containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Feb 16 21:04:17.314267 master-0 kubenswrapper[7926]: I0216 21:04:17.314148 7926 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:17.314267 master-0 kubenswrapper[7926]: I0216 21:04:17.314291 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:04:17.315222 master-0 kubenswrapper[7926]: I0216 21:04:17.315164 7926 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"410a4b2cc5dbfa1f91563527799635fb9640404ccc61ef4e12a61d2df9b84a8b"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 16 21:04:17.315319 master-0 kubenswrapper[7926]: I0216 21:04:17.315231 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" containerID="cri-o://410a4b2cc5dbfa1f91563527799635fb9640404ccc61ef4e12a61d2df9b84a8b" gracePeriod=30 Feb 16 21:04:17.521873 master-0 kubenswrapper[7926]: I0216 21:04:17.521772 7926 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="410a4b2cc5dbfa1f91563527799635fb9640404ccc61ef4e12a61d2df9b84a8b" exitCode=255 Feb 16 21:04:17.521873 master-0 kubenswrapper[7926]: I0216 
21:04:17.521854 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"410a4b2cc5dbfa1f91563527799635fb9640404ccc61ef4e12a61d2df9b84a8b"} Feb 16 21:04:17.522190 master-0 kubenswrapper[7926]: I0216 21:04:17.521941 7926 scope.go:117] "RemoveContainer" containerID="6912b1edffde7a78bbdc396546e5278ae133791109c955eb557d3109fd4abd06" Feb 16 21:04:18.536766 master-0 kubenswrapper[7926]: I0216 21:04:18.536717 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"004bfc046616ade5acce3345f914946a2b1075ac66e815294a04a1ccd9e0b9a2"} Feb 16 21:04:18.743437 master-0 kubenswrapper[7926]: I0216 21:04:18.743377 7926 scope.go:117] "RemoveContainer" containerID="9ef3c9bb3006ad6560cc5f0bdef3d88ed02120a2aaa21f57602a6395354cc9ab" Feb 16 21:04:18.743841 master-0 kubenswrapper[7926]: E0216 21:04:18.743785 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-pc6x9_openshift-cluster-storage-operator(b1ac9776-54c4-46ce-b898-01c8cf35e593)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" podUID="b1ac9776-54c4-46ce-b898-01c8cf35e593" Feb 16 21:04:18.744092 master-0 kubenswrapper[7926]: I0216 21:04:18.744033 7926 scope.go:117] "RemoveContainer" containerID="9b515d5a7a3620fef9281bf66e2c25d3ec90a1c70a0a5cb2470f5419d26f7741" Feb 16 21:04:18.744483 master-0 kubenswrapper[7926]: E0216 21:04:18.744438 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed 
container=route-controller-manager pod=route-controller-manager-749ccd9c56-wzsnf_openshift-route-controller-manager(4db59450-da78-4879-ada8-ca3fc49fb7a7)\"" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" Feb 16 21:04:19.008733 master-0 kubenswrapper[7926]: E0216 21:04:19.008516 7926 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{authentication-operator-755d954778-8gnq5.1894d5b059b55b5c openshift-authentication-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication-operator,Name:authentication-operator-755d954778-8gnq5,UID:27c20f63-9bfb-4703-94d5-0c65475e08d1,APIVersion:v1,ResourceVersion:3727,FieldPath:spec.containers{authentication-operator},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:58:26.859465564 +0000 UTC m=+78.494365864,LastTimestamp:2026-02-16 20:58:26.859465564 +0000 UTC m=+78.494365864,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 21:04:19.710805 master-0 kubenswrapper[7926]: I0216 21:04:19.710704 7926 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:20.482085 master-0 kubenswrapper[7926]: E0216 21:04:20.482024 7926 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 16 21:04:20.738270 master-0 kubenswrapper[7926]: I0216 21:04:20.738135 7926 scope.go:117] "RemoveContainer" containerID="41ef5f9abc41605ba4f43759411cc04f3fe23add167a10d83f8a22bd50eade97" Feb 16 21:04:20.739284 master-0 kubenswrapper[7926]: I0216 21:04:20.738483 7926 scope.go:117] "RemoveContainer" containerID="a0f4bf116c475ac57080c53e7f9652de2e9cdcb6db7cccf87a7d3deeef5a1385" Feb 16 21:04:20.739284 master-0 kubenswrapper[7926]: E0216 21:04:20.738505 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-7bc947fc7d-xwptz_openshift-machine-api(8b648d9e-a892-4951-b0e2-fed6b16273d4)\"" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" podUID="8b648d9e-a892-4951-b0e2-fed6b16273d4" Feb 16 21:04:20.739284 master-0 kubenswrapper[7926]: E0216 21:04:20.738886 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-config-operator pod=openshift-config-operator-7c6bdb986f-xbd96_openshift-config-operator(59237aa6-6250-4619-8ee5-abae59f04b57)\"" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" Feb 16 21:04:21.093934 master-0 kubenswrapper[7926]: E0216 21:04:21.093626 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:04:11Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:04:11Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:04:11Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:04:11Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:3e2f869b1c4f98a628b2e54c1516a0d0c09c760c91e0e1a940cb76149217661b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:97930d07a108f20287bd5ceb046a5ab125604b2e3564077db9f7d7c077cc5852\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1701129928},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\"],\\\"sizeBytes\\\":1631983282},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:0b4dc203ac00318362470f07842ed97dc1c724d32fa07c1613f15fcf4bf54ec8\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cc6c845176bbdca205e7c9628ea993ed70da3b2516bac35d68d9f52059fad674\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1234421961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181\\\"],\\\"sizeBytes\\\":1232696860},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:06dcb25b4ae74ef159663cc2318f84e4665c7889b38ed62940259e5edd2b576f\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:a81101fb2bf3c75acf3e62bf09b19b67bccbde0faf09bd379a491f5eadb8afc1\\\",\\\"registry.redhat.io/redhat/community-opera
tor-index:v4.18\\\"],\\\"sizeBytes\\\":1213098166},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:28df36269fc553eb1adba5566d6dfc258a1a74063c4cfe8b5bdd3f202591cf56\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:7fa59a55753e6c646b3b56a1a7080a5d70767fb964f1857c411fdf4e05ad4c71\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1201887930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42\\\"],\\\"sizeBytes\\\":987280724},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\"],\\\"sizeBytes\\\":938665460},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc\\\"],\\\"sizeBytes\\\":913084961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1faa2081a881db884a86bdfe33fcb6a6af1d14c3e9ee5c44dfe4b09045684e13\\\"],\\\"sizeBytes\\\":875178413},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072\\\"],\\\"sizeBytes\\\":870929735},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c\\\"],\\\"sizeBytes\\\":857432360},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa28b66298c8b34f2c7b357b012e663e3954cfc7c85aa1e44651a79aeaf8b2a9\\\"],\\\"sizeBytes\\\":857023173},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07093043bca0089b3c56d9e5331e68f549541e5661e2a39a260aa534dc9528bd\\\"],\\\"sizeBytes\\\":767663184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e30865ea7d55b76cb925c7d26c650f0bc70fd9a02d7d59d0fe1a3024426229ad\\\"],\\\"sizeBytes\\\":682673937},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7\\\"],\\\"sizeBytes\\\":677894171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55\\\"],\\\"sizeBytes\\\":672642165},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e\\\"],\\\"sizeBytes\\\":616473928},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95\\\"],\\\"sizeBytes\\\":584205881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954\\\"],\\\"sizeBytes\\\":576983707},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee\\\"],\\\"sizeBytes\\\":553036394},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471\\\"],\\\"sizeBytes\\\":552251951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861\\\"],\\\"sizeBytes\\\":543577525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\\\"],\\\"sizeBytes\\\":524042902},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d\\\"],\\\"sizeBytes\\\":523760203},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399\\\"],\\\"sizeBytes\\\":513211213},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aec
fa977554f5ae9583cc\\\"],\\\"sizeBytes\\\":512819769},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\"],\\\"sizeBytes\\\":509806416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a90d19460fbc705172df7759a3da394930623c6b6974620b79ffa07bab53c51f\\\"],\\\"sizeBytes\\\":508404525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963\\\"],\\\"sizeBytes\\\":508050651},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5\\\"],\\\"sizeBytes\\\":507103881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3\\\"],\\\"sizeBytes\\\":506056636},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1\\\"],\\\"sizeBytes\\\":505990615},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39\\\"],\\\"sizeBytes\\\":503717987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e\\\"],\\\"sizeBytes\\\":503374574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88\\\"],\\\"sizeBytes\\\":502798848},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1\\\"],\\\"sizeBytes\\\":501305896},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a\\\"],\\\"sizeBytes\\\":501222351},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192\\\"],\\\"sizeBytes\\\":500175306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\\\"],\\\"sizeBytes\\\":500068323},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6ab8803bac3ebada13e90d9dd6208301b981488277cdeb847c25ff8002f5a30\\\"],\\\"sizeBytes\\\":499489508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144\\\"],\\\"sizeBytes\\\":499445182},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b\\\"],\\\"sizeBytes\\\":490819380},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b\\\"],\\\"sizeBytes\\\":489891070},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38\\\"],\\\"sizeBytes\\\":481921522},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041\\\"],\\\"sizeBytes\\\":479280723},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ea13b0cbfe9be0d3d7ea80d50e512af6a453921a553c7c79b566530142b611b\\\"],\\\"sizeBytes\\\":479006001},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b8fb1f11df51c131f5be8ddfc1b1c95ac13481f58d2dcd5a465a4a8341c0f49\\\"],\\\"sizeBytes\\\":465648392},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb\\\"],\\\"sizeBytes\\\":465507019},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1c8b9784a60860a08bd47935f0767b7b7f8f36c5c0adb7623a31b82c01d4c09\\\
"],\\\"sizeBytes\\\":463090242}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:23.907839 master-0 kubenswrapper[7926]: I0216 21:04:23.907777 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:04:24.314562 master-0 kubenswrapper[7926]: I0216 21:04:24.314518 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:04:24.962112 master-0 kubenswrapper[7926]: I0216 21:04:24.962054 7926 patch_prober.go:28] interesting pod/packageserver-78d4b6b677-npmx4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:04:24.962710 master-0 kubenswrapper[7926]: I0216 21:04:24.962123 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:26.859891 master-0 kubenswrapper[7926]: I0216 21:04:26.859821 7926 patch_prober.go:28] interesting 
pod/authentication-operator-755d954778-8gnq5 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" start-of-body= Feb 16 21:04:26.860882 master-0 kubenswrapper[7926]: I0216 21:04:26.859906 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": dial tcp 10.128.0.15:8443: connect: connection refused" Feb 16 21:04:26.860882 master-0 kubenswrapper[7926]: I0216 21:04:26.859968 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 21:04:26.860882 master-0 kubenswrapper[7926]: I0216 21:04:26.860755 7926 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"bae2526e4dde061e6c7a8ef722773dcd93504e4ed1b17f4a15386f5a7579875d"} pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Feb 16 21:04:26.860882 master-0 kubenswrapper[7926]: I0216 21:04:26.860828 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1" containerName="authentication-operator" containerID="cri-o://bae2526e4dde061e6c7a8ef722773dcd93504e4ed1b17f4a15386f5a7579875d" gracePeriod=30 Feb 16 21:04:27.315333 master-0 kubenswrapper[7926]: I0216 21:04:27.315171 7926 prober.go:107] "Probe failed" probeType="Startup" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:27.595908 master-0 kubenswrapper[7926]: I0216 21:04:27.595749 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-8gnq5_27c20f63-9bfb-4703-94d5-0c65475e08d1/authentication-operator/4.log" Feb 16 21:04:27.596338 master-0 kubenswrapper[7926]: I0216 21:04:27.596290 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-8gnq5_27c20f63-9bfb-4703-94d5-0c65475e08d1/authentication-operator/3.log" Feb 16 21:04:27.596441 master-0 kubenswrapper[7926]: I0216 21:04:27.596350 7926 generic.go:334] "Generic (PLEG): container finished" podID="27c20f63-9bfb-4703-94d5-0c65475e08d1" containerID="bae2526e4dde061e6c7a8ef722773dcd93504e4ed1b17f4a15386f5a7579875d" exitCode=255 Feb 16 21:04:27.596441 master-0 kubenswrapper[7926]: I0216 21:04:27.596386 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" event={"ID":"27c20f63-9bfb-4703-94d5-0c65475e08d1","Type":"ContainerDied","Data":"bae2526e4dde061e6c7a8ef722773dcd93504e4ed1b17f4a15386f5a7579875d"} Feb 16 21:04:27.596441 master-0 kubenswrapper[7926]: I0216 21:04:27.596419 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" event={"ID":"27c20f63-9bfb-4703-94d5-0c65475e08d1","Type":"ContainerStarted","Data":"1280026270fafbe7904a661cf88a10d4f267040cb7cc3fb07ffaa22fce0b7d32"} Feb 16 21:04:27.596441 master-0 kubenswrapper[7926]: I0216 21:04:27.596437 7926 scope.go:117] "RemoveContainer" 
containerID="42d2b8ae4604c72ca108f769893f6589ee95474077ff8dd9cf87399459c2ec53" Feb 16 21:04:28.602529 master-0 kubenswrapper[7926]: I0216 21:04:28.602443 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-8gnq5_27c20f63-9bfb-4703-94d5-0c65475e08d1/authentication-operator/4.log" Feb 16 21:04:29.711458 master-0 kubenswrapper[7926]: I0216 21:04:29.711317 7926 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:29.712065 master-0 kubenswrapper[7926]: I0216 21:04:29.711521 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:04:29.712769 master-0 kubenswrapper[7926]: I0216 21:04:29.712717 7926 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"9fbb3907b0a8154eba20d3a15a9c76d94a18ad3525cb12a7e4937b8969c5cb0d"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 16 21:04:29.712887 master-0 kubenswrapper[7926]: I0216 21:04:29.712840 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" containerID="cri-o://9fbb3907b0a8154eba20d3a15a9c76d94a18ad3525cb12a7e4937b8969c5cb0d" gracePeriod=30 Feb 16 21:04:29.738044 master-0 kubenswrapper[7926]: I0216 21:04:29.737998 7926 scope.go:117] "RemoveContainer" 
containerID="9ef3c9bb3006ad6560cc5f0bdef3d88ed02120a2aaa21f57602a6395354cc9ab" Feb 16 21:04:29.738231 master-0 kubenswrapper[7926]: E0216 21:04:29.738203 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-pc6x9_openshift-cluster-storage-operator(b1ac9776-54c4-46ce-b898-01c8cf35e593)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" podUID="b1ac9776-54c4-46ce-b898-01c8cf35e593" Feb 16 21:04:29.831965 master-0 kubenswrapper[7926]: E0216 21:04:29.831916 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:04:30.619785 master-0 kubenswrapper[7926]: I0216 21:04:30.619681 7926 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="9fbb3907b0a8154eba20d3a15a9c76d94a18ad3525cb12a7e4937b8969c5cb0d" exitCode=2 Feb 16 21:04:30.619785 master-0 kubenswrapper[7926]: I0216 21:04:30.619745 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"9fbb3907b0a8154eba20d3a15a9c76d94a18ad3525cb12a7e4937b8969c5cb0d"} Feb 16 21:04:30.620318 master-0 kubenswrapper[7926]: I0216 21:04:30.619862 7926 scope.go:117] "RemoveContainer" containerID="1456824a8c7336f75a4d4627de845c133b21a80d97dbb454f452a64a66ca524f" Feb 16 21:04:30.620423 master-0 kubenswrapper[7926]: I0216 21:04:30.620365 7926 scope.go:117] "RemoveContainer" 
containerID="9fbb3907b0a8154eba20d3a15a9c76d94a18ad3525cb12a7e4937b8969c5cb0d" Feb 16 21:04:30.620704 master-0 kubenswrapper[7926]: E0216 21:04:30.620638 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:04:30.739172 master-0 kubenswrapper[7926]: I0216 21:04:30.739109 7926 scope.go:117] "RemoveContainer" containerID="9b515d5a7a3620fef9281bf66e2c25d3ec90a1c70a0a5cb2470f5419d26f7741" Feb 16 21:04:30.739728 master-0 kubenswrapper[7926]: E0216 21:04:30.739389 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=route-controller-manager pod=route-controller-manager-749ccd9c56-wzsnf_openshift-route-controller-manager(4db59450-da78-4879-ada8-ca3fc49fb7a7)\"" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" Feb 16 21:04:31.094620 master-0 kubenswrapper[7926]: E0216 21:04:31.094527 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:31.739030 master-0 kubenswrapper[7926]: I0216 21:04:31.738954 7926 scope.go:117] "RemoveContainer" containerID="41ef5f9abc41605ba4f43759411cc04f3fe23add167a10d83f8a22bd50eade97" Feb 16 21:04:31.739519 master-0 kubenswrapper[7926]: E0216 21:04:31.739446 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-7bc947fc7d-xwptz_openshift-machine-api(8b648d9e-a892-4951-b0e2-fed6b16273d4)\"" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" podUID="8b648d9e-a892-4951-b0e2-fed6b16273d4" Feb 16 21:04:32.738374 master-0 kubenswrapper[7926]: I0216 21:04:32.738315 7926 scope.go:117] "RemoveContainer" containerID="a0f4bf116c475ac57080c53e7f9652de2e9cdcb6db7cccf87a7d3deeef5a1385" Feb 16 21:04:32.738979 master-0 kubenswrapper[7926]: E0216 21:04:32.738701 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-config-operator pod=openshift-config-operator-7c6bdb986f-xbd96_openshift-config-operator(59237aa6-6250-4619-8ee5-abae59f04b57)\"" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" Feb 16 21:04:34.884583 master-0 kubenswrapper[7926]: I0216 21:04:34.884524 7926 patch_prober.go:28] interesting pod/packageserver-78d4b6b677-npmx4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.64:5443/healthz\": read tcp 10.128.0.2:53868->10.128.0.64:5443: read: connection reset by peer" start-of-body= Feb 16 21:04:34.885038 master-0 kubenswrapper[7926]: I0216 21:04:34.884607 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.64:5443/healthz\": read tcp 10.128.0.2:53868->10.128.0.64:5443: read: connection reset by peer" Feb 16 21:04:35.677834 master-0 kubenswrapper[7926]: I0216 21:04:35.677740 7926 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-78d4b6b677-npmx4_319dc882-e1f5-40f9-99f4-2bae028337e5/packageserver/0.log" Feb 16 21:04:35.677834 master-0 kubenswrapper[7926]: I0216 21:04:35.677835 7926 generic.go:334] "Generic (PLEG): container finished" podID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerID="70c8a58b1f436ad8ca4d491de1284ed96c1d17dc7c8758f9d265ebf6a6d73a38" exitCode=2 Feb 16 21:04:35.678332 master-0 kubenswrapper[7926]: I0216 21:04:35.677906 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" event={"ID":"319dc882-e1f5-40f9-99f4-2bae028337e5","Type":"ContainerDied","Data":"70c8a58b1f436ad8ca4d491de1284ed96c1d17dc7c8758f9d265ebf6a6d73a38"} Feb 16 21:04:35.678332 master-0 kubenswrapper[7926]: I0216 21:04:35.678020 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" event={"ID":"319dc882-e1f5-40f9-99f4-2bae028337e5","Type":"ContainerStarted","Data":"203b091a662b4912838a798e07794a8caa755508028a6b4fa5f1ef8b83de89af"} Feb 16 21:04:35.678581 master-0 kubenswrapper[7926]: I0216 21:04:35.678526 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" Feb 16 21:04:36.678582 master-0 kubenswrapper[7926]: I0216 21:04:36.678446 7926 patch_prober.go:28] interesting pod/packageserver-78d4b6b677-npmx4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:04:36.679798 master-0 kubenswrapper[7926]: I0216 21:04:36.678594 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" 
podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:36.988778 master-0 kubenswrapper[7926]: I0216 21:04:36.988543 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:04:36.989181 master-0 kubenswrapper[7926]: I0216 21:04:36.989139 7926 scope.go:117] "RemoveContainer" containerID="9fbb3907b0a8154eba20d3a15a9c76d94a18ad3525cb12a7e4937b8969c5cb0d" Feb 16 21:04:36.989428 master-0 kubenswrapper[7926]: E0216 21:04:36.989378 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:04:37.314933 master-0 kubenswrapper[7926]: I0216 21:04:37.314773 7926 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:37.695795 master-0 kubenswrapper[7926]: I0216 21:04:37.695558 7926 patch_prober.go:28] interesting pod/packageserver-78d4b6b677-npmx4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" start-of-body= Feb 16 21:04:37.695795 master-0 kubenswrapper[7926]: I0216 21:04:37.695712 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:41.095458 master-0 kubenswrapper[7926]: E0216 21:04:41.095403 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:42.738200 master-0 kubenswrapper[7926]: I0216 21:04:42.738116 7926 scope.go:117] "RemoveContainer" containerID="9b515d5a7a3620fef9281bf66e2c25d3ec90a1c70a0a5cb2470f5419d26f7741" Feb 16 21:04:42.738866 master-0 kubenswrapper[7926]: E0216 21:04:42.738364 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=route-controller-manager pod=route-controller-manager-749ccd9c56-wzsnf_openshift-route-controller-manager(4db59450-da78-4879-ada8-ca3fc49fb7a7)\"" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" Feb 16 21:04:43.738025 master-0 kubenswrapper[7926]: I0216 21:04:43.737975 7926 scope.go:117] "RemoveContainer" containerID="9ef3c9bb3006ad6560cc5f0bdef3d88ed02120a2aaa21f57602a6395354cc9ab" Feb 16 21:04:43.738233 master-0 kubenswrapper[7926]: E0216 21:04:43.738199 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting 
failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-pc6x9_openshift-cluster-storage-operator(b1ac9776-54c4-46ce-b898-01c8cf35e593)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" podUID="b1ac9776-54c4-46ce-b898-01c8cf35e593" Feb 16 21:04:43.738546 master-0 kubenswrapper[7926]: I0216 21:04:43.738427 7926 scope.go:117] "RemoveContainer" containerID="a0f4bf116c475ac57080c53e7f9652de2e9cdcb6db7cccf87a7d3deeef5a1385" Feb 16 21:04:44.752792 master-0 kubenswrapper[7926]: I0216 21:04:44.752687 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-xbd96_59237aa6-6250-4619-8ee5-abae59f04b57/openshift-config-operator/5.log" Feb 16 21:04:44.753869 master-0 kubenswrapper[7926]: I0216 21:04:44.753186 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" event={"ID":"59237aa6-6250-4619-8ee5-abae59f04b57","Type":"ContainerStarted","Data":"0715c2c6bc16d3adc1361563ad51b4de11f77937d1f51eb61f3cd34b96856d0c"} Feb 16 21:04:44.753869 master-0 kubenswrapper[7926]: I0216 21:04:44.753722 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 21:04:44.962249 master-0 kubenswrapper[7926]: I0216 21:04:44.962131 7926 patch_prober.go:28] interesting pod/packageserver-78d4b6b677-npmx4 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:04:44.962537 master-0 kubenswrapper[7926]: I0216 21:04:44.962174 7926 patch_prober.go:28] interesting pod/packageserver-78d4b6b677-npmx4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe 
status=failure output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:04:44.962537 master-0 kubenswrapper[7926]: I0216 21:04:44.962382 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:44.962537 master-0 kubenswrapper[7926]: I0216 21:04:44.962332 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:45.738788 master-0 kubenswrapper[7926]: I0216 21:04:45.738735 7926 scope.go:117] "RemoveContainer" containerID="41ef5f9abc41605ba4f43759411cc04f3fe23add167a10d83f8a22bd50eade97" Feb 16 21:04:45.739074 master-0 kubenswrapper[7926]: E0216 21:04:45.738981 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-7bc947fc7d-xwptz_openshift-machine-api(8b648d9e-a892-4951-b0e2-fed6b16273d4)\"" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" podUID="8b648d9e-a892-4951-b0e2-fed6b16273d4" Feb 16 21:04:47.315397 master-0 kubenswrapper[7926]: I0216 21:04:47.315279 7926 prober.go:107] "Probe failed" probeType="Startup" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:47.315397 master-0 kubenswrapper[7926]: I0216 21:04:47.315405 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:04:47.316423 master-0 kubenswrapper[7926]: I0216 21:04:47.316091 7926 scope.go:117] "RemoveContainer" containerID="9fbb3907b0a8154eba20d3a15a9c76d94a18ad3525cb12a7e4937b8969c5cb0d" Feb 16 21:04:47.316423 master-0 kubenswrapper[7926]: I0216 21:04:47.316183 7926 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"004bfc046616ade5acce3345f914946a2b1075ac66e815294a04a1ccd9e0b9a2"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 16 21:04:47.316423 master-0 kubenswrapper[7926]: I0216 21:04:47.316237 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" containerID="cri-o://004bfc046616ade5acce3345f914946a2b1075ac66e815294a04a1ccd9e0b9a2" gracePeriod=30 Feb 16 21:04:48.165001 master-0 kubenswrapper[7926]: I0216 21:04:48.164860 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= 
Feb 16 21:04:48.165296 master-0 kubenswrapper[7926]: I0216 21:04:48.165054 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:49.098456 master-0 kubenswrapper[7926]: I0216 21:04:49.098336 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:04:49.099211 master-0 kubenswrapper[7926]: I0216 21:04:49.098481 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:51.096516 master-0 kubenswrapper[7926]: E0216 21:04:51.096328 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded" Feb 16 21:04:51.164083 master-0 kubenswrapper[7926]: I0216 21:04:51.163922 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:04:51.164083 master-0 kubenswrapper[7926]: I0216 21:04:51.164066 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:52.098249 master-0 kubenswrapper[7926]: I0216 21:04:52.098179 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:04:52.098755 master-0 kubenswrapper[7926]: I0216 21:04:52.098248 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:54.163277 master-0 kubenswrapper[7926]: I0216 21:04:54.163150 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:04:54.163934 master-0 kubenswrapper[7926]: I0216 21:04:54.163325 
7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:54.163934 master-0 kubenswrapper[7926]: I0216 21:04:54.163391 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 21:04:54.164146 master-0 kubenswrapper[7926]: I0216 21:04:54.164106 7926 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"0715c2c6bc16d3adc1361563ad51b4de11f77937d1f51eb61f3cd34b96856d0c"} pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Feb 16 21:04:54.164209 master-0 kubenswrapper[7926]: I0216 21:04:54.164154 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" containerID="cri-o://0715c2c6bc16d3adc1361563ad51b4de11f77937d1f51eb61f3cd34b96856d0c" gracePeriod=30 Feb 16 21:04:54.738081 master-0 kubenswrapper[7926]: I0216 21:04:54.738045 7926 scope.go:117] "RemoveContainer" containerID="9ef3c9bb3006ad6560cc5f0bdef3d88ed02120a2aaa21f57602a6395354cc9ab" Feb 16 21:04:54.962188 master-0 kubenswrapper[7926]: I0216 21:04:54.962117 7926 patch_prober.go:28] interesting pod/packageserver-78d4b6b677-npmx4 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.128.0.64:5443/healthz\": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:04:54.962188 master-0 kubenswrapper[7926]: I0216 21:04:54.962190 7926 patch_prober.go:28] interesting pod/packageserver-78d4b6b677-npmx4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:04:54.962413 master-0 kubenswrapper[7926]: I0216 21:04:54.962189 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:54.962413 master-0 kubenswrapper[7926]: I0216 21:04:54.962221 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:55.098117 master-0 kubenswrapper[7926]: I0216 21:04:55.098020 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:04:55.098117 master-0 kubenswrapper[7926]: I0216 21:04:55.098109 7926 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:55.829760 master-0 kubenswrapper[7926]: I0216 21:04:55.829623 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/3.log" Feb 16 21:04:55.829760 master-0 kubenswrapper[7926]: I0216 21:04:55.829772 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" event={"ID":"b1ac9776-54c4-46ce-b898-01c8cf35e593","Type":"ContainerStarted","Data":"67a3e9d9b5f56d4ee0c0f00f8a41a1f28f49d33cce601ce8e280273be299fa4f"} Feb 16 21:04:56.098710 master-0 kubenswrapper[7926]: I0216 21:04:56.098521 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:04:56.098710 master-0 kubenswrapper[7926]: I0216 21:04:56.098602 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:56.534005 master-0 kubenswrapper[7926]: I0216 21:04:56.533891 7926 
patch_prober.go:28] interesting pod/etcd-operator-67bf55ccdd-8cllz container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.10:8443/healthz\": net/http: TLS handshake timeout" start-of-body= Feb 16 21:04:56.534509 master-0 kubenswrapper[7926]: I0216 21:04:56.534005 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" podUID="70d217a9-86b7-47b9-a7da-9ac920b9c7c2" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.10:8443/healthz\": net/http: TLS handshake timeout" Feb 16 21:04:56.534509 master-0 kubenswrapper[7926]: I0216 21:04:56.534062 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 21:04:56.534814 master-0 kubenswrapper[7926]: I0216 21:04:56.534753 7926 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="etcd-operator" containerStatusID={"Type":"cri-o","ID":"316bcd2b73e15fab60d8618d92eb77f101f2f53e423adb64b0f374a1f7fcda3a"} pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" containerMessage="Container etcd-operator failed liveness probe, will be restarted" Feb 16 21:04:56.534920 master-0 kubenswrapper[7926]: I0216 21:04:56.534819 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" podUID="70d217a9-86b7-47b9-a7da-9ac920b9c7c2" containerName="etcd-operator" containerID="cri-o://316bcd2b73e15fab60d8618d92eb77f101f2f53e423adb64b0f374a1f7fcda3a" gracePeriod=30 Feb 16 21:04:56.739032 master-0 kubenswrapper[7926]: I0216 21:04:56.738940 7926 scope.go:117] "RemoveContainer" containerID="9b515d5a7a3620fef9281bf66e2c25d3ec90a1c70a0a5cb2470f5419d26f7741" Feb 16 21:04:56.739630 master-0 kubenswrapper[7926]: I0216 21:04:56.739556 7926 scope.go:117] "RemoveContainer" 
containerID="41ef5f9abc41605ba4f43759411cc04f3fe23add167a10d83f8a22bd50eade97" Feb 16 21:04:57.844377 master-0 kubenswrapper[7926]: I0216 21:04:57.844274 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-749ccd9c56-wzsnf_4db59450-da78-4879-ada8-ca3fc49fb7a7/route-controller-manager/3.log" Feb 16 21:04:57.845361 master-0 kubenswrapper[7926]: I0216 21:04:57.844448 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" event={"ID":"4db59450-da78-4879-ada8-ca3fc49fb7a7","Type":"ContainerStarted","Data":"8fdaced2e29680218985b0af6c01e1d1666c4413685a11533b854af5a3b4a954"} Feb 16 21:04:57.845361 master-0 kubenswrapper[7926]: I0216 21:04:57.844917 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" Feb 16 21:04:57.848892 master-0 kubenswrapper[7926]: I0216 21:04:57.848841 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/3.log" Feb 16 21:04:57.849533 master-0 kubenswrapper[7926]: I0216 21:04:57.849455 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" event={"ID":"8b648d9e-a892-4951-b0e2-fed6b16273d4","Type":"ContainerStarted","Data":"6774523bbae3d7abd16dc2e39c9e808fff70ea7aaf2e57c4f294e7c707bbf785"} Feb 16 21:04:58.099089 master-0 kubenswrapper[7926]: I0216 21:04:58.098947 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" start-of-body= Feb 16 21:04:58.099089 master-0 kubenswrapper[7926]: I0216 21:04:58.099061 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:58.845904 master-0 kubenswrapper[7926]: I0216 21:04:58.845810 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:04:58.846571 master-0 kubenswrapper[7926]: I0216 21:04:58.845944 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:04:59.856138 master-0 kubenswrapper[7926]: I0216 21:04:59.855985 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:04:59.856908 master-0 kubenswrapper[7926]: I0216 21:04:59.856189 7926 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:05:01.099618 master-0 kubenswrapper[7926]: I0216 21:05:01.099441 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:05:01.099618 master-0 kubenswrapper[7926]: I0216 21:05:01.099572 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:05:04.099744 master-0 kubenswrapper[7926]: I0216 21:05:04.099628 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:05:04.101048 master-0 kubenswrapper[7926]: I0216 21:05:04.100968 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" 
probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:05:04.962383 master-0 kubenswrapper[7926]: I0216 21:05:04.962273 7926 patch_prober.go:28] interesting pod/packageserver-78d4b6b677-npmx4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:05:04.962740 master-0 kubenswrapper[7926]: I0216 21:05:04.962349 7926 patch_prober.go:28] interesting pod/packageserver-78d4b6b677-npmx4 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.128.0.64:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:05:04.962740 master-0 kubenswrapper[7926]: I0216 21:05:04.962398 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:05:04.962740 master-0 kubenswrapper[7926]: I0216 21:05:04.962530 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.64:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 21:05:04.962740 master-0 kubenswrapper[7926]: I0216 21:05:04.962612 7926 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" Feb 16 21:05:04.963472 master-0 kubenswrapper[7926]: I0216 21:05:04.963422 7926 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"203b091a662b4912838a798e07794a8caa755508028a6b4fa5f1ef8b83de89af"} pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" containerMessage="Container packageserver failed liveness probe, will be restarted" Feb 16 21:05:04.963557 master-0 kubenswrapper[7926]: I0216 21:05:04.963484 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerName="packageserver" containerID="cri-o://203b091a662b4912838a798e07794a8caa755508028a6b4fa5f1ef8b83de89af" gracePeriod=30 Feb 16 21:05:05.964115 master-0 kubenswrapper[7926]: I0216 21:05:05.963877 7926 patch_prober.go:28] interesting pod/packageserver-78d4b6b677-npmx4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:05:05.964115 master-0 kubenswrapper[7926]: I0216 21:05:05.963979 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:05:07.099587 master-0 kubenswrapper[7926]: I0216 21:05:07.099360 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 
container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:05:07.099587 master-0 kubenswrapper[7926]: I0216 21:05:07.099486 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:05:07.860460 master-0 kubenswrapper[7926]: I0216 21:05:07.860346 7926 patch_prober.go:28] interesting pod/authentication-operator-755d954778-8gnq5 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:05:07.860785 master-0 kubenswrapper[7926]: I0216 21:05:07.860458 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:05:08.093773 master-0 kubenswrapper[7926]: I0216 21:05:08.093675 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:05:08.093773 master-0 kubenswrapper[7926]: I0216 21:05:08.093771 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:05:10.098271 master-0 kubenswrapper[7926]: I0216 21:05:10.098200 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:05:10.099177 master-0 kubenswrapper[7926]: I0216 21:05:10.098288 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:05:11.764949 master-0 kubenswrapper[7926]: I0216 21:05:11.764867 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 16 21:05:13.098896 master-0 kubenswrapper[7926]: I0216 21:05:13.098771 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while 
waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:05:13.098896 master-0 kubenswrapper[7926]: I0216 21:05:13.098906 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:05:14.961861 master-0 kubenswrapper[7926]: I0216 21:05:14.961775 7926 patch_prober.go:28] interesting pod/packageserver-78d4b6b677-npmx4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:05:14.962503 master-0 kubenswrapper[7926]: I0216 21:05:14.961872 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:05:16.098742 master-0 kubenswrapper[7926]: I0216 21:05:16.098615 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:05:16.098742 master-0 kubenswrapper[7926]: I0216 21:05:16.098724 7926 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:05:17.859209 master-0 kubenswrapper[7926]: I0216 21:05:17.859116 7926 patch_prober.go:28] interesting pod/authentication-operator-755d954778-8gnq5 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:05:17.860136 master-0 kubenswrapper[7926]: I0216 21:05:17.859237 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:05:18.002996 master-0 kubenswrapper[7926]: I0216 21:05:18.002917 7926 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="004bfc046616ade5acce3345f914946a2b1075ac66e815294a04a1ccd9e0b9a2" exitCode=137 Feb 16 21:05:18.002996 master-0 kubenswrapper[7926]: I0216 21:05:18.002983 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"004bfc046616ade5acce3345f914946a2b1075ac66e815294a04a1ccd9e0b9a2"} Feb 16 21:05:18.003309 master-0 kubenswrapper[7926]: I0216 21:05:18.003039 7926 scope.go:117] "RemoveContainer" 
containerID="410a4b2cc5dbfa1f91563527799635fb9640404ccc61ef4e12a61d2df9b84a8b" Feb 16 21:05:18.092450 master-0 kubenswrapper[7926]: E0216 21:05:18.092394 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:05:18.094600 master-0 kubenswrapper[7926]: I0216 21:05:18.094520 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:05:18.094660 master-0 kubenswrapper[7926]: I0216 21:05:18.094586 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:05:18.794277 master-0 kubenswrapper[7926]: I0216 21:05:18.794134 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=7.794109681 podStartE2EDuration="7.794109681s" podCreationTimestamp="2026-02-16 21:05:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:05:18.791501187 +0000 UTC m=+490.426401527" 
watchObservedRunningTime="2026-02-16 21:05:18.794109681 +0000 UTC m=+490.429010011" Feb 16 21:05:19.013961 master-0 kubenswrapper[7926]: I0216 21:05:19.013906 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"6dfa6b8d2b84acd49a7559619cbb2034fe2294937bd8d4e0f86679d02bd2078a"} Feb 16 21:05:19.014507 master-0 kubenswrapper[7926]: I0216 21:05:19.014491 7926 scope.go:117] "RemoveContainer" containerID="9fbb3907b0a8154eba20d3a15a9c76d94a18ad3525cb12a7e4937b8969c5cb0d" Feb 16 21:05:19.014871 master-0 kubenswrapper[7926]: E0216 21:05:19.014817 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:05:19.098641 master-0 kubenswrapper[7926]: I0216 21:05:19.098473 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:05:19.098864 master-0 kubenswrapper[7926]: I0216 21:05:19.098595 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" Feb 16 21:05:20.022834 master-0 kubenswrapper[7926]: I0216 21:05:20.022732 7926 scope.go:117] "RemoveContainer" containerID="9fbb3907b0a8154eba20d3a15a9c76d94a18ad3525cb12a7e4937b8969c5cb0d" Feb 16 21:05:20.024064 master-0 kubenswrapper[7926]: E0216 21:05:20.023111 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:05:22.099216 master-0 kubenswrapper[7926]: I0216 21:05:22.099111 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:05:22.099795 master-0 kubenswrapper[7926]: I0216 21:05:22.099225 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:05:23.907149 master-0 kubenswrapper[7926]: I0216 21:05:23.907034 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:05:23.907960 master-0 kubenswrapper[7926]: I0216 21:05:23.907775 7926 scope.go:117] "RemoveContainer" containerID="9fbb3907b0a8154eba20d3a15a9c76d94a18ad3525cb12a7e4937b8969c5cb0d" Feb 16 
21:05:23.908050 master-0 kubenswrapper[7926]: E0216 21:05:23.908029 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:05:24.179143 master-0 kubenswrapper[7926]: I0216 21:05:24.178965 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": read tcp 10.128.0.2:43568->10.128.0.19:8443: read: connection reset by peer" start-of-body= Feb 16 21:05:24.179143 master-0 kubenswrapper[7926]: I0216 21:05:24.179021 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": read tcp 10.128.0.2:43568->10.128.0.19:8443: read: connection reset by peer" Feb 16 21:05:24.314502 master-0 kubenswrapper[7926]: I0216 21:05:24.314421 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:05:24.315221 master-0 kubenswrapper[7926]: I0216 21:05:24.315178 7926 scope.go:117] "RemoveContainer" containerID="9fbb3907b0a8154eba20d3a15a9c76d94a18ad3525cb12a7e4937b8969c5cb0d" Feb 16 21:05:24.315578 master-0 kubenswrapper[7926]: E0216 21:05:24.315530 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed 
container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:05:24.961790 master-0 kubenswrapper[7926]: I0216 21:05:24.961675 7926 patch_prober.go:28] interesting pod/packageserver-78d4b6b677-npmx4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:05:24.961790 master-0 kubenswrapper[7926]: I0216 21:05:24.961767 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" podUID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.64:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:05:25.066990 master-0 kubenswrapper[7926]: I0216 21:05:25.066915 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-xbd96_59237aa6-6250-4619-8ee5-abae59f04b57/openshift-config-operator/6.log" Feb 16 21:05:25.067540 master-0 kubenswrapper[7926]: I0216 21:05:25.067499 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-xbd96_59237aa6-6250-4619-8ee5-abae59f04b57/openshift-config-operator/5.log" Feb 16 21:05:25.068109 master-0 kubenswrapper[7926]: I0216 21:05:25.068062 7926 generic.go:334] "Generic (PLEG): container finished" podID="59237aa6-6250-4619-8ee5-abae59f04b57" containerID="0715c2c6bc16d3adc1361563ad51b4de11f77937d1f51eb61f3cd34b96856d0c" exitCode=137 Feb 16 21:05:25.068232 master-0 kubenswrapper[7926]: I0216 
21:05:25.068122 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" event={"ID":"59237aa6-6250-4619-8ee5-abae59f04b57","Type":"ContainerDied","Data":"0715c2c6bc16d3adc1361563ad51b4de11f77937d1f51eb61f3cd34b96856d0c"} Feb 16 21:05:25.068232 master-0 kubenswrapper[7926]: I0216 21:05:25.068209 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" event={"ID":"59237aa6-6250-4619-8ee5-abae59f04b57","Type":"ContainerStarted","Data":"302bf12f6109c01eb273603d5fb2413e60f821dd662712bbc7e00c4eafc2b54f"} Feb 16 21:05:25.068366 master-0 kubenswrapper[7926]: I0216 21:05:25.068244 7926 scope.go:117] "RemoveContainer" containerID="a0f4bf116c475ac57080c53e7f9652de2e9cdcb6db7cccf87a7d3deeef5a1385" Feb 16 21:05:25.068685 master-0 kubenswrapper[7926]: I0216 21:05:25.068623 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 21:05:26.086187 master-0 kubenswrapper[7926]: I0216 21:05:26.086102 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-xbd96_59237aa6-6250-4619-8ee5-abae59f04b57/openshift-config-operator/6.log" Feb 16 21:05:27.097479 master-0 kubenswrapper[7926]: I0216 21:05:27.097369 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-8cllz_70d217a9-86b7-47b9-a7da-9ac920b9c7c2/etcd-operator/3.log" Feb 16 21:05:27.098007 master-0 kubenswrapper[7926]: I0216 21:05:27.097880 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-8cllz_70d217a9-86b7-47b9-a7da-9ac920b9c7c2/etcd-operator/2.log" Feb 16 21:05:27.098007 master-0 kubenswrapper[7926]: I0216 21:05:27.097927 7926 generic.go:334] "Generic (PLEG): container finished" 
podID="70d217a9-86b7-47b9-a7da-9ac920b9c7c2" containerID="316bcd2b73e15fab60d8618d92eb77f101f2f53e423adb64b0f374a1f7fcda3a" exitCode=137 Feb 16 21:05:27.098007 master-0 kubenswrapper[7926]: I0216 21:05:27.097968 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" event={"ID":"70d217a9-86b7-47b9-a7da-9ac920b9c7c2","Type":"ContainerDied","Data":"316bcd2b73e15fab60d8618d92eb77f101f2f53e423adb64b0f374a1f7fcda3a"} Feb 16 21:05:27.098007 master-0 kubenswrapper[7926]: I0216 21:05:27.097996 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" event={"ID":"70d217a9-86b7-47b9-a7da-9ac920b9c7c2","Type":"ContainerStarted","Data":"4ac247b9876f21c966ab93ed72aa48642f97c92d4ad20edb90a8d4785ced5367"} Feb 16 21:05:27.098164 master-0 kubenswrapper[7926]: I0216 21:05:27.098017 7926 scope.go:117] "RemoveContainer" containerID="6b4aa228ac152077a166b064e9b5bf093a0844f95733cd091a0e3bf8ac6b0c9d" Feb 16 21:05:27.314499 master-0 kubenswrapper[7926]: I0216 21:05:27.314357 7926 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:05:27.860691 master-0 kubenswrapper[7926]: I0216 21:05:27.860597 7926 patch_prober.go:28] interesting pod/authentication-operator-755d954778-8gnq5 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:05:27.860987 master-0 kubenswrapper[7926]: I0216 21:05:27.860700 7926 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:05:27.860987 master-0 kubenswrapper[7926]: I0216 21:05:27.860751 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 21:05:27.861355 master-0 kubenswrapper[7926]: I0216 21:05:27.861312 7926 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"1280026270fafbe7904a661cf88a10d4f267040cb7cc3fb07ffaa22fce0b7d32"} pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Feb 16 21:05:27.861355 master-0 kubenswrapper[7926]: I0216 21:05:27.861345 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1" containerName="authentication-operator" containerID="cri-o://1280026270fafbe7904a661cf88a10d4f267040cb7cc3fb07ffaa22fce0b7d32" gracePeriod=30 Feb 16 21:05:28.094334 master-0 kubenswrapper[7926]: I0216 21:05:28.094162 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:05:28.094636 master-0 kubenswrapper[7926]: I0216 
21:05:28.094338 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:05:28.094636 master-0 kubenswrapper[7926]: I0216 21:05:28.094453 7926 patch_prober.go:28] interesting pod/route-controller-manager-749ccd9c56-wzsnf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:05:28.094636 master-0 kubenswrapper[7926]: I0216 21:05:28.094481 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:05:28.099174 master-0 kubenswrapper[7926]: I0216 21:05:28.099086 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:05:28.099775 master-0 kubenswrapper[7926]: I0216 21:05:28.099167 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" 
podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:05:28.111088 master-0 kubenswrapper[7926]: I0216 21:05:28.110968 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-8cllz_70d217a9-86b7-47b9-a7da-9ac920b9c7c2/etcd-operator/3.log" Feb 16 21:05:30.164002 master-0 kubenswrapper[7926]: I0216 21:05:30.163891 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:05:30.164002 master-0 kubenswrapper[7926]: I0216 21:05:30.163985 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:05:31.099573 master-0 kubenswrapper[7926]: I0216 21:05:31.099488 7926 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-xbd96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 21:05:31.099835 master-0 kubenswrapper[7926]: I0216 21:05:31.099602 7926 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" podUID="59237aa6-6250-4619-8ee5-abae59f04b57" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:05:32.494455 master-0 kubenswrapper[7926]: E0216 21:05:32.493585 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-755d954778-8gnq5_openshift-authentication-operator(27c20f63-9bfb-4703-94d5-0c65475e08d1)\"" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1" Feb 16 21:05:33.101795 master-0 kubenswrapper[7926]: I0216 21:05:33.101704 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 21:05:33.146446 master-0 kubenswrapper[7926]: I0216 21:05:33.146407 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-n4hfs_1b61063e-775e-421d-bf73-a6ef134293a0/network-operator/3.log" Feb 16 21:05:33.146979 master-0 kubenswrapper[7926]: I0216 21:05:33.146951 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-n4hfs_1b61063e-775e-421d-bf73-a6ef134293a0/network-operator/2.log" Feb 16 21:05:33.147069 master-0 kubenswrapper[7926]: I0216 21:05:33.147010 7926 generic.go:334] "Generic (PLEG): container finished" podID="1b61063e-775e-421d-bf73-a6ef134293a0" containerID="98437a21e834f809a7d3a2fcc7ab7ac439c7d9370d526734b7d11f63840cb92d" exitCode=255 Feb 16 21:05:33.147114 master-0 kubenswrapper[7926]: I0216 21:05:33.147082 7926 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" event={"ID":"1b61063e-775e-421d-bf73-a6ef134293a0","Type":"ContainerDied","Data":"98437a21e834f809a7d3a2fcc7ab7ac439c7d9370d526734b7d11f63840cb92d"}
Feb 16 21:05:33.147192 master-0 kubenswrapper[7926]: I0216 21:05:33.147122 7926 scope.go:117] "RemoveContainer" containerID="c9124f9d5e41db03a56db8d08da400aa35fdd671c20974a9991273c405896bc3"
Feb 16 21:05:33.148704 master-0 kubenswrapper[7926]: I0216 21:05:33.147851 7926 scope.go:117] "RemoveContainer" containerID="98437a21e834f809a7d3a2fcc7ab7ac439c7d9370d526734b7d11f63840cb92d"
Feb 16 21:05:33.148704 master-0 kubenswrapper[7926]: E0216 21:05:33.148212 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=network-operator pod=network-operator-6fcf4c966-n4hfs_openshift-network-operator(1b61063e-775e-421d-bf73-a6ef134293a0)\"" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" podUID="1b61063e-775e-421d-bf73-a6ef134293a0"
Feb 16 21:05:33.150311 master-0 kubenswrapper[7926]: I0216 21:05:33.149860 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-78d4b6b677-npmx4_319dc882-e1f5-40f9-99f4-2bae028337e5/packageserver/0.log"
Feb 16 21:05:33.150311 master-0 kubenswrapper[7926]: I0216 21:05:33.149921 7926 generic.go:334] "Generic (PLEG): container finished" podID="319dc882-e1f5-40f9-99f4-2bae028337e5" containerID="203b091a662b4912838a798e07794a8caa755508028a6b4fa5f1ef8b83de89af" exitCode=0
Feb 16 21:05:33.150311 master-0 kubenswrapper[7926]: I0216 21:05:33.150044 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" event={"ID":"319dc882-e1f5-40f9-99f4-2bae028337e5","Type":"ContainerDied","Data":"203b091a662b4912838a798e07794a8caa755508028a6b4fa5f1ef8b83de89af"}
Feb 16 21:05:33.155150 master-0 kubenswrapper[7926]: I0216 21:05:33.155112 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-tvzdw_6b6be6de-6fcc-4f57-b163-fe8f970a01a4/openshift-apiserver-operator/3.log"
Feb 16 21:05:33.156074 master-0 kubenswrapper[7926]: I0216 21:05:33.155624 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-tvzdw_6b6be6de-6fcc-4f57-b163-fe8f970a01a4/openshift-apiserver-operator/2.log"
Feb 16 21:05:33.156074 master-0 kubenswrapper[7926]: I0216 21:05:33.155685 7926 generic.go:334] "Generic (PLEG): container finished" podID="6b6be6de-6fcc-4f57-b163-fe8f970a01a4" containerID="d0e5f8a907c4851af3bce655e141083b0f633fdfa41c5abacbb48a7df33f9e94" exitCode=255
Feb 16 21:05:33.156074 master-0 kubenswrapper[7926]: I0216 21:05:33.155743 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw" event={"ID":"6b6be6de-6fcc-4f57-b163-fe8f970a01a4","Type":"ContainerDied","Data":"d0e5f8a907c4851af3bce655e141083b0f633fdfa41c5abacbb48a7df33f9e94"}
Feb 16 21:05:33.156307 master-0 kubenswrapper[7926]: I0216 21:05:33.156224 7926 scope.go:117] "RemoveContainer" containerID="d0e5f8a907c4851af3bce655e141083b0f633fdfa41c5abacbb48a7df33f9e94"
Feb 16 21:05:33.156495 master-0 kubenswrapper[7926]: E0216 21:05:33.156456 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver-operator pod=openshift-apiserver-operator-6d4655d9cf-tvzdw_openshift-apiserver-operator(6b6be6de-6fcc-4f57-b163-fe8f970a01a4)\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw" podUID="6b6be6de-6fcc-4f57-b163-fe8f970a01a4"
Feb 16 21:05:33.159993 master-0 kubenswrapper[7926]: I0216 21:05:33.158340 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-cl5ld_0b02b740-5698-4e9a-90fe-2873bd0b0958/kube-apiserver-operator/3.log"
Feb 16 21:05:33.159993 master-0 kubenswrapper[7926]: I0216 21:05:33.158770 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-cl5ld_0b02b740-5698-4e9a-90fe-2873bd0b0958/kube-apiserver-operator/2.log"
Feb 16 21:05:33.159993 master-0 kubenswrapper[7926]: I0216 21:05:33.158800 7926 generic.go:334] "Generic (PLEG): container finished" podID="0b02b740-5698-4e9a-90fe-2873bd0b0958" containerID="9aebe89f00ace7757c9f12dc1f4359a915f84e8eb395e1cdeae0962c4475a4af" exitCode=255
Feb 16 21:05:33.159993 master-0 kubenswrapper[7926]: I0216 21:05:33.158847 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" event={"ID":"0b02b740-5698-4e9a-90fe-2873bd0b0958","Type":"ContainerDied","Data":"9aebe89f00ace7757c9f12dc1f4359a915f84e8eb395e1cdeae0962c4475a4af"}
Feb 16 21:05:33.159993 master-0 kubenswrapper[7926]: I0216 21:05:33.159163 7926 scope.go:117] "RemoveContainer" containerID="9aebe89f00ace7757c9f12dc1f4359a915f84e8eb395e1cdeae0962c4475a4af"
Feb 16 21:05:33.159993 master-0 kubenswrapper[7926]: E0216 21:05:33.159373 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-54984b6678-cl5ld_openshift-kube-apiserver-operator(0b02b740-5698-4e9a-90fe-2873bd0b0958)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" podUID="0b02b740-5698-4e9a-90fe-2873bd0b0958"
Feb 16 21:05:33.160962 master-0 kubenswrapper[7926]: I0216 21:05:33.160919 7926 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-xzww8_e7adbe32-b8b9-438e-a2e3-f93146a97424/kube-scheduler-operator-container/3.log"
Feb 16 21:05:33.161471 master-0 kubenswrapper[7926]: I0216 21:05:33.161421 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-xzww8_e7adbe32-b8b9-438e-a2e3-f93146a97424/kube-scheduler-operator-container/2.log"
Feb 16 21:05:33.161545 master-0 kubenswrapper[7926]: I0216 21:05:33.161499 7926 generic.go:334] "Generic (PLEG): container finished" podID="e7adbe32-b8b9-438e-a2e3-f93146a97424" containerID="6a7d7b13e17869969e9d31d79faa72dfb3a8d8453f67a2323e3dc0a1300a1e65" exitCode=255
Feb 16 21:05:33.161623 master-0 kubenswrapper[7926]: I0216 21:05:33.161588 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" event={"ID":"e7adbe32-b8b9-438e-a2e3-f93146a97424","Type":"ContainerDied","Data":"6a7d7b13e17869969e9d31d79faa72dfb3a8d8453f67a2323e3dc0a1300a1e65"}
Feb 16 21:05:33.161982 master-0 kubenswrapper[7926]: I0216 21:05:33.161921 7926 scope.go:117] "RemoveContainer" containerID="6a7d7b13e17869969e9d31d79faa72dfb3a8d8453f67a2323e3dc0a1300a1e65"
Feb 16 21:05:33.162161 master-0 kubenswrapper[7926]: E0216 21:05:33.162128 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler-operator-container pod=openshift-kube-scheduler-operator-7485d55966-xzww8_openshift-kube-scheduler-operator(e7adbe32-b8b9-438e-a2e3-f93146a97424)\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" podUID="e7adbe32-b8b9-438e-a2e3-f93146a97424"
Feb 16 21:05:33.163813 master-0 kubenswrapper[7926]: I0216 21:05:33.163778 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-749ccd9c56-wzsnf_4db59450-da78-4879-ada8-ca3fc49fb7a7/route-controller-manager/4.log"
Feb 16 21:05:33.165817 master-0 kubenswrapper[7926]: I0216 21:05:33.165793 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-749ccd9c56-wzsnf_4db59450-da78-4879-ada8-ca3fc49fb7a7/route-controller-manager/3.log"
Feb 16 21:05:33.165817 master-0 kubenswrapper[7926]: I0216 21:05:33.165834 7926 generic.go:334] "Generic (PLEG): container finished" podID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerID="8fdaced2e29680218985b0af6c01e1d1666c4413685a11533b854af5a3b4a954" exitCode=255
Feb 16 21:05:33.165999 master-0 kubenswrapper[7926]: I0216 21:05:33.165899 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" event={"ID":"4db59450-da78-4879-ada8-ca3fc49fb7a7","Type":"ContainerDied","Data":"8fdaced2e29680218985b0af6c01e1d1666c4413685a11533b854af5a3b4a954"}
Feb 16 21:05:33.166699 master-0 kubenswrapper[7926]: I0216 21:05:33.166506 7926 scope.go:117] "RemoveContainer" containerID="8fdaced2e29680218985b0af6c01e1d1666c4413685a11533b854af5a3b4a954"
Feb 16 21:05:33.167396 master-0 kubenswrapper[7926]: E0216 21:05:33.167334 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=route-controller-manager pod=route-controller-manager-749ccd9c56-wzsnf_openshift-route-controller-manager(4db59450-da78-4879-ada8-ca3fc49fb7a7)\"" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7"
Feb 16 21:05:33.168390 master-0 kubenswrapper[7926]: I0216 21:05:33.168355 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-8gnq5_27c20f63-9bfb-4703-94d5-0c65475e08d1/authentication-operator/5.log"
Feb 16 21:05:33.168904 master-0 kubenswrapper[7926]: I0216 21:05:33.168881 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-8gnq5_27c20f63-9bfb-4703-94d5-0c65475e08d1/authentication-operator/4.log"
Feb 16 21:05:33.168973 master-0 kubenswrapper[7926]: I0216 21:05:33.168920 7926 generic.go:334] "Generic (PLEG): container finished" podID="27c20f63-9bfb-4703-94d5-0c65475e08d1" containerID="1280026270fafbe7904a661cf88a10d4f267040cb7cc3fb07ffaa22fce0b7d32" exitCode=255
Feb 16 21:05:33.168973 master-0 kubenswrapper[7926]: I0216 21:05:33.168965 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" event={"ID":"27c20f63-9bfb-4703-94d5-0c65475e08d1","Type":"ContainerDied","Data":"1280026270fafbe7904a661cf88a10d4f267040cb7cc3fb07ffaa22fce0b7d32"}
Feb 16 21:05:33.169288 master-0 kubenswrapper[7926]: I0216 21:05:33.169252 7926 scope.go:117] "RemoveContainer" containerID="1280026270fafbe7904a661cf88a10d4f267040cb7cc3fb07ffaa22fce0b7d32"
Feb 16 21:05:33.169474 master-0 kubenswrapper[7926]: E0216 21:05:33.169442 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-755d954778-8gnq5_openshift-authentication-operator(27c20f63-9bfb-4703-94d5-0c65475e08d1)\"" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1"
Feb 16 21:05:33.171450 master-0 kubenswrapper[7926]: I0216 21:05:33.171399 7926 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-56v4p_c7333319-3fe6-4b3f-b600-6b6df49fcaff/kube-storage-version-migrator-operator/4.log"
Feb 16 21:05:33.172061 master-0 kubenswrapper[7926]: I0216 21:05:33.172026 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-56v4p_c7333319-3fe6-4b3f-b600-6b6df49fcaff/kube-storage-version-migrator-operator/3.log"
Feb 16 21:05:33.172183 master-0 kubenswrapper[7926]: I0216 21:05:33.172103 7926 generic.go:334] "Generic (PLEG): container finished" podID="c7333319-3fe6-4b3f-b600-6b6df49fcaff" containerID="08b199e651bbf31337e0e421513ddb4e42db3e1be0a3d07452f74ea9c1f46046" exitCode=255
Feb 16 21:05:33.172280 master-0 kubenswrapper[7926]: I0216 21:05:33.172194 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" event={"ID":"c7333319-3fe6-4b3f-b600-6b6df49fcaff","Type":"ContainerDied","Data":"08b199e651bbf31337e0e421513ddb4e42db3e1be0a3d07452f74ea9c1f46046"}
Feb 16 21:05:33.172856 master-0 kubenswrapper[7926]: I0216 21:05:33.172806 7926 scope.go:117] "RemoveContainer" containerID="08b199e651bbf31337e0e421513ddb4e42db3e1be0a3d07452f74ea9c1f46046"
Feb 16 21:05:33.173172 master-0 kubenswrapper[7926]: E0216 21:05:33.173139 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-storage-version-migrator-operator pod=kube-storage-version-migrator-operator-cd5474998-56v4p_openshift-kube-storage-version-migrator-operator(c7333319-3fe6-4b3f-b600-6b6df49fcaff)\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" podUID="c7333319-3fe6-4b3f-b600-6b6df49fcaff"
Feb 16 21:05:33.174106 master-0 kubenswrapper[7926]: I0216 21:05:33.174073 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-q5vjl_2ab0a907-7abe-4808-ba21-bdda1506eae2/service-ca-operator/3.log"
Feb 16 21:05:33.174665 master-0 kubenswrapper[7926]: I0216 21:05:33.174625 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-q5vjl_2ab0a907-7abe-4808-ba21-bdda1506eae2/service-ca-operator/2.log"
Feb 16 21:05:33.174731 master-0 kubenswrapper[7926]: I0216 21:05:33.174711 7926 generic.go:334] "Generic (PLEG): container finished" podID="2ab0a907-7abe-4808-ba21-bdda1506eae2" containerID="715050d13195531641370ad04c7754b8cef8bb72e0896de25aaafb35a02054c9" exitCode=255
Feb 16 21:05:33.174807 master-0 kubenswrapper[7926]: I0216 21:05:33.174777 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" event={"ID":"2ab0a907-7abe-4808-ba21-bdda1506eae2","Type":"ContainerDied","Data":"715050d13195531641370ad04c7754b8cef8bb72e0896de25aaafb35a02054c9"}
Feb 16 21:05:33.175280 master-0 kubenswrapper[7926]: I0216 21:05:33.175237 7926 scope.go:117] "RemoveContainer" containerID="715050d13195531641370ad04c7754b8cef8bb72e0896de25aaafb35a02054c9"
Feb 16 21:05:33.175575 master-0 kubenswrapper[7926]: E0216 21:05:33.175533 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=service-ca-operator pod=service-ca-operator-5dc4688546-q5vjl_openshift-service-ca-operator(2ab0a907-7abe-4808-ba21-bdda1506eae2)\"" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" podUID="2ab0a907-7abe-4808-ba21-bdda1506eae2"
Feb 16 21:05:33.177850 master-0 kubenswrapper[7926]: I0216 21:05:33.177814 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-7b87b97578-v7xdv_4085413c-9af1-4d2a-ba0f-33b42025cb7f/csi-snapshot-controller-operator/2.log"
Feb 16 21:05:33.178346 master-0 kubenswrapper[7926]: I0216 21:05:33.178316 7926 generic.go:334] "Generic (PLEG): container finished" podID="4085413c-9af1-4d2a-ba0f-33b42025cb7f" containerID="5bb447e9b562fe2a3fcb45b723cffb38257ea64157f142954fe58414909efdd3" exitCode=255
Feb 16 21:05:33.178448 master-0 kubenswrapper[7926]: I0216 21:05:33.178401 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv" event={"ID":"4085413c-9af1-4d2a-ba0f-33b42025cb7f","Type":"ContainerDied","Data":"5bb447e9b562fe2a3fcb45b723cffb38257ea64157f142954fe58414909efdd3"}
Feb 16 21:05:33.178901 master-0 kubenswrapper[7926]: I0216 21:05:33.178860 7926 scope.go:117] "RemoveContainer" containerID="5bb447e9b562fe2a3fcb45b723cffb38257ea64157f142954fe58414909efdd3"
Feb 16 21:05:33.179071 master-0 kubenswrapper[7926]: E0216 21:05:33.179033 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-snapshot-controller-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=csi-snapshot-controller-operator pod=csi-snapshot-controller-operator-7b87b97578-v7xdv_openshift-cluster-storage-operator(4085413c-9af1-4d2a-ba0f-33b42025cb7f)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv" podUID="4085413c-9af1-4d2a-ba0f-33b42025cb7f"
Feb 16 21:05:33.181674 master-0 kubenswrapper[7926]: I0216 21:05:33.181617 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-k42w9_695549c8-d1fc-429d-9c9f-0a5915dc6074/openshift-controller-manager-operator/4.log"
Feb 16 21:05:33.182137 master-0 kubenswrapper[7926]: I0216
21:05:33.182111 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-k42w9_695549c8-d1fc-429d-9c9f-0a5915dc6074/openshift-controller-manager-operator/3.log"
Feb 16 21:05:33.182199 master-0 kubenswrapper[7926]: I0216 21:05:33.182154 7926 generic.go:334] "Generic (PLEG): container finished" podID="695549c8-d1fc-429d-9c9f-0a5915dc6074" containerID="abce7c467580f27265b653bd89f53e6e0d6413f3687b039b9f58c8dd18d3f0ce" exitCode=255
Feb 16 21:05:33.182239 master-0 kubenswrapper[7926]: I0216 21:05:33.182207 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" event={"ID":"695549c8-d1fc-429d-9c9f-0a5915dc6074","Type":"ContainerDied","Data":"abce7c467580f27265b653bd89f53e6e0d6413f3687b039b9f58c8dd18d3f0ce"}
Feb 16 21:05:33.182557 master-0 kubenswrapper[7926]: I0216 21:05:33.182523 7926 scope.go:117] "RemoveContainer" containerID="abce7c467580f27265b653bd89f53e6e0d6413f3687b039b9f58c8dd18d3f0ce"
Feb 16 21:05:33.182769 master-0 kubenswrapper[7926]: E0216 21:05:33.182734 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-5f5f84757d-k42w9_openshift-controller-manager-operator(695549c8-d1fc-429d-9c9f-0a5915dc6074)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" podUID="695549c8-d1fc-429d-9c9f-0a5915dc6074"
Feb 16 21:05:33.189284 master-0 kubenswrapper[7926]: I0216 21:05:33.189238 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-7p9ft_7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/kube-controller-manager-operator/4.log"
Feb 16 21:05:33.189817 master-0 kubenswrapper[7926]: I0216 21:05:33.189788 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-7p9ft_7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/kube-controller-manager-operator/3.log"
Feb 16 21:05:33.189895 master-0 kubenswrapper[7926]: I0216 21:05:33.189838 7926 generic.go:334] "Generic (PLEG): container finished" podID="7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e" containerID="35ed53f7c30fa9921f8cd975c0172c21b8f110abc5d358e84c90a7ea7b1226a7" exitCode=255
Feb 16 21:05:33.189989 master-0 kubenswrapper[7926]: I0216 21:05:33.189930 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" event={"ID":"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e","Type":"ContainerDied","Data":"35ed53f7c30fa9921f8cd975c0172c21b8f110abc5d358e84c90a7ea7b1226a7"}
Feb 16 21:05:33.190592 master-0 kubenswrapper[7926]: I0216 21:05:33.190566 7926 scope.go:117] "RemoveContainer" containerID="35ed53f7c30fa9921f8cd975c0172c21b8f110abc5d358e84c90a7ea7b1226a7"
Feb 16 21:05:33.190913 master-0 kubenswrapper[7926]: E0216 21:05:33.190854 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager-operator pod=kube-controller-manager-operator-78ff47c7c5-7p9ft_openshift-kube-controller-manager-operator(7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" podUID="7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e"
Feb 16 21:05:33.192068 master-0 kubenswrapper[7926]: I0216 21:05:33.192010 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-676cd8b9b5-cbj2r_99ab949e-bd0d-45a7-95d1-8381d9f1f5f3/service-ca-controller/1.log"
Feb 16 21:05:33.192355 master-0 kubenswrapper[7926]: I0216 21:05:33.192333 7926 generic.go:334] "Generic (PLEG): container finished" podID="99ab949e-bd0d-45a7-95d1-8381d9f1f5f3" containerID="11a0f236b15a97d8bb8db30a3ecfba40559eb738b2fbad78fcc9824a0ec8620e" exitCode=255
Feb 16 21:05:33.192457 master-0 kubenswrapper[7926]: I0216 21:05:33.192400 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" event={"ID":"99ab949e-bd0d-45a7-95d1-8381d9f1f5f3","Type":"ContainerDied","Data":"11a0f236b15a97d8bb8db30a3ecfba40559eb738b2fbad78fcc9824a0ec8620e"}
Feb 16 21:05:33.192893 master-0 kubenswrapper[7926]: I0216 21:05:33.192873 7926 scope.go:117] "RemoveContainer" containerID="11a0f236b15a97d8bb8db30a3ecfba40559eb738b2fbad78fcc9824a0ec8620e"
Feb 16 21:05:33.193069 master-0 kubenswrapper[7926]: E0216 21:05:33.193052 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=service-ca-controller pod=service-ca-676cd8b9b5-cbj2r_openshift-service-ca(99ab949e-bd0d-45a7-95d1-8381d9f1f5f3)\"" pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" podUID="99ab949e-bd0d-45a7-95d1-8381d9f1f5f3"
Feb 16 21:05:33.194043 master-0 kubenswrapper[7926]: I0216 21:05:33.194001 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-55b69c6c48-pdjn4_5e062e07-8076-444c-b476-4eb2848e9613/cluster-olm-operator/2.log"
Feb 16 21:05:33.197050 master-0 kubenswrapper[7926]: I0216 21:05:33.196055 7926 generic.go:334] "Generic (PLEG): container finished" podID="5e062e07-8076-444c-b476-4eb2848e9613" containerID="b805375f7b42f31b0863c18246ff6bd98c4c77aa1ad1eb2b469a42772d48301d"
exitCode=255
Feb 16 21:05:33.197050 master-0 kubenswrapper[7926]: I0216 21:05:33.196201 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" event={"ID":"5e062e07-8076-444c-b476-4eb2848e9613","Type":"ContainerDied","Data":"b805375f7b42f31b0863c18246ff6bd98c4c77aa1ad1eb2b469a42772d48301d"}
Feb 16 21:05:33.197050 master-0 kubenswrapper[7926]: I0216 21:05:33.196816 7926 scope.go:117] "RemoveContainer" containerID="b805375f7b42f31b0863c18246ff6bd98c4c77aa1ad1eb2b469a42772d48301d"
Feb 16 21:05:33.197250 master-0 kubenswrapper[7926]: E0216 21:05:33.197102 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-olm-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-olm-operator pod=cluster-olm-operator-55b69c6c48-pdjn4_openshift-cluster-olm-operator(5e062e07-8076-444c-b476-4eb2848e9613)\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" podUID="5e062e07-8076-444c-b476-4eb2848e9613"
Feb 16 21:05:33.198092 master-0 kubenswrapper[7926]: I0216 21:05:33.198031 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-75b869db96-g4w5m_aa2e9bbc-3962-45f5-a7cc-2dc059409e70/cluster-storage-operator/2.log"
Feb 16 21:05:33.198938 master-0 kubenswrapper[7926]: I0216 21:05:33.198896 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-75b869db96-g4w5m_aa2e9bbc-3962-45f5-a7cc-2dc059409e70/cluster-storage-operator/1.log"
Feb 16 21:05:33.199024 master-0 kubenswrapper[7926]: I0216 21:05:33.198968 7926 generic.go:334] "Generic (PLEG): container finished" podID="aa2e9bbc-3962-45f5-a7cc-2dc059409e70" containerID="86b2625e01e86e20ad843cc517b662e8d0574773dfe24c22fbbf50abc8c0ea7f" exitCode=255
Feb 16 21:05:33.199024 master-0 kubenswrapper[7926]: I0216 21:05:33.199010 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m" event={"ID":"aa2e9bbc-3962-45f5-a7cc-2dc059409e70","Type":"ContainerDied","Data":"86b2625e01e86e20ad843cc517b662e8d0574773dfe24c22fbbf50abc8c0ea7f"}
Feb 16 21:05:33.199515 master-0 kubenswrapper[7926]: I0216 21:05:33.199479 7926 scope.go:117] "RemoveContainer" containerID="86b2625e01e86e20ad843cc517b662e8d0574773dfe24c22fbbf50abc8c0ea7f"
Feb 16 21:05:33.199903 master-0 kubenswrapper[7926]: E0216 21:05:33.199852 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-storage-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-storage-operator pod=cluster-storage-operator-75b869db96-g4w5m_openshift-cluster-storage-operator(aa2e9bbc-3962-45f5-a7cc-2dc059409e70)\"" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m" podUID="aa2e9bbc-3962-45f5-a7cc-2dc059409e70"
Feb 16 21:05:33.392080 master-0 kubenswrapper[7926]: I0216 21:05:33.392021 7926 scope.go:117] "RemoveContainer" containerID="70c8a58b1f436ad8ca4d491de1284ed96c1d17dc7c8758f9d265ebf6a6d73a38"
Feb 16 21:05:33.419991 master-0 kubenswrapper[7926]: I0216 21:05:33.419684 7926 scope.go:117] "RemoveContainer" containerID="fe90aa9198533517faa6871ececff317856fe5ccb78abe5de0ace1b89b25d9f3"
Feb 16 21:05:33.448072 master-0 kubenswrapper[7926]: I0216 21:05:33.448017 7926 scope.go:117] "RemoveContainer" containerID="467db04b7bff5a3b4be9912b3821541f7f7357f38d787b4e261ea72ceb3d15af"
Feb 16 21:05:33.468562 master-0 kubenswrapper[7926]: I0216 21:05:33.468476 7926 scope.go:117] "RemoveContainer" containerID="b14701382aa95b48c51ea29fa658b5538f88b2a7a4c18fcdfc110d59ae2c79fe"
Feb 16 21:05:33.503532 master-0 kubenswrapper[7926]: I0216 21:05:33.503307 7926 scope.go:117] "RemoveContainer" containerID="9b515d5a7a3620fef9281bf66e2c25d3ec90a1c70a0a5cb2470f5419d26f7741"
Feb 16 21:05:33.526533 master-0 kubenswrapper[7926]: I0216 21:05:33.526399 7926 scope.go:117] "RemoveContainer" containerID="bae2526e4dde061e6c7a8ef722773dcd93504e4ed1b17f4a15386f5a7579875d"
Feb 16 21:05:33.542138 master-0 kubenswrapper[7926]: I0216 21:05:33.541981 7926 scope.go:117] "RemoveContainer" containerID="121dab1fc95eacb58da984bcdc1166fb24200dd1db3a8ef3613a520edb17c265"
Feb 16 21:05:33.567487 master-0 kubenswrapper[7926]: I0216 21:05:33.567381 7926 scope.go:117] "RemoveContainer" containerID="a4e5e42cc4ff83859a8656b165ef7357fe4b7dff02702e6e7921002edc0c6d8d"
Feb 16 21:05:33.591294 master-0 kubenswrapper[7926]: I0216 21:05:33.591159 7926 scope.go:117] "RemoveContainer" containerID="ada24a94e3cdaddc38a62024529752b29e1359c42e86c75ebaa514d784cc3fe9"
Feb 16 21:05:33.613451 master-0 kubenswrapper[7926]: I0216 21:05:33.613334 7926 scope.go:117] "RemoveContainer" containerID="5652867e32787e74c02e3d9d28965d504ee7ff6f2fcb9263e330c08c917ac73f"
Feb 16 21:05:33.631290 master-0 kubenswrapper[7926]: I0216 21:05:33.631248 7926 scope.go:117] "RemoveContainer" containerID="63ebdf0c0200865a719bef6bf6aea428a6aed5c1b2a14851e05503627b70b2a7"
Feb 16 21:05:33.652577 master-0 kubenswrapper[7926]: I0216 21:05:33.652555 7926 scope.go:117] "RemoveContainer" containerID="0c4056212013eaff1f5d405532bbe8e1791cff62d95615157652d9167450664a"
Feb 16 21:05:33.675332 master-0 kubenswrapper[7926]: I0216 21:05:33.675274 7926 scope.go:117] "RemoveContainer" containerID="8d6fd2d30a1b00edfb997113793ad55fbf5dca8c4b949fed22018dbb444c09ad"
Feb 16 21:05:33.695880 master-0 kubenswrapper[7926]: I0216 21:05:33.695802 7926 scope.go:117] "RemoveContainer" containerID="d95fdd7082b515ac47df4c4e5100db16158ab71c4fe74d4f5e87ded21ddfd407"
Feb 16 21:05:33.966157 master-0 kubenswrapper[7926]: I0216 21:05:33.965838 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4"
Feb 16 21:05:34.212199 master-0 kubenswrapper[7926]: I0216 21:05:34.212129 7926
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-7b87b97578-v7xdv_4085413c-9af1-4d2a-ba0f-33b42025cb7f/csi-snapshot-controller-operator/2.log"
Feb 16 21:05:34.215417 master-0 kubenswrapper[7926]: I0216 21:05:34.215304 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-55b69c6c48-pdjn4_5e062e07-8076-444c-b476-4eb2848e9613/cluster-olm-operator/2.log"
Feb 16 21:05:34.219465 master-0 kubenswrapper[7926]: I0216 21:05:34.219325 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-56v4p_c7333319-3fe6-4b3f-b600-6b6df49fcaff/kube-storage-version-migrator-operator/4.log"
Feb 16 21:05:34.221250 master-0 kubenswrapper[7926]: I0216 21:05:34.221193 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-tvzdw_6b6be6de-6fcc-4f57-b163-fe8f970a01a4/openshift-apiserver-operator/3.log"
Feb 16 21:05:34.223734 master-0 kubenswrapper[7926]: I0216 21:05:34.223689 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-n4hfs_1b61063e-775e-421d-bf73-a6ef134293a0/network-operator/3.log"
Feb 16 21:05:34.225447 master-0 kubenswrapper[7926]: I0216 21:05:34.225394 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-xzww8_e7adbe32-b8b9-438e-a2e3-f93146a97424/kube-scheduler-operator-container/3.log"
Feb 16 21:05:34.228037 master-0 kubenswrapper[7926]: I0216 21:05:34.227975 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" event={"ID":"319dc882-e1f5-40f9-99f4-2bae028337e5","Type":"ContainerStarted","Data":"b014fde00d656d88f73bc5afec71e6ac7dc4f1b7fdabe71571471749b0f80f22"}
Feb 16 21:05:34.228563 master-0 kubenswrapper[7926]: I0216 21:05:34.228488 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4"
Feb 16 21:05:34.230408 master-0 kubenswrapper[7926]: I0216 21:05:34.230369 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-cl5ld_0b02b740-5698-4e9a-90fe-2873bd0b0958/kube-apiserver-operator/3.log"
Feb 16 21:05:34.233586 master-0 kubenswrapper[7926]: I0216 21:05:34.233522 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-676cd8b9b5-cbj2r_99ab949e-bd0d-45a7-95d1-8381d9f1f5f3/service-ca-controller/1.log"
Feb 16 21:05:34.233747 master-0 kubenswrapper[7926]: I0216 21:05:34.233552 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4"
Feb 16 21:05:34.235741 master-0 kubenswrapper[7926]: I0216 21:05:34.235704 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-75b869db96-g4w5m_aa2e9bbc-3962-45f5-a7cc-2dc059409e70/cluster-storage-operator/2.log"
Feb 16 21:05:34.237926 master-0 kubenswrapper[7926]: I0216 21:05:34.237887 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-7p9ft_7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/kube-controller-manager-operator/4.log"
Feb 16 21:05:34.239960 master-0 kubenswrapper[7926]: I0216 21:05:34.239898 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-8gnq5_27c20f63-9bfb-4703-94d5-0c65475e08d1/authentication-operator/5.log"
Feb 16 21:05:34.242299 master-0 kubenswrapper[7926]: I0216 21:05:34.242247 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-q5vjl_2ab0a907-7abe-4808-ba21-bdda1506eae2/service-ca-operator/3.log"
Feb 16 21:05:34.246203 master-0 kubenswrapper[7926]: I0216 21:05:34.246063 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-k42w9_695549c8-d1fc-429d-9c9f-0a5915dc6074/openshift-controller-manager-operator/4.log"
Feb 16 21:05:34.248889 master-0 kubenswrapper[7926]: I0216 21:05:34.248846 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-749ccd9c56-wzsnf_4db59450-da78-4879-ada8-ca3fc49fb7a7/route-controller-manager/4.log"
Feb 16 21:05:34.320836 master-0 kubenswrapper[7926]: I0216 21:05:34.320765 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 21:05:34.321582 master-0 kubenswrapper[7926]: I0216 21:05:34.321536 7926 scope.go:117] "RemoveContainer" containerID="9fbb3907b0a8154eba20d3a15a9c76d94a18ad3525cb12a7e4937b8969c5cb0d"
Feb 16 21:05:34.322048 master-0 kubenswrapper[7926]: E0216 21:05:34.321997 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3"
Feb 16 21:05:34.327287 master-0 kubenswrapper[7926]: I0216 21:05:34.327191
7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:05:35.255412 master-0 kubenswrapper[7926]: I0216 21:05:35.255341 7926 scope.go:117] "RemoveContainer" containerID="9fbb3907b0a8154eba20d3a15a9c76d94a18ad3525cb12a7e4937b8969c5cb0d" Feb 16 21:05:35.256287 master-0 kubenswrapper[7926]: E0216 21:05:35.255579 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:05:36.109635 master-0 kubenswrapper[7926]: I0216 21:05:36.109539 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dhh2p"] Feb 16 21:05:36.109963 master-0 kubenswrapper[7926]: I0216 21:05:36.109906 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dhh2p" podUID="9566b108-44e1-4d9e-8984-4c396dc4408c" containerName="registry-server" containerID="cri-o://17cb30ab353a8c5e6ca279c7628b3d05fccf6b6666e6fe10a816ce650b15966b" gracePeriod=2 Feb 16 21:05:36.120900 master-0 kubenswrapper[7926]: E0216 21:05:36.120828 7926 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17cb30ab353a8c5e6ca279c7628b3d05fccf6b6666e6fe10a816ce650b15966b" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 21:05:36.121934 master-0 kubenswrapper[7926]: E0216 21:05:36.121845 7926 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is 
stopping, stdout: , stderr: , exit code -1" containerID="17cb30ab353a8c5e6ca279c7628b3d05fccf6b6666e6fe10a816ce650b15966b" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 21:05:36.123729 master-0 kubenswrapper[7926]: E0216 21:05:36.123596 7926 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17cb30ab353a8c5e6ca279c7628b3d05fccf6b6666e6fe10a816ce650b15966b" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 21:05:36.123809 master-0 kubenswrapper[7926]: E0216 21:05:36.123748 7926 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/redhat-operators-dhh2p" podUID="9566b108-44e1-4d9e-8984-4c396dc4408c" containerName="registry-server" Feb 16 21:05:36.263052 master-0 kubenswrapper[7926]: I0216 21:05:36.262972 7926 generic.go:334] "Generic (PLEG): container finished" podID="9566b108-44e1-4d9e-8984-4c396dc4408c" containerID="17cb30ab353a8c5e6ca279c7628b3d05fccf6b6666e6fe10a816ce650b15966b" exitCode=0 Feb 16 21:05:36.263052 master-0 kubenswrapper[7926]: I0216 21:05:36.263047 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhh2p" event={"ID":"9566b108-44e1-4d9e-8984-4c396dc4408c","Type":"ContainerDied","Data":"17cb30ab353a8c5e6ca279c7628b3d05fccf6b6666e6fe10a816ce650b15966b"} Feb 16 21:05:36.526099 master-0 kubenswrapper[7926]: I0216 21:05:36.526054 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-69wj8"] Feb 16 21:05:36.526294 master-0 kubenswrapper[7926]: E0216 21:05:36.526259 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d416d98-ee7c-4481-9721-861ccd91685d" containerName="installer" Feb 16 21:05:36.526294 master-0 kubenswrapper[7926]: 
I0216 21:05:36.526271 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d416d98-ee7c-4481-9721-861ccd91685d" containerName="installer" Feb 16 21:05:36.526294 master-0 kubenswrapper[7926]: E0216 21:05:36.526285 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b09d3c16-18e3-45b3-9d39-949d2464b300" containerName="installer" Feb 16 21:05:36.526294 master-0 kubenswrapper[7926]: I0216 21:05:36.526291 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="b09d3c16-18e3-45b3-9d39-949d2464b300" containerName="installer" Feb 16 21:05:36.526466 master-0 kubenswrapper[7926]: E0216 21:05:36.526318 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ec2c8c-e32c-4d18-ad78-0ef1f19557af" containerName="extract-utilities" Feb 16 21:05:36.526466 master-0 kubenswrapper[7926]: I0216 21:05:36.526326 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ec2c8c-e32c-4d18-ad78-0ef1f19557af" containerName="extract-utilities" Feb 16 21:05:36.526466 master-0 kubenswrapper[7926]: E0216 21:05:36.526370 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965" containerName="installer" Feb 16 21:05:36.526466 master-0 kubenswrapper[7926]: I0216 21:05:36.526377 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965" containerName="installer" Feb 16 21:05:36.526466 master-0 kubenswrapper[7926]: E0216 21:05:36.526391 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a6dcba-776f-48ba-b824-90ed5ae3abee" containerName="extract-utilities" Feb 16 21:05:36.526466 master-0 kubenswrapper[7926]: I0216 21:05:36.526397 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a6dcba-776f-48ba-b824-90ed5ae3abee" containerName="extract-utilities" Feb 16 21:05:36.526466 master-0 kubenswrapper[7926]: E0216 21:05:36.526413 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a6dcba-776f-48ba-b824-90ed5ae3abee" 
containerName="extract-content" Feb 16 21:05:36.526466 master-0 kubenswrapper[7926]: I0216 21:05:36.526419 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a6dcba-776f-48ba-b824-90ed5ae3abee" containerName="extract-content" Feb 16 21:05:36.526466 master-0 kubenswrapper[7926]: E0216 21:05:36.526431 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a29a1022-5f54-49a2-99f6-d19eb2773890" containerName="installer" Feb 16 21:05:36.526466 master-0 kubenswrapper[7926]: I0216 21:05:36.526437 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="a29a1022-5f54-49a2-99f6-d19eb2773890" containerName="installer" Feb 16 21:05:36.526466 master-0 kubenswrapper[7926]: E0216 21:05:36.526452 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ec2c8c-e32c-4d18-ad78-0ef1f19557af" containerName="extract-content" Feb 16 21:05:36.526466 master-0 kubenswrapper[7926]: I0216 21:05:36.526459 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ec2c8c-e32c-4d18-ad78-0ef1f19557af" containerName="extract-content" Feb 16 21:05:36.526910 master-0 kubenswrapper[7926]: I0216 21:05:36.526550 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4a6dcba-776f-48ba-b824-90ed5ae3abee" containerName="extract-content" Feb 16 21:05:36.526910 master-0 kubenswrapper[7926]: I0216 21:05:36.526562 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="b09d3c16-18e3-45b3-9d39-949d2464b300" containerName="installer" Feb 16 21:05:36.526910 master-0 kubenswrapper[7926]: I0216 21:05:36.526575 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d416d98-ee7c-4481-9721-861ccd91685d" containerName="installer" Feb 16 21:05:36.526910 master-0 kubenswrapper[7926]: I0216 21:05:36.526590 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="97ec2c8c-e32c-4d18-ad78-0ef1f19557af" containerName="extract-content" Feb 16 21:05:36.526910 master-0 kubenswrapper[7926]: I0216 21:05:36.526602 7926 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965" containerName="installer" Feb 16 21:05:36.526910 master-0 kubenswrapper[7926]: I0216 21:05:36.526613 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="a29a1022-5f54-49a2-99f6-d19eb2773890" containerName="installer" Feb 16 21:05:36.527683 master-0 kubenswrapper[7926]: I0216 21:05:36.527661 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-69wj8" Feb 16 21:05:36.529827 master-0 kubenswrapper[7926]: I0216 21:05:36.529798 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-knxzz" Feb 16 21:05:36.539690 master-0 kubenswrapper[7926]: I0216 21:05:36.539593 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-69wj8"] Feb 16 21:05:36.576154 master-0 kubenswrapper[7926]: I0216 21:05:36.576080 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8d648c7-b84b-4f43-84c9-903aead0891a-utilities\") pod \"redhat-operators-69wj8\" (UID: \"d8d648c7-b84b-4f43-84c9-903aead0891a\") " pod="openshift-marketplace/redhat-operators-69wj8" Feb 16 21:05:36.576154 master-0 kubenswrapper[7926]: I0216 21:05:36.576159 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq9c5\" (UniqueName: \"kubernetes.io/projected/d8d648c7-b84b-4f43-84c9-903aead0891a-kube-api-access-nq9c5\") pod \"redhat-operators-69wj8\" (UID: \"d8d648c7-b84b-4f43-84c9-903aead0891a\") " pod="openshift-marketplace/redhat-operators-69wj8" Feb 16 21:05:36.576572 master-0 kubenswrapper[7926]: I0216 21:05:36.576187 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d8d648c7-b84b-4f43-84c9-903aead0891a-catalog-content\") pod \"redhat-operators-69wj8\" (UID: \"d8d648c7-b84b-4f43-84c9-903aead0891a\") " pod="openshift-marketplace/redhat-operators-69wj8" Feb 16 21:05:36.578962 master-0 kubenswrapper[7926]: I0216 21:05:36.578863 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dhh2p" Feb 16 21:05:36.677224 master-0 kubenswrapper[7926]: I0216 21:05:36.677158 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq9c5\" (UniqueName: \"kubernetes.io/projected/d8d648c7-b84b-4f43-84c9-903aead0891a-kube-api-access-nq9c5\") pod \"redhat-operators-69wj8\" (UID: \"d8d648c7-b84b-4f43-84c9-903aead0891a\") " pod="openshift-marketplace/redhat-operators-69wj8" Feb 16 21:05:36.677224 master-0 kubenswrapper[7926]: I0216 21:05:36.677224 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8d648c7-b84b-4f43-84c9-903aead0891a-catalog-content\") pod \"redhat-operators-69wj8\" (UID: \"d8d648c7-b84b-4f43-84c9-903aead0891a\") " pod="openshift-marketplace/redhat-operators-69wj8" Feb 16 21:05:36.677454 master-0 kubenswrapper[7926]: I0216 21:05:36.677273 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8d648c7-b84b-4f43-84c9-903aead0891a-utilities\") pod \"redhat-operators-69wj8\" (UID: \"d8d648c7-b84b-4f43-84c9-903aead0891a\") " pod="openshift-marketplace/redhat-operators-69wj8" Feb 16 21:05:36.677802 master-0 kubenswrapper[7926]: I0216 21:05:36.677731 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8d648c7-b84b-4f43-84c9-903aead0891a-catalog-content\") pod \"redhat-operators-69wj8\" (UID: \"d8d648c7-b84b-4f43-84c9-903aead0891a\") " 
pod="openshift-marketplace/redhat-operators-69wj8" Feb 16 21:05:36.678035 master-0 kubenswrapper[7926]: I0216 21:05:36.678006 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8d648c7-b84b-4f43-84c9-903aead0891a-utilities\") pod \"redhat-operators-69wj8\" (UID: \"d8d648c7-b84b-4f43-84c9-903aead0891a\") " pod="openshift-marketplace/redhat-operators-69wj8" Feb 16 21:05:36.708022 master-0 kubenswrapper[7926]: I0216 21:05:36.707948 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b8vtc"] Feb 16 21:05:36.708274 master-0 kubenswrapper[7926]: I0216 21:05:36.708218 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b8vtc" podUID="03593410-baa5-4edb-9d73-242a74f82987" containerName="registry-server" containerID="cri-o://8e0e50669492b5f9ec136f40683d2f5428911200fadad457035b839b19231f7d" gracePeriod=2 Feb 16 21:05:36.709686 master-0 kubenswrapper[7926]: I0216 21:05:36.709627 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq9c5\" (UniqueName: \"kubernetes.io/projected/d8d648c7-b84b-4f43-84c9-903aead0891a-kube-api-access-nq9c5\") pod \"redhat-operators-69wj8\" (UID: \"d8d648c7-b84b-4f43-84c9-903aead0891a\") " pod="openshift-marketplace/redhat-operators-69wj8" Feb 16 21:05:36.779065 master-0 kubenswrapper[7926]: I0216 21:05:36.778954 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9566b108-44e1-4d9e-8984-4c396dc4408c-utilities\") pod \"9566b108-44e1-4d9e-8984-4c396dc4408c\" (UID: \"9566b108-44e1-4d9e-8984-4c396dc4408c\") " Feb 16 21:05:36.779306 master-0 kubenswrapper[7926]: I0216 21:05:36.779140 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j92dc\" (UniqueName: 
\"kubernetes.io/projected/9566b108-44e1-4d9e-8984-4c396dc4408c-kube-api-access-j92dc\") pod \"9566b108-44e1-4d9e-8984-4c396dc4408c\" (UID: \"9566b108-44e1-4d9e-8984-4c396dc4408c\") " Feb 16 21:05:36.779306 master-0 kubenswrapper[7926]: I0216 21:05:36.779213 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9566b108-44e1-4d9e-8984-4c396dc4408c-catalog-content\") pod \"9566b108-44e1-4d9e-8984-4c396dc4408c\" (UID: \"9566b108-44e1-4d9e-8984-4c396dc4408c\") " Feb 16 21:05:36.779961 master-0 kubenswrapper[7926]: I0216 21:05:36.779900 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9566b108-44e1-4d9e-8984-4c396dc4408c-utilities" (OuterVolumeSpecName: "utilities") pod "9566b108-44e1-4d9e-8984-4c396dc4408c" (UID: "9566b108-44e1-4d9e-8984-4c396dc4408c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:05:36.782463 master-0 kubenswrapper[7926]: I0216 21:05:36.781790 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9566b108-44e1-4d9e-8984-4c396dc4408c-kube-api-access-j92dc" (OuterVolumeSpecName: "kube-api-access-j92dc") pod "9566b108-44e1-4d9e-8984-4c396dc4408c" (UID: "9566b108-44e1-4d9e-8984-4c396dc4408c"). InnerVolumeSpecName "kube-api-access-j92dc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:05:36.881051 master-0 kubenswrapper[7926]: I0216 21:05:36.880786 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j92dc\" (UniqueName: \"kubernetes.io/projected/9566b108-44e1-4d9e-8984-4c396dc4408c-kube-api-access-j92dc\") on node \"master-0\" DevicePath \"\"" Feb 16 21:05:36.881051 master-0 kubenswrapper[7926]: I0216 21:05:36.880859 7926 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9566b108-44e1-4d9e-8984-4c396dc4408c-utilities\") on node \"master-0\" DevicePath \"\"" Feb 16 21:05:36.897888 master-0 kubenswrapper[7926]: I0216 21:05:36.896184 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-69wj8" Feb 16 21:05:36.938629 master-0 kubenswrapper[7926]: I0216 21:05:36.938558 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9566b108-44e1-4d9e-8984-4c396dc4408c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9566b108-44e1-4d9e-8984-4c396dc4408c" (UID: "9566b108-44e1-4d9e-8984-4c396dc4408c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:05:36.981412 master-0 kubenswrapper[7926]: I0216 21:05:36.981359 7926 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9566b108-44e1-4d9e-8984-4c396dc4408c-catalog-content\") on node \"master-0\" DevicePath \"\"" Feb 16 21:05:37.094156 master-0 kubenswrapper[7926]: I0216 21:05:37.094096 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" Feb 16 21:05:37.095770 master-0 kubenswrapper[7926]: I0216 21:05:37.095738 7926 scope.go:117] "RemoveContainer" containerID="8fdaced2e29680218985b0af6c01e1d1666c4413685a11533b854af5a3b4a954" Feb 16 21:05:37.096027 master-0 kubenswrapper[7926]: E0216 21:05:37.095991 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=route-controller-manager pod=route-controller-manager-749ccd9c56-wzsnf_openshift-route-controller-manager(4db59450-da78-4879-ada8-ca3fc49fb7a7)\"" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" Feb 16 21:05:37.122773 master-0 kubenswrapper[7926]: I0216 21:05:37.122678 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-blw8x"] Feb 16 21:05:37.123220 master-0 kubenswrapper[7926]: E0216 21:05:37.123183 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9566b108-44e1-4d9e-8984-4c396dc4408c" containerName="extract-content" Feb 16 21:05:37.123220 master-0 kubenswrapper[7926]: I0216 21:05:37.123213 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="9566b108-44e1-4d9e-8984-4c396dc4408c" containerName="extract-content" Feb 16 21:05:37.123314 master-0 kubenswrapper[7926]: E0216 21:05:37.123257 7926 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9566b108-44e1-4d9e-8984-4c396dc4408c" containerName="registry-server" Feb 16 21:05:37.123314 master-0 kubenswrapper[7926]: I0216 21:05:37.123267 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="9566b108-44e1-4d9e-8984-4c396dc4408c" containerName="registry-server" Feb 16 21:05:37.123314 master-0 kubenswrapper[7926]: E0216 21:05:37.123284 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9566b108-44e1-4d9e-8984-4c396dc4408c" containerName="extract-utilities" Feb 16 21:05:37.123314 master-0 kubenswrapper[7926]: I0216 21:05:37.123293 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="9566b108-44e1-4d9e-8984-4c396dc4408c" containerName="extract-utilities" Feb 16 21:05:37.123444 master-0 kubenswrapper[7926]: I0216 21:05:37.123410 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="9566b108-44e1-4d9e-8984-4c396dc4408c" containerName="registry-server" Feb 16 21:05:37.124803 master-0 kubenswrapper[7926]: I0216 21:05:37.124773 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-blw8x" Feb 16 21:05:37.126984 master-0 kubenswrapper[7926]: I0216 21:05:37.126928 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-mz2hl" Feb 16 21:05:37.134343 master-0 kubenswrapper[7926]: I0216 21:05:37.134301 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-blw8x"] Feb 16 21:05:37.151132 master-0 kubenswrapper[7926]: I0216 21:05:37.151088 7926 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b8vtc" Feb 16 21:05:37.183366 master-0 kubenswrapper[7926]: I0216 21:05:37.183312 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/853452fb-1035-4f22-8aeb-9043d150e8ca-utilities\") pod \"certified-operators-blw8x\" (UID: \"853452fb-1035-4f22-8aeb-9043d150e8ca\") " pod="openshift-marketplace/certified-operators-blw8x" Feb 16 21:05:37.183366 master-0 kubenswrapper[7926]: I0216 21:05:37.183360 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/853452fb-1035-4f22-8aeb-9043d150e8ca-catalog-content\") pod \"certified-operators-blw8x\" (UID: \"853452fb-1035-4f22-8aeb-9043d150e8ca\") " pod="openshift-marketplace/certified-operators-blw8x" Feb 16 21:05:37.183635 master-0 kubenswrapper[7926]: I0216 21:05:37.183383 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqkgp\" (UniqueName: \"kubernetes.io/projected/853452fb-1035-4f22-8aeb-9043d150e8ca-kube-api-access-zqkgp\") pod \"certified-operators-blw8x\" (UID: \"853452fb-1035-4f22-8aeb-9043d150e8ca\") " pod="openshift-marketplace/certified-operators-blw8x" Feb 16 21:05:37.273424 master-0 kubenswrapper[7926]: I0216 21:05:37.273324 7926 generic.go:334] "Generic (PLEG): container finished" podID="03593410-baa5-4edb-9d73-242a74f82987" containerID="8e0e50669492b5f9ec136f40683d2f5428911200fadad457035b839b19231f7d" exitCode=0 Feb 16 21:05:37.273424 master-0 kubenswrapper[7926]: I0216 21:05:37.273413 7926 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b8vtc" Feb 16 21:05:37.274010 master-0 kubenswrapper[7926]: I0216 21:05:37.273442 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8vtc" event={"ID":"03593410-baa5-4edb-9d73-242a74f82987","Type":"ContainerDied","Data":"8e0e50669492b5f9ec136f40683d2f5428911200fadad457035b839b19231f7d"} Feb 16 21:05:37.274010 master-0 kubenswrapper[7926]: I0216 21:05:37.273504 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b8vtc" event={"ID":"03593410-baa5-4edb-9d73-242a74f82987","Type":"ContainerDied","Data":"bcbf76c12e0a665429c3b7495c8c421337d9ebf01882b382cef96d39701094b1"} Feb 16 21:05:37.274010 master-0 kubenswrapper[7926]: I0216 21:05:37.273530 7926 scope.go:117] "RemoveContainer" containerID="8e0e50669492b5f9ec136f40683d2f5428911200fadad457035b839b19231f7d" Feb 16 21:05:37.280523 master-0 kubenswrapper[7926]: I0216 21:05:37.280476 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhh2p" event={"ID":"9566b108-44e1-4d9e-8984-4c396dc4408c","Type":"ContainerDied","Data":"3e97182ddf5896a5823851bd32b3058169dbc5cfba0d9d88f02cc81a737767a7"} Feb 16 21:05:37.280690 master-0 kubenswrapper[7926]: I0216 21:05:37.280573 7926 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dhh2p" Feb 16 21:05:37.285364 master-0 kubenswrapper[7926]: I0216 21:05:37.285272 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03593410-baa5-4edb-9d73-242a74f82987-utilities\") pod \"03593410-baa5-4edb-9d73-242a74f82987\" (UID: \"03593410-baa5-4edb-9d73-242a74f82987\") " Feb 16 21:05:37.285492 master-0 kubenswrapper[7926]: I0216 21:05:37.285383 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jppbr\" (UniqueName: \"kubernetes.io/projected/03593410-baa5-4edb-9d73-242a74f82987-kube-api-access-jppbr\") pod \"03593410-baa5-4edb-9d73-242a74f82987\" (UID: \"03593410-baa5-4edb-9d73-242a74f82987\") " Feb 16 21:05:37.285492 master-0 kubenswrapper[7926]: I0216 21:05:37.285463 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03593410-baa5-4edb-9d73-242a74f82987-catalog-content\") pod \"03593410-baa5-4edb-9d73-242a74f82987\" (UID: \"03593410-baa5-4edb-9d73-242a74f82987\") " Feb 16 21:05:37.285676 master-0 kubenswrapper[7926]: I0216 21:05:37.285634 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/853452fb-1035-4f22-8aeb-9043d150e8ca-utilities\") pod \"certified-operators-blw8x\" (UID: \"853452fb-1035-4f22-8aeb-9043d150e8ca\") " pod="openshift-marketplace/certified-operators-blw8x" Feb 16 21:05:37.285767 master-0 kubenswrapper[7926]: I0216 21:05:37.285686 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/853452fb-1035-4f22-8aeb-9043d150e8ca-catalog-content\") pod \"certified-operators-blw8x\" (UID: \"853452fb-1035-4f22-8aeb-9043d150e8ca\") " pod="openshift-marketplace/certified-operators-blw8x" Feb 16 
21:05:37.285767 master-0 kubenswrapper[7926]: I0216 21:05:37.285708 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqkgp\" (UniqueName: \"kubernetes.io/projected/853452fb-1035-4f22-8aeb-9043d150e8ca-kube-api-access-zqkgp\") pod \"certified-operators-blw8x\" (UID: \"853452fb-1035-4f22-8aeb-9043d150e8ca\") " pod="openshift-marketplace/certified-operators-blw8x" Feb 16 21:05:37.286849 master-0 kubenswrapper[7926]: I0216 21:05:37.286801 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03593410-baa5-4edb-9d73-242a74f82987-utilities" (OuterVolumeSpecName: "utilities") pod "03593410-baa5-4edb-9d73-242a74f82987" (UID: "03593410-baa5-4edb-9d73-242a74f82987"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:05:37.287297 master-0 kubenswrapper[7926]: I0216 21:05:37.287251 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/853452fb-1035-4f22-8aeb-9043d150e8ca-catalog-content\") pod \"certified-operators-blw8x\" (UID: \"853452fb-1035-4f22-8aeb-9043d150e8ca\") " pod="openshift-marketplace/certified-operators-blw8x" Feb 16 21:05:37.287418 master-0 kubenswrapper[7926]: I0216 21:05:37.287264 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/853452fb-1035-4f22-8aeb-9043d150e8ca-utilities\") pod \"certified-operators-blw8x\" (UID: \"853452fb-1035-4f22-8aeb-9043d150e8ca\") " pod="openshift-marketplace/certified-operators-blw8x" Feb 16 21:05:37.290353 master-0 kubenswrapper[7926]: I0216 21:05:37.290308 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03593410-baa5-4edb-9d73-242a74f82987-kube-api-access-jppbr" (OuterVolumeSpecName: "kube-api-access-jppbr") pod "03593410-baa5-4edb-9d73-242a74f82987" (UID: 
"03593410-baa5-4edb-9d73-242a74f82987"). InnerVolumeSpecName "kube-api-access-jppbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:05:37.290708 master-0 kubenswrapper[7926]: I0216 21:05:37.290680 7926 scope.go:117] "RemoveContainer" containerID="b19589dbb6d4f7d3e5399c99620d53a3620f890047844d988256937f57f518e8" Feb 16 21:05:37.303933 master-0 kubenswrapper[7926]: I0216 21:05:37.303883 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqkgp\" (UniqueName: \"kubernetes.io/projected/853452fb-1035-4f22-8aeb-9043d150e8ca-kube-api-access-zqkgp\") pod \"certified-operators-blw8x\" (UID: \"853452fb-1035-4f22-8aeb-9043d150e8ca\") " pod="openshift-marketplace/certified-operators-blw8x" Feb 16 21:05:37.318731 master-0 kubenswrapper[7926]: I0216 21:05:37.318543 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-69wj8"] Feb 16 21:05:37.324822 master-0 kubenswrapper[7926]: W0216 21:05:37.324777 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8d648c7_b84b_4f43_84c9_903aead0891a.slice/crio-385456702c716ef5052af7ff4f8c1f6423867ff9037ec0352d3bef2843cc7641 WatchSource:0}: Error finding container 385456702c716ef5052af7ff4f8c1f6423867ff9037ec0352d3bef2843cc7641: Status 404 returned error can't find the container with id 385456702c716ef5052af7ff4f8c1f6423867ff9037ec0352d3bef2843cc7641 Feb 16 21:05:37.349907 master-0 kubenswrapper[7926]: I0216 21:05:37.349839 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03593410-baa5-4edb-9d73-242a74f82987-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03593410-baa5-4edb-9d73-242a74f82987" (UID: "03593410-baa5-4edb-9d73-242a74f82987"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:05:37.386350 master-0 kubenswrapper[7926]: I0216 21:05:37.386318 7926 scope.go:117] "RemoveContainer" containerID="df640a25b3ddb3199360ab01328f62e3d346f3e50e79a2d6fa8fbf82c9ea5172" Feb 16 21:05:37.387560 master-0 kubenswrapper[7926]: I0216 21:05:37.387452 7926 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03593410-baa5-4edb-9d73-242a74f82987-catalog-content\") on node \"master-0\" DevicePath \"\"" Feb 16 21:05:37.387668 master-0 kubenswrapper[7926]: I0216 21:05:37.387636 7926 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03593410-baa5-4edb-9d73-242a74f82987-utilities\") on node \"master-0\" DevicePath \"\"" Feb 16 21:05:37.387738 master-0 kubenswrapper[7926]: I0216 21:05:37.387727 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jppbr\" (UniqueName: \"kubernetes.io/projected/03593410-baa5-4edb-9d73-242a74f82987-kube-api-access-jppbr\") on node \"master-0\" DevicePath \"\"" Feb 16 21:05:37.410173 master-0 kubenswrapper[7926]: I0216 21:05:37.410150 7926 scope.go:117] "RemoveContainer" containerID="8e0e50669492b5f9ec136f40683d2f5428911200fadad457035b839b19231f7d" Feb 16 21:05:37.410780 master-0 kubenswrapper[7926]: E0216 21:05:37.410726 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e0e50669492b5f9ec136f40683d2f5428911200fadad457035b839b19231f7d\": container with ID starting with 8e0e50669492b5f9ec136f40683d2f5428911200fadad457035b839b19231f7d not found: ID does not exist" containerID="8e0e50669492b5f9ec136f40683d2f5428911200fadad457035b839b19231f7d" Feb 16 21:05:37.410849 master-0 kubenswrapper[7926]: I0216 21:05:37.410785 7926 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8e0e50669492b5f9ec136f40683d2f5428911200fadad457035b839b19231f7d"} err="failed to get container status \"8e0e50669492b5f9ec136f40683d2f5428911200fadad457035b839b19231f7d\": rpc error: code = NotFound desc = could not find container \"8e0e50669492b5f9ec136f40683d2f5428911200fadad457035b839b19231f7d\": container with ID starting with 8e0e50669492b5f9ec136f40683d2f5428911200fadad457035b839b19231f7d not found: ID does not exist" Feb 16 21:05:37.410849 master-0 kubenswrapper[7926]: I0216 21:05:37.410810 7926 scope.go:117] "RemoveContainer" containerID="b19589dbb6d4f7d3e5399c99620d53a3620f890047844d988256937f57f518e8" Feb 16 21:05:37.411207 master-0 kubenswrapper[7926]: E0216 21:05:37.411182 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b19589dbb6d4f7d3e5399c99620d53a3620f890047844d988256937f57f518e8\": container with ID starting with b19589dbb6d4f7d3e5399c99620d53a3620f890047844d988256937f57f518e8 not found: ID does not exist" containerID="b19589dbb6d4f7d3e5399c99620d53a3620f890047844d988256937f57f518e8" Feb 16 21:05:37.411313 master-0 kubenswrapper[7926]: I0216 21:05:37.411290 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b19589dbb6d4f7d3e5399c99620d53a3620f890047844d988256937f57f518e8"} err="failed to get container status \"b19589dbb6d4f7d3e5399c99620d53a3620f890047844d988256937f57f518e8\": rpc error: code = NotFound desc = could not find container \"b19589dbb6d4f7d3e5399c99620d53a3620f890047844d988256937f57f518e8\": container with ID starting with b19589dbb6d4f7d3e5399c99620d53a3620f890047844d988256937f57f518e8 not found: ID does not exist" Feb 16 21:05:37.411389 master-0 kubenswrapper[7926]: I0216 21:05:37.411371 7926 scope.go:117] "RemoveContainer" containerID="df640a25b3ddb3199360ab01328f62e3d346f3e50e79a2d6fa8fbf82c9ea5172" Feb 16 21:05:37.411894 master-0 kubenswrapper[7926]: E0216 
21:05:37.411877 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df640a25b3ddb3199360ab01328f62e3d346f3e50e79a2d6fa8fbf82c9ea5172\": container with ID starting with df640a25b3ddb3199360ab01328f62e3d346f3e50e79a2d6fa8fbf82c9ea5172 not found: ID does not exist" containerID="df640a25b3ddb3199360ab01328f62e3d346f3e50e79a2d6fa8fbf82c9ea5172" Feb 16 21:05:37.411980 master-0 kubenswrapper[7926]: I0216 21:05:37.411958 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df640a25b3ddb3199360ab01328f62e3d346f3e50e79a2d6fa8fbf82c9ea5172"} err="failed to get container status \"df640a25b3ddb3199360ab01328f62e3d346f3e50e79a2d6fa8fbf82c9ea5172\": rpc error: code = NotFound desc = could not find container \"df640a25b3ddb3199360ab01328f62e3d346f3e50e79a2d6fa8fbf82c9ea5172\": container with ID starting with df640a25b3ddb3199360ab01328f62e3d346f3e50e79a2d6fa8fbf82c9ea5172 not found: ID does not exist" Feb 16 21:05:37.412053 master-0 kubenswrapper[7926]: I0216 21:05:37.412041 7926 scope.go:117] "RemoveContainer" containerID="17cb30ab353a8c5e6ca279c7628b3d05fccf6b6666e6fe10a816ce650b15966b" Feb 16 21:05:37.422597 master-0 kubenswrapper[7926]: I0216 21:05:37.422530 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dhh2p"] Feb 16 21:05:37.425538 master-0 kubenswrapper[7926]: I0216 21:05:37.425353 7926 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dhh2p"] Feb 16 21:05:37.437536 master-0 kubenswrapper[7926]: I0216 21:05:37.437429 7926 scope.go:117] "RemoveContainer" containerID="c2ff8942463d287b82bf327999961ebd9e5c05160f4d3f6df586170d3bfafe1a" Feb 16 21:05:37.455106 master-0 kubenswrapper[7926]: I0216 21:05:37.455021 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-blw8x" Feb 16 21:05:37.460925 master-0 kubenswrapper[7926]: I0216 21:05:37.460384 7926 scope.go:117] "RemoveContainer" containerID="e4407304a8565029141f8bd91a4f0c4e3f383f6d77ed1524d0cd3a581fa9e7f7" Feb 16 21:05:37.605712 master-0 kubenswrapper[7926]: I0216 21:05:37.605519 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b8vtc"] Feb 16 21:05:37.614585 master-0 kubenswrapper[7926]: I0216 21:05:37.614541 7926 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b8vtc"] Feb 16 21:05:37.679393 master-0 kubenswrapper[7926]: I0216 21:05:37.679319 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-blw8x"] Feb 16 21:05:37.684966 master-0 kubenswrapper[7926]: W0216 21:05:37.684887 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod853452fb_1035_4f22_8aeb_9043d150e8ca.slice/crio-89fb595810896fd574764c1b2babfd4babc84a77caf787d5018047df10f3ac86 WatchSource:0}: Error finding container 89fb595810896fd574764c1b2babfd4babc84a77caf787d5018047df10f3ac86: Status 404 returned error can't find the container with id 89fb595810896fd574764c1b2babfd4babc84a77caf787d5018047df10f3ac86 Feb 16 21:05:38.287262 master-0 kubenswrapper[7926]: I0216 21:05:38.287175 7926 generic.go:334] "Generic (PLEG): container finished" podID="853452fb-1035-4f22-8aeb-9043d150e8ca" containerID="8b2a92ef4f9f721811b4bae1b0d025f01e55ec1f259a078142245e8b2ab55dd5" exitCode=0 Feb 16 21:05:38.287262 master-0 kubenswrapper[7926]: I0216 21:05:38.287264 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blw8x" event={"ID":"853452fb-1035-4f22-8aeb-9043d150e8ca","Type":"ContainerDied","Data":"8b2a92ef4f9f721811b4bae1b0d025f01e55ec1f259a078142245e8b2ab55dd5"} Feb 16 21:05:38.288199 
master-0 kubenswrapper[7926]: I0216 21:05:38.287299 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blw8x" event={"ID":"853452fb-1035-4f22-8aeb-9043d150e8ca","Type":"ContainerStarted","Data":"89fb595810896fd574764c1b2babfd4babc84a77caf787d5018047df10f3ac86"} Feb 16 21:05:38.292535 master-0 kubenswrapper[7926]: I0216 21:05:38.292478 7926 generic.go:334] "Generic (PLEG): container finished" podID="d8d648c7-b84b-4f43-84c9-903aead0891a" containerID="fa3ed852335cb1ddfb20c47ba698ccaa6874c674cd87c8ada57d89856c7d37fd" exitCode=0 Feb 16 21:05:38.292535 master-0 kubenswrapper[7926]: I0216 21:05:38.292524 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69wj8" event={"ID":"d8d648c7-b84b-4f43-84c9-903aead0891a","Type":"ContainerDied","Data":"fa3ed852335cb1ddfb20c47ba698ccaa6874c674cd87c8ada57d89856c7d37fd"} Feb 16 21:05:38.292810 master-0 kubenswrapper[7926]: I0216 21:05:38.292556 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69wj8" event={"ID":"d8d648c7-b84b-4f43-84c9-903aead0891a","Type":"ContainerStarted","Data":"385456702c716ef5052af7ff4f8c1f6423867ff9037ec0352d3bef2843cc7641"} Feb 16 21:05:38.750425 master-0 kubenswrapper[7926]: I0216 21:05:38.750358 7926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03593410-baa5-4edb-9d73-242a74f82987" path="/var/lib/kubelet/pods/03593410-baa5-4edb-9d73-242a74f82987/volumes" Feb 16 21:05:38.751364 master-0 kubenswrapper[7926]: I0216 21:05:38.751337 7926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9566b108-44e1-4d9e-8984-4c396dc4408c" path="/var/lib/kubelet/pods/9566b108-44e1-4d9e-8984-4c396dc4408c/volumes" Feb 16 21:05:39.302472 master-0 kubenswrapper[7926]: I0216 21:05:39.302398 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blw8x" 
event={"ID":"853452fb-1035-4f22-8aeb-9043d150e8ca","Type":"ContainerStarted","Data":"ffbe844a2ffc7eee14e6cfe4f85b6f3a2d4632e0cd257a400a32c1667a3dc025"} Feb 16 21:05:39.305365 master-0 kubenswrapper[7926]: I0216 21:05:39.305313 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69wj8" event={"ID":"d8d648c7-b84b-4f43-84c9-903aead0891a","Type":"ContainerStarted","Data":"8510067c1b5f7cbc40f7c23faf036a1b9404f3ea036ff9582a8f6c06389e7238"} Feb 16 21:05:40.316678 master-0 kubenswrapper[7926]: I0216 21:05:40.316584 7926 generic.go:334] "Generic (PLEG): container finished" podID="d8d648c7-b84b-4f43-84c9-903aead0891a" containerID="8510067c1b5f7cbc40f7c23faf036a1b9404f3ea036ff9582a8f6c06389e7238" exitCode=0 Feb 16 21:05:40.317411 master-0 kubenswrapper[7926]: I0216 21:05:40.316682 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69wj8" event={"ID":"d8d648c7-b84b-4f43-84c9-903aead0891a","Type":"ContainerDied","Data":"8510067c1b5f7cbc40f7c23faf036a1b9404f3ea036ff9582a8f6c06389e7238"} Feb 16 21:05:40.323345 master-0 kubenswrapper[7926]: I0216 21:05:40.321709 7926 generic.go:334] "Generic (PLEG): container finished" podID="853452fb-1035-4f22-8aeb-9043d150e8ca" containerID="ffbe844a2ffc7eee14e6cfe4f85b6f3a2d4632e0cd257a400a32c1667a3dc025" exitCode=0 Feb 16 21:05:40.323345 master-0 kubenswrapper[7926]: I0216 21:05:40.321757 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blw8x" event={"ID":"853452fb-1035-4f22-8aeb-9043d150e8ca","Type":"ContainerDied","Data":"ffbe844a2ffc7eee14e6cfe4f85b6f3a2d4632e0cd257a400a32c1667a3dc025"} Feb 16 21:05:41.328617 master-0 kubenswrapper[7926]: I0216 21:05:41.328572 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69wj8" 
event={"ID":"d8d648c7-b84b-4f43-84c9-903aead0891a","Type":"ContainerStarted","Data":"86c3b8b66a0663232311a42e0fdf88ea8134666f5448e623a713c72172a5c7cb"} Feb 16 21:05:41.330463 master-0 kubenswrapper[7926]: I0216 21:05:41.330425 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blw8x" event={"ID":"853452fb-1035-4f22-8aeb-9043d150e8ca","Type":"ContainerStarted","Data":"a8ce4d1d9c38bbcf9596ec468f2d5d035d849fec8079c99788efd1e0bbd3eacd"} Feb 16 21:05:41.362002 master-0 kubenswrapper[7926]: I0216 21:05:41.361945 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-69wj8" podStartSLOduration=2.951878861 podStartE2EDuration="5.361929142s" podCreationTimestamp="2026-02-16 21:05:36 +0000 UTC" firstStartedPulling="2026-02-16 21:05:38.293768384 +0000 UTC m=+509.928668684" lastFinishedPulling="2026-02-16 21:05:40.703818665 +0000 UTC m=+512.338718965" observedRunningTime="2026-02-16 21:05:41.359557495 +0000 UTC m=+512.994457805" watchObservedRunningTime="2026-02-16 21:05:41.361929142 +0000 UTC m=+512.996829442" Feb 16 21:05:41.378154 master-0 kubenswrapper[7926]: I0216 21:05:41.378065 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-blw8x" podStartSLOduration=1.80900774 podStartE2EDuration="4.378044875s" podCreationTimestamp="2026-02-16 21:05:37 +0000 UTC" firstStartedPulling="2026-02-16 21:05:38.288953248 +0000 UTC m=+509.923853598" lastFinishedPulling="2026-02-16 21:05:40.857990433 +0000 UTC m=+512.492890733" observedRunningTime="2026-02-16 21:05:41.375776201 +0000 UTC m=+513.010676541" watchObservedRunningTime="2026-02-16 21:05:41.378044875 +0000 UTC m=+513.012945195" Feb 16 21:05:43.738363 master-0 kubenswrapper[7926]: I0216 21:05:43.738326 7926 scope.go:117] "RemoveContainer" containerID="6a7d7b13e17869969e9d31d79faa72dfb3a8d8453f67a2323e3dc0a1300a1e65" Feb 16 21:05:43.739232 master-0 
kubenswrapper[7926]: E0216 21:05:43.739203 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler-operator-container pod=openshift-kube-scheduler-operator-7485d55966-xzww8_openshift-kube-scheduler-operator(e7adbe32-b8b9-438e-a2e3-f93146a97424)\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" podUID="e7adbe32-b8b9-438e-a2e3-f93146a97424" Feb 16 21:05:44.739424 master-0 kubenswrapper[7926]: I0216 21:05:44.739358 7926 scope.go:117] "RemoveContainer" containerID="b805375f7b42f31b0863c18246ff6bd98c4c77aa1ad1eb2b469a42772d48301d" Feb 16 21:05:44.739926 master-0 kubenswrapper[7926]: I0216 21:05:44.739700 7926 scope.go:117] "RemoveContainer" containerID="1280026270fafbe7904a661cf88a10d4f267040cb7cc3fb07ffaa22fce0b7d32" Feb 16 21:05:44.739926 master-0 kubenswrapper[7926]: I0216 21:05:44.739835 7926 scope.go:117] "RemoveContainer" containerID="715050d13195531641370ad04c7754b8cef8bb72e0896de25aaafb35a02054c9" Feb 16 21:05:44.740050 master-0 kubenswrapper[7926]: E0216 21:05:44.739997 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-olm-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-olm-operator pod=cluster-olm-operator-55b69c6c48-pdjn4_openshift-cluster-olm-operator(5e062e07-8076-444c-b476-4eb2848e9613)\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" podUID="5e062e07-8076-444c-b476-4eb2848e9613" Feb 16 21:05:44.740050 master-0 kubenswrapper[7926]: E0216 21:05:44.740022 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator 
pod=authentication-operator-755d954778-8gnq5_openshift-authentication-operator(27c20f63-9bfb-4703-94d5-0c65475e08d1)\"" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1" Feb 16 21:05:44.740308 master-0 kubenswrapper[7926]: E0216 21:05:44.740257 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=service-ca-operator pod=service-ca-operator-5dc4688546-q5vjl_openshift-service-ca-operator(2ab0a907-7abe-4808-ba21-bdda1506eae2)\"" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" podUID="2ab0a907-7abe-4808-ba21-bdda1506eae2" Feb 16 21:05:44.740715 master-0 kubenswrapper[7926]: I0216 21:05:44.740690 7926 scope.go:117] "RemoveContainer" containerID="35ed53f7c30fa9921f8cd975c0172c21b8f110abc5d358e84c90a7ea7b1226a7" Feb 16 21:05:44.740966 master-0 kubenswrapper[7926]: E0216 21:05:44.740934 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager-operator pod=kube-controller-manager-operator-78ff47c7c5-7p9ft_openshift-kube-controller-manager-operator(7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" podUID="7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e" Feb 16 21:05:45.739729 master-0 kubenswrapper[7926]: I0216 21:05:45.739033 7926 scope.go:117] "RemoveContainer" containerID="11a0f236b15a97d8bb8db30a3ecfba40559eb738b2fbad78fcc9824a0ec8620e" Feb 16 21:05:45.739729 master-0 kubenswrapper[7926]: I0216 21:05:45.739582 7926 scope.go:117] "RemoveContainer" containerID="08b199e651bbf31337e0e421513ddb4e42db3e1be0a3d07452f74ea9c1f46046" Feb 16 21:05:45.740227 master-0 kubenswrapper[7926]: E0216 21:05:45.739917 7926 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-storage-version-migrator-operator pod=kube-storage-version-migrator-operator-cd5474998-56v4p_openshift-kube-storage-version-migrator-operator(c7333319-3fe6-4b3f-b600-6b6df49fcaff)\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" podUID="c7333319-3fe6-4b3f-b600-6b6df49fcaff" Feb 16 21:05:45.740227 master-0 kubenswrapper[7926]: I0216 21:05:45.740151 7926 scope.go:117] "RemoveContainer" containerID="9aebe89f00ace7757c9f12dc1f4359a915f84e8eb395e1cdeae0962c4475a4af" Feb 16 21:05:45.740613 master-0 kubenswrapper[7926]: E0216 21:05:45.740529 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-54984b6678-cl5ld_openshift-kube-apiserver-operator(0b02b740-5698-4e9a-90fe-2873bd0b0958)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" podUID="0b02b740-5698-4e9a-90fe-2873bd0b0958" Feb 16 21:05:46.362582 master-0 kubenswrapper[7926]: I0216 21:05:46.362545 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-676cd8b9b5-cbj2r_99ab949e-bd0d-45a7-95d1-8381d9f1f5f3/service-ca-controller/1.log" Feb 16 21:05:46.362801 master-0 kubenswrapper[7926]: I0216 21:05:46.362614 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" event={"ID":"99ab949e-bd0d-45a7-95d1-8381d9f1f5f3","Type":"ContainerStarted","Data":"5d2c22738802536774d55c1e4c6c8ed59ce5c575ebb78dadfcb7c71eb7f34d22"} Feb 16 21:05:46.739741 master-0 kubenswrapper[7926]: I0216 21:05:46.739596 7926 scope.go:117] "RemoveContainer" 
containerID="abce7c467580f27265b653bd89f53e6e0d6413f3687b039b9f58c8dd18d3f0ce" Feb 16 21:05:46.740173 master-0 kubenswrapper[7926]: I0216 21:05:46.739742 7926 scope.go:117] "RemoveContainer" containerID="98437a21e834f809a7d3a2fcc7ab7ac439c7d9370d526734b7d11f63840cb92d" Feb 16 21:05:46.740173 master-0 kubenswrapper[7926]: E0216 21:05:46.739946 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-5f5f84757d-k42w9_openshift-controller-manager-operator(695549c8-d1fc-429d-9c9f-0a5915dc6074)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" podUID="695549c8-d1fc-429d-9c9f-0a5915dc6074" Feb 16 21:05:46.740173 master-0 kubenswrapper[7926]: E0216 21:05:46.739964 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=network-operator pod=network-operator-6fcf4c966-n4hfs_openshift-network-operator(1b61063e-775e-421d-bf73-a6ef134293a0)\"" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" podUID="1b61063e-775e-421d-bf73-a6ef134293a0" Feb 16 21:05:46.897134 master-0 kubenswrapper[7926]: I0216 21:05:46.897078 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-69wj8" Feb 16 21:05:46.897134 master-0 kubenswrapper[7926]: I0216 21:05:46.897137 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-69wj8" Feb 16 21:05:47.456297 master-0 kubenswrapper[7926]: I0216 21:05:47.456230 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-blw8x" Feb 16 21:05:47.456297 master-0 
kubenswrapper[7926]: I0216 21:05:47.456299 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-blw8x" Feb 16 21:05:47.494480 master-0 kubenswrapper[7926]: I0216 21:05:47.494408 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-blw8x" Feb 16 21:05:47.738767 master-0 kubenswrapper[7926]: I0216 21:05:47.738447 7926 scope.go:117] "RemoveContainer" containerID="5bb447e9b562fe2a3fcb45b723cffb38257ea64157f142954fe58414909efdd3" Feb 16 21:05:47.738983 master-0 kubenswrapper[7926]: I0216 21:05:47.738944 7926 scope.go:117] "RemoveContainer" containerID="8fdaced2e29680218985b0af6c01e1d1666c4413685a11533b854af5a3b4a954" Feb 16 21:05:47.739083 master-0 kubenswrapper[7926]: I0216 21:05:47.739022 7926 scope.go:117] "RemoveContainer" containerID="9fbb3907b0a8154eba20d3a15a9c76d94a18ad3525cb12a7e4937b8969c5cb0d" Feb 16 21:05:47.740011 master-0 kubenswrapper[7926]: I0216 21:05:47.739189 7926 scope.go:117] "RemoveContainer" containerID="86b2625e01e86e20ad843cc517b662e8d0574773dfe24c22fbbf50abc8c0ea7f" Feb 16 21:05:47.740011 master-0 kubenswrapper[7926]: E0216 21:05:47.739322 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:05:47.740011 master-0 kubenswrapper[7926]: E0216 21:05:47.739396 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=route-controller-manager 
pod=route-controller-manager-749ccd9c56-wzsnf_openshift-route-controller-manager(4db59450-da78-4879-ada8-ca3fc49fb7a7)\"" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" Feb 16 21:05:47.740011 master-0 kubenswrapper[7926]: I0216 21:05:47.739410 7926 scope.go:117] "RemoveContainer" containerID="d0e5f8a907c4851af3bce655e141083b0f633fdfa41c5abacbb48a7df33f9e94" Feb 16 21:05:47.740011 master-0 kubenswrapper[7926]: E0216 21:05:47.739560 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-storage-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-storage-operator pod=cluster-storage-operator-75b869db96-g4w5m_openshift-cluster-storage-operator(aa2e9bbc-3962-45f5-a7cc-2dc059409e70)\"" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m" podUID="aa2e9bbc-3962-45f5-a7cc-2dc059409e70" Feb 16 21:05:47.740011 master-0 kubenswrapper[7926]: E0216 21:05:47.739870 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver-operator pod=openshift-apiserver-operator-6d4655d9cf-tvzdw_openshift-apiserver-operator(6b6be6de-6fcc-4f57-b163-fe8f970a01a4)\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw" podUID="6b6be6de-6fcc-4f57-b163-fe8f970a01a4" Feb 16 21:05:47.942733 master-0 kubenswrapper[7926]: I0216 21:05:47.942636 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-69wj8" podUID="d8d648c7-b84b-4f43-84c9-903aead0891a" containerName="registry-server" probeResult="failure" output=< Feb 16 21:05:47.942733 master-0 kubenswrapper[7926]: timeout: failed to connect service ":50051" within 1s Feb 16 21:05:47.942733 master-0 kubenswrapper[7926]: > Feb 16 
21:05:48.377004 master-0 kubenswrapper[7926]: I0216 21:05:48.376902 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-7b87b97578-v7xdv_4085413c-9af1-4d2a-ba0f-33b42025cb7f/csi-snapshot-controller-operator/2.log" Feb 16 21:05:48.378009 master-0 kubenswrapper[7926]: I0216 21:05:48.377959 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv" event={"ID":"4085413c-9af1-4d2a-ba0f-33b42025cb7f","Type":"ContainerStarted","Data":"a7f8b5655aa5f928db7106989ad4301d85bb293edb63d14ebb1059dcd9ca8910"} Feb 16 21:05:48.433906 master-0 kubenswrapper[7926]: I0216 21:05:48.433831 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-blw8x" Feb 16 21:05:56.739306 master-0 kubenswrapper[7926]: I0216 21:05:56.739226 7926 scope.go:117] "RemoveContainer" containerID="35ed53f7c30fa9921f8cd975c0172c21b8f110abc5d358e84c90a7ea7b1226a7" Feb 16 21:05:56.740497 master-0 kubenswrapper[7926]: I0216 21:05:56.739496 7926 scope.go:117] "RemoveContainer" containerID="6a7d7b13e17869969e9d31d79faa72dfb3a8d8453f67a2323e3dc0a1300a1e65" Feb 16 21:05:56.740497 master-0 kubenswrapper[7926]: E0216 21:05:56.739989 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager-operator pod=kube-controller-manager-operator-78ff47c7c5-7p9ft_openshift-kube-controller-manager-operator(7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" podUID="7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e" Feb 16 21:05:56.932679 master-0 kubenswrapper[7926]: I0216 21:05:56.932574 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-69wj8" Feb 16 21:05:56.992565 master-0 kubenswrapper[7926]: I0216 21:05:56.992443 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-69wj8" Feb 16 21:05:57.435580 master-0 kubenswrapper[7926]: I0216 21:05:57.435529 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-xzww8_e7adbe32-b8b9-438e-a2e3-f93146a97424/kube-scheduler-operator-container/3.log" Feb 16 21:05:57.435899 master-0 kubenswrapper[7926]: I0216 21:05:57.435629 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" event={"ID":"e7adbe32-b8b9-438e-a2e3-f93146a97424","Type":"ContainerStarted","Data":"47333b4dc4c4506a75d09ea9dbae2fc9aaa9a5e9656c7290cd679c62408950cd"} Feb 16 21:05:57.739106 master-0 kubenswrapper[7926]: I0216 21:05:57.738915 7926 scope.go:117] "RemoveContainer" containerID="abce7c467580f27265b653bd89f53e6e0d6413f3687b039b9f58c8dd18d3f0ce" Feb 16 21:05:57.739377 master-0 kubenswrapper[7926]: I0216 21:05:57.739155 7926 scope.go:117] "RemoveContainer" containerID="715050d13195531641370ad04c7754b8cef8bb72e0896de25aaafb35a02054c9" Feb 16 21:05:57.740238 master-0 kubenswrapper[7926]: E0216 21:05:57.739367 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-5f5f84757d-k42w9_openshift-controller-manager-operator(695549c8-d1fc-429d-9c9f-0a5915dc6074)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" podUID="695549c8-d1fc-429d-9c9f-0a5915dc6074" Feb 16 21:05:58.443599 master-0 kubenswrapper[7926]: I0216 21:05:58.443533 7926 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-q5vjl_2ab0a907-7abe-4808-ba21-bdda1506eae2/service-ca-operator/3.log" Feb 16 21:05:58.443847 master-0 kubenswrapper[7926]: I0216 21:05:58.443642 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" event={"ID":"2ab0a907-7abe-4808-ba21-bdda1506eae2","Type":"ContainerStarted","Data":"957672c63eb7430bfeb7424cf0d3c859bba34c6e865fdeff7ddd7689e1cdc21a"} Feb 16 21:05:58.744381 master-0 kubenswrapper[7926]: I0216 21:05:58.744232 7926 scope.go:117] "RemoveContainer" containerID="b805375f7b42f31b0863c18246ff6bd98c4c77aa1ad1eb2b469a42772d48301d" Feb 16 21:05:58.744381 master-0 kubenswrapper[7926]: I0216 21:05:58.744349 7926 scope.go:117] "RemoveContainer" containerID="98437a21e834f809a7d3a2fcc7ab7ac439c7d9370d526734b7d11f63840cb92d" Feb 16 21:05:58.745376 master-0 kubenswrapper[7926]: I0216 21:05:58.744442 7926 scope.go:117] "RemoveContainer" containerID="9fbb3907b0a8154eba20d3a15a9c76d94a18ad3525cb12a7e4937b8969c5cb0d" Feb 16 21:05:58.745376 master-0 kubenswrapper[7926]: E0216 21:05:58.744844 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=network-operator pod=network-operator-6fcf4c966-n4hfs_openshift-network-operator(1b61063e-775e-421d-bf73-a6ef134293a0)\"" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" podUID="1b61063e-775e-421d-bf73-a6ef134293a0" Feb 16 21:05:59.451067 master-0 kubenswrapper[7926]: I0216 21:05:59.450953 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-55b69c6c48-pdjn4_5e062e07-8076-444c-b476-4eb2848e9613/cluster-olm-operator/2.log" Feb 16 21:05:59.452209 master-0 kubenswrapper[7926]: I0216 21:05:59.452165 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" event={"ID":"5e062e07-8076-444c-b476-4eb2848e9613","Type":"ContainerStarted","Data":"75c97c8fc1fe4bc7ed998eb0ff8eb423dc36feffc10982a1abea2a451f308726"} Feb 16 21:05:59.455397 master-0 kubenswrapper[7926]: I0216 21:05:59.455360 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"b62e91fd80c5fe5b3e86f231592d3a6b2b476717e7f1ec56b415d7521e1bb557"} Feb 16 21:05:59.739789 master-0 kubenswrapper[7926]: I0216 21:05:59.738913 7926 scope.go:117] "RemoveContainer" containerID="08b199e651bbf31337e0e421513ddb4e42db3e1be0a3d07452f74ea9c1f46046" Feb 16 21:05:59.739789 master-0 kubenswrapper[7926]: E0216 21:05:59.739177 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-storage-version-migrator-operator pod=kube-storage-version-migrator-operator-cd5474998-56v4p_openshift-kube-storage-version-migrator-operator(c7333319-3fe6-4b3f-b600-6b6df49fcaff)\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" podUID="c7333319-3fe6-4b3f-b600-6b6df49fcaff" Feb 16 21:05:59.739789 master-0 kubenswrapper[7926]: I0216 21:05:59.739182 7926 scope.go:117] "RemoveContainer" containerID="1280026270fafbe7904a661cf88a10d4f267040cb7cc3fb07ffaa22fce0b7d32" Feb 16 21:05:59.739789 master-0 kubenswrapper[7926]: E0216 21:05:59.739377 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-755d954778-8gnq5_openshift-authentication-operator(27c20f63-9bfb-4703-94d5-0c65475e08d1)\"" 
pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1"
Feb 16 21:06:00.738549 master-0 kubenswrapper[7926]: I0216 21:06:00.738464 7926 scope.go:117] "RemoveContainer" containerID="9aebe89f00ace7757c9f12dc1f4359a915f84e8eb395e1cdeae0962c4475a4af"
Feb 16 21:06:00.739353 master-0 kubenswrapper[7926]: E0216 21:06:00.738727 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-54984b6678-cl5ld_openshift-kube-apiserver-operator(0b02b740-5698-4e9a-90fe-2873bd0b0958)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" podUID="0b02b740-5698-4e9a-90fe-2873bd0b0958"
Feb 16 21:06:01.739458 master-0 kubenswrapper[7926]: I0216 21:06:01.739341 7926 scope.go:117] "RemoveContainer" containerID="d0e5f8a907c4851af3bce655e141083b0f633fdfa41c5abacbb48a7df33f9e94"
Feb 16 21:06:02.475939 master-0 kubenswrapper[7926]: I0216 21:06:02.475861 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-tvzdw_6b6be6de-6fcc-4f57-b163-fe8f970a01a4/openshift-apiserver-operator/3.log"
Feb 16 21:06:02.475939 master-0 kubenswrapper[7926]: I0216 21:06:02.475916 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw" event={"ID":"6b6be6de-6fcc-4f57-b163-fe8f970a01a4","Type":"ContainerStarted","Data":"c2a09a3b4592efd5c3950579bb4aaa5d970beb72eb354639340f9f2327450863"}
Feb 16 21:06:02.738912 master-0 kubenswrapper[7926]: I0216 21:06:02.738731 7926 scope.go:117] "RemoveContainer" containerID="8fdaced2e29680218985b0af6c01e1d1666c4413685a11533b854af5a3b4a954"
Feb 16 21:06:02.739136 master-0 kubenswrapper[7926]: E0216 21:06:02.739019 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=route-controller-manager pod=route-controller-manager-749ccd9c56-wzsnf_openshift-route-controller-manager(4db59450-da78-4879-ada8-ca3fc49fb7a7)\"" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7"
Feb 16 21:06:02.739136 master-0 kubenswrapper[7926]: I0216 21:06:02.739022 7926 scope.go:117] "RemoveContainer" containerID="86b2625e01e86e20ad843cc517b662e8d0574773dfe24c22fbbf50abc8c0ea7f"
Feb 16 21:06:03.483379 master-0 kubenswrapper[7926]: I0216 21:06:03.483274 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-75b869db96-g4w5m_aa2e9bbc-3962-45f5-a7cc-2dc059409e70/cluster-storage-operator/2.log"
Feb 16 21:06:03.483379 master-0 kubenswrapper[7926]: I0216 21:06:03.483326 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m" event={"ID":"aa2e9bbc-3962-45f5-a7cc-2dc059409e70","Type":"ContainerStarted","Data":"e121208e065bd981ec8f120b4bddfef2011a7578aefea2e29754d83b50431d3d"}
Feb 16 21:06:05.980305 master-0 kubenswrapper[7926]: I0216 21:06:05.980197 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 21:06:06.710558 master-0 kubenswrapper[7926]: I0216 21:06:06.710416 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 21:06:06.714885 master-0 kubenswrapper[7926]: I0216 21:06:06.714846 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 21:06:07.739194 master-0 kubenswrapper[7926]: I0216 21:06:07.739112 7926 scope.go:117] "RemoveContainer" containerID="35ed53f7c30fa9921f8cd975c0172c21b8f110abc5d358e84c90a7ea7b1226a7"
Feb 16 21:06:07.740200 master-0 kubenswrapper[7926]: E0216 21:06:07.739570 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager-operator pod=kube-controller-manager-operator-78ff47c7c5-7p9ft_openshift-kube-controller-manager-operator(7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" podUID="7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e"
Feb 16 21:06:08.743864 master-0 kubenswrapper[7926]: I0216 21:06:08.743757 7926 scope.go:117] "RemoveContainer" containerID="abce7c467580f27265b653bd89f53e6e0d6413f3687b039b9f58c8dd18d3f0ce"
Feb 16 21:06:08.744922 master-0 kubenswrapper[7926]: E0216 21:06:08.744133 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-5f5f84757d-k42w9_openshift-controller-manager-operator(695549c8-d1fc-429d-9c9f-0a5915dc6074)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" podUID="695549c8-d1fc-429d-9c9f-0a5915dc6074"
Feb 16 21:06:11.738980 master-0 kubenswrapper[7926]: I0216 21:06:11.738909 7926 scope.go:117] "RemoveContainer" containerID="9aebe89f00ace7757c9f12dc1f4359a915f84e8eb395e1cdeae0962c4475a4af"
Feb 16 21:06:11.739994 master-0 kubenswrapper[7926]: E0216 21:06:11.739263 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-operator
pod=kube-apiserver-operator-54984b6678-cl5ld_openshift-kube-apiserver-operator(0b02b740-5698-4e9a-90fe-2873bd0b0958)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" podUID="0b02b740-5698-4e9a-90fe-2873bd0b0958"
Feb 16 21:06:12.739253 master-0 kubenswrapper[7926]: I0216 21:06:12.739167 7926 scope.go:117] "RemoveContainer" containerID="1280026270fafbe7904a661cf88a10d4f267040cb7cc3fb07ffaa22fce0b7d32"
Feb 16 21:06:12.740105 master-0 kubenswrapper[7926]: E0216 21:06:12.739459 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-755d954778-8gnq5_openshift-authentication-operator(27c20f63-9bfb-4703-94d5-0c65475e08d1)\"" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1"
Feb 16 21:06:13.739229 master-0 kubenswrapper[7926]: I0216 21:06:13.739160 7926 scope.go:117] "RemoveContainer" containerID="98437a21e834f809a7d3a2fcc7ab7ac439c7d9370d526734b7d11f63840cb92d"
Feb 16 21:06:14.572168 master-0 kubenswrapper[7926]: I0216 21:06:14.571250 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-n4hfs_1b61063e-775e-421d-bf73-a6ef134293a0/network-operator/3.log"
Feb 16 21:06:14.572168 master-0 kubenswrapper[7926]: I0216 21:06:14.571339 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" event={"ID":"1b61063e-775e-421d-bf73-a6ef134293a0","Type":"ContainerStarted","Data":"aab44606d671f216ff3793ef915c84f815301082904e4bc4a12b70d23d7c13c3"}
Feb 16 21:06:14.738539 master-0 kubenswrapper[7926]: I0216 21:06:14.738426 7926 scope.go:117] "RemoveContainer" containerID="08b199e651bbf31337e0e421513ddb4e42db3e1be0a3d07452f74ea9c1f46046"
Feb 16 21:06:15.580532 master-0 kubenswrapper[7926]: I0216 21:06:15.580421 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-56v4p_c7333319-3fe6-4b3f-b600-6b6df49fcaff/kube-storage-version-migrator-operator/4.log"
Feb 16 21:06:15.580532 master-0 kubenswrapper[7926]: I0216 21:06:15.580497 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" event={"ID":"c7333319-3fe6-4b3f-b600-6b6df49fcaff","Type":"ContainerStarted","Data":"220f76e0bb64fd419313cb573cd97bbb54f9d2b5998f9525c7d9045abc13cfb5"}
Feb 16 21:06:15.738243 master-0 kubenswrapper[7926]: I0216 21:06:15.738137 7926 scope.go:117] "RemoveContainer" containerID="8fdaced2e29680218985b0af6c01e1d1666c4413685a11533b854af5a3b4a954"
Feb 16 21:06:15.738696 master-0 kubenswrapper[7926]: E0216 21:06:15.738349 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=route-controller-manager pod=route-controller-manager-749ccd9c56-wzsnf_openshift-route-controller-manager(4db59450-da78-4879-ada8-ca3fc49fb7a7)\"" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7"
Feb 16 21:06:15.988249 master-0 kubenswrapper[7926]: I0216 21:06:15.987980 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 21:06:19.455630 master-0 kubenswrapper[7926]: I0216 21:06:19.455357 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-jb6tl"]
Feb 16 21:06:19.456722 master-0 kubenswrapper[7926]: E0216 21:06:19.455682 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03593410-baa5-4edb-9d73-242a74f82987" containerName="extract-utilities"
Feb 16 21:06:19.456722 master-0 kubenswrapper[7926]: I0216 21:06:19.455700 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="03593410-baa5-4edb-9d73-242a74f82987" containerName="extract-utilities"
Feb 16 21:06:19.456722 master-0 kubenswrapper[7926]: E0216 21:06:19.455727 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03593410-baa5-4edb-9d73-242a74f82987" containerName="registry-server"
Feb 16 21:06:19.456722 master-0 kubenswrapper[7926]: I0216 21:06:19.455737 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="03593410-baa5-4edb-9d73-242a74f82987" containerName="registry-server"
Feb 16 21:06:19.456722 master-0 kubenswrapper[7926]: E0216 21:06:19.455750 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03593410-baa5-4edb-9d73-242a74f82987" containerName="extract-content"
Feb 16 21:06:19.456722 master-0 kubenswrapper[7926]: I0216 21:06:19.455758 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="03593410-baa5-4edb-9d73-242a74f82987" containerName="extract-content"
Feb 16 21:06:19.456722 master-0 kubenswrapper[7926]: I0216 21:06:19.455904 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="03593410-baa5-4edb-9d73-242a74f82987" containerName="registry-server"
Feb 16 21:06:19.456722 master-0 kubenswrapper[7926]: I0216 21:06:19.456627 7926 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jb6tl"
Feb 16 21:06:19.460017 master-0 kubenswrapper[7926]: I0216 21:06:19.459981 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 16 21:06:19.461107 master-0 kubenswrapper[7926]: I0216 21:06:19.460540 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-457l2"
Feb 16 21:06:19.631899 master-0 kubenswrapper[7926]: I0216 21:06:19.631544 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/88c9d2fb-763f-4405-8d1a-c39039b41d3b-proxy-tls\") pod \"machine-config-daemon-jb6tl\" (UID: \"88c9d2fb-763f-4405-8d1a-c39039b41d3b\") " pod="openshift-machine-config-operator/machine-config-daemon-jb6tl"
Feb 16 21:06:19.631899 master-0 kubenswrapper[7926]: I0216 21:06:19.631622 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/88c9d2fb-763f-4405-8d1a-c39039b41d3b-mcd-auth-proxy-config\") pod \"machine-config-daemon-jb6tl\" (UID: \"88c9d2fb-763f-4405-8d1a-c39039b41d3b\") " pod="openshift-machine-config-operator/machine-config-daemon-jb6tl"
Feb 16 21:06:19.631899 master-0 kubenswrapper[7926]: I0216 21:06:19.631640 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qcq9\" (UniqueName: \"kubernetes.io/projected/88c9d2fb-763f-4405-8d1a-c39039b41d3b-kube-api-access-8qcq9\") pod \"machine-config-daemon-jb6tl\" (UID: \"88c9d2fb-763f-4405-8d1a-c39039b41d3b\") " pod="openshift-machine-config-operator/machine-config-daemon-jb6tl"
Feb 16 21:06:19.631899 master-0 kubenswrapper[7926]: I0216 21:06:19.631673 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/88c9d2fb-763f-4405-8d1a-c39039b41d3b-rootfs\") pod \"machine-config-daemon-jb6tl\" (UID: \"88c9d2fb-763f-4405-8d1a-c39039b41d3b\") " pod="openshift-machine-config-operator/machine-config-daemon-jb6tl"
Feb 16 21:06:19.733285 master-0 kubenswrapper[7926]: I0216 21:06:19.733114 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/88c9d2fb-763f-4405-8d1a-c39039b41d3b-mcd-auth-proxy-config\") pod \"machine-config-daemon-jb6tl\" (UID: \"88c9d2fb-763f-4405-8d1a-c39039b41d3b\") " pod="openshift-machine-config-operator/machine-config-daemon-jb6tl"
Feb 16 21:06:19.733285 master-0 kubenswrapper[7926]: I0216 21:06:19.733165 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qcq9\" (UniqueName: \"kubernetes.io/projected/88c9d2fb-763f-4405-8d1a-c39039b41d3b-kube-api-access-8qcq9\") pod \"machine-config-daemon-jb6tl\" (UID: \"88c9d2fb-763f-4405-8d1a-c39039b41d3b\") " pod="openshift-machine-config-operator/machine-config-daemon-jb6tl"
Feb 16 21:06:19.733285 master-0 kubenswrapper[7926]: I0216 21:06:19.733192 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/88c9d2fb-763f-4405-8d1a-c39039b41d3b-rootfs\") pod \"machine-config-daemon-jb6tl\" (UID: \"88c9d2fb-763f-4405-8d1a-c39039b41d3b\") " pod="openshift-machine-config-operator/machine-config-daemon-jb6tl"
Feb 16 21:06:19.733285 master-0 kubenswrapper[7926]: I0216 21:06:19.733241 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/88c9d2fb-763f-4405-8d1a-c39039b41d3b-proxy-tls\") pod \"machine-config-daemon-jb6tl\" (UID: \"88c9d2fb-763f-4405-8d1a-c39039b41d3b\") " pod="openshift-machine-config-operator/machine-config-daemon-jb6tl"
Feb 16 21:06:19.734250 master-0 kubenswrapper[7926]: I0216 21:06:19.733492 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/88c9d2fb-763f-4405-8d1a-c39039b41d3b-rootfs\") pod \"machine-config-daemon-jb6tl\" (UID: \"88c9d2fb-763f-4405-8d1a-c39039b41d3b\") " pod="openshift-machine-config-operator/machine-config-daemon-jb6tl"
Feb 16 21:06:19.734250 master-0 kubenswrapper[7926]: I0216 21:06:19.734060 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/88c9d2fb-763f-4405-8d1a-c39039b41d3b-mcd-auth-proxy-config\") pod \"machine-config-daemon-jb6tl\" (UID: \"88c9d2fb-763f-4405-8d1a-c39039b41d3b\") " pod="openshift-machine-config-operator/machine-config-daemon-jb6tl"
Feb 16 21:06:19.736494 master-0 kubenswrapper[7926]: I0216 21:06:19.736430 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/88c9d2fb-763f-4405-8d1a-c39039b41d3b-proxy-tls\") pod \"machine-config-daemon-jb6tl\" (UID: \"88c9d2fb-763f-4405-8d1a-c39039b41d3b\") " pod="openshift-machine-config-operator/machine-config-daemon-jb6tl"
Feb 16 21:06:19.738330 master-0 kubenswrapper[7926]: I0216 21:06:19.738302 7926 scope.go:117] "RemoveContainer" containerID="35ed53f7c30fa9921f8cd975c0172c21b8f110abc5d358e84c90a7ea7b1226a7"
Feb 16 21:06:19.747205 master-0 kubenswrapper[7926]: I0216 21:06:19.747132 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qcq9\" (UniqueName: \"kubernetes.io/projected/88c9d2fb-763f-4405-8d1a-c39039b41d3b-kube-api-access-8qcq9\") pod \"machine-config-daemon-jb6tl\" (UID: \"88c9d2fb-763f-4405-8d1a-c39039b41d3b\") " pod="openshift-machine-config-operator/machine-config-daemon-jb6tl"
Feb 16 21:06:19.782367 master-0 kubenswrapper[7926]: I0216 21:06:19.781932 7926 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jb6tl"
Feb 16 21:06:19.812088 master-0 kubenswrapper[7926]: W0216 21:06:19.811688 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88c9d2fb_763f_4405_8d1a_c39039b41d3b.slice/crio-acec58956615bf5fc5d4c728869e591e541d368aa9b045c7975cb5d8c938ff55 WatchSource:0}: Error finding container acec58956615bf5fc5d4c728869e591e541d368aa9b045c7975cb5d8c938ff55: Status 404 returned error can't find the container with id acec58956615bf5fc5d4c728869e591e541d368aa9b045c7975cb5d8c938ff55
Feb 16 21:06:20.620381 master-0 kubenswrapper[7926]: I0216 21:06:20.620342 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-7p9ft_7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/kube-controller-manager-operator/4.log"
Feb 16 21:06:20.620869 master-0 kubenswrapper[7926]: I0216 21:06:20.620465 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" event={"ID":"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e","Type":"ContainerStarted","Data":"4b9eed56cd9de27df8732f0bf589198f3bec398bab1ee5d8d5d4047198bdc2b3"}
Feb 16 21:06:20.622104 master-0 kubenswrapper[7926]: I0216 21:06:20.622062 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jb6tl" event={"ID":"88c9d2fb-763f-4405-8d1a-c39039b41d3b","Type":"ContainerStarted","Data":"356615340d1fa734068744b665275fc799de6e0bdf17935887ae6dfbf7e33582"}
Feb 16 21:06:20.622200 master-0 kubenswrapper[7926]: I0216 21:06:20.622119 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jb6tl" event={"ID":"88c9d2fb-763f-4405-8d1a-c39039b41d3b","Type":"ContainerStarted","Data":"63dcd78e336e54b7c9dc9ab869c711c8a78fc93da330b9932ed7c66703f025a1"}
Feb 16 21:06:20.622200 master-0 kubenswrapper[7926]: I0216 21:06:20.622134 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jb6tl" event={"ID":"88c9d2fb-763f-4405-8d1a-c39039b41d3b","Type":"ContainerStarted","Data":"acec58956615bf5fc5d4c728869e591e541d368aa9b045c7975cb5d8c938ff55"}
Feb 16 21:06:20.658678 master-0 kubenswrapper[7926]: I0216 21:06:20.656428 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-jb6tl" podStartSLOduration=1.656409164 podStartE2EDuration="1.656409164s" podCreationTimestamp="2026-02-16 21:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:06:20.654968295 +0000 UTC m=+552.289868605" watchObservedRunningTime="2026-02-16 21:06:20.656409164 +0000 UTC m=+552.291309474"
Feb 16 21:06:21.738802 master-0 kubenswrapper[7926]: I0216 21:06:21.738734 7926 scope.go:117] "RemoveContainer" containerID="abce7c467580f27265b653bd89f53e6e0d6413f3687b039b9f58c8dd18d3f0ce"
Feb 16 21:06:22.641351 master-0 kubenswrapper[7926]: I0216 21:06:22.641245 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-k42w9_695549c8-d1fc-429d-9c9f-0a5915dc6074/openshift-controller-manager-operator/4.log"
Feb 16 21:06:22.641798 master-0 kubenswrapper[7926]: I0216 21:06:22.641360 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" event={"ID":"695549c8-d1fc-429d-9c9f-0a5915dc6074","Type":"ContainerStarted","Data":"b759be244b2ba22ad1884f9e0274ee8a722d66b1e8a5b2b9389cb48c9ae341b5"}
Feb 16 21:06:23.620034 master-0 kubenswrapper[7926]: I0216 21:06:23.619903 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4"]
Feb 16 21:06:23.621551 master-0 kubenswrapper[7926]: I0216 21:06:23.621488 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4"
Feb 16 21:06:23.624138 master-0 kubenswrapper[7926]: I0216 21:06:23.624053 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 16 21:06:23.624488 master-0 kubenswrapper[7926]: I0216 21:06:23.624407 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-2t7md"
Feb 16 21:06:23.647821 master-0 kubenswrapper[7926]: I0216 21:06:23.647711 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4"]
Feb 16 21:06:23.796813 master-0 kubenswrapper[7926]: I0216 21:06:23.796642 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vcsp\" (UniqueName: \"kubernetes.io/projected/fb1eac23-18a5-4706-adcd-81d83e04cd12-kube-api-access-8vcsp\") pod \"machine-config-controller-686c884b4d-6j2l4\" (UID: \"fb1eac23-18a5-4706-adcd-81d83e04cd12\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4"
Feb 16 21:06:23.797296 master-0 kubenswrapper[7926]: I0216 21:06:23.797206 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fb1eac23-18a5-4706-adcd-81d83e04cd12-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-6j2l4\" (UID: \"fb1eac23-18a5-4706-adcd-81d83e04cd12\") "
pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4"
Feb 16 21:06:23.797461 master-0 kubenswrapper[7926]: I0216 21:06:23.797407 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fb1eac23-18a5-4706-adcd-81d83e04cd12-proxy-tls\") pod \"machine-config-controller-686c884b4d-6j2l4\" (UID: \"fb1eac23-18a5-4706-adcd-81d83e04cd12\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4"
Feb 16 21:06:23.899474 master-0 kubenswrapper[7926]: I0216 21:06:23.899252 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fb1eac23-18a5-4706-adcd-81d83e04cd12-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-6j2l4\" (UID: \"fb1eac23-18a5-4706-adcd-81d83e04cd12\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4"
Feb 16 21:06:23.899474 master-0 kubenswrapper[7926]: I0216 21:06:23.899320 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fb1eac23-18a5-4706-adcd-81d83e04cd12-proxy-tls\") pod \"machine-config-controller-686c884b4d-6j2l4\" (UID: \"fb1eac23-18a5-4706-adcd-81d83e04cd12\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4"
Feb 16 21:06:23.899899 master-0 kubenswrapper[7926]: I0216 21:06:23.899716 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vcsp\" (UniqueName: \"kubernetes.io/projected/fb1eac23-18a5-4706-adcd-81d83e04cd12-kube-api-access-8vcsp\") pod \"machine-config-controller-686c884b4d-6j2l4\" (UID: \"fb1eac23-18a5-4706-adcd-81d83e04cd12\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4"
Feb 16 21:06:23.900561 master-0 kubenswrapper[7926]: I0216 21:06:23.900483 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fb1eac23-18a5-4706-adcd-81d83e04cd12-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-6j2l4\" (UID: \"fb1eac23-18a5-4706-adcd-81d83e04cd12\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4"
Feb 16 21:06:23.904936 master-0 kubenswrapper[7926]: I0216 21:06:23.904883 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fb1eac23-18a5-4706-adcd-81d83e04cd12-proxy-tls\") pod \"machine-config-controller-686c884b4d-6j2l4\" (UID: \"fb1eac23-18a5-4706-adcd-81d83e04cd12\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4"
Feb 16 21:06:23.923643 master-0 kubenswrapper[7926]: I0216 21:06:23.923554 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vcsp\" (UniqueName: \"kubernetes.io/projected/fb1eac23-18a5-4706-adcd-81d83e04cd12-kube-api-access-8vcsp\") pod \"machine-config-controller-686c884b4d-6j2l4\" (UID: \"fb1eac23-18a5-4706-adcd-81d83e04cd12\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4"
Feb 16 21:06:23.993910 master-0 kubenswrapper[7926]: I0216 21:06:23.993794 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4"
Feb 16 21:06:24.499322 master-0 kubenswrapper[7926]: I0216 21:06:24.499243 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4"]
Feb 16 21:06:24.505096 master-0 kubenswrapper[7926]: W0216 21:06:24.504939 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb1eac23_18a5_4706_adcd_81d83e04cd12.slice/crio-6caed68f3fc79ebb1ed9e5bfd3e9f6a4bad90b8a5cdeab5884b6fd52a2305c16 WatchSource:0}: Error finding container 6caed68f3fc79ebb1ed9e5bfd3e9f6a4bad90b8a5cdeab5884b6fd52a2305c16: Status 404 returned error can't find the container with id 6caed68f3fc79ebb1ed9e5bfd3e9f6a4bad90b8a5cdeab5884b6fd52a2305c16
Feb 16 21:06:24.657042 master-0 kubenswrapper[7926]: I0216 21:06:24.656994 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4" event={"ID":"fb1eac23-18a5-4706-adcd-81d83e04cd12","Type":"ContainerStarted","Data":"6caed68f3fc79ebb1ed9e5bfd3e9f6a4bad90b8a5cdeab5884b6fd52a2305c16"}
Feb 16 21:06:24.822099 master-0 kubenswrapper[7926]: I0216 21:06:24.821730 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-695b766898-hsz6m"]
Feb 16 21:06:24.822598 master-0 kubenswrapper[7926]: I0216 21:06:24.822574 7926 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-hsz6m"
Feb 16 21:06:24.823075 master-0 kubenswrapper[7926]: I0216 21:06:24.823027 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d"]
Feb 16 21:06:24.824155 master-0 kubenswrapper[7926]: I0216 21:06:24.824114 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d"
Feb 16 21:06:24.824485 master-0 kubenswrapper[7926]: I0216 21:06:24.824460 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Feb 16 21:06:24.825430 master-0 kubenswrapper[7926]: I0216 21:06:24.825398 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-r6wp5"
Feb 16 21:06:24.826459 master-0 kubenswrapper[7926]: I0216 21:06:24.826397 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 21:06:24.826936 master-0 kubenswrapper[7926]: I0216 21:06:24.826899 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-7d8f4c8c66-w6tqw"]
Feb 16 21:06:24.827802 master-0 kubenswrapper[7926]: I0216 21:06:24.827733 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-w6tqw"
Feb 16 21:06:24.829636 master-0 kubenswrapper[7926]: I0216 21:06:24.829589 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-864ddd5f56-z4bnk"]
Feb 16 21:06:24.830213 master-0 kubenswrapper[7926]: I0216 21:06:24.830182 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-864ddd5f56-z4bnk"
Feb 16 21:06:24.832155 master-0 kubenswrapper[7926]: I0216 21:06:24.832112 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 16 21:06:24.832413 master-0 kubenswrapper[7926]: I0216 21:06:24.832376 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 16 21:06:24.832624 master-0 kubenswrapper[7926]: I0216 21:06:24.832592 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 16 21:06:24.833062 master-0 kubenswrapper[7926]: I0216 21:06:24.833034 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 16 21:06:24.833316 master-0 kubenswrapper[7926]: I0216 21:06:24.833288 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 16 21:06:24.837249 master-0 kubenswrapper[7926]: I0216 21:06:24.837175 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 16 21:06:24.841365 master-0 kubenswrapper[7926]: I0216 21:06:24.841310 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7d8f4c8c66-w6tqw"]
Feb 16 21:06:24.845368 master-0 kubenswrapper[7926]: I0216 21:06:24.845327 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d"]
Feb 16 21:06:24.848915 master-0 kubenswrapper[7926]: I0216 21:06:24.848871 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-695b766898-hsz6m"]
Feb 16 21:06:24.917322 master-0 kubenswrapper[7926]: I0216 21:06:24.916942 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-default-certificate\") pod \"router-default-864ddd5f56-z4bnk\" (UID: \"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee\") " pod="openshift-ingress/router-default-864ddd5f56-z4bnk"
Feb 16 21:06:24.917322 master-0 kubenswrapper[7926]: I0216 21:06:24.917059 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxhfs\" (UniqueName: \"kubernetes.io/projected/3403d2bf-b093-4f2e-80aa-73a3d6bcaffb-kube-api-access-gxhfs\") pod \"network-check-source-7d8f4c8c66-w6tqw\" (UID: \"3403d2bf-b093-4f2e-80aa-73a3d6bcaffb\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-w6tqw"
Feb 16 21:06:24.917322 master-0 kubenswrapper[7926]: I0216 21:06:24.917135 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/da07cd48-b1e8-4ccc-b980-84702cedb042-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-hsz6m\" (UID: \"da07cd48-b1e8-4ccc-b980-84702cedb042\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-hsz6m"
Feb 16 21:06:24.917570 master-0 kubenswrapper[7926]: I0216 21:06:24.917357 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-metrics-certs\") pod \"router-default-864ddd5f56-z4bnk\" (UID: \"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee\") " pod="openshift-ingress/router-default-864ddd5f56-z4bnk"
Feb 16 21:06:24.917570 master-0 kubenswrapper[7926]: I0216 21:06:24.917411 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xgcn\" (UniqueName: \"kubernetes.io/projected/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-kube-api-access-7xgcn\") pod \"router-default-864ddd5f56-z4bnk\"
(UID: \"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee\") " pod="openshift-ingress/router-default-864ddd5f56-z4bnk"
Feb 16 21:06:24.917570 master-0 kubenswrapper[7926]: I0216 21:06:24.917465 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh9dv\" (UniqueName: \"kubernetes.io/projected/4cc1da27-6eaf-4177-b2d8-1546a9d94f90-kube-api-access-dh9dv\") pod \"collect-profiles-29521260-fx98d\" (UID: \"4cc1da27-6eaf-4177-b2d8-1546a9d94f90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d"
Feb 16 21:06:24.917570 master-0 kubenswrapper[7926]: I0216 21:06:24.917537 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-service-ca-bundle\") pod \"router-default-864ddd5f56-z4bnk\" (UID: \"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee\") " pod="openshift-ingress/router-default-864ddd5f56-z4bnk"
Feb 16 21:06:24.917711 master-0 kubenswrapper[7926]: I0216 21:06:24.917630 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cc1da27-6eaf-4177-b2d8-1546a9d94f90-config-volume\") pod \"collect-profiles-29521260-fx98d\" (UID: \"4cc1da27-6eaf-4177-b2d8-1546a9d94f90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d"
Feb 16 21:06:24.917711 master-0 kubenswrapper[7926]: I0216 21:06:24.917686 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4cc1da27-6eaf-4177-b2d8-1546a9d94f90-secret-volume\") pod \"collect-profiles-29521260-fx98d\" (UID: \"4cc1da27-6eaf-4177-b2d8-1546a9d94f90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d"
Feb 16 21:06:24.917771 master-0 kubenswrapper[7926]: I0216 21:06:24.917721 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-stats-auth\") pod \"router-default-864ddd5f56-z4bnk\" (UID: \"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee\") " pod="openshift-ingress/router-default-864ddd5f56-z4bnk"
Feb 16 21:06:25.019487 master-0 kubenswrapper[7926]: I0216 21:06:25.019386 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cc1da27-6eaf-4177-b2d8-1546a9d94f90-config-volume\") pod \"collect-profiles-29521260-fx98d\" (UID: \"4cc1da27-6eaf-4177-b2d8-1546a9d94f90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d"
Feb 16 21:06:25.019819 master-0 kubenswrapper[7926]: I0216 21:06:25.019793 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4cc1da27-6eaf-4177-b2d8-1546a9d94f90-secret-volume\") pod \"collect-profiles-29521260-fx98d\" (UID: \"4cc1da27-6eaf-4177-b2d8-1546a9d94f90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d"
Feb 16 21:06:25.020091 master-0 kubenswrapper[7926]: I0216 21:06:25.020034 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-stats-auth\") pod \"router-default-864ddd5f56-z4bnk\" (UID: \"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee\") " pod="openshift-ingress/router-default-864ddd5f56-z4bnk"
Feb 16 21:06:25.020361 master-0 kubenswrapper[7926]: I0216 21:06:25.020339 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-default-certificate\") pod \"router-default-864ddd5f56-z4bnk\" (UID: \"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee\") " pod="openshift-ingress/router-default-864ddd5f56-z4bnk"
Feb 16 21:06:25.020497 master-0 kubenswrapper[7926]: I0216 21:06:25.020476 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxhfs\" (UniqueName: \"kubernetes.io/projected/3403d2bf-b093-4f2e-80aa-73a3d6bcaffb-kube-api-access-gxhfs\") pod \"network-check-source-7d8f4c8c66-w6tqw\" (UID: \"3403d2bf-b093-4f2e-80aa-73a3d6bcaffb\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-w6tqw"
Feb 16 21:06:25.020618 master-0 kubenswrapper[7926]: I0216 21:06:25.020599 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/da07cd48-b1e8-4ccc-b980-84702cedb042-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-hsz6m\" (UID: \"da07cd48-b1e8-4ccc-b980-84702cedb042\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-hsz6m"
Feb 16 21:06:25.020804 master-0 kubenswrapper[7926]: I0216 21:06:25.020781 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-metrics-certs\") pod \"router-default-864ddd5f56-z4bnk\" (UID: \"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee\") " pod="openshift-ingress/router-default-864ddd5f56-z4bnk"
Feb 16 21:06:25.020943 master-0 kubenswrapper[7926]: I0216 21:06:25.020531 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cc1da27-6eaf-4177-b2d8-1546a9d94f90-config-volume\") pod \"collect-profiles-29521260-fx98d\" (UID: \"4cc1da27-6eaf-4177-b2d8-1546a9d94f90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d"
Feb 16 21:06:25.021021 master-0 kubenswrapper[7926]: I0216 21:06:25.020924 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xgcn\"
(UniqueName: \"kubernetes.io/projected/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-kube-api-access-7xgcn\") pod \"router-default-864ddd5f56-z4bnk\" (UID: \"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee\") " pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:06:25.021148 master-0 kubenswrapper[7926]: I0216 21:06:25.021130 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh9dv\" (UniqueName: \"kubernetes.io/projected/4cc1da27-6eaf-4177-b2d8-1546a9d94f90-kube-api-access-dh9dv\") pod \"collect-profiles-29521260-fx98d\" (UID: \"4cc1da27-6eaf-4177-b2d8-1546a9d94f90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d" Feb 16 21:06:25.021336 master-0 kubenswrapper[7926]: I0216 21:06:25.021317 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-service-ca-bundle\") pod \"router-default-864ddd5f56-z4bnk\" (UID: \"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee\") " pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:06:25.022131 master-0 kubenswrapper[7926]: I0216 21:06:25.022089 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-service-ca-bundle\") pod \"router-default-864ddd5f56-z4bnk\" (UID: \"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee\") " pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:06:25.023249 master-0 kubenswrapper[7926]: I0216 21:06:25.023036 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4cc1da27-6eaf-4177-b2d8-1546a9d94f90-secret-volume\") pod \"collect-profiles-29521260-fx98d\" (UID: \"4cc1da27-6eaf-4177-b2d8-1546a9d94f90\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d" Feb 16 21:06:25.023726 master-0 
kubenswrapper[7926]: I0216 21:06:25.023689 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-default-certificate\") pod \"router-default-864ddd5f56-z4bnk\" (UID: \"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee\") " pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:06:25.023909 master-0 kubenswrapper[7926]: I0216 21:06:25.023853 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-stats-auth\") pod \"router-default-864ddd5f56-z4bnk\" (UID: \"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee\") " pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:06:25.024119 master-0 kubenswrapper[7926]: I0216 21:06:25.024091 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/da07cd48-b1e8-4ccc-b980-84702cedb042-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-hsz6m\" (UID: \"da07cd48-b1e8-4ccc-b980-84702cedb042\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-hsz6m" Feb 16 21:06:25.024997 master-0 kubenswrapper[7926]: I0216 21:06:25.024641 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-metrics-certs\") pod \"router-default-864ddd5f56-z4bnk\" (UID: \"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee\") " pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:06:25.045383 master-0 kubenswrapper[7926]: I0216 21:06:25.045338 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh9dv\" (UniqueName: \"kubernetes.io/projected/4cc1da27-6eaf-4177-b2d8-1546a9d94f90-kube-api-access-dh9dv\") pod \"collect-profiles-29521260-fx98d\" (UID: \"4cc1da27-6eaf-4177-b2d8-1546a9d94f90\") 
" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d" Feb 16 21:06:25.045635 master-0 kubenswrapper[7926]: I0216 21:06:25.045585 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxhfs\" (UniqueName: \"kubernetes.io/projected/3403d2bf-b093-4f2e-80aa-73a3d6bcaffb-kube-api-access-gxhfs\") pod \"network-check-source-7d8f4c8c66-w6tqw\" (UID: \"3403d2bf-b093-4f2e-80aa-73a3d6bcaffb\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-w6tqw" Feb 16 21:06:25.050716 master-0 kubenswrapper[7926]: I0216 21:06:25.050634 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xgcn\" (UniqueName: \"kubernetes.io/projected/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-kube-api-access-7xgcn\") pod \"router-default-864ddd5f56-z4bnk\" (UID: \"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee\") " pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:06:25.153896 master-0 kubenswrapper[7926]: I0216 21:06:25.153782 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-hsz6m" Feb 16 21:06:25.182241 master-0 kubenswrapper[7926]: I0216 21:06:25.182108 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:06:25.216275 master-0 kubenswrapper[7926]: W0216 21:06:25.215833 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7eddb51_cb37_4dd5_9c24_64cc4ae2e6ee.slice/crio-1d4599582332a100db8555ba006867716892ce1ecdd5b2f904cbee81575c2c2d WatchSource:0}: Error finding container 1d4599582332a100db8555ba006867716892ce1ecdd5b2f904cbee81575c2c2d: Status 404 returned error can't find the container with id 1d4599582332a100db8555ba006867716892ce1ecdd5b2f904cbee81575c2c2d Feb 16 21:06:25.228878 master-0 kubenswrapper[7926]: I0216 21:06:25.228831 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d" Feb 16 21:06:25.248944 master-0 kubenswrapper[7926]: I0216 21:06:25.245183 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-w6tqw" Feb 16 21:06:25.496812 master-0 kubenswrapper[7926]: I0216 21:06:25.496739 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-695b766898-hsz6m"] Feb 16 21:06:25.504378 master-0 kubenswrapper[7926]: W0216 21:06:25.504248 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda07cd48_b1e8_4ccc_b980_84702cedb042.slice/crio-1befa239880012918c5014596ebf2ea1e19a17105f1c62212a86bd3326b1986f WatchSource:0}: Error finding container 1befa239880012918c5014596ebf2ea1e19a17105f1c62212a86bd3326b1986f: Status 404 returned error can't find the container with id 1befa239880012918c5014596ebf2ea1e19a17105f1c62212a86bd3326b1986f Feb 16 21:06:25.662586 master-0 kubenswrapper[7926]: I0216 21:06:25.662361 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-864ddd5f56-z4bnk" event={"ID":"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee","Type":"ContainerStarted","Data":"1d4599582332a100db8555ba006867716892ce1ecdd5b2f904cbee81575c2c2d"} Feb 16 21:06:25.663980 master-0 kubenswrapper[7926]: I0216 21:06:25.663906 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-hsz6m" event={"ID":"da07cd48-b1e8-4ccc-b980-84702cedb042","Type":"ContainerStarted","Data":"1befa239880012918c5014596ebf2ea1e19a17105f1c62212a86bd3326b1986f"} Feb 16 21:06:25.666436 master-0 kubenswrapper[7926]: I0216 21:06:25.666362 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4" event={"ID":"fb1eac23-18a5-4706-adcd-81d83e04cd12","Type":"ContainerStarted","Data":"8fffe565463ba118729e6d7e82e27ca24bae5e89a802ccdfc1edf0108bcb41ce"} Feb 16 21:06:25.666436 master-0 kubenswrapper[7926]: I0216 21:06:25.666426 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4" event={"ID":"fb1eac23-18a5-4706-adcd-81d83e04cd12","Type":"ContainerStarted","Data":"c28f67ef999b31c369d4692770123408f63a141b8851d50df01e2ab0b1a89e5e"} Feb 16 21:06:25.690827 master-0 kubenswrapper[7926]: I0216 21:06:25.689663 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4" podStartSLOduration=2.689633278 podStartE2EDuration="2.689633278s" podCreationTimestamp="2026-02-16 21:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:06:25.687942242 +0000 UTC m=+557.322842552" watchObservedRunningTime="2026-02-16 21:06:25.689633278 +0000 UTC m=+557.324533578" Feb 16 21:06:25.739005 master-0 kubenswrapper[7926]: I0216 21:06:25.738859 7926 
scope.go:117] "RemoveContainer" containerID="9aebe89f00ace7757c9f12dc1f4359a915f84e8eb395e1cdeae0962c4475a4af" Feb 16 21:06:25.741509 master-0 kubenswrapper[7926]: I0216 21:06:25.741457 7926 scope.go:117] "RemoveContainer" containerID="1280026270fafbe7904a661cf88a10d4f267040cb7cc3fb07ffaa22fce0b7d32" Feb 16 21:06:25.741715 master-0 kubenswrapper[7926]: E0216 21:06:25.741681 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-755d954778-8gnq5_openshift-authentication-operator(27c20f63-9bfb-4703-94d5-0c65475e08d1)\"" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1" Feb 16 21:06:25.759955 master-0 kubenswrapper[7926]: I0216 21:06:25.759114 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d"] Feb 16 21:06:25.765561 master-0 kubenswrapper[7926]: W0216 21:06:25.765387 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cc1da27_6eaf_4177_b2d8_1546a9d94f90.slice/crio-8e8b059d73a2c8e5ffd1f224f2251f2554ce00c13864a77bc4bd0d65d3713e02 WatchSource:0}: Error finding container 8e8b059d73a2c8e5ffd1f224f2251f2554ce00c13864a77bc4bd0d65d3713e02: Status 404 returned error can't find the container with id 8e8b059d73a2c8e5ffd1f224f2251f2554ce00c13864a77bc4bd0d65d3713e02 Feb 16 21:06:25.850288 master-0 kubenswrapper[7926]: I0216 21:06:25.850063 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7d8f4c8c66-w6tqw"] Feb 16 21:06:26.670552 master-0 kubenswrapper[7926]: I0216 21:06:26.670519 7926 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" 
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 21:06:26.676740 master-0 kubenswrapper[7926]: I0216 21:06:26.676701 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-cl5ld_0b02b740-5698-4e9a-90fe-2873bd0b0958/kube-apiserver-operator/3.log" Feb 16 21:06:26.676978 master-0 kubenswrapper[7926]: I0216 21:06:26.676781 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" event={"ID":"0b02b740-5698-4e9a-90fe-2873bd0b0958","Type":"ContainerStarted","Data":"71d2f873a3383c5d4e4ea361c9b4723201e4600cb1f7ea3ef5cecd7778b39d86"} Feb 16 21:06:26.679137 master-0 kubenswrapper[7926]: I0216 21:06:26.679109 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-w6tqw" event={"ID":"3403d2bf-b093-4f2e-80aa-73a3d6bcaffb","Type":"ContainerStarted","Data":"9baa14160e479c5229671fa47f287578de3e20925684ba77f76de501a6cd0a4b"} Feb 16 21:06:26.679213 master-0 kubenswrapper[7926]: I0216 21:06:26.679141 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-w6tqw" event={"ID":"3403d2bf-b093-4f2e-80aa-73a3d6bcaffb","Type":"ContainerStarted","Data":"75d47673076de0f457cf43f09abae17f313fa42a6b18d0c5e8749dffb9564806"} Feb 16 21:06:26.715418 master-0 kubenswrapper[7926]: I0216 21:06:26.714694 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d" event={"ID":"4cc1da27-6eaf-4177-b2d8-1546a9d94f90","Type":"ContainerStarted","Data":"b5c9ef27352d95c27da1fd4de0d350f8371e4f69cc5b84960004238d748e1ab6"} Feb 16 21:06:26.715418 master-0 kubenswrapper[7926]: I0216 21:06:26.714740 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d" 
event={"ID":"4cc1da27-6eaf-4177-b2d8-1546a9d94f90","Type":"ContainerStarted","Data":"8e8b059d73a2c8e5ffd1f224f2251f2554ce00c13864a77bc4bd0d65d3713e02"} Feb 16 21:06:26.741539 master-0 kubenswrapper[7926]: I0216 21:06:26.740966 7926 scope.go:117] "RemoveContainer" containerID="8fdaced2e29680218985b0af6c01e1d1666c4413685a11533b854af5a3b4a954" Feb 16 21:06:26.741539 master-0 kubenswrapper[7926]: E0216 21:06:26.741172 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=route-controller-manager pod=route-controller-manager-749ccd9c56-wzsnf_openshift-route-controller-manager(4db59450-da78-4879-ada8-ca3fc49fb7a7)\"" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" Feb 16 21:06:26.741539 master-0 kubenswrapper[7926]: I0216 21:06:26.741473 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-w6tqw" podStartSLOduration=607.741456384 podStartE2EDuration="10m7.741456384s" podCreationTimestamp="2026-02-16 20:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:06:26.73663001 +0000 UTC m=+558.371530310" watchObservedRunningTime="2026-02-16 21:06:26.741456384 +0000 UTC m=+558.376356684" Feb 16 21:06:26.759855 master-0 kubenswrapper[7926]: I0216 21:06:26.759361 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d" podStartSLOduration=7.759336727 podStartE2EDuration="7.759336727s" podCreationTimestamp="2026-02-16 21:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:06:26.759334997 
+0000 UTC m=+558.394235297" watchObservedRunningTime="2026-02-16 21:06:26.759336727 +0000 UTC m=+558.394237027" Feb 16 21:06:27.723939 master-0 kubenswrapper[7926]: I0216 21:06:27.723237 7926 generic.go:334] "Generic (PLEG): container finished" podID="4cc1da27-6eaf-4177-b2d8-1546a9d94f90" containerID="b5c9ef27352d95c27da1fd4de0d350f8371e4f69cc5b84960004238d748e1ab6" exitCode=0 Feb 16 21:06:27.723939 master-0 kubenswrapper[7926]: I0216 21:06:27.723325 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d" event={"ID":"4cc1da27-6eaf-4177-b2d8-1546a9d94f90","Type":"ContainerDied","Data":"b5c9ef27352d95c27da1fd4de0d350f8371e4f69cc5b84960004238d748e1ab6"} Feb 16 21:06:28.732620 master-0 kubenswrapper[7926]: I0216 21:06:28.732549 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-hsz6m" event={"ID":"da07cd48-b1e8-4ccc-b980-84702cedb042","Type":"ContainerStarted","Data":"3f85217164f33ae361d727e56edd219159b638f9f5baaf529b0f66b008d3e74b"} Feb 16 21:06:28.733142 master-0 kubenswrapper[7926]: I0216 21:06:28.732939 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-hsz6m" Feb 16 21:06:28.736431 master-0 kubenswrapper[7926]: I0216 21:06:28.736369 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" event={"ID":"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee","Type":"ContainerStarted","Data":"822e5a1c9a45bb991d7b382a67465c6dbc014dbe9cfde42d7e3116d883653d76"} Feb 16 21:06:28.747040 master-0 kubenswrapper[7926]: I0216 21:06:28.746973 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-hsz6m" Feb 16 21:06:28.762526 master-0 kubenswrapper[7926]: I0216 21:06:28.762292 7926 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-hsz6m" podStartSLOduration=519.006169143 podStartE2EDuration="8m41.762260156s" podCreationTimestamp="2026-02-16 20:57:47 +0000 UTC" firstStartedPulling="2026-02-16 21:06:25.505594745 +0000 UTC m=+557.140495045" lastFinishedPulling="2026-02-16 21:06:28.261685738 +0000 UTC m=+559.896586058" observedRunningTime="2026-02-16 21:06:28.758945384 +0000 UTC m=+560.393845684" watchObservedRunningTime="2026-02-16 21:06:28.762260156 +0000 UTC m=+560.397160486" Feb 16 21:06:28.783793 master-0 kubenswrapper[7926]: I0216 21:06:28.783690 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podStartSLOduration=534.70088651 podStartE2EDuration="8m57.783639296s" podCreationTimestamp="2026-02-16 20:57:31 +0000 UTC" firstStartedPulling="2026-02-16 21:06:25.219375449 +0000 UTC m=+556.854275749" lastFinishedPulling="2026-02-16 21:06:28.302128235 +0000 UTC m=+559.937028535" observedRunningTime="2026-02-16 21:06:28.779283256 +0000 UTC m=+560.414183576" watchObservedRunningTime="2026-02-16 21:06:28.783639296 +0000 UTC m=+560.418539616" Feb 16 21:06:29.071543 master-0 kubenswrapper[7926]: I0216 21:06:29.071449 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-qvctv"] Feb 16 21:06:29.072567 master-0 kubenswrapper[7926]: I0216 21:06:29.072530 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qvctv" Feb 16 21:06:29.074803 master-0 kubenswrapper[7926]: I0216 21:06:29.074725 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 21:06:29.074871 master-0 kubenswrapper[7926]: I0216 21:06:29.074804 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-lhkmd" Feb 16 21:06:29.074903 master-0 kubenswrapper[7926]: I0216 21:06:29.074859 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 21:06:29.083722 master-0 kubenswrapper[7926]: I0216 21:06:29.083638 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d" Feb 16 21:06:29.094408 master-0 kubenswrapper[7926]: I0216 21:06:29.094366 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4cc1da27-6eaf-4177-b2d8-1546a9d94f90-secret-volume\") pod \"4cc1da27-6eaf-4177-b2d8-1546a9d94f90\" (UID: \"4cc1da27-6eaf-4177-b2d8-1546a9d94f90\") " Feb 16 21:06:29.094551 master-0 kubenswrapper[7926]: I0216 21:06:29.094417 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh9dv\" (UniqueName: \"kubernetes.io/projected/4cc1da27-6eaf-4177-b2d8-1546a9d94f90-kube-api-access-dh9dv\") pod \"4cc1da27-6eaf-4177-b2d8-1546a9d94f90\" (UID: \"4cc1da27-6eaf-4177-b2d8-1546a9d94f90\") " Feb 16 21:06:29.094629 master-0 kubenswrapper[7926]: I0216 21:06:29.094592 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/913951bb-1702-4b71-862c-a166bc7a62e0-node-bootstrap-token\") pod 
\"machine-config-server-qvctv\" (UID: \"913951bb-1702-4b71-862c-a166bc7a62e0\") " pod="openshift-machine-config-operator/machine-config-server-qvctv" Feb 16 21:06:29.100714 master-0 kubenswrapper[7926]: I0216 21:06:29.094638 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/913951bb-1702-4b71-862c-a166bc7a62e0-certs\") pod \"machine-config-server-qvctv\" (UID: \"913951bb-1702-4b71-862c-a166bc7a62e0\") " pod="openshift-machine-config-operator/machine-config-server-qvctv" Feb 16 21:06:29.100995 master-0 kubenswrapper[7926]: I0216 21:06:29.100957 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgvx2\" (UniqueName: \"kubernetes.io/projected/913951bb-1702-4b71-862c-a166bc7a62e0-kube-api-access-pgvx2\") pod \"machine-config-server-qvctv\" (UID: \"913951bb-1702-4b71-862c-a166bc7a62e0\") " pod="openshift-machine-config-operator/machine-config-server-qvctv" Feb 16 21:06:29.104001 master-0 kubenswrapper[7926]: I0216 21:06:29.103940 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cc1da27-6eaf-4177-b2d8-1546a9d94f90-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4cc1da27-6eaf-4177-b2d8-1546a9d94f90" (UID: "4cc1da27-6eaf-4177-b2d8-1546a9d94f90"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:06:29.104097 master-0 kubenswrapper[7926]: I0216 21:06:29.103970 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cc1da27-6eaf-4177-b2d8-1546a9d94f90-kube-api-access-dh9dv" (OuterVolumeSpecName: "kube-api-access-dh9dv") pod "4cc1da27-6eaf-4177-b2d8-1546a9d94f90" (UID: "4cc1da27-6eaf-4177-b2d8-1546a9d94f90"). InnerVolumeSpecName "kube-api-access-dh9dv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:06:29.183121 master-0 kubenswrapper[7926]: I0216 21:06:29.183082 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:06:29.185898 master-0 kubenswrapper[7926]: I0216 21:06:29.185867 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:06:29.185898 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:06:29.185898 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:06:29.185898 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:06:29.186185 master-0 kubenswrapper[7926]: I0216 21:06:29.186156 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:06:29.202407 master-0 kubenswrapper[7926]: I0216 21:06:29.202354 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cc1da27-6eaf-4177-b2d8-1546a9d94f90-config-volume\") pod \"4cc1da27-6eaf-4177-b2d8-1546a9d94f90\" (UID: \"4cc1da27-6eaf-4177-b2d8-1546a9d94f90\") " Feb 16 21:06:29.202753 master-0 kubenswrapper[7926]: I0216 21:06:29.202662 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/913951bb-1702-4b71-862c-a166bc7a62e0-node-bootstrap-token\") pod \"machine-config-server-qvctv\" (UID: \"913951bb-1702-4b71-862c-a166bc7a62e0\") " pod="openshift-machine-config-operator/machine-config-server-qvctv" Feb 16 21:06:29.202830 master-0 
kubenswrapper[7926]: I0216 21:06:29.202761 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/913951bb-1702-4b71-862c-a166bc7a62e0-certs\") pod \"machine-config-server-qvctv\" (UID: \"913951bb-1702-4b71-862c-a166bc7a62e0\") " pod="openshift-machine-config-operator/machine-config-server-qvctv" Feb 16 21:06:29.202830 master-0 kubenswrapper[7926]: I0216 21:06:29.202760 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cc1da27-6eaf-4177-b2d8-1546a9d94f90-config-volume" (OuterVolumeSpecName: "config-volume") pod "4cc1da27-6eaf-4177-b2d8-1546a9d94f90" (UID: "4cc1da27-6eaf-4177-b2d8-1546a9d94f90"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:06:29.202830 master-0 kubenswrapper[7926]: I0216 21:06:29.202781 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgvx2\" (UniqueName: \"kubernetes.io/projected/913951bb-1702-4b71-862c-a166bc7a62e0-kube-api-access-pgvx2\") pod \"machine-config-server-qvctv\" (UID: \"913951bb-1702-4b71-862c-a166bc7a62e0\") " pod="openshift-machine-config-operator/machine-config-server-qvctv" Feb 16 21:06:29.202830 master-0 kubenswrapper[7926]: I0216 21:06:29.202828 7926 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4cc1da27-6eaf-4177-b2d8-1546a9d94f90-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 16 21:06:29.203012 master-0 kubenswrapper[7926]: I0216 21:06:29.202839 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh9dv\" (UniqueName: \"kubernetes.io/projected/4cc1da27-6eaf-4177-b2d8-1546a9d94f90-kube-api-access-dh9dv\") on node \"master-0\" DevicePath \"\"" Feb 16 21:06:29.203012 master-0 kubenswrapper[7926]: I0216 21:06:29.202849 7926 reconciler_common.go:293] "Volume detached for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/4cc1da27-6eaf-4177-b2d8-1546a9d94f90-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 16 21:06:29.209877 master-0 kubenswrapper[7926]: I0216 21:06:29.209787 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/913951bb-1702-4b71-862c-a166bc7a62e0-certs\") pod \"machine-config-server-qvctv\" (UID: \"913951bb-1702-4b71-862c-a166bc7a62e0\") " pod="openshift-machine-config-operator/machine-config-server-qvctv" Feb 16 21:06:29.211849 master-0 kubenswrapper[7926]: I0216 21:06:29.211815 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/913951bb-1702-4b71-862c-a166bc7a62e0-node-bootstrap-token\") pod \"machine-config-server-qvctv\" (UID: \"913951bb-1702-4b71-862c-a166bc7a62e0\") " pod="openshift-machine-config-operator/machine-config-server-qvctv" Feb 16 21:06:29.220884 master-0 kubenswrapper[7926]: I0216 21:06:29.220813 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgvx2\" (UniqueName: \"kubernetes.io/projected/913951bb-1702-4b71-862c-a166bc7a62e0-kube-api-access-pgvx2\") pod \"machine-config-server-qvctv\" (UID: \"913951bb-1702-4b71-862c-a166bc7a62e0\") " pod="openshift-machine-config-operator/machine-config-server-qvctv" Feb 16 21:06:29.361460 master-0 kubenswrapper[7926]: I0216 21:06:29.361331 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-7485d645b8-9xc4n"] Feb 16 21:06:29.361997 master-0 kubenswrapper[7926]: E0216 21:06:29.361974 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cc1da27-6eaf-4177-b2d8-1546a9d94f90" containerName="collect-profiles" Feb 16 21:06:29.361997 master-0 kubenswrapper[7926]: I0216 21:06:29.361994 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cc1da27-6eaf-4177-b2d8-1546a9d94f90" containerName="collect-profiles" Feb 16 
21:06:29.362331 master-0 kubenswrapper[7926]: I0216 21:06:29.362309 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cc1da27-6eaf-4177-b2d8-1546a9d94f90" containerName="collect-profiles" Feb 16 21:06:29.363531 master-0 kubenswrapper[7926]: I0216 21:06:29.363507 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:06:29.368705 master-0 kubenswrapper[7926]: I0216 21:06:29.367060 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 16 21:06:29.368705 master-0 kubenswrapper[7926]: I0216 21:06:29.367183 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-pt7pr" Feb 16 21:06:29.368705 master-0 kubenswrapper[7926]: I0216 21:06:29.367518 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 16 21:06:29.368705 master-0 kubenswrapper[7926]: I0216 21:06:29.367589 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 16 21:06:29.398227 master-0 kubenswrapper[7926]: I0216 21:06:29.398043 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-7485d645b8-9xc4n"] Feb 16 21:06:29.408351 master-0 kubenswrapper[7926]: I0216 21:06:29.408277 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:06:29.408491 master-0 kubenswrapper[7926]: I0216 21:06:29.408389 7926 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:06:29.408491 master-0 kubenswrapper[7926]: I0216 21:06:29.408441 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98n4h\" (UniqueName: \"kubernetes.io/projected/a0b7a368-1408-4fc3-ae25-4613b74e7fca-kube-api-access-98n4h\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:06:29.408584 master-0 kubenswrapper[7926]: I0216 21:06:29.408485 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a0b7a368-1408-4fc3-ae25-4613b74e7fca-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:06:29.419721 master-0 kubenswrapper[7926]: I0216 21:06:29.419669 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qvctv" Feb 16 21:06:29.440054 master-0 kubenswrapper[7926]: W0216 21:06:29.439963 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod913951bb_1702_4b71_862c_a166bc7a62e0.slice/crio-404fdd69be202f40aeca377d1ba146b346077a53f8e7897ed4e324403366c1bf WatchSource:0}: Error finding container 404fdd69be202f40aeca377d1ba146b346077a53f8e7897ed4e324403366c1bf: Status 404 returned error can't find the container with id 404fdd69be202f40aeca377d1ba146b346077a53f8e7897ed4e324403366c1bf Feb 16 21:06:29.510494 master-0 kubenswrapper[7926]: I0216 21:06:29.510453 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98n4h\" (UniqueName: \"kubernetes.io/projected/a0b7a368-1408-4fc3-ae25-4613b74e7fca-kube-api-access-98n4h\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:06:29.510712 master-0 kubenswrapper[7926]: I0216 21:06:29.510690 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a0b7a368-1408-4fc3-ae25-4613b74e7fca-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:06:29.510875 master-0 kubenswrapper[7926]: I0216 21:06:29.510854 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " 
pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:06:29.511116 master-0 kubenswrapper[7926]: I0216 21:06:29.511097 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:06:29.511567 master-0 kubenswrapper[7926]: E0216 21:06:29.511368 7926 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 16 21:06:29.511852 master-0 kubenswrapper[7926]: E0216 21:06:29.511704 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls podName:a0b7a368-1408-4fc3-ae25-4613b74e7fca nodeName:}" failed. No retries permitted until 2026-02-16 21:06:30.011527393 +0000 UTC m=+561.646427703 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-9xc4n" (UID: "a0b7a368-1408-4fc3-ae25-4613b74e7fca") : secret "prometheus-operator-tls" not found Feb 16 21:06:29.514278 master-0 kubenswrapper[7926]: I0216 21:06:29.514206 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:06:29.515212 master-0 kubenswrapper[7926]: I0216 21:06:29.515154 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a0b7a368-1408-4fc3-ae25-4613b74e7fca-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:06:29.528581 master-0 kubenswrapper[7926]: I0216 21:06:29.528543 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98n4h\" (UniqueName: \"kubernetes.io/projected/a0b7a368-1408-4fc3-ae25-4613b74e7fca-kube-api-access-98n4h\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:06:29.745141 master-0 kubenswrapper[7926]: I0216 21:06:29.745064 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qvctv" event={"ID":"913951bb-1702-4b71-862c-a166bc7a62e0","Type":"ContainerStarted","Data":"9774080b01608f0a21e73d69c46adab19d9597a4bd78784da71dd2c1e0272836"} Feb 16 
21:06:29.745750 master-0 kubenswrapper[7926]: I0216 21:06:29.745157 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qvctv" event={"ID":"913951bb-1702-4b71-862c-a166bc7a62e0","Type":"ContainerStarted","Data":"404fdd69be202f40aeca377d1ba146b346077a53f8e7897ed4e324403366c1bf"} Feb 16 21:06:29.748559 master-0 kubenswrapper[7926]: I0216 21:06:29.748394 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d" event={"ID":"4cc1da27-6eaf-4177-b2d8-1546a9d94f90","Type":"ContainerDied","Data":"8e8b059d73a2c8e5ffd1f224f2251f2554ce00c13864a77bc4bd0d65d3713e02"} Feb 16 21:06:29.748559 master-0 kubenswrapper[7926]: I0216 21:06:29.748475 7926 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e8b059d73a2c8e5ffd1f224f2251f2554ce00c13864a77bc4bd0d65d3713e02" Feb 16 21:06:29.748559 master-0 kubenswrapper[7926]: I0216 21:06:29.748469 7926 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d" Feb 16 21:06:29.768325 master-0 kubenswrapper[7926]: I0216 21:06:29.768220 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-qvctv" podStartSLOduration=0.768182112 podStartE2EDuration="768.182112ms" podCreationTimestamp="2026-02-16 21:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:06:29.763998947 +0000 UTC m=+561.398899307" watchObservedRunningTime="2026-02-16 21:06:29.768182112 +0000 UTC m=+561.403082452" Feb 16 21:06:30.018591 master-0 kubenswrapper[7926]: I0216 21:06:30.018510 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:06:30.018850 master-0 kubenswrapper[7926]: E0216 21:06:30.018805 7926 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 16 21:06:30.018963 master-0 kubenswrapper[7926]: E0216 21:06:30.018929 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls podName:a0b7a368-1408-4fc3-ae25-4613b74e7fca nodeName:}" failed. No retries permitted until 2026-02-16 21:06:31.018900668 +0000 UTC m=+562.653801008 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-9xc4n" (UID: "a0b7a368-1408-4fc3-ae25-4613b74e7fca") : secret "prometheus-operator-tls" not found Feb 16 21:06:30.185927 master-0 kubenswrapper[7926]: I0216 21:06:30.185857 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:06:30.185927 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:06:30.185927 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:06:30.185927 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:06:30.185927 master-0 kubenswrapper[7926]: I0216 21:06:30.185937 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:06:31.033812 master-0 kubenswrapper[7926]: I0216 21:06:31.033730 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:06:31.034484 master-0 kubenswrapper[7926]: E0216 21:06:31.033971 7926 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 16 21:06:31.034484 master-0 kubenswrapper[7926]: E0216 21:06:31.034120 7926 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls podName:a0b7a368-1408-4fc3-ae25-4613b74e7fca nodeName:}" failed. No retries permitted until 2026-02-16 21:06:33.034086361 +0000 UTC m=+564.668986691 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-9xc4n" (UID: "a0b7a368-1408-4fc3-ae25-4613b74e7fca") : secret "prometheus-operator-tls" not found Feb 16 21:06:31.186200 master-0 kubenswrapper[7926]: I0216 21:06:31.186048 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:06:31.186200 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:06:31.186200 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:06:31.186200 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:06:31.186200 master-0 kubenswrapper[7926]: I0216 21:06:31.186178 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:06:32.185417 master-0 kubenswrapper[7926]: I0216 21:06:32.185340 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:06:32.185417 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:06:32.185417 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:06:32.185417 master-0 
kubenswrapper[7926]: healthz check failed Feb 16 21:06:32.185417 master-0 kubenswrapper[7926]: I0216 21:06:32.185408 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:06:33.062311 master-0 kubenswrapper[7926]: I0216 21:06:33.062223 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:06:33.062694 master-0 kubenswrapper[7926]: E0216 21:06:33.062536 7926 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 16 21:06:33.062840 master-0 kubenswrapper[7926]: E0216 21:06:33.062828 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls podName:a0b7a368-1408-4fc3-ae25-4613b74e7fca nodeName:}" failed. No retries permitted until 2026-02-16 21:06:37.062806451 +0000 UTC m=+568.697706751 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-9xc4n" (UID: "a0b7a368-1408-4fc3-ae25-4613b74e7fca") : secret "prometheus-operator-tls" not found Feb 16 21:06:33.184622 master-0 kubenswrapper[7926]: I0216 21:06:33.184554 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:06:33.184622 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:06:33.184622 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:06:33.184622 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:06:33.184622 master-0 kubenswrapper[7926]: I0216 21:06:33.184625 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:06:34.185838 master-0 kubenswrapper[7926]: I0216 21:06:34.185744 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:06:34.185838 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:06:34.185838 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:06:34.185838 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:06:34.186519 master-0 kubenswrapper[7926]: I0216 21:06:34.185841 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:06:35.183518 master-0 kubenswrapper[7926]: I0216 21:06:35.183470 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:06:35.187191 master-0 kubenswrapper[7926]: I0216 21:06:35.187158 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:06:35.187191 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:06:35.187191 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:06:35.187191 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:06:35.187626 master-0 kubenswrapper[7926]: I0216 21:06:35.187196 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:06:36.186061 master-0 kubenswrapper[7926]: I0216 21:06:36.185976 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:06:36.186061 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:06:36.186061 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:06:36.186061 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:06:36.186431 master-0 kubenswrapper[7926]: I0216 21:06:36.186085 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:06:37.122678 master-0 kubenswrapper[7926]: I0216 21:06:37.122605 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:06:37.123209 master-0 kubenswrapper[7926]: E0216 21:06:37.122813 7926 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 16 21:06:37.123209 master-0 kubenswrapper[7926]: E0216 21:06:37.122872 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls podName:a0b7a368-1408-4fc3-ae25-4613b74e7fca nodeName:}" failed. No retries permitted until 2026-02-16 21:06:45.122855205 +0000 UTC m=+576.757755505 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-9xc4n" (UID: "a0b7a368-1408-4fc3-ae25-4613b74e7fca") : secret "prometheus-operator-tls" not found Feb 16 21:06:37.186054 master-0 kubenswrapper[7926]: I0216 21:06:37.185988 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:06:37.186054 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:06:37.186054 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:06:37.186054 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:06:37.186376 master-0 kubenswrapper[7926]: I0216 21:06:37.186066 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:06:38.184937 master-0 kubenswrapper[7926]: I0216 21:06:38.184874 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:06:38.184937 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:06:38.184937 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:06:38.184937 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:06:38.185457 master-0 kubenswrapper[7926]: I0216 21:06:38.184951 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:06:39.185047 master-0 kubenswrapper[7926]: I0216 21:06:39.184948 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:06:39.185047 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:06:39.185047 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:06:39.185047 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:06:39.185760 master-0 kubenswrapper[7926]: I0216 21:06:39.185075 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:06:39.739000 master-0 kubenswrapper[7926]: I0216 21:06:39.738880 7926 scope.go:117] "RemoveContainer" containerID="1280026270fafbe7904a661cf88a10d4f267040cb7cc3fb07ffaa22fce0b7d32" Feb 16 21:06:39.739521 master-0 kubenswrapper[7926]: I0216 21:06:39.739076 7926 scope.go:117] "RemoveContainer" containerID="8fdaced2e29680218985b0af6c01e1d1666c4413685a11533b854af5a3b4a954" Feb 16 21:06:39.739521 master-0 kubenswrapper[7926]: E0216 21:06:39.739394 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-755d954778-8gnq5_openshift-authentication-operator(27c20f63-9bfb-4703-94d5-0c65475e08d1)\"" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1" Feb 16 21:06:39.739521 master-0 kubenswrapper[7926]: E0216 21:06:39.739457 7926 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=route-controller-manager pod=route-controller-manager-749ccd9c56-wzsnf_openshift-route-controller-manager(4db59450-da78-4879-ada8-ca3fc49fb7a7)\"" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" Feb 16 21:06:40.185718 master-0 kubenswrapper[7926]: I0216 21:06:40.185548 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:06:40.185718 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:06:40.185718 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:06:40.185718 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:06:40.186880 master-0 kubenswrapper[7926]: I0216 21:06:40.185720 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:06:41.185568 master-0 kubenswrapper[7926]: I0216 21:06:41.185507 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:06:41.185568 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:06:41.185568 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:06:41.185568 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:06:41.187137 master-0 kubenswrapper[7926]: I0216 21:06:41.187088 7926 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:06:42.185186 master-0 kubenswrapper[7926]: I0216 21:06:42.185099 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:06:42.185186 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:06:42.185186 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:06:42.185186 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:06:42.185567 master-0 kubenswrapper[7926]: I0216 21:06:42.185198 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:06:43.185949 master-0 kubenswrapper[7926]: I0216 21:06:43.185893 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:06:43.185949 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:06:43.185949 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:06:43.185949 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:06:43.186627 master-0 kubenswrapper[7926]: I0216 21:06:43.185956 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500"
Feb 16 21:06:44.189249 master-0 kubenswrapper[7926]: I0216 21:06:44.189173 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:06:44.189249 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:06:44.189249 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:06:44.189249 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:06:44.189911 master-0 kubenswrapper[7926]: I0216 21:06:44.189270 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:06:45.137214 master-0 kubenswrapper[7926]: I0216 21:06:45.137096 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n"
Feb 16 21:06:45.137601 master-0 kubenswrapper[7926]: E0216 21:06:45.137379 7926 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Feb 16 21:06:45.137601 master-0 kubenswrapper[7926]: E0216 21:06:45.137492 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls podName:a0b7a368-1408-4fc3-ae25-4613b74e7fca nodeName:}" failed. No retries permitted until 2026-02-16 21:07:01.137462373 +0000 UTC m=+592.772362703 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-9xc4n" (UID: "a0b7a368-1408-4fc3-ae25-4613b74e7fca") : secret "prometheus-operator-tls" not found
Feb 16 21:06:45.184980 master-0 kubenswrapper[7926]: I0216 21:06:45.184911 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:06:45.184980 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:06:45.184980 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:06:45.184980 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:06:45.185333 master-0 kubenswrapper[7926]: I0216 21:06:45.184984 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:06:46.185603 master-0 kubenswrapper[7926]: I0216 21:06:46.185510 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:06:46.185603 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:06:46.185603 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:06:46.185603 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:06:46.186433 master-0 kubenswrapper[7926]: I0216 21:06:46.185638 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:06:47.185933 master-0 kubenswrapper[7926]: I0216 21:06:47.185830 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:06:47.185933 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:06:47.185933 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:06:47.185933 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:06:47.186923 master-0 kubenswrapper[7926]: I0216 21:06:47.185959 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:06:48.185421 master-0 kubenswrapper[7926]: I0216 21:06:48.185310 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:06:48.185421 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:06:48.185421 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:06:48.185421 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:06:48.185421 master-0 kubenswrapper[7926]: I0216 21:06:48.185385 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:06:49.185516 master-0 kubenswrapper[7926]: I0216 21:06:49.185447 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:06:49.185516 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:06:49.185516 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:06:49.185516 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:06:49.185869 master-0 kubenswrapper[7926]: I0216 21:06:49.185540 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:06:50.185704 master-0 kubenswrapper[7926]: I0216 21:06:50.185543 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:06:50.185704 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:06:50.185704 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:06:50.185704 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:06:50.185704 master-0 kubenswrapper[7926]: I0216 21:06:50.185714 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:06:51.186558 master-0 kubenswrapper[7926]: I0216 21:06:51.186468 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:06:51.186558 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:06:51.186558 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:06:51.186558 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:06:51.187482 master-0 kubenswrapper[7926]: I0216 21:06:51.186605 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:06:51.738366 master-0 kubenswrapper[7926]: I0216 21:06:51.738286 7926 scope.go:117] "RemoveContainer" containerID="8fdaced2e29680218985b0af6c01e1d1666c4413685a11533b854af5a3b4a954"
Feb 16 21:06:51.738820 master-0 kubenswrapper[7926]: E0216 21:06:51.738635 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=route-controller-manager pod=route-controller-manager-749ccd9c56-wzsnf_openshift-route-controller-manager(4db59450-da78-4879-ada8-ca3fc49fb7a7)\"" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7"
Feb 16 21:06:52.185335 master-0 kubenswrapper[7926]: I0216 21:06:52.185219 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:06:52.185335 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:06:52.185335 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:06:52.185335 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:06:52.185335 master-0 kubenswrapper[7926]: I0216 21:06:52.185311 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:06:52.738748 master-0 kubenswrapper[7926]: I0216 21:06:52.738507 7926 scope.go:117] "RemoveContainer" containerID="1280026270fafbe7904a661cf88a10d4f267040cb7cc3fb07ffaa22fce0b7d32"
Feb 16 21:06:53.184416 master-0 kubenswrapper[7926]: I0216 21:06:53.184359 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:06:53.184416 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:06:53.184416 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:06:53.184416 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:06:53.184748 master-0 kubenswrapper[7926]: I0216 21:06:53.184437 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:06:53.916452 master-0 kubenswrapper[7926]: I0216 21:06:53.916397 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-8gnq5_27c20f63-9bfb-4703-94d5-0c65475e08d1/authentication-operator/5.log"
Feb 16 21:06:53.917076 master-0 kubenswrapper[7926]: I0216 21:06:53.916478 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" event={"ID":"27c20f63-9bfb-4703-94d5-0c65475e08d1","Type":"ContainerStarted","Data":"cbff59f9a87f22154ac16be0a1fd4153598047d145747da8c5ad418b6de5b9ba"}
Feb 16 21:06:54.186004 master-0 kubenswrapper[7926]: I0216 21:06:54.185894 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:06:54.186004 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:06:54.186004 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:06:54.186004 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:06:54.186348 master-0 kubenswrapper[7926]: I0216 21:06:54.186317 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:06:55.186267 master-0 kubenswrapper[7926]: I0216 21:06:55.186207 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:06:55.186267 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:06:55.186267 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:06:55.186267 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:06:55.187493 master-0 kubenswrapper[7926]: I0216 21:06:55.187444 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:06:56.186926 master-0 kubenswrapper[7926]: I0216 21:06:56.186834 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:06:56.186926 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:06:56.186926 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:06:56.186926 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:06:56.188187 master-0 kubenswrapper[7926]: I0216 21:06:56.186934 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:06:57.185817 master-0 kubenswrapper[7926]: I0216 21:06:57.185742 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:06:57.185817 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:06:57.185817 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:06:57.185817 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:06:57.186145 master-0 kubenswrapper[7926]: I0216 21:06:57.185818 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:06:58.185472 master-0 kubenswrapper[7926]: I0216 21:06:58.185393 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:06:58.185472 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:06:58.185472 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:06:58.185472 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:06:58.185472 master-0 kubenswrapper[7926]: I0216 21:06:58.185470 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:06:59.184806 master-0 kubenswrapper[7926]: I0216 21:06:59.184749 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:06:59.184806 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:06:59.184806 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:06:59.184806 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:06:59.185104 master-0 kubenswrapper[7926]: I0216 21:06:59.184826 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:07:00.184925 master-0 kubenswrapper[7926]: I0216 21:07:00.184857 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:07:00.184925 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:07:00.184925 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:07:00.184925 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:07:00.185548 master-0 kubenswrapper[7926]: I0216 21:07:00.184940 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:07:01.170714 master-0 kubenswrapper[7926]: I0216 21:07:01.170532 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n"
Feb 16 21:07:01.171056 master-0 kubenswrapper[7926]: E0216 21:07:01.170802 7926 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Feb 16 21:07:01.171056 master-0 kubenswrapper[7926]: E0216 21:07:01.170949 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls podName:a0b7a368-1408-4fc3-ae25-4613b74e7fca nodeName:}" failed. No retries permitted until 2026-02-16 21:07:33.17091708 +0000 UTC m=+624.805817420 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-9xc4n" (UID: "a0b7a368-1408-4fc3-ae25-4613b74e7fca") : secret "prometheus-operator-tls" not found
Feb 16 21:07:01.185883 master-0 kubenswrapper[7926]: I0216 21:07:01.185804 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:07:01.185883 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:07:01.185883 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:07:01.185883 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:07:01.186703 master-0 kubenswrapper[7926]: I0216 21:07:01.185881 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:07:02.184783 master-0 kubenswrapper[7926]: I0216 21:07:02.184708 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:07:02.184783 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:07:02.184783 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:07:02.184783 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:07:02.185216 master-0 kubenswrapper[7926]: I0216 21:07:02.184786 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:07:03.186172 master-0 kubenswrapper[7926]: I0216 21:07:03.186062 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:07:03.186172 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:07:03.186172 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:07:03.186172 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:07:03.186172 master-0 kubenswrapper[7926]: I0216 21:07:03.186164 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:07:04.185400 master-0 kubenswrapper[7926]: I0216 21:07:04.185279 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:07:04.185400 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:07:04.185400 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:07:04.185400 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:07:04.185400 master-0 kubenswrapper[7926]: I0216 21:07:04.185389 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:07:04.999514 master-0 kubenswrapper[7926]: I0216 21:07:04.999433 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/2.log"
Feb 16 21:07:05.000296 master-0 kubenswrapper[7926]: I0216 21:07:05.000254 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/1.log"
Feb 16 21:07:05.000909 master-0 kubenswrapper[7926]: I0216 21:07:05.000856 7926 generic.go:334] "Generic (PLEG): container finished" podID="cef33294-81fb-41a2-811d-2565f94514d1" containerID="50720d9ad3b3ea70d85acc6454761164cbe913fb0f9ca263fc8b50f0bd5f848c" exitCode=1
Feb 16 21:07:05.001023 master-0 kubenswrapper[7926]: I0216 21:07:05.000911 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" event={"ID":"cef33294-81fb-41a2-811d-2565f94514d1","Type":"ContainerDied","Data":"50720d9ad3b3ea70d85acc6454761164cbe913fb0f9ca263fc8b50f0bd5f848c"}
Feb 16 21:07:05.001023 master-0 kubenswrapper[7926]: I0216 21:07:05.000948 7926 scope.go:117] "RemoveContainer" containerID="5b1674388d3a0d8fb07d284207cc23840a32ef17ddc0f1ef774d2188e32d3e84"
Feb 16 21:07:05.001715 master-0 kubenswrapper[7926]: I0216 21:07:05.001608 7926 scope.go:117] "RemoveContainer" containerID="50720d9ad3b3ea70d85acc6454761164cbe913fb0f9ca263fc8b50f0bd5f848c"
Feb 16 21:07:05.002069 master-0 kubenswrapper[7926]: E0216 21:07:05.002024 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1"
Feb 16 21:07:05.185191 master-0 kubenswrapper[7926]: I0216 21:07:05.185113 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:07:05.185191 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:07:05.185191 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:07:05.185191 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:07:05.185191 master-0 kubenswrapper[7926]: I0216 21:07:05.185186 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:07:05.739284 master-0 kubenswrapper[7926]: I0216 21:07:05.739220 7926 scope.go:117] "RemoveContainer" containerID="8fdaced2e29680218985b0af6c01e1d1666c4413685a11533b854af5a3b4a954"
Feb 16 21:07:06.010497 master-0 kubenswrapper[7926]: I0216 21:07:06.010457 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-749ccd9c56-wzsnf_4db59450-da78-4879-ada8-ca3fc49fb7a7/route-controller-manager/4.log"
Feb 16 21:07:06.011077 master-0 kubenswrapper[7926]: I0216 21:07:06.010579 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" event={"ID":"4db59450-da78-4879-ada8-ca3fc49fb7a7","Type":"ContainerStarted","Data":"76be2fb9017c6c391da7666ee8357be5d76c275a9752c228eacdc1c1d9610f90"}
Feb 16 21:07:06.011231 master-0 kubenswrapper[7926]: I0216 21:07:06.011158 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf"
Feb 16 21:07:06.012896 master-0 kubenswrapper[7926]: I0216 21:07:06.012863 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/2.log"
Feb 16 21:07:06.185636 master-0 kubenswrapper[7926]: I0216 21:07:06.185549 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:07:06.185636 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:07:06.185636 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:07:06.185636 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:07:06.186011 master-0 kubenswrapper[7926]: I0216 21:07:06.185684 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:07:06.216544 master-0 kubenswrapper[7926]: I0216 21:07:06.216443 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf"
Feb 16 21:07:07.219894 master-0 kubenswrapper[7926]: I0216 21:07:07.219810 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:07:07.219894 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:07:07.219894 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:07:07.219894 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:07:07.221069 master-0 kubenswrapper[7926]: I0216 21:07:07.219911 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:07:08.185571 master-0 kubenswrapper[7926]: I0216 21:07:08.185431 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:07:08.185571 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:07:08.185571 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:07:08.185571 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:07:08.185571 master-0 kubenswrapper[7926]: I0216 21:07:08.185550 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:07:09.185452 master-0 kubenswrapper[7926]: I0216 21:07:09.185355 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:07:09.185452 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:07:09.185452 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:07:09.185452 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:07:09.186448 master-0 kubenswrapper[7926]: I0216 21:07:09.185462 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:07:10.185562 master-0 kubenswrapper[7926]: I0216 21:07:10.185482 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:07:10.185562 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:07:10.185562 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:07:10.185562 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:07:10.185562 master-0 kubenswrapper[7926]: I0216 21:07:10.185550 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:07:11.186714 master-0 kubenswrapper[7926]: I0216 21:07:11.186127 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:07:11.186714 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:07:11.186714 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:07:11.186714 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:07:11.186714 master-0 kubenswrapper[7926]: I0216 21:07:11.186219 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:07:12.185481 master-0 kubenswrapper[7926]: I0216 21:07:12.185413 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:07:12.185481 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:07:12.185481 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:07:12.185481 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:07:12.186183 master-0 kubenswrapper[7926]: I0216 21:07:12.186145 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:07:13.187738 master-0 kubenswrapper[7926]: I0216 21:07:13.187631 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:07:13.187738 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:07:13.187738 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:07:13.187738 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:07:13.188829 master-0 kubenswrapper[7926]: I0216 21:07:13.187758 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:07:14.185753 master-0 kubenswrapper[7926]: I0216 21:07:14.185686 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:07:14.185753 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:07:14.185753 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:07:14.185753 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:07:14.186004 master-0 kubenswrapper[7926]: I0216 21:07:14.185786 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:07:15.184109 master-0 kubenswrapper[7926]: I0216 21:07:15.184039 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:07:15.184109 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:07:15.184109 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:07:15.184109 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:07:15.184805 master-0 kubenswrapper[7926]: I0216 21:07:15.184113 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:07:15.739342 master-0 kubenswrapper[7926]: I0216 21:07:15.739268 7926 scope.go:117] "RemoveContainer" containerID="50720d9ad3b3ea70d85acc6454761164cbe913fb0f9ca263fc8b50f0bd5f848c"
Feb 16 21:07:15.739769 master-0 kubenswrapper[7926]: E0216 21:07:15.739728 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1"
Feb 16 21:07:16.185891 master-0 kubenswrapper[7926]: I0216 21:07:16.185808 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:07:16.185891 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:07:16.185891 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:07:16.185891 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:07:16.186742 master-0 kubenswrapper[7926]: I0216 21:07:16.185915 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:07:17.184835 master-0 kubenswrapper[7926]: I0216 21:07:17.184778 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:07:17.184835 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:07:17.184835 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:07:17.184835 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:07:17.185111 master-0 kubenswrapper[7926]: I0216 21:07:17.184845 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:07:18.186528 master-0 kubenswrapper[7926]: I0216 21:07:18.186455 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:07:18.186528 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:07:18.186528 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:07:18.186528 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:07:18.187163 master-0 kubenswrapper[7926]: I0216 21:07:18.186583 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:07:19.184622 master-0 kubenswrapper[7926]: I0216 21:07:19.184574 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:07:19.184622 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:07:19.184622 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:07:19.184622 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:07:19.184890 master-0 kubenswrapper[7926]: I0216 21:07:19.184637 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:07:20.186043 master-0 kubenswrapper[7926]: I0216 21:07:20.185991 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:07:20.186043 master-0 kubenswrapper[7926]:
[-]has-synced failed: reason withheld Feb 16 21:07:20.186043 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:20.186043 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:20.186822 master-0 kubenswrapper[7926]: I0216 21:07:20.186096 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:21.185627 master-0 kubenswrapper[7926]: I0216 21:07:21.185539 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:21.185627 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:21.185627 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:21.185627 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:21.185627 master-0 kubenswrapper[7926]: I0216 21:07:21.185624 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:22.185116 master-0 kubenswrapper[7926]: I0216 21:07:22.185050 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:22.185116 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:22.185116 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:22.185116 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:22.185393 master-0 
kubenswrapper[7926]: I0216 21:07:22.185162 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:23.185876 master-0 kubenswrapper[7926]: I0216 21:07:23.185308 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:23.185876 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:23.185876 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:23.185876 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:23.185876 master-0 kubenswrapper[7926]: I0216 21:07:23.185387 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:24.185264 master-0 kubenswrapper[7926]: I0216 21:07:24.185174 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:24.185264 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:24.185264 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:24.185264 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:24.185726 master-0 kubenswrapper[7926]: I0216 21:07:24.185262 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:25.185703 master-0 kubenswrapper[7926]: I0216 21:07:25.185571 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:25.185703 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:25.185703 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:25.185703 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:25.185703 master-0 kubenswrapper[7926]: I0216 21:07:25.185680 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:26.186174 master-0 kubenswrapper[7926]: I0216 21:07:26.186070 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:26.186174 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:26.186174 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:26.186174 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:26.187501 master-0 kubenswrapper[7926]: I0216 21:07:26.186187 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:27.186015 master-0 kubenswrapper[7926]: I0216 21:07:27.185945 7926 patch_prober.go:28] interesting 
pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:27.186015 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:27.186015 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:27.186015 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:27.186728 master-0 kubenswrapper[7926]: I0216 21:07:27.186022 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:28.186350 master-0 kubenswrapper[7926]: I0216 21:07:28.186184 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:28.186350 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:28.186350 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:28.186350 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:28.186350 master-0 kubenswrapper[7926]: I0216 21:07:28.186331 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:29.185188 master-0 kubenswrapper[7926]: I0216 21:07:29.185121 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 
21:07:29.185188 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:29.185188 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:29.185188 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:29.185522 master-0 kubenswrapper[7926]: I0216 21:07:29.185210 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:30.186244 master-0 kubenswrapper[7926]: I0216 21:07:30.186145 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:30.186244 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:30.186244 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:30.186244 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:30.187422 master-0 kubenswrapper[7926]: I0216 21:07:30.186239 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:30.738448 master-0 kubenswrapper[7926]: I0216 21:07:30.738360 7926 scope.go:117] "RemoveContainer" containerID="50720d9ad3b3ea70d85acc6454761164cbe913fb0f9ca263fc8b50f0bd5f848c" Feb 16 21:07:31.187625 master-0 kubenswrapper[7926]: I0216 21:07:31.187507 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:31.187625 master-0 
kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:31.187625 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:31.187625 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:31.188689 master-0 kubenswrapper[7926]: I0216 21:07:31.187698 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:31.189624 master-0 kubenswrapper[7926]: I0216 21:07:31.189581 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/2.log" Feb 16 21:07:31.190252 master-0 kubenswrapper[7926]: I0216 21:07:31.190175 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" event={"ID":"cef33294-81fb-41a2-811d-2565f94514d1","Type":"ContainerStarted","Data":"653d95653081a7f3f8351ba7eaf8e2a8cf9f5394f19ac7bd13b4a971322691eb"} Feb 16 21:07:32.185090 master-0 kubenswrapper[7926]: I0216 21:07:32.184992 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:32.185090 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:32.185090 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:32.185090 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:32.185090 master-0 kubenswrapper[7926]: I0216 21:07:32.185083 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 16 21:07:33.185313 master-0 kubenswrapper[7926]: I0216 21:07:33.185253 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:33.185313 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:33.185313 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:33.185313 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:33.185313 master-0 kubenswrapper[7926]: I0216 21:07:33.185313 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:33.236275 master-0 kubenswrapper[7926]: I0216 21:07:33.236212 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:07:33.236505 master-0 kubenswrapper[7926]: E0216 21:07:33.236399 7926 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 16 21:07:33.236505 master-0 kubenswrapper[7926]: E0216 21:07:33.236477 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls podName:a0b7a368-1408-4fc3-ae25-4613b74e7fca nodeName:}" failed. No retries permitted until 2026-02-16 21:08:37.236457483 +0000 UTC m=+688.871357793 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-9xc4n" (UID: "a0b7a368-1408-4fc3-ae25-4613b74e7fca") : secret "prometheus-operator-tls" not found Feb 16 21:07:34.185140 master-0 kubenswrapper[7926]: I0216 21:07:34.185066 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:34.185140 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:34.185140 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:34.185140 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:34.185847 master-0 kubenswrapper[7926]: I0216 21:07:34.185141 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:35.184974 master-0 kubenswrapper[7926]: I0216 21:07:35.184910 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:35.184974 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:35.184974 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:35.184974 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:35.185355 master-0 kubenswrapper[7926]: I0216 21:07:35.184979 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:36.186093 master-0 kubenswrapper[7926]: I0216 21:07:36.185980 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:36.186093 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:36.186093 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:36.186093 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:36.186779 master-0 kubenswrapper[7926]: I0216 21:07:36.186114 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:37.186047 master-0 kubenswrapper[7926]: I0216 21:07:37.185956 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:37.186047 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:37.186047 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:37.186047 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:37.186047 master-0 kubenswrapper[7926]: I0216 21:07:37.186023 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:38.185746 master-0 kubenswrapper[7926]: I0216 21:07:38.185642 7926 patch_prober.go:28] interesting 
pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:38.185746 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:38.185746 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:38.185746 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:38.186171 master-0 kubenswrapper[7926]: I0216 21:07:38.185774 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:39.185795 master-0 kubenswrapper[7926]: I0216 21:07:39.185740 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:39.185795 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:39.185795 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:39.185795 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:39.186852 master-0 kubenswrapper[7926]: I0216 21:07:39.185805 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:40.190180 master-0 kubenswrapper[7926]: I0216 21:07:40.190110 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 
21:07:40.190180 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:40.190180 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:40.190180 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:40.191166 master-0 kubenswrapper[7926]: I0216 21:07:40.190194 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:41.185993 master-0 kubenswrapper[7926]: I0216 21:07:41.185901 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:41.185993 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:41.185993 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:41.185993 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:41.185993 master-0 kubenswrapper[7926]: I0216 21:07:41.185981 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:42.185542 master-0 kubenswrapper[7926]: I0216 21:07:42.185441 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:42.185542 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:42.185542 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:42.185542 master-0 kubenswrapper[7926]: healthz 
check failed Feb 16 21:07:42.186627 master-0 kubenswrapper[7926]: I0216 21:07:42.185533 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:43.186282 master-0 kubenswrapper[7926]: I0216 21:07:43.186176 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:43.186282 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:43.186282 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:43.186282 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:43.186282 master-0 kubenswrapper[7926]: I0216 21:07:43.186251 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:44.185041 master-0 kubenswrapper[7926]: I0216 21:07:44.184949 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:44.185041 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:44.185041 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:44.185041 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:44.185357 master-0 kubenswrapper[7926]: I0216 21:07:44.185053 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" 
podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:45.184572 master-0 kubenswrapper[7926]: I0216 21:07:45.184495 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:45.184572 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:45.184572 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:45.184572 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:45.184572 master-0 kubenswrapper[7926]: I0216 21:07:45.184552 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:46.185179 master-0 kubenswrapper[7926]: I0216 21:07:46.185077 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:46.185179 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:46.185179 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:46.185179 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:46.185179 master-0 kubenswrapper[7926]: I0216 21:07:46.185179 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:47.187540 master-0 kubenswrapper[7926]: I0216 21:07:47.187391 7926 
patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:47.187540 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:47.187540 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:47.187540 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:47.187540 master-0 kubenswrapper[7926]: I0216 21:07:47.187492 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:48.185520 master-0 kubenswrapper[7926]: I0216 21:07:48.185459 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:48.185520 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:48.185520 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:48.185520 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:48.185783 master-0 kubenswrapper[7926]: I0216 21:07:48.185545 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:49.185950 master-0 kubenswrapper[7926]: I0216 21:07:49.185870 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:49.185950 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:49.185950 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:49.185950 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:49.186514 master-0 kubenswrapper[7926]: I0216 21:07:49.185976 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:50.184378 master-0 kubenswrapper[7926]: I0216 21:07:50.184328 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:50.184378 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:50.184378 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:50.184378 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:50.184758 master-0 kubenswrapper[7926]: I0216 21:07:50.184403 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:51.189440 master-0 kubenswrapper[7926]: I0216 21:07:51.189355 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:51.189440 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:51.189440 master-0 kubenswrapper[7926]: [+]process-running ok 
Feb 16 21:07:51.189440 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:51.190165 master-0 kubenswrapper[7926]: I0216 21:07:51.189443 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:52.185507 master-0 kubenswrapper[7926]: I0216 21:07:52.185425 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:52.185507 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:52.185507 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:52.185507 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:52.185844 master-0 kubenswrapper[7926]: I0216 21:07:52.185510 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:53.185643 master-0 kubenswrapper[7926]: I0216 21:07:53.185584 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:53.185643 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:53.185643 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:53.185643 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:53.186599 master-0 kubenswrapper[7926]: I0216 21:07:53.185682 7926 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:54.185303 master-0 kubenswrapper[7926]: I0216 21:07:54.185182 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:54.185303 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:54.185303 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:54.185303 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:54.185303 master-0 kubenswrapper[7926]: I0216 21:07:54.185287 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:55.185888 master-0 kubenswrapper[7926]: I0216 21:07:55.185830 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:55.185888 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:55.185888 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:55.185888 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:55.186415 master-0 kubenswrapper[7926]: I0216 21:07:55.185914 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:56.185301 
master-0 kubenswrapper[7926]: I0216 21:07:56.185210 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:56.185301 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:56.185301 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:56.185301 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:56.185705 master-0 kubenswrapper[7926]: I0216 21:07:56.185300 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:57.185349 master-0 kubenswrapper[7926]: I0216 21:07:57.185261 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:57.185349 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:57.185349 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:57.185349 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:57.186124 master-0 kubenswrapper[7926]: I0216 21:07:57.185351 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:58.185678 master-0 kubenswrapper[7926]: I0216 21:07:58.185569 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:58.185678 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:58.185678 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:58.185678 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:58.186371 master-0 kubenswrapper[7926]: I0216 21:07:58.185704 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:07:59.185889 master-0 kubenswrapper[7926]: I0216 21:07:59.185809 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:07:59.185889 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:07:59.185889 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:07:59.185889 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:07:59.186880 master-0 kubenswrapper[7926]: I0216 21:07:59.185909 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:00.186438 master-0 kubenswrapper[7926]: I0216 21:08:00.186328 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:00.186438 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:00.186438 master-0 
kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:00.186438 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:00.186438 master-0 kubenswrapper[7926]: I0216 21:08:00.186423 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:01.185759 master-0 kubenswrapper[7926]: I0216 21:08:01.185663 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:01.185759 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:01.185759 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:01.185759 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:01.185759 master-0 kubenswrapper[7926]: I0216 21:08:01.185725 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:02.184696 master-0 kubenswrapper[7926]: I0216 21:08:02.184571 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:02.184696 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:02.184696 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:02.184696 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:02.185960 master-0 kubenswrapper[7926]: I0216 21:08:02.184731 7926 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:03.186062 master-0 kubenswrapper[7926]: I0216 21:08:03.185968 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:03.186062 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:03.186062 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:03.186062 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:03.187406 master-0 kubenswrapper[7926]: I0216 21:08:03.186066 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:04.186180 master-0 kubenswrapper[7926]: I0216 21:08:04.186063 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:04.186180 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:04.186180 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:04.186180 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:04.186180 master-0 kubenswrapper[7926]: I0216 21:08:04.186166 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 16 21:08:05.185037 master-0 kubenswrapper[7926]: I0216 21:08:05.184959 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:05.185037 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:05.185037 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:05.185037 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:05.185843 master-0 kubenswrapper[7926]: I0216 21:08:05.185781 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:06.186837 master-0 kubenswrapper[7926]: I0216 21:08:06.186750 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:06.186837 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:06.186837 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:06.186837 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:06.188255 master-0 kubenswrapper[7926]: I0216 21:08:06.186847 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:07.187208 master-0 kubenswrapper[7926]: I0216 21:08:07.187032 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:07.187208 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:07.187208 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:07.187208 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:07.188571 master-0 kubenswrapper[7926]: I0216 21:08:07.187235 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:08.185537 master-0 kubenswrapper[7926]: I0216 21:08:08.185417 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:08.185537 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:08.185537 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:08.185537 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:08.186184 master-0 kubenswrapper[7926]: I0216 21:08:08.185558 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:09.185986 master-0 kubenswrapper[7926]: I0216 21:08:09.185898 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:09.185986 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 
21:08:09.185986 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:09.185986 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:09.186787 master-0 kubenswrapper[7926]: I0216 21:08:09.186039 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:10.184163 master-0 kubenswrapper[7926]: I0216 21:08:10.184078 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:10.184163 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:10.184163 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:10.184163 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:10.184487 master-0 kubenswrapper[7926]: I0216 21:08:10.184173 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:11.186251 master-0 kubenswrapper[7926]: I0216 21:08:11.186174 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:11.186251 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:11.186251 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:11.186251 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:11.186251 master-0 kubenswrapper[7926]: I0216 21:08:11.186248 
7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:12.186311 master-0 kubenswrapper[7926]: I0216 21:08:12.186234 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:12.186311 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:12.186311 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:12.186311 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:12.186898 master-0 kubenswrapper[7926]: I0216 21:08:12.186334 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:13.186681 master-0 kubenswrapper[7926]: I0216 21:08:13.186564 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:13.186681 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:13.186681 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:13.186681 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:13.187409 master-0 kubenswrapper[7926]: I0216 21:08:13.186721 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 16 21:08:14.186182 master-0 kubenswrapper[7926]: I0216 21:08:14.186091 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:14.186182 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:14.186182 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:14.186182 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:14.187388 master-0 kubenswrapper[7926]: I0216 21:08:14.186183 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:15.185151 master-0 kubenswrapper[7926]: I0216 21:08:15.185024 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:15.185151 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:15.185151 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:15.185151 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:15.185781 master-0 kubenswrapper[7926]: I0216 21:08:15.185158 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:16.186593 master-0 kubenswrapper[7926]: I0216 21:08:16.186510 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:16.186593 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:16.186593 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:16.186593 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:16.186593 master-0 kubenswrapper[7926]: I0216 21:08:16.186588 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:17.185094 master-0 kubenswrapper[7926]: I0216 21:08:17.185046 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:17.185094 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:17.185094 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:17.185094 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:17.185505 master-0 kubenswrapper[7926]: I0216 21:08:17.185472 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:18.184687 master-0 kubenswrapper[7926]: I0216 21:08:18.184583 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:18.184687 master-0 kubenswrapper[7926]: 
[-]has-synced failed: reason withheld Feb 16 21:08:18.184687 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:18.184687 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:18.184687 master-0 kubenswrapper[7926]: I0216 21:08:18.184682 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:19.185992 master-0 kubenswrapper[7926]: I0216 21:08:19.185901 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:19.185992 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:19.185992 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:19.185992 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:19.186923 master-0 kubenswrapper[7926]: I0216 21:08:19.186002 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:19.785787 master-0 kubenswrapper[7926]: I0216 21:08:19.783344 7926 patch_prober.go:28] interesting pod/machine-config-daemon-jb6tl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 21:08:19.785787 master-0 kubenswrapper[7926]: I0216 21:08:19.783621 7926 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jb6tl" 
podUID="88c9d2fb-763f-4405-8d1a-c39039b41d3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 21:08:20.187482 master-0 kubenswrapper[7926]: I0216 21:08:20.187218 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:20.187482 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:20.187482 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:20.187482 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:20.187482 master-0 kubenswrapper[7926]: I0216 21:08:20.187339 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:21.186041 master-0 kubenswrapper[7926]: I0216 21:08:21.185970 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:21.186041 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:21.186041 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:21.186041 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:21.186341 master-0 kubenswrapper[7926]: I0216 21:08:21.186047 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 
21:08:22.186259 master-0 kubenswrapper[7926]: I0216 21:08:22.186143 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:22.186259 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:22.186259 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:22.186259 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:22.186259 master-0 kubenswrapper[7926]: I0216 21:08:22.186219 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:23.186090 master-0 kubenswrapper[7926]: I0216 21:08:23.186013 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:23.186090 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:23.186090 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:23.186090 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:23.186090 master-0 kubenswrapper[7926]: I0216 21:08:23.186083 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:24.184997 master-0 kubenswrapper[7926]: I0216 21:08:24.184878 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:24.184997 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:24.184997 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:24.184997 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:24.185329 master-0 kubenswrapper[7926]: I0216 21:08:24.185025 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:25.186043 master-0 kubenswrapper[7926]: I0216 21:08:25.185923 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:25.186043 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:25.186043 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:25.186043 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:25.186043 master-0 kubenswrapper[7926]: I0216 21:08:25.186020 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:26.185644 master-0 kubenswrapper[7926]: I0216 21:08:26.185522 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:26.185644 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:26.185644 
master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:26.185644 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:26.185644 master-0 kubenswrapper[7926]: I0216 21:08:26.185641 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:27.185162 master-0 kubenswrapper[7926]: I0216 21:08:27.185063 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:27.185162 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:27.185162 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:27.185162 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:27.185472 master-0 kubenswrapper[7926]: I0216 21:08:27.185192 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:08:28.185751 master-0 kubenswrapper[7926]: I0216 21:08:28.185634 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:08:28.185751 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:08:28.185751 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:08:28.185751 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:08:28.186499 master-0 kubenswrapper[7926]: I0216 21:08:28.185757 7926 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:08:28.186499 master-0 kubenswrapper[7926]: I0216 21:08:28.185815 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-z4bnk"
Feb 16 21:08:28.186499 master-0 kubenswrapper[7926]: I0216 21:08:28.186418 7926 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"822e5a1c9a45bb991d7b382a67465c6dbc014dbe9cfde42d7e3116d883653d76"} pod="openshift-ingress/router-default-864ddd5f56-z4bnk" containerMessage="Container router failed startup probe, will be restarted"
Feb 16 21:08:28.186499 master-0 kubenswrapper[7926]: I0216 21:08:28.186459 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" containerID="cri-o://822e5a1c9a45bb991d7b382a67465c6dbc014dbe9cfde42d7e3116d883653d76" gracePeriod=3600
Feb 16 21:08:32.397429 master-0 kubenswrapper[7926]: E0216 21:08:32.397251 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[prometheus-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" podUID="a0b7a368-1408-4fc3-ae25-4613b74e7fca"
Feb 16 21:08:32.635687 master-0 kubenswrapper[7926]: I0216 21:08:32.635582 7926 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n"
Feb 16 21:08:37.307447 master-0 kubenswrapper[7926]: I0216 21:08:37.307207 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n"
Feb 16 21:08:37.307447 master-0 kubenswrapper[7926]: E0216 21:08:37.307431 7926 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Feb 16 21:08:37.307447 master-0 kubenswrapper[7926]: E0216 21:08:37.307538 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls podName:a0b7a368-1408-4fc3-ae25-4613b74e7fca nodeName:}" failed. No retries permitted until 2026-02-16 21:10:39.307513125 +0000 UTC m=+810.942413435 (durationBeforeRetry 2m2s).
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-9xc4n" (UID: "a0b7a368-1408-4fc3-ae25-4613b74e7fca") : secret "prometheus-operator-tls" not found
Feb 16 21:09:14.973749 master-0 kubenswrapper[7926]: I0216 21:09:14.973640 7926 generic.go:334] "Generic (PLEG): container finished" podID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerID="822e5a1c9a45bb991d7b382a67465c6dbc014dbe9cfde42d7e3116d883653d76" exitCode=0
Feb 16 21:09:14.974356 master-0 kubenswrapper[7926]: I0216 21:09:14.973760 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" event={"ID":"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee","Type":"ContainerDied","Data":"822e5a1c9a45bb991d7b382a67465c6dbc014dbe9cfde42d7e3116d883653d76"}
Feb 16 21:09:14.974356 master-0 kubenswrapper[7926]: I0216 21:09:14.973795 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" event={"ID":"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee","Type":"ContainerStarted","Data":"922b3b9a2ab72ca8bb93946974e3710fc89f41db642b5f99391c37114b12712f"}
Feb 16 21:09:15.182552 master-0 kubenswrapper[7926]: I0216 21:09:15.182498 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-864ddd5f56-z4bnk"
Feb 16 21:09:15.182895 master-0 kubenswrapper[7926]: I0216 21:09:15.182643 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-z4bnk"
Feb 16 21:09:15.187218 master-0 kubenswrapper[7926]: I0216 21:09:15.187098 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:09:15.187218 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:09:15.187218 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:09:15.187218 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:09:15.187218 master-0 kubenswrapper[7926]: I0216 21:09:15.187159 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
[identical Startup probe failure entries for pod openshift-ingress/router-default-864ddd5f56-z4bnk repeated at 1 s intervals from 21:09:16 through 21:09:41; omitted]
Feb 16 21:09:42.150264 master-0 kubenswrapper[7926]: I0216 21:09:42.150178 7926 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/3.log"
Feb 16 21:09:42.151391 master-0 kubenswrapper[7926]: I0216 21:09:42.151333 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/2.log"
Feb 16 21:09:42.152159 master-0 kubenswrapper[7926]: I0216 21:09:42.152113 7926 generic.go:334] "Generic (PLEG): container finished" podID="cef33294-81fb-41a2-811d-2565f94514d1" containerID="653d95653081a7f3f8351ba7eaf8e2a8cf9f5394f19ac7bd13b4a971322691eb" exitCode=1
Feb 16 21:09:42.152338 master-0 kubenswrapper[7926]: I0216 21:09:42.152161 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" event={"ID":"cef33294-81fb-41a2-811d-2565f94514d1","Type":"ContainerDied","Data":"653d95653081a7f3f8351ba7eaf8e2a8cf9f5394f19ac7bd13b4a971322691eb"}
Feb 16 21:09:42.152338 master-0 kubenswrapper[7926]: I0216 21:09:42.152201 7926 scope.go:117] "RemoveContainer" containerID="50720d9ad3b3ea70d85acc6454761164cbe913fb0f9ca263fc8b50f0bd5f848c"
Feb 16 21:09:42.153027 master-0 kubenswrapper[7926]: I0216 21:09:42.152986 7926 scope.go:117] "RemoveContainer" containerID="653d95653081a7f3f8351ba7eaf8e2a8cf9f5394f19ac7bd13b4a971322691eb"
Feb 16 21:09:42.154711 master-0 kubenswrapper[7926]: E0216 21:09:42.153848 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1"
Feb 16 21:09:42.186638 master-0 kubenswrapper[7926]: I0216 21:09:42.186583 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:09:42.186638 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:09:42.186638 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:09:42.186638 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:09:42.187450 master-0 kubenswrapper[7926]: I0216 21:09:42.186722 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:09:43.162038 master-0 kubenswrapper[7926]: I0216 21:09:43.161939 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/3.log"
[identical Startup probe failure entries for pod openshift-ingress/router-default-864ddd5f56-z4bnk repeated at 1 s intervals from 21:09:43 through 21:09:47; omitted]
Feb 16 21:09:48.184724 master-0 kubenswrapper[7926]: I0216 21:09:48.184640 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:09:48.184724 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:09:48.184724 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:09:48.184724 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:09:48.185317 master-0 kubenswrapper[7926]: I0216 21:09:48.184749 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:09:49.184900 master-0 kubenswrapper[7926]: I0216 21:09:49.184827 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:09:49.184900 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:09:49.184900 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:09:49.184900 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:09:49.184900 master-0 kubenswrapper[7926]: I0216 21:09:49.184889 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:09:50.184086 master-0 kubenswrapper[7926]: I0216 21:09:50.183974 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:09:50.184086 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:09:50.184086 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:09:50.184086 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:09:50.184585 master-0 kubenswrapper[7926]: I0216 21:09:50.184104 7926 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:09:51.184976 master-0 kubenswrapper[7926]: I0216 21:09:51.184909 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:09:51.184976 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:09:51.184976 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:09:51.184976 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:09:51.184976 master-0 kubenswrapper[7926]: I0216 21:09:51.184969 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:09:52.184867 master-0 kubenswrapper[7926]: I0216 21:09:52.184795 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:09:52.184867 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:09:52.184867 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:09:52.184867 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:09:52.184867 master-0 kubenswrapper[7926]: I0216 21:09:52.184864 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:09:53.184719 
master-0 kubenswrapper[7926]: I0216 21:09:53.184601 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:09:53.184719 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:09:53.184719 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:09:53.184719 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:09:53.185180 master-0 kubenswrapper[7926]: I0216 21:09:53.184743 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:09:54.185057 master-0 kubenswrapper[7926]: I0216 21:09:54.184989 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:09:54.185057 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:09:54.185057 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:09:54.185057 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:09:54.185057 master-0 kubenswrapper[7926]: I0216 21:09:54.185046 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:09:55.185094 master-0 kubenswrapper[7926]: I0216 21:09:55.185032 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:09:55.185094 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:09:55.185094 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:09:55.185094 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:09:55.186169 master-0 kubenswrapper[7926]: I0216 21:09:55.185844 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:09:55.740012 master-0 kubenswrapper[7926]: I0216 21:09:55.739831 7926 scope.go:117] "RemoveContainer" containerID="653d95653081a7f3f8351ba7eaf8e2a8cf9f5394f19ac7bd13b4a971322691eb" Feb 16 21:09:55.740885 master-0 kubenswrapper[7926]: E0216 21:09:55.740750 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:09:56.185226 master-0 kubenswrapper[7926]: I0216 21:09:56.185091 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:09:56.185226 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:09:56.185226 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:09:56.185226 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:09:56.185226 master-0 kubenswrapper[7926]: I0216 21:09:56.185225 7926 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:09:57.185529 master-0 kubenswrapper[7926]: I0216 21:09:57.185459 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:09:57.185529 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:09:57.185529 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:09:57.185529 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:09:57.185529 master-0 kubenswrapper[7926]: I0216 21:09:57.185531 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:09:58.184591 master-0 kubenswrapper[7926]: I0216 21:09:58.184471 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:09:58.184591 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:09:58.184591 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:09:58.184591 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:09:58.184591 master-0 kubenswrapper[7926]: I0216 21:09:58.184552 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 16 21:09:59.185328 master-0 kubenswrapper[7926]: I0216 21:09:59.185255 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:09:59.185328 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:09:59.185328 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:09:59.185328 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:09:59.186335 master-0 kubenswrapper[7926]: I0216 21:09:59.185337 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:00.185584 master-0 kubenswrapper[7926]: I0216 21:10:00.185533 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:00.185584 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:00.185584 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:00.185584 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:00.186259 master-0 kubenswrapper[7926]: I0216 21:10:00.185603 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:01.185040 master-0 kubenswrapper[7926]: I0216 21:10:01.184944 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:01.185040 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:01.185040 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:01.185040 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:01.186058 master-0 kubenswrapper[7926]: I0216 21:10:01.185062 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:02.185071 master-0 kubenswrapper[7926]: I0216 21:10:02.184989 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:02.185071 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:02.185071 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:02.185071 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:02.185380 master-0 kubenswrapper[7926]: I0216 21:10:02.185083 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:03.184816 master-0 kubenswrapper[7926]: I0216 21:10:03.184698 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:03.184816 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 
21:10:03.184816 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:03.184816 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:03.185738 master-0 kubenswrapper[7926]: I0216 21:10:03.184833 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:04.185624 master-0 kubenswrapper[7926]: I0216 21:10:04.185533 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:04.185624 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:04.185624 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:04.185624 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:04.186551 master-0 kubenswrapper[7926]: I0216 21:10:04.185627 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:05.184923 master-0 kubenswrapper[7926]: I0216 21:10:05.184854 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:05.184923 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:05.184923 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:05.184923 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:05.185253 master-0 kubenswrapper[7926]: I0216 21:10:05.184937 
7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:06.184469 master-0 kubenswrapper[7926]: I0216 21:10:06.184428 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:06.184469 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:06.184469 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:06.184469 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:06.185126 master-0 kubenswrapper[7926]: I0216 21:10:06.185093 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:07.185765 master-0 kubenswrapper[7926]: I0216 21:10:07.185444 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:07.185765 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:07.185765 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:07.185765 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:07.185765 master-0 kubenswrapper[7926]: I0216 21:10:07.185560 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 16 21:10:08.185764 master-0 kubenswrapper[7926]: I0216 21:10:08.185669 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:08.185764 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:08.185764 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:08.185764 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:08.185764 master-0 kubenswrapper[7926]: I0216 21:10:08.185738 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:09.185493 master-0 kubenswrapper[7926]: I0216 21:10:09.185340 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:09.185493 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:09.185493 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:09.185493 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:09.185493 master-0 kubenswrapper[7926]: I0216 21:10:09.185441 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:09.739439 master-0 kubenswrapper[7926]: I0216 21:10:09.739365 7926 scope.go:117] "RemoveContainer" 
containerID="653d95653081a7f3f8351ba7eaf8e2a8cf9f5394f19ac7bd13b4a971322691eb" Feb 16 21:10:09.739881 master-0 kubenswrapper[7926]: E0216 21:10:09.739832 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:10:10.185124 master-0 kubenswrapper[7926]: I0216 21:10:10.184953 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:10.185124 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:10.185124 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:10.185124 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:10.185124 master-0 kubenswrapper[7926]: I0216 21:10:10.185065 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:10.664824 master-0 kubenswrapper[7926]: I0216 21:10:10.664758 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-l44qd"] Feb 16 21:10:10.665555 master-0 kubenswrapper[7926]: I0216 21:10:10.665527 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-l44qd" Feb 16 21:10:10.667894 master-0 kubenswrapper[7926]: I0216 21:10:10.667854 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 21:10:10.668341 master-0 kubenswrapper[7926]: I0216 21:10:10.668272 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-gpdzh" Feb 16 21:10:10.668608 master-0 kubenswrapper[7926]: I0216 21:10:10.668565 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 21:10:10.668733 master-0 kubenswrapper[7926]: I0216 21:10:10.668712 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 21:10:10.680769 master-0 kubenswrapper[7926]: I0216 21:10:10.680700 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-l44qd"] Feb 16 21:10:10.800577 master-0 kubenswrapper[7926]: I0216 21:10:10.800480 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vddxb\" (UniqueName: \"kubernetes.io/projected/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-kube-api-access-vddxb\") pod \"ingress-canary-l44qd\" (UID: \"0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b\") " pod="openshift-ingress-canary/ingress-canary-l44qd" Feb 16 21:10:10.801042 master-0 kubenswrapper[7926]: I0216 21:10:10.800960 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert\") pod \"ingress-canary-l44qd\" (UID: \"0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b\") " pod="openshift-ingress-canary/ingress-canary-l44qd" Feb 16 21:10:10.902393 master-0 kubenswrapper[7926]: I0216 21:10:10.902290 7926 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert\") pod \"ingress-canary-l44qd\" (UID: \"0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b\") " pod="openshift-ingress-canary/ingress-canary-l44qd" Feb 16 21:10:10.902393 master-0 kubenswrapper[7926]: I0216 21:10:10.902385 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vddxb\" (UniqueName: \"kubernetes.io/projected/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-kube-api-access-vddxb\") pod \"ingress-canary-l44qd\" (UID: \"0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b\") " pod="openshift-ingress-canary/ingress-canary-l44qd" Feb 16 21:10:10.902847 master-0 kubenswrapper[7926]: E0216 21:10:10.902511 7926 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 16 21:10:10.902847 master-0 kubenswrapper[7926]: E0216 21:10:10.902680 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert podName:0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b nodeName:}" failed. No retries permitted until 2026-02-16 21:10:11.402607594 +0000 UTC m=+783.037507934 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert") pod "ingress-canary-l44qd" (UID: "0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b") : secret "canary-serving-cert" not found Feb 16 21:10:10.933847 master-0 kubenswrapper[7926]: I0216 21:10:10.933635 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vddxb\" (UniqueName: \"kubernetes.io/projected/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-kube-api-access-vddxb\") pod \"ingress-canary-l44qd\" (UID: \"0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b\") " pod="openshift-ingress-canary/ingress-canary-l44qd" Feb 16 21:10:11.186256 master-0 kubenswrapper[7926]: I0216 21:10:11.186068 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:11.186256 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:11.186256 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:11.186256 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:11.186256 master-0 kubenswrapper[7926]: I0216 21:10:11.186157 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:11.409436 master-0 kubenswrapper[7926]: I0216 21:10:11.409368 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert\") pod \"ingress-canary-l44qd\" (UID: \"0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b\") " pod="openshift-ingress-canary/ingress-canary-l44qd" Feb 16 21:10:11.409762 master-0 kubenswrapper[7926]: E0216 21:10:11.409591 7926 
secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 16 21:10:11.409762 master-0 kubenswrapper[7926]: E0216 21:10:11.409751 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert podName:0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b nodeName:}" failed. No retries permitted until 2026-02-16 21:10:12.409717592 +0000 UTC m=+784.044617922 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert") pod "ingress-canary-l44qd" (UID: "0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b") : secret "canary-serving-cert" not found Feb 16 21:10:12.186550 master-0 kubenswrapper[7926]: I0216 21:10:12.186473 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:12.186550 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:12.186550 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:12.186550 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:12.187570 master-0 kubenswrapper[7926]: I0216 21:10:12.186583 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:12.425356 master-0 kubenswrapper[7926]: I0216 21:10:12.425263 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert\") pod \"ingress-canary-l44qd\" (UID: \"0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b\") " 
pod="openshift-ingress-canary/ingress-canary-l44qd" Feb 16 21:10:12.425644 master-0 kubenswrapper[7926]: E0216 21:10:12.425530 7926 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 16 21:10:12.425644 master-0 kubenswrapper[7926]: E0216 21:10:12.425632 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert podName:0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b nodeName:}" failed. No retries permitted until 2026-02-16 21:10:14.425605064 +0000 UTC m=+786.060505404 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert") pod "ingress-canary-l44qd" (UID: "0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b") : secret "canary-serving-cert" not found Feb 16 21:10:13.186240 master-0 kubenswrapper[7926]: I0216 21:10:13.186156 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:13.186240 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:13.186240 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:13.186240 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:13.186592 master-0 kubenswrapper[7926]: I0216 21:10:13.186268 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:14.185028 master-0 kubenswrapper[7926]: I0216 21:10:14.184963 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:14.185028 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:14.185028 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:14.185028 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:14.185353 master-0 kubenswrapper[7926]: I0216 21:10:14.185056 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:14.455690 master-0 kubenswrapper[7926]: I0216 21:10:14.455514 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert\") pod \"ingress-canary-l44qd\" (UID: \"0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b\") " pod="openshift-ingress-canary/ingress-canary-l44qd" Feb 16 21:10:14.456243 master-0 kubenswrapper[7926]: E0216 21:10:14.455749 7926 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 16 21:10:14.456243 master-0 kubenswrapper[7926]: E0216 21:10:14.455840 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert podName:0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b nodeName:}" failed. No retries permitted until 2026-02-16 21:10:18.455820006 +0000 UTC m=+790.090720316 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert") pod "ingress-canary-l44qd" (UID: "0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b") : secret "canary-serving-cert" not found Feb 16 21:10:15.184810 master-0 kubenswrapper[7926]: I0216 21:10:15.184749 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:15.184810 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:15.184810 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:15.184810 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:15.185075 master-0 kubenswrapper[7926]: I0216 21:10:15.184840 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:16.185018 master-0 kubenswrapper[7926]: I0216 21:10:16.184914 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:16.185018 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:16.185018 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:16.185018 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:16.185018 master-0 kubenswrapper[7926]: I0216 21:10:16.184998 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 16 21:10:17.186225 master-0 kubenswrapper[7926]: I0216 21:10:17.186104 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:17.186225 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:17.186225 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:17.186225 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:17.187577 master-0 kubenswrapper[7926]: I0216 21:10:17.186248 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:18.186006 master-0 kubenswrapper[7926]: I0216 21:10:18.185900 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:18.186006 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:18.186006 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:18.186006 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:18.187292 master-0 kubenswrapper[7926]: I0216 21:10:18.186026 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:18.513436 master-0 kubenswrapper[7926]: I0216 21:10:18.513337 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert\") pod \"ingress-canary-l44qd\" (UID: \"0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b\") " pod="openshift-ingress-canary/ingress-canary-l44qd" Feb 16 21:10:18.513923 master-0 kubenswrapper[7926]: E0216 21:10:18.513605 7926 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 16 21:10:18.513923 master-0 kubenswrapper[7926]: E0216 21:10:18.513752 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert podName:0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b nodeName:}" failed. No retries permitted until 2026-02-16 21:10:26.513720709 +0000 UTC m=+798.148621199 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert") pod "ingress-canary-l44qd" (UID: "0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b") : secret "canary-serving-cert" not found Feb 16 21:10:19.186422 master-0 kubenswrapper[7926]: I0216 21:10:19.186318 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:19.186422 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:19.186422 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:19.186422 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:19.186422 master-0 kubenswrapper[7926]: I0216 21:10:19.186415 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:20.184376 master-0 kubenswrapper[7926]: I0216 21:10:20.184310 7926 
patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:20.184376 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:20.184376 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:20.184376 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:20.184731 master-0 kubenswrapper[7926]: I0216 21:10:20.184377 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:21.186016 master-0 kubenswrapper[7926]: I0216 21:10:21.185936 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:21.186016 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:21.186016 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:21.186016 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:21.186574 master-0 kubenswrapper[7926]: I0216 21:10:21.186024 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:22.186020 master-0 kubenswrapper[7926]: I0216 21:10:22.185932 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:22.186020 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:22.186020 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:22.186020 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:22.187221 master-0 kubenswrapper[7926]: I0216 21:10:22.186038 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:23.185094 master-0 kubenswrapper[7926]: I0216 21:10:23.185023 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:23.185094 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:23.185094 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:23.185094 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:23.185422 master-0 kubenswrapper[7926]: I0216 21:10:23.185106 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:24.185812 master-0 kubenswrapper[7926]: I0216 21:10:24.185719 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:24.185812 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:24.185812 master-0 kubenswrapper[7926]: [+]process-running ok 
Feb 16 21:10:24.185812 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:24.187069 master-0 kubenswrapper[7926]: I0216 21:10:24.185838 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:24.739263 master-0 kubenswrapper[7926]: I0216 21:10:24.739198 7926 scope.go:117] "RemoveContainer" containerID="653d95653081a7f3f8351ba7eaf8e2a8cf9f5394f19ac7bd13b4a971322691eb" Feb 16 21:10:25.187015 master-0 kubenswrapper[7926]: I0216 21:10:25.186914 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:25.187015 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:25.187015 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:25.187015 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:25.187015 master-0 kubenswrapper[7926]: I0216 21:10:25.186998 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:25.437251 master-0 kubenswrapper[7926]: I0216 21:10:25.437085 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/3.log" Feb 16 21:10:25.437876 master-0 kubenswrapper[7926]: I0216 21:10:25.437803 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" 
event={"ID":"cef33294-81fb-41a2-811d-2565f94514d1","Type":"ContainerStarted","Data":"4007378c35279e107179280f5b478a33e451c6d5ec64c7c97a91228d94179cd2"} Feb 16 21:10:26.185695 master-0 kubenswrapper[7926]: I0216 21:10:26.185322 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:26.185695 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:26.185695 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:26.185695 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:26.185695 master-0 kubenswrapper[7926]: I0216 21:10:26.185387 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:26.550227 master-0 kubenswrapper[7926]: I0216 21:10:26.550111 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert\") pod \"ingress-canary-l44qd\" (UID: \"0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b\") " pod="openshift-ingress-canary/ingress-canary-l44qd" Feb 16 21:10:26.551394 master-0 kubenswrapper[7926]: E0216 21:10:26.550366 7926 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 16 21:10:26.551394 master-0 kubenswrapper[7926]: E0216 21:10:26.550487 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert podName:0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b nodeName:}" failed. No retries permitted until 2026-02-16 21:10:42.550452523 +0000 UTC m=+814.185352863 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert") pod "ingress-canary-l44qd" (UID: "0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b") : secret "canary-serving-cert" not found Feb 16 21:10:27.186097 master-0 kubenswrapper[7926]: I0216 21:10:27.186034 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:27.186097 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:27.186097 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:27.186097 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:27.186379 master-0 kubenswrapper[7926]: I0216 21:10:27.186105 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:28.185522 master-0 kubenswrapper[7926]: I0216 21:10:28.185408 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:28.185522 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:28.185522 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:28.185522 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:28.186539 master-0 kubenswrapper[7926]: I0216 21:10:28.185553 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 16 21:10:29.184061 master-0 kubenswrapper[7926]: I0216 21:10:29.183994 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:29.184061 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:29.184061 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:29.184061 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:29.184370 master-0 kubenswrapper[7926]: I0216 21:10:29.184060 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:30.185182 master-0 kubenswrapper[7926]: I0216 21:10:30.185114 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:30.185182 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:30.185182 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:30.185182 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:30.186177 master-0 kubenswrapper[7926]: I0216 21:10:30.185183 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:31.185711 master-0 kubenswrapper[7926]: I0216 21:10:31.185604 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:31.185711 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:31.185711 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:31.185711 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:31.186694 master-0 kubenswrapper[7926]: I0216 21:10:31.185749 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:32.184924 master-0 kubenswrapper[7926]: I0216 21:10:32.184821 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:32.184924 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:32.184924 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:32.184924 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:32.185266 master-0 kubenswrapper[7926]: I0216 21:10:32.184966 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:33.184296 master-0 kubenswrapper[7926]: I0216 21:10:33.184246 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:33.184296 master-0 kubenswrapper[7926]: 
[-]has-synced failed: reason withheld Feb 16 21:10:33.184296 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:33.184296 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:33.184854 master-0 kubenswrapper[7926]: I0216 21:10:33.184309 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:34.184740 master-0 kubenswrapper[7926]: I0216 21:10:34.184693 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:34.184740 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:34.184740 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:34.184740 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:34.185298 master-0 kubenswrapper[7926]: I0216 21:10:34.184752 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:35.185634 master-0 kubenswrapper[7926]: I0216 21:10:35.185454 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:35.185634 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:35.185634 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:35.185634 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:35.185634 master-0 
kubenswrapper[7926]: I0216 21:10:35.185591 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:35.648472 master-0 kubenswrapper[7926]: E0216 21:10:35.648366 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[prometheus-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" podUID="a0b7a368-1408-4fc3-ae25-4613b74e7fca" Feb 16 21:10:36.032365 master-0 kubenswrapper[7926]: I0216 21:10:36.032298 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 16 21:10:36.033687 master-0 kubenswrapper[7926]: I0216 21:10:36.033643 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 16 21:10:36.037196 master-0 kubenswrapper[7926]: I0216 21:10:36.037110 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Feb 16 21:10:36.043989 master-0 kubenswrapper[7926]: I0216 21:10:36.043905 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-c5j6b" Feb 16 21:10:36.087977 master-0 kubenswrapper[7926]: I0216 21:10:36.053576 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 16 21:10:36.176012 master-0 kubenswrapper[7926]: I0216 21:10:36.175950 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1677883f-bae2-4b6e-9dfe-683a6d26f2c5-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"1677883f-bae2-4b6e-9dfe-683a6d26f2c5\") " pod="openshift-etcd/installer-2-master-0" Feb 16 21:10:36.176012 
master-0 kubenswrapper[7926]: I0216 21:10:36.176004 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1677883f-bae2-4b6e-9dfe-683a6d26f2c5-var-lock\") pod \"installer-2-master-0\" (UID: \"1677883f-bae2-4b6e-9dfe-683a6d26f2c5\") " pod="openshift-etcd/installer-2-master-0" Feb 16 21:10:36.176374 master-0 kubenswrapper[7926]: I0216 21:10:36.176043 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1677883f-bae2-4b6e-9dfe-683a6d26f2c5-kube-api-access\") pod \"installer-2-master-0\" (UID: \"1677883f-bae2-4b6e-9dfe-683a6d26f2c5\") " pod="openshift-etcd/installer-2-master-0" Feb 16 21:10:36.185640 master-0 kubenswrapper[7926]: I0216 21:10:36.185579 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:36.185640 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:36.185640 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:36.185640 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:36.186128 master-0 kubenswrapper[7926]: I0216 21:10:36.185694 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:36.278134 master-0 kubenswrapper[7926]: I0216 21:10:36.278069 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1677883f-bae2-4b6e-9dfe-683a6d26f2c5-kubelet-dir\") pod \"installer-2-master-0\" (UID: 
\"1677883f-bae2-4b6e-9dfe-683a6d26f2c5\") " pod="openshift-etcd/installer-2-master-0" Feb 16 21:10:36.278345 master-0 kubenswrapper[7926]: I0216 21:10:36.278185 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1677883f-bae2-4b6e-9dfe-683a6d26f2c5-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"1677883f-bae2-4b6e-9dfe-683a6d26f2c5\") " pod="openshift-etcd/installer-2-master-0" Feb 16 21:10:36.278345 master-0 kubenswrapper[7926]: I0216 21:10:36.278272 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1677883f-bae2-4b6e-9dfe-683a6d26f2c5-var-lock\") pod \"installer-2-master-0\" (UID: \"1677883f-bae2-4b6e-9dfe-683a6d26f2c5\") " pod="openshift-etcd/installer-2-master-0" Feb 16 21:10:36.278345 master-0 kubenswrapper[7926]: I0216 21:10:36.278331 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1677883f-bae2-4b6e-9dfe-683a6d26f2c5-kube-api-access\") pod \"installer-2-master-0\" (UID: \"1677883f-bae2-4b6e-9dfe-683a6d26f2c5\") " pod="openshift-etcd/installer-2-master-0" Feb 16 21:10:36.278536 master-0 kubenswrapper[7926]: I0216 21:10:36.278502 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1677883f-bae2-4b6e-9dfe-683a6d26f2c5-var-lock\") pod \"installer-2-master-0\" (UID: \"1677883f-bae2-4b6e-9dfe-683a6d26f2c5\") " pod="openshift-etcd/installer-2-master-0" Feb 16 21:10:36.295935 master-0 kubenswrapper[7926]: I0216 21:10:36.295830 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1677883f-bae2-4b6e-9dfe-683a6d26f2c5-kube-api-access\") pod \"installer-2-master-0\" (UID: \"1677883f-bae2-4b6e-9dfe-683a6d26f2c5\") " pod="openshift-etcd/installer-2-master-0" Feb 16 
21:10:36.408255 master-0 kubenswrapper[7926]: I0216 21:10:36.408188 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 16 21:10:36.505298 master-0 kubenswrapper[7926]: I0216 21:10:36.505270 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:10:36.833133 master-0 kubenswrapper[7926]: I0216 21:10:36.833045 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 16 21:10:37.184972 master-0 kubenswrapper[7926]: I0216 21:10:37.184912 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:37.184972 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:37.184972 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:37.184972 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:37.185311 master-0 kubenswrapper[7926]: I0216 21:10:37.184977 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:37.512625 master-0 kubenswrapper[7926]: I0216 21:10:37.512557 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"1677883f-bae2-4b6e-9dfe-683a6d26f2c5","Type":"ContainerStarted","Data":"b251b8636a6a11ccf532a9af9a8852c95e1a7cdd48031754c8a88d40620a2450"} Feb 16 21:10:37.512625 master-0 kubenswrapper[7926]: I0216 21:10:37.512619 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" 
event={"ID":"1677883f-bae2-4b6e-9dfe-683a6d26f2c5","Type":"ContainerStarted","Data":"7f9adda37238ede86f88cbac2c999b2aa463809256c6a93ac9e769608706a215"} Feb 16 21:10:37.529872 master-0 kubenswrapper[7926]: I0216 21:10:37.529791 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=1.529777158 podStartE2EDuration="1.529777158s" podCreationTimestamp="2026-02-16 21:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:10:37.527534496 +0000 UTC m=+809.162434796" watchObservedRunningTime="2026-02-16 21:10:37.529777158 +0000 UTC m=+809.164677458" Feb 16 21:10:38.185516 master-0 kubenswrapper[7926]: I0216 21:10:38.185424 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:38.185516 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:38.185516 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:38.185516 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:38.185929 master-0 kubenswrapper[7926]: I0216 21:10:38.185539 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:39.184866 master-0 kubenswrapper[7926]: I0216 21:10:39.184805 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:39.184866 master-0 kubenswrapper[7926]: 
[-]has-synced failed: reason withheld Feb 16 21:10:39.184866 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:39.184866 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:39.185376 master-0 kubenswrapper[7926]: I0216 21:10:39.184899 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:39.317235 master-0 kubenswrapper[7926]: I0216 21:10:39.317175 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:10:39.317690 master-0 kubenswrapper[7926]: E0216 21:10:39.317355 7926 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 16 21:10:39.317745 master-0 kubenswrapper[7926]: E0216 21:10:39.317695 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls podName:a0b7a368-1408-4fc3-ae25-4613b74e7fca nodeName:}" failed. No retries permitted until 2026-02-16 21:12:41.317626533 +0000 UTC m=+932.952526873 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-9xc4n" (UID: "a0b7a368-1408-4fc3-ae25-4613b74e7fca") : secret "prometheus-operator-tls" not found Feb 16 21:10:40.187050 master-0 kubenswrapper[7926]: I0216 21:10:40.186969 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:40.187050 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:40.187050 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:40.187050 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:40.187050 master-0 kubenswrapper[7926]: I0216 21:10:40.187036 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:41.184791 master-0 kubenswrapper[7926]: I0216 21:10:41.184727 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:41.184791 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:41.184791 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:41.184791 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:41.185165 master-0 kubenswrapper[7926]: I0216 21:10:41.184798 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:42.186025 master-0 kubenswrapper[7926]: I0216 21:10:42.185946 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:42.186025 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:42.186025 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:42.186025 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:42.187039 master-0 kubenswrapper[7926]: I0216 21:10:42.186031 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:42.561533 master-0 kubenswrapper[7926]: I0216 21:10:42.561037 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert\") pod \"ingress-canary-l44qd\" (UID: \"0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b\") " pod="openshift-ingress-canary/ingress-canary-l44qd" Feb 16 21:10:42.561533 master-0 kubenswrapper[7926]: E0216 21:10:42.561249 7926 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 16 21:10:42.561533 master-0 kubenswrapper[7926]: E0216 21:10:42.561299 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert podName:0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b nodeName:}" failed. No retries permitted until 2026-02-16 21:11:14.561284594 +0000 UTC m=+846.196184884 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert") pod "ingress-canary-l44qd" (UID: "0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b") : secret "canary-serving-cert" not found Feb 16 21:10:43.185176 master-0 kubenswrapper[7926]: I0216 21:10:43.185105 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:43.185176 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:43.185176 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:43.185176 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:43.185461 master-0 kubenswrapper[7926]: I0216 21:10:43.185193 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:44.184246 master-0 kubenswrapper[7926]: I0216 21:10:44.184201 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:44.184246 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:44.184246 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:44.184246 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:44.184803 master-0 kubenswrapper[7926]: I0216 21:10:44.184257 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 16 21:10:45.184225 master-0 kubenswrapper[7926]: I0216 21:10:45.184144 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:45.184225 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:45.184225 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:45.184225 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:45.184788 master-0 kubenswrapper[7926]: I0216 21:10:45.184258 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:46.130116 master-0 kubenswrapper[7926]: I0216 21:10:46.130008 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 16 21:10:46.131286 master-0 kubenswrapper[7926]: I0216 21:10:46.131200 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 21:10:46.133562 master-0 kubenswrapper[7926]: I0216 21:10:46.133519 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-sdjl5" Feb 16 21:10:46.134900 master-0 kubenswrapper[7926]: I0216 21:10:46.134855 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Feb 16 21:10:46.141731 master-0 kubenswrapper[7926]: I0216 21:10:46.141658 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 16 21:10:46.185323 master-0 kubenswrapper[7926]: I0216 21:10:46.185244 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:46.185323 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:46.185323 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:46.185323 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:46.186169 master-0 kubenswrapper[7926]: I0216 21:10:46.185344 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:46.211710 master-0 kubenswrapper[7926]: I0216 21:10:46.211616 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7fc3abc9-3012-43bd-af84-fc65baf82801-kube-api-access\") pod \"installer-4-master-0\" (UID: \"7fc3abc9-3012-43bd-af84-fc65baf82801\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 21:10:46.212122 master-0 
kubenswrapper[7926]: I0216 21:10:46.211726 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7fc3abc9-3012-43bd-af84-fc65baf82801-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"7fc3abc9-3012-43bd-af84-fc65baf82801\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 21:10:46.212122 master-0 kubenswrapper[7926]: I0216 21:10:46.211915 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7fc3abc9-3012-43bd-af84-fc65baf82801-var-lock\") pod \"installer-4-master-0\" (UID: \"7fc3abc9-3012-43bd-af84-fc65baf82801\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 21:10:46.312774 master-0 kubenswrapper[7926]: I0216 21:10:46.312688 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7fc3abc9-3012-43bd-af84-fc65baf82801-kube-api-access\") pod \"installer-4-master-0\" (UID: \"7fc3abc9-3012-43bd-af84-fc65baf82801\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 21:10:46.313165 master-0 kubenswrapper[7926]: I0216 21:10:46.312980 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7fc3abc9-3012-43bd-af84-fc65baf82801-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"7fc3abc9-3012-43bd-af84-fc65baf82801\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 21:10:46.313235 master-0 kubenswrapper[7926]: I0216 21:10:46.313156 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7fc3abc9-3012-43bd-af84-fc65baf82801-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"7fc3abc9-3012-43bd-af84-fc65baf82801\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 21:10:46.315705 
master-0 kubenswrapper[7926]: I0216 21:10:46.313312 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7fc3abc9-3012-43bd-af84-fc65baf82801-var-lock\") pod \"installer-4-master-0\" (UID: \"7fc3abc9-3012-43bd-af84-fc65baf82801\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 21:10:46.315705 master-0 kubenswrapper[7926]: I0216 21:10:46.313455 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7fc3abc9-3012-43bd-af84-fc65baf82801-var-lock\") pod \"installer-4-master-0\" (UID: \"7fc3abc9-3012-43bd-af84-fc65baf82801\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 21:10:46.331243 master-0 kubenswrapper[7926]: I0216 21:10:46.331173 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7fc3abc9-3012-43bd-af84-fc65baf82801-kube-api-access\") pod \"installer-4-master-0\" (UID: \"7fc3abc9-3012-43bd-af84-fc65baf82801\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 21:10:46.463517 master-0 kubenswrapper[7926]: I0216 21:10:46.463285 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 21:10:46.909410 master-0 kubenswrapper[7926]: I0216 21:10:46.906918 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 16 21:10:47.185608 master-0 kubenswrapper[7926]: I0216 21:10:47.185268 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:47.185608 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:47.185608 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:47.185608 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:47.185608 master-0 kubenswrapper[7926]: I0216 21:10:47.185366 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:47.590479 master-0 kubenswrapper[7926]: I0216 21:10:47.590404 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"7fc3abc9-3012-43bd-af84-fc65baf82801","Type":"ContainerStarted","Data":"7705ab1783cfe260a257da3d99d4c43b8aa6602286bbd8b5854c2a525ae4f204"} Feb 16 21:10:47.590479 master-0 kubenswrapper[7926]: I0216 21:10:47.590448 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"7fc3abc9-3012-43bd-af84-fc65baf82801","Type":"ContainerStarted","Data":"e18212da3ba9255cc13862af9e868f85f8caf8c7478800353ac7a39fbc390fa8"} Feb 16 21:10:47.611899 master-0 kubenswrapper[7926]: I0216 21:10:47.611741 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=1.611724525 podStartE2EDuration="1.611724525s" podCreationTimestamp="2026-02-16 21:10:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:10:47.607738034 +0000 UTC m=+819.242638364" watchObservedRunningTime="2026-02-16 21:10:47.611724525 +0000 UTC m=+819.246624825" Feb 16 21:10:48.186669 master-0 kubenswrapper[7926]: I0216 21:10:48.186540 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:48.186669 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:48.186669 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:48.186669 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:48.187709 master-0 kubenswrapper[7926]: I0216 21:10:48.186684 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:49.185984 master-0 kubenswrapper[7926]: I0216 21:10:49.185891 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:49.185984 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:49.185984 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:49.185984 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:49.185984 master-0 kubenswrapper[7926]: I0216 21:10:49.185994 7926 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:50.185016 master-0 kubenswrapper[7926]: I0216 21:10:50.184920 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:50.185016 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:50.185016 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:50.185016 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:50.185016 master-0 kubenswrapper[7926]: I0216 21:10:50.184989 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:51.184335 master-0 kubenswrapper[7926]: I0216 21:10:51.184287 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:51.184335 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:51.184335 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:51.184335 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:51.184889 master-0 kubenswrapper[7926]: I0216 21:10:51.184852 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 16 21:10:52.185546 master-0 kubenswrapper[7926]: I0216 21:10:52.185466 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:52.185546 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:52.185546 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:52.185546 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:52.185546 master-0 kubenswrapper[7926]: I0216 21:10:52.185539 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:53.184997 master-0 kubenswrapper[7926]: I0216 21:10:53.184919 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:53.184997 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:53.184997 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:53.184997 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:53.185369 master-0 kubenswrapper[7926]: I0216 21:10:53.185011 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:54.185389 master-0 kubenswrapper[7926]: I0216 21:10:54.185302 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:54.185389 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:54.185389 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:54.185389 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:54.185389 master-0 kubenswrapper[7926]: I0216 21:10:54.185371 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:55.184991 master-0 kubenswrapper[7926]: I0216 21:10:55.184922 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:55.184991 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:55.184991 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:55.184991 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:55.185268 master-0 kubenswrapper[7926]: I0216 21:10:55.185019 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:56.184807 master-0 kubenswrapper[7926]: I0216 21:10:56.184732 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:56.184807 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 
21:10:56.184807 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:56.184807 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:56.184807 master-0 kubenswrapper[7926]: I0216 21:10:56.184807 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:56.285072 master-0 kubenswrapper[7926]: I0216 21:10:56.284973 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q"] Feb 16 21:10:56.285520 master-0 kubenswrapper[7926]: I0216 21:10:56.285464 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" podUID="c62bb2b4-1469-4e0d-810f-cd6e21ee908a" containerName="kube-rbac-proxy" containerID="cri-o://94ccf2a93e956c3b518b8dcb871e7b9e7ffe5710f70065964156b889eef86eb5" gracePeriod=30 Feb 16 21:10:56.285714 master-0 kubenswrapper[7926]: I0216 21:10:56.285616 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" podUID="c62bb2b4-1469-4e0d-810f-cd6e21ee908a" containerName="machine-approver-controller" containerID="cri-o://d2a5fc042d08a574ca3280124a277e09811f14400ef340b3621ad88c29f24482" gracePeriod=30 Feb 16 21:10:56.441272 master-0 kubenswrapper[7926]: I0216 21:10:56.441165 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-6c46d95f74-2nz2q_c62bb2b4-1469-4e0d-810f-cd6e21ee908a/machine-approver-controller/0.log" Feb 16 21:10:56.441729 master-0 kubenswrapper[7926]: I0216 21:10:56.441641 7926 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" Feb 16 21:10:56.493126 master-0 kubenswrapper[7926]: I0216 21:10:56.493080 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4"] Feb 16 21:10:56.493350 master-0 kubenswrapper[7926]: E0216 21:10:56.493298 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c62bb2b4-1469-4e0d-810f-cd6e21ee908a" containerName="machine-approver-controller" Feb 16 21:10:56.493350 master-0 kubenswrapper[7926]: I0216 21:10:56.493311 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="c62bb2b4-1469-4e0d-810f-cd6e21ee908a" containerName="machine-approver-controller" Feb 16 21:10:56.493350 master-0 kubenswrapper[7926]: E0216 21:10:56.493327 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c62bb2b4-1469-4e0d-810f-cd6e21ee908a" containerName="kube-rbac-proxy" Feb 16 21:10:56.493350 master-0 kubenswrapper[7926]: I0216 21:10:56.493336 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="c62bb2b4-1469-4e0d-810f-cd6e21ee908a" containerName="kube-rbac-proxy" Feb 16 21:10:56.493503 master-0 kubenswrapper[7926]: E0216 21:10:56.493353 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c62bb2b4-1469-4e0d-810f-cd6e21ee908a" containerName="machine-approver-controller" Feb 16 21:10:56.493503 master-0 kubenswrapper[7926]: I0216 21:10:56.493360 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="c62bb2b4-1469-4e0d-810f-cd6e21ee908a" containerName="machine-approver-controller" Feb 16 21:10:56.493569 master-0 kubenswrapper[7926]: I0216 21:10:56.493506 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="c62bb2b4-1469-4e0d-810f-cd6e21ee908a" containerName="kube-rbac-proxy" Feb 16 21:10:56.493569 master-0 kubenswrapper[7926]: I0216 21:10:56.493535 7926 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c62bb2b4-1469-4e0d-810f-cd6e21ee908a" containerName="machine-approver-controller" Feb 16 21:10:56.493911 master-0 kubenswrapper[7926]: I0216 21:10:56.493762 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="c62bb2b4-1469-4e0d-810f-cd6e21ee908a" containerName="machine-approver-controller" Feb 16 21:10:56.494185 master-0 kubenswrapper[7926]: I0216 21:10:56.494169 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" Feb 16 21:10:56.497136 master-0 kubenswrapper[7926]: I0216 21:10:56.497077 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-vqmt8" Feb 16 21:10:56.545892 master-0 kubenswrapper[7926]: I0216 21:10:56.545845 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c62bb2b4-1469-4e0d-810f-cd6e21ee908a-config\") pod \"c62bb2b4-1469-4e0d-810f-cd6e21ee908a\" (UID: \"c62bb2b4-1469-4e0d-810f-cd6e21ee908a\") " Feb 16 21:10:56.545892 master-0 kubenswrapper[7926]: I0216 21:10:56.545903 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qqkf\" (UniqueName: \"kubernetes.io/projected/c62bb2b4-1469-4e0d-810f-cd6e21ee908a-kube-api-access-4qqkf\") pod \"c62bb2b4-1469-4e0d-810f-cd6e21ee908a\" (UID: \"c62bb2b4-1469-4e0d-810f-cd6e21ee908a\") " Feb 16 21:10:56.546124 master-0 kubenswrapper[7926]: I0216 21:10:56.545997 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c62bb2b4-1469-4e0d-810f-cd6e21ee908a-machine-approver-tls\") pod \"c62bb2b4-1469-4e0d-810f-cd6e21ee908a\" (UID: \"c62bb2b4-1469-4e0d-810f-cd6e21ee908a\") " Feb 16 21:10:56.546124 master-0 kubenswrapper[7926]: I0216 21:10:56.546025 7926 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c62bb2b4-1469-4e0d-810f-cd6e21ee908a-auth-proxy-config\") pod \"c62bb2b4-1469-4e0d-810f-cd6e21ee908a\" (UID: \"c62bb2b4-1469-4e0d-810f-cd6e21ee908a\") " Feb 16 21:10:56.546344 master-0 kubenswrapper[7926]: I0216 21:10:56.546295 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c62bb2b4-1469-4e0d-810f-cd6e21ee908a-config" (OuterVolumeSpecName: "config") pod "c62bb2b4-1469-4e0d-810f-cd6e21ee908a" (UID: "c62bb2b4-1469-4e0d-810f-cd6e21ee908a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:10:56.546503 master-0 kubenswrapper[7926]: I0216 21:10:56.546477 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c62bb2b4-1469-4e0d-810f-cd6e21ee908a-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "c62bb2b4-1469-4e0d-810f-cd6e21ee908a" (UID: "c62bb2b4-1469-4e0d-810f-cd6e21ee908a"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:10:56.546605 master-0 kubenswrapper[7926]: I0216 21:10:56.546528 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/065fcd43-1572-4152-b77b-a6b7ab52a081-auth-proxy-config\") pod \"machine-approver-8569dd85ff-kvhs4\" (UID: \"065fcd43-1572-4152-b77b-a6b7ab52a081\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" Feb 16 21:10:56.546713 master-0 kubenswrapper[7926]: I0216 21:10:56.546684 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065fcd43-1572-4152-b77b-a6b7ab52a081-config\") pod \"machine-approver-8569dd85ff-kvhs4\" (UID: \"065fcd43-1572-4152-b77b-a6b7ab52a081\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" Feb 16 21:10:56.546803 master-0 kubenswrapper[7926]: I0216 21:10:56.546783 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trcfg\" (UniqueName: \"kubernetes.io/projected/065fcd43-1572-4152-b77b-a6b7ab52a081-kube-api-access-trcfg\") pod \"machine-approver-8569dd85ff-kvhs4\" (UID: \"065fcd43-1572-4152-b77b-a6b7ab52a081\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" Feb 16 21:10:56.546906 master-0 kubenswrapper[7926]: I0216 21:10:56.546869 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/065fcd43-1572-4152-b77b-a6b7ab52a081-machine-approver-tls\") pod \"machine-approver-8569dd85ff-kvhs4\" (UID: \"065fcd43-1572-4152-b77b-a6b7ab52a081\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" Feb 16 21:10:56.547050 master-0 kubenswrapper[7926]: I0216 21:10:56.547014 7926 reconciler_common.go:293] 
"Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c62bb2b4-1469-4e0d-810f-cd6e21ee908a-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:10:56.547089 master-0 kubenswrapper[7926]: I0216 21:10:56.547052 7926 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c62bb2b4-1469-4e0d-810f-cd6e21ee908a-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:10:56.549681 master-0 kubenswrapper[7926]: I0216 21:10:56.549599 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c62bb2b4-1469-4e0d-810f-cd6e21ee908a-kube-api-access-4qqkf" (OuterVolumeSpecName: "kube-api-access-4qqkf") pod "c62bb2b4-1469-4e0d-810f-cd6e21ee908a" (UID: "c62bb2b4-1469-4e0d-810f-cd6e21ee908a"). InnerVolumeSpecName "kube-api-access-4qqkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:10:56.550070 master-0 kubenswrapper[7926]: I0216 21:10:56.550027 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c62bb2b4-1469-4e0d-810f-cd6e21ee908a-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "c62bb2b4-1469-4e0d-810f-cd6e21ee908a" (UID: "c62bb2b4-1469-4e0d-810f-cd6e21ee908a"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:10:56.648467 master-0 kubenswrapper[7926]: I0216 21:10:56.648368 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065fcd43-1572-4152-b77b-a6b7ab52a081-config\") pod \"machine-approver-8569dd85ff-kvhs4\" (UID: \"065fcd43-1572-4152-b77b-a6b7ab52a081\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" Feb 16 21:10:56.648719 master-0 kubenswrapper[7926]: I0216 21:10:56.648531 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trcfg\" (UniqueName: \"kubernetes.io/projected/065fcd43-1572-4152-b77b-a6b7ab52a081-kube-api-access-trcfg\") pod \"machine-approver-8569dd85ff-kvhs4\" (UID: \"065fcd43-1572-4152-b77b-a6b7ab52a081\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" Feb 16 21:10:56.648719 master-0 kubenswrapper[7926]: I0216 21:10:56.648577 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/065fcd43-1572-4152-b77b-a6b7ab52a081-machine-approver-tls\") pod \"machine-approver-8569dd85ff-kvhs4\" (UID: \"065fcd43-1572-4152-b77b-a6b7ab52a081\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" Feb 16 21:10:56.648719 master-0 kubenswrapper[7926]: I0216 21:10:56.648693 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/065fcd43-1572-4152-b77b-a6b7ab52a081-auth-proxy-config\") pod \"machine-approver-8569dd85ff-kvhs4\" (UID: \"065fcd43-1572-4152-b77b-a6b7ab52a081\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" Feb 16 21:10:56.648827 master-0 kubenswrapper[7926]: I0216 21:10:56.648800 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qqkf\" (UniqueName: 
\"kubernetes.io/projected/c62bb2b4-1469-4e0d-810f-cd6e21ee908a-kube-api-access-4qqkf\") on node \"master-0\" DevicePath \"\"" Feb 16 21:10:56.648863 master-0 kubenswrapper[7926]: I0216 21:10:56.648827 7926 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c62bb2b4-1469-4e0d-810f-cd6e21ee908a-machine-approver-tls\") on node \"master-0\" DevicePath \"\"" Feb 16 21:10:56.649101 master-0 kubenswrapper[7926]: I0216 21:10:56.649060 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065fcd43-1572-4152-b77b-a6b7ab52a081-config\") pod \"machine-approver-8569dd85ff-kvhs4\" (UID: \"065fcd43-1572-4152-b77b-a6b7ab52a081\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" Feb 16 21:10:56.650025 master-0 kubenswrapper[7926]: I0216 21:10:56.649992 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/065fcd43-1572-4152-b77b-a6b7ab52a081-auth-proxy-config\") pod \"machine-approver-8569dd85ff-kvhs4\" (UID: \"065fcd43-1572-4152-b77b-a6b7ab52a081\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" Feb 16 21:10:56.651807 master-0 kubenswrapper[7926]: I0216 21:10:56.651770 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/065fcd43-1572-4152-b77b-a6b7ab52a081-machine-approver-tls\") pod \"machine-approver-8569dd85ff-kvhs4\" (UID: \"065fcd43-1572-4152-b77b-a6b7ab52a081\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" Feb 16 21:10:56.654995 master-0 kubenswrapper[7926]: I0216 21:10:56.654944 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-6c46d95f74-2nz2q_c62bb2b4-1469-4e0d-810f-cd6e21ee908a/machine-approver-controller/0.log" Feb 16 
21:10:56.655716 master-0 kubenswrapper[7926]: I0216 21:10:56.655643 7926 generic.go:334] "Generic (PLEG): container finished" podID="c62bb2b4-1469-4e0d-810f-cd6e21ee908a" containerID="d2a5fc042d08a574ca3280124a277e09811f14400ef340b3621ad88c29f24482" exitCode=0 Feb 16 21:10:56.655716 master-0 kubenswrapper[7926]: I0216 21:10:56.655709 7926 generic.go:334] "Generic (PLEG): container finished" podID="c62bb2b4-1469-4e0d-810f-cd6e21ee908a" containerID="94ccf2a93e956c3b518b8dcb871e7b9e7ffe5710f70065964156b889eef86eb5" exitCode=0 Feb 16 21:10:56.655898 master-0 kubenswrapper[7926]: I0216 21:10:56.655734 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" event={"ID":"c62bb2b4-1469-4e0d-810f-cd6e21ee908a","Type":"ContainerDied","Data":"d2a5fc042d08a574ca3280124a277e09811f14400ef340b3621ad88c29f24482"} Feb 16 21:10:56.655898 master-0 kubenswrapper[7926]: I0216 21:10:56.655786 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" event={"ID":"c62bb2b4-1469-4e0d-810f-cd6e21ee908a","Type":"ContainerDied","Data":"94ccf2a93e956c3b518b8dcb871e7b9e7ffe5710f70065964156b889eef86eb5"} Feb 16 21:10:56.655898 master-0 kubenswrapper[7926]: I0216 21:10:56.655799 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" event={"ID":"c62bb2b4-1469-4e0d-810f-cd6e21ee908a","Type":"ContainerDied","Data":"8bac8203193652171deab4e559a0035a72359c725620330467c4f253c536e2dc"} Feb 16 21:10:56.655898 master-0 kubenswrapper[7926]: I0216 21:10:56.655816 7926 scope.go:117] "RemoveContainer" containerID="d2a5fc042d08a574ca3280124a277e09811f14400ef340b3621ad88c29f24482" Feb 16 21:10:56.655898 master-0 kubenswrapper[7926]: I0216 21:10:56.655842 7926 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q" Feb 16 21:10:56.673898 master-0 kubenswrapper[7926]: I0216 21:10:56.673829 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trcfg\" (UniqueName: \"kubernetes.io/projected/065fcd43-1572-4152-b77b-a6b7ab52a081-kube-api-access-trcfg\") pod \"machine-approver-8569dd85ff-kvhs4\" (UID: \"065fcd43-1572-4152-b77b-a6b7ab52a081\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" Feb 16 21:10:56.682507 master-0 kubenswrapper[7926]: I0216 21:10:56.682460 7926 scope.go:117] "RemoveContainer" containerID="f620d164d8f2ed90825e926c6ef1b62a164af6f143a6bcf2e3725b1b1b8889f4" Feb 16 21:10:56.711492 master-0 kubenswrapper[7926]: I0216 21:10:56.711448 7926 scope.go:117] "RemoveContainer" containerID="94ccf2a93e956c3b518b8dcb871e7b9e7ffe5710f70065964156b889eef86eb5" Feb 16 21:10:56.724817 master-0 kubenswrapper[7926]: I0216 21:10:56.724752 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q"] Feb 16 21:10:56.727889 master-0 kubenswrapper[7926]: I0216 21:10:56.727854 7926 scope.go:117] "RemoveContainer" containerID="d2a5fc042d08a574ca3280124a277e09811f14400ef340b3621ad88c29f24482" Feb 16 21:10:56.728227 master-0 kubenswrapper[7926]: E0216 21:10:56.728189 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2a5fc042d08a574ca3280124a277e09811f14400ef340b3621ad88c29f24482\": container with ID starting with d2a5fc042d08a574ca3280124a277e09811f14400ef340b3621ad88c29f24482 not found: ID does not exist" containerID="d2a5fc042d08a574ca3280124a277e09811f14400ef340b3621ad88c29f24482" Feb 16 21:10:56.728227 master-0 kubenswrapper[7926]: I0216 21:10:56.728216 7926 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d2a5fc042d08a574ca3280124a277e09811f14400ef340b3621ad88c29f24482"} err="failed to get container status \"d2a5fc042d08a574ca3280124a277e09811f14400ef340b3621ad88c29f24482\": rpc error: code = NotFound desc = could not find container \"d2a5fc042d08a574ca3280124a277e09811f14400ef340b3621ad88c29f24482\": container with ID starting with d2a5fc042d08a574ca3280124a277e09811f14400ef340b3621ad88c29f24482 not found: ID does not exist" Feb 16 21:10:56.728404 master-0 kubenswrapper[7926]: I0216 21:10:56.728235 7926 scope.go:117] "RemoveContainer" containerID="f620d164d8f2ed90825e926c6ef1b62a164af6f143a6bcf2e3725b1b1b8889f4" Feb 16 21:10:56.728614 master-0 kubenswrapper[7926]: E0216 21:10:56.728590 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f620d164d8f2ed90825e926c6ef1b62a164af6f143a6bcf2e3725b1b1b8889f4\": container with ID starting with f620d164d8f2ed90825e926c6ef1b62a164af6f143a6bcf2e3725b1b1b8889f4 not found: ID does not exist" containerID="f620d164d8f2ed90825e926c6ef1b62a164af6f143a6bcf2e3725b1b1b8889f4" Feb 16 21:10:56.728739 master-0 kubenswrapper[7926]: I0216 21:10:56.728611 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f620d164d8f2ed90825e926c6ef1b62a164af6f143a6bcf2e3725b1b1b8889f4"} err="failed to get container status \"f620d164d8f2ed90825e926c6ef1b62a164af6f143a6bcf2e3725b1b1b8889f4\": rpc error: code = NotFound desc = could not find container \"f620d164d8f2ed90825e926c6ef1b62a164af6f143a6bcf2e3725b1b1b8889f4\": container with ID starting with f620d164d8f2ed90825e926c6ef1b62a164af6f143a6bcf2e3725b1b1b8889f4 not found: ID does not exist" Feb 16 21:10:56.728739 master-0 kubenswrapper[7926]: I0216 21:10:56.728629 7926 scope.go:117] "RemoveContainer" containerID="94ccf2a93e956c3b518b8dcb871e7b9e7ffe5710f70065964156b889eef86eb5" Feb 16 21:10:56.728974 master-0 kubenswrapper[7926]: E0216 
21:10:56.728929 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94ccf2a93e956c3b518b8dcb871e7b9e7ffe5710f70065964156b889eef86eb5\": container with ID starting with 94ccf2a93e956c3b518b8dcb871e7b9e7ffe5710f70065964156b889eef86eb5 not found: ID does not exist" containerID="94ccf2a93e956c3b518b8dcb871e7b9e7ffe5710f70065964156b889eef86eb5" Feb 16 21:10:56.729053 master-0 kubenswrapper[7926]: I0216 21:10:56.728980 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94ccf2a93e956c3b518b8dcb871e7b9e7ffe5710f70065964156b889eef86eb5"} err="failed to get container status \"94ccf2a93e956c3b518b8dcb871e7b9e7ffe5710f70065964156b889eef86eb5\": rpc error: code = NotFound desc = could not find container \"94ccf2a93e956c3b518b8dcb871e7b9e7ffe5710f70065964156b889eef86eb5\": container with ID starting with 94ccf2a93e956c3b518b8dcb871e7b9e7ffe5710f70065964156b889eef86eb5 not found: ID does not exist" Feb 16 21:10:56.729053 master-0 kubenswrapper[7926]: I0216 21:10:56.729001 7926 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q"] Feb 16 21:10:56.729053 master-0 kubenswrapper[7926]: I0216 21:10:56.729008 7926 scope.go:117] "RemoveContainer" containerID="d2a5fc042d08a574ca3280124a277e09811f14400ef340b3621ad88c29f24482" Feb 16 21:10:56.729484 master-0 kubenswrapper[7926]: I0216 21:10:56.729293 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2a5fc042d08a574ca3280124a277e09811f14400ef340b3621ad88c29f24482"} err="failed to get container status \"d2a5fc042d08a574ca3280124a277e09811f14400ef340b3621ad88c29f24482\": rpc error: code = NotFound desc = could not find container \"d2a5fc042d08a574ca3280124a277e09811f14400ef340b3621ad88c29f24482\": container with ID starting with d2a5fc042d08a574ca3280124a277e09811f14400ef340b3621ad88c29f24482 not 
found: ID does not exist" Feb 16 21:10:56.729484 master-0 kubenswrapper[7926]: I0216 21:10:56.729356 7926 scope.go:117] "RemoveContainer" containerID="f620d164d8f2ed90825e926c6ef1b62a164af6f143a6bcf2e3725b1b1b8889f4" Feb 16 21:10:56.729828 master-0 kubenswrapper[7926]: I0216 21:10:56.729748 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f620d164d8f2ed90825e926c6ef1b62a164af6f143a6bcf2e3725b1b1b8889f4"} err="failed to get container status \"f620d164d8f2ed90825e926c6ef1b62a164af6f143a6bcf2e3725b1b1b8889f4\": rpc error: code = NotFound desc = could not find container \"f620d164d8f2ed90825e926c6ef1b62a164af6f143a6bcf2e3725b1b1b8889f4\": container with ID starting with f620d164d8f2ed90825e926c6ef1b62a164af6f143a6bcf2e3725b1b1b8889f4 not found: ID does not exist" Feb 16 21:10:56.729828 master-0 kubenswrapper[7926]: I0216 21:10:56.729812 7926 scope.go:117] "RemoveContainer" containerID="94ccf2a93e956c3b518b8dcb871e7b9e7ffe5710f70065964156b889eef86eb5" Feb 16 21:10:56.730404 master-0 kubenswrapper[7926]: I0216 21:10:56.730220 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94ccf2a93e956c3b518b8dcb871e7b9e7ffe5710f70065964156b889eef86eb5"} err="failed to get container status \"94ccf2a93e956c3b518b8dcb871e7b9e7ffe5710f70065964156b889eef86eb5\": rpc error: code = NotFound desc = could not find container \"94ccf2a93e956c3b518b8dcb871e7b9e7ffe5710f70065964156b889eef86eb5\": container with ID starting with 94ccf2a93e956c3b518b8dcb871e7b9e7ffe5710f70065964156b889eef86eb5 not found: ID does not exist" Feb 16 21:10:56.745027 master-0 kubenswrapper[7926]: I0216 21:10:56.744977 7926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c62bb2b4-1469-4e0d-810f-cd6e21ee908a" path="/var/lib/kubelet/pods/c62bb2b4-1469-4e0d-810f-cd6e21ee908a/volumes" Feb 16 21:10:56.821107 master-0 kubenswrapper[7926]: I0216 21:10:56.820997 7926 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" Feb 16 21:10:56.843268 master-0 kubenswrapper[7926]: W0216 21:10:56.843199 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod065fcd43_1572_4152_b77b_a6b7ab52a081.slice/crio-b9312957dc15df5de566304a0d01d6c55a3f6333b95b61734ba1c6f29131877b WatchSource:0}: Error finding container b9312957dc15df5de566304a0d01d6c55a3f6333b95b61734ba1c6f29131877b: Status 404 returned error can't find the container with id b9312957dc15df5de566304a0d01d6c55a3f6333b95b61734ba1c6f29131877b Feb 16 21:10:57.184627 master-0 kubenswrapper[7926]: I0216 21:10:57.184558 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:57.184627 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:57.184627 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:57.184627 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:57.186859 master-0 kubenswrapper[7926]: I0216 21:10:57.184644 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:57.664841 master-0 kubenswrapper[7926]: I0216 21:10:57.664776 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" event={"ID":"065fcd43-1572-4152-b77b-a6b7ab52a081","Type":"ContainerStarted","Data":"577a19cb609733c40b24d16a4cfb15f4698079667a2b3110eeef59cec7643dff"} Feb 16 21:10:57.664841 master-0 kubenswrapper[7926]: I0216 21:10:57.664832 7926 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" event={"ID":"065fcd43-1572-4152-b77b-a6b7ab52a081","Type":"ContainerStarted","Data":"09791bd713ecaeccf489060fc2fec30269d2977979f66329e6c0231f6abbbe33"} Feb 16 21:10:57.664841 master-0 kubenswrapper[7926]: I0216 21:10:57.664844 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" event={"ID":"065fcd43-1572-4152-b77b-a6b7ab52a081","Type":"ContainerStarted","Data":"b9312957dc15df5de566304a0d01d6c55a3f6333b95b61734ba1c6f29131877b"} Feb 16 21:10:57.680224 master-0 kubenswrapper[7926]: I0216 21:10:57.680150 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" podStartSLOduration=1.68012823 podStartE2EDuration="1.68012823s" podCreationTimestamp="2026-02-16 21:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:10:57.678426763 +0000 UTC m=+829.313327063" watchObservedRunningTime="2026-02-16 21:10:57.68012823 +0000 UTC m=+829.315028540" Feb 16 21:10:58.185324 master-0 kubenswrapper[7926]: I0216 21:10:58.185248 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:58.185324 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:58.185324 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:58.185324 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:58.185934 master-0 kubenswrapper[7926]: I0216 21:10:58.185349 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" 
podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:10:59.185024 master-0 kubenswrapper[7926]: I0216 21:10:59.184965 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:10:59.185024 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:10:59.185024 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:10:59.185024 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:10:59.185024 master-0 kubenswrapper[7926]: I0216 21:10:59.185020 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:11:00.185098 master-0 kubenswrapper[7926]: I0216 21:11:00.184982 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:11:00.185098 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:11:00.185098 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:11:00.185098 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:11:00.185098 master-0 kubenswrapper[7926]: I0216 21:11:00.185079 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:11:01.184925 master-0 kubenswrapper[7926]: I0216 21:11:01.184851 7926 
patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:11:01.184925 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:11:01.184925 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:11:01.184925 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:11:01.185404 master-0 kubenswrapper[7926]: I0216 21:11:01.184943 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:11:02.184834 master-0 kubenswrapper[7926]: I0216 21:11:02.184721 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:11:02.184834 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:11:02.184834 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:11:02.184834 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:11:02.184834 master-0 kubenswrapper[7926]: I0216 21:11:02.184797 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:11:02.767619 master-0 kubenswrapper[7926]: I0216 21:11:02.767510 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl"] Feb 16 21:11:02.768537 master-0 kubenswrapper[7926]: I0216 
21:11:02.767849 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl" podUID="150fb1ff-8a9c-4360-8e41-cfbfb854d8bd" containerName="cluster-cloud-controller-manager" containerID="cri-o://fe99a5fcebfb8da3c4941f2384bbff5b1b23f59e244ba0b79737c6bbbe01661d" gracePeriod=30 Feb 16 21:11:02.768537 master-0 kubenswrapper[7926]: I0216 21:11:02.767910 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl" podUID="150fb1ff-8a9c-4360-8e41-cfbfb854d8bd" containerName="config-sync-controllers" containerID="cri-o://22a66dda733f6d85ca94e09c8b616582df1ddc12912925a970247eeedc52dd16" gracePeriod=30 Feb 16 21:11:02.768537 master-0 kubenswrapper[7926]: I0216 21:11:02.767910 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl" podUID="150fb1ff-8a9c-4360-8e41-cfbfb854d8bd" containerName="kube-rbac-proxy" containerID="cri-o://bf1e17f36d332fea126084189dbe19783227f37d7a7652d030ac4a9bc53d3a65" gracePeriod=30 Feb 16 21:11:02.947082 master-0 kubenswrapper[7926]: I0216 21:11:02.947036 7926 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl" Feb 16 21:11:03.051141 master-0 kubenswrapper[7926]: I0216 21:11:03.050821 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-host-etc-kube\") pod \"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd\" (UID: \"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd\") " Feb 16 21:11:03.051141 master-0 kubenswrapper[7926]: I0216 21:11:03.051018 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "150fb1ff-8a9c-4360-8e41-cfbfb854d8bd" (UID: "150fb1ff-8a9c-4360-8e41-cfbfb854d8bd"). InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:11:03.051141 master-0 kubenswrapper[7926]: I0216 21:11:03.051104 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-cloud-controller-manager-operator-tls\") pod \"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd\" (UID: \"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd\") " Feb 16 21:11:03.051837 master-0 kubenswrapper[7926]: I0216 21:11:03.051231 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9hg4\" (UniqueName: \"kubernetes.io/projected/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-kube-api-access-w9hg4\") pod \"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd\" (UID: \"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd\") " Feb 16 21:11:03.051837 master-0 kubenswrapper[7926]: I0216 21:11:03.051383 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-auth-proxy-config\") pod \"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd\" (UID: \"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd\") " Feb 16 21:11:03.051837 master-0 kubenswrapper[7926]: I0216 21:11:03.051468 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-images\") pod \"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd\" (UID: \"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd\") " Feb 16 21:11:03.052056 master-0 kubenswrapper[7926]: I0216 21:11:03.051868 7926 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-host-etc-kube\") on node \"master-0\" DevicePath \"\"" Feb 16 21:11:03.052056 master-0 kubenswrapper[7926]: I0216 21:11:03.051904 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "150fb1ff-8a9c-4360-8e41-cfbfb854d8bd" (UID: "150fb1ff-8a9c-4360-8e41-cfbfb854d8bd"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:11:03.052056 master-0 kubenswrapper[7926]: I0216 21:11:03.051934 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-images" (OuterVolumeSpecName: "images") pod "150fb1ff-8a9c-4360-8e41-cfbfb854d8bd" (UID: "150fb1ff-8a9c-4360-8e41-cfbfb854d8bd"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:11:03.054132 master-0 kubenswrapper[7926]: I0216 21:11:03.054051 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-kube-api-access-w9hg4" (OuterVolumeSpecName: "kube-api-access-w9hg4") pod "150fb1ff-8a9c-4360-8e41-cfbfb854d8bd" (UID: "150fb1ff-8a9c-4360-8e41-cfbfb854d8bd"). InnerVolumeSpecName "kube-api-access-w9hg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:11:03.054441 master-0 kubenswrapper[7926]: I0216 21:11:03.054372 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "150fb1ff-8a9c-4360-8e41-cfbfb854d8bd" (UID: "150fb1ff-8a9c-4360-8e41-cfbfb854d8bd"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:11:03.153118 master-0 kubenswrapper[7926]: I0216 21:11:03.152771 7926 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\"" Feb 16 21:11:03.153118 master-0 kubenswrapper[7926]: I0216 21:11:03.152838 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9hg4\" (UniqueName: \"kubernetes.io/projected/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-kube-api-access-w9hg4\") on node \"master-0\" DevicePath \"\"" Feb 16 21:11:03.153118 master-0 kubenswrapper[7926]: I0216 21:11:03.152860 7926 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:11:03.153118 master-0 
kubenswrapper[7926]: I0216 21:11:03.152879 7926 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd-images\") on node \"master-0\" DevicePath \"\"" Feb 16 21:11:03.185627 master-0 kubenswrapper[7926]: I0216 21:11:03.185523 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:11:03.185627 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:11:03.185627 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:11:03.185627 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:11:03.185627 master-0 kubenswrapper[7926]: I0216 21:11:03.185612 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:11:03.708721 master-0 kubenswrapper[7926]: I0216 21:11:03.708632 7926 generic.go:334] "Generic (PLEG): container finished" podID="150fb1ff-8a9c-4360-8e41-cfbfb854d8bd" containerID="bf1e17f36d332fea126084189dbe19783227f37d7a7652d030ac4a9bc53d3a65" exitCode=0 Feb 16 21:11:03.708721 master-0 kubenswrapper[7926]: I0216 21:11:03.708701 7926 generic.go:334] "Generic (PLEG): container finished" podID="150fb1ff-8a9c-4360-8e41-cfbfb854d8bd" containerID="22a66dda733f6d85ca94e09c8b616582df1ddc12912925a970247eeedc52dd16" exitCode=0 Feb 16 21:11:03.708721 master-0 kubenswrapper[7926]: I0216 21:11:03.708712 7926 generic.go:334] "Generic (PLEG): container finished" podID="150fb1ff-8a9c-4360-8e41-cfbfb854d8bd" containerID="fe99a5fcebfb8da3c4941f2384bbff5b1b23f59e244ba0b79737c6bbbe01661d" exitCode=0 Feb 16 21:11:03.709034 master-0 kubenswrapper[7926]: I0216 
21:11:03.708721 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl" event={"ID":"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd","Type":"ContainerDied","Data":"bf1e17f36d332fea126084189dbe19783227f37d7a7652d030ac4a9bc53d3a65"} Feb 16 21:11:03.709034 master-0 kubenswrapper[7926]: I0216 21:11:03.708759 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl" Feb 16 21:11:03.709034 master-0 kubenswrapper[7926]: I0216 21:11:03.708792 7926 scope.go:117] "RemoveContainer" containerID="bf1e17f36d332fea126084189dbe19783227f37d7a7652d030ac4a9bc53d3a65" Feb 16 21:11:03.709034 master-0 kubenswrapper[7926]: I0216 21:11:03.708771 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl" event={"ID":"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd","Type":"ContainerDied","Data":"22a66dda733f6d85ca94e09c8b616582df1ddc12912925a970247eeedc52dd16"} Feb 16 21:11:03.709034 master-0 kubenswrapper[7926]: I0216 21:11:03.708964 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl" event={"ID":"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd","Type":"ContainerDied","Data":"fe99a5fcebfb8da3c4941f2384bbff5b1b23f59e244ba0b79737c6bbbe01661d"} Feb 16 21:11:03.709034 master-0 kubenswrapper[7926]: I0216 21:11:03.709001 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl" event={"ID":"150fb1ff-8a9c-4360-8e41-cfbfb854d8bd","Type":"ContainerDied","Data":"7c58d0ea6f77f570c6d69fca131a630124b55850297eb43a85d3d771ea9026d8"} Feb 16 21:11:03.728423 master-0 
kubenswrapper[7926]: I0216 21:11:03.728371 7926 scope.go:117] "RemoveContainer" containerID="22a66dda733f6d85ca94e09c8b616582df1ddc12912925a970247eeedc52dd16" Feb 16 21:11:03.746211 master-0 kubenswrapper[7926]: I0216 21:11:03.746154 7926 scope.go:117] "RemoveContainer" containerID="fe99a5fcebfb8da3c4941f2384bbff5b1b23f59e244ba0b79737c6bbbe01661d" Feb 16 21:11:03.768866 master-0 kubenswrapper[7926]: I0216 21:11:03.768075 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl"] Feb 16 21:11:03.771760 master-0 kubenswrapper[7926]: I0216 21:11:03.771724 7926 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl"] Feb 16 21:11:03.776478 master-0 kubenswrapper[7926]: I0216 21:11:03.776419 7926 scope.go:117] "RemoveContainer" containerID="bf1e17f36d332fea126084189dbe19783227f37d7a7652d030ac4a9bc53d3a65" Feb 16 21:11:03.777158 master-0 kubenswrapper[7926]: E0216 21:11:03.777119 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf1e17f36d332fea126084189dbe19783227f37d7a7652d030ac4a9bc53d3a65\": container with ID starting with bf1e17f36d332fea126084189dbe19783227f37d7a7652d030ac4a9bc53d3a65 not found: ID does not exist" containerID="bf1e17f36d332fea126084189dbe19783227f37d7a7652d030ac4a9bc53d3a65" Feb 16 21:11:03.777218 master-0 kubenswrapper[7926]: I0216 21:11:03.777159 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf1e17f36d332fea126084189dbe19783227f37d7a7652d030ac4a9bc53d3a65"} err="failed to get container status \"bf1e17f36d332fea126084189dbe19783227f37d7a7652d030ac4a9bc53d3a65\": rpc error: code = NotFound desc = could not find container \"bf1e17f36d332fea126084189dbe19783227f37d7a7652d030ac4a9bc53d3a65\": container with ID 
starting with bf1e17f36d332fea126084189dbe19783227f37d7a7652d030ac4a9bc53d3a65 not found: ID does not exist" Feb 16 21:11:03.777218 master-0 kubenswrapper[7926]: I0216 21:11:03.777187 7926 scope.go:117] "RemoveContainer" containerID="22a66dda733f6d85ca94e09c8b616582df1ddc12912925a970247eeedc52dd16" Feb 16 21:11:03.777627 master-0 kubenswrapper[7926]: E0216 21:11:03.777595 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22a66dda733f6d85ca94e09c8b616582df1ddc12912925a970247eeedc52dd16\": container with ID starting with 22a66dda733f6d85ca94e09c8b616582df1ddc12912925a970247eeedc52dd16 not found: ID does not exist" containerID="22a66dda733f6d85ca94e09c8b616582df1ddc12912925a970247eeedc52dd16" Feb 16 21:11:03.777708 master-0 kubenswrapper[7926]: I0216 21:11:03.777623 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22a66dda733f6d85ca94e09c8b616582df1ddc12912925a970247eeedc52dd16"} err="failed to get container status \"22a66dda733f6d85ca94e09c8b616582df1ddc12912925a970247eeedc52dd16\": rpc error: code = NotFound desc = could not find container \"22a66dda733f6d85ca94e09c8b616582df1ddc12912925a970247eeedc52dd16\": container with ID starting with 22a66dda733f6d85ca94e09c8b616582df1ddc12912925a970247eeedc52dd16 not found: ID does not exist" Feb 16 21:11:03.777708 master-0 kubenswrapper[7926]: I0216 21:11:03.777639 7926 scope.go:117] "RemoveContainer" containerID="fe99a5fcebfb8da3c4941f2384bbff5b1b23f59e244ba0b79737c6bbbe01661d" Feb 16 21:11:03.778131 master-0 kubenswrapper[7926]: E0216 21:11:03.778090 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe99a5fcebfb8da3c4941f2384bbff5b1b23f59e244ba0b79737c6bbbe01661d\": container with ID starting with fe99a5fcebfb8da3c4941f2384bbff5b1b23f59e244ba0b79737c6bbbe01661d not found: ID does not exist" 
containerID="fe99a5fcebfb8da3c4941f2384bbff5b1b23f59e244ba0b79737c6bbbe01661d" Feb 16 21:11:03.778186 master-0 kubenswrapper[7926]: I0216 21:11:03.778130 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe99a5fcebfb8da3c4941f2384bbff5b1b23f59e244ba0b79737c6bbbe01661d"} err="failed to get container status \"fe99a5fcebfb8da3c4941f2384bbff5b1b23f59e244ba0b79737c6bbbe01661d\": rpc error: code = NotFound desc = could not find container \"fe99a5fcebfb8da3c4941f2384bbff5b1b23f59e244ba0b79737c6bbbe01661d\": container with ID starting with fe99a5fcebfb8da3c4941f2384bbff5b1b23f59e244ba0b79737c6bbbe01661d not found: ID does not exist" Feb 16 21:11:03.778186 master-0 kubenswrapper[7926]: I0216 21:11:03.778153 7926 scope.go:117] "RemoveContainer" containerID="bf1e17f36d332fea126084189dbe19783227f37d7a7652d030ac4a9bc53d3a65" Feb 16 21:11:03.778455 master-0 kubenswrapper[7926]: I0216 21:11:03.778426 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf1e17f36d332fea126084189dbe19783227f37d7a7652d030ac4a9bc53d3a65"} err="failed to get container status \"bf1e17f36d332fea126084189dbe19783227f37d7a7652d030ac4a9bc53d3a65\": rpc error: code = NotFound desc = could not find container \"bf1e17f36d332fea126084189dbe19783227f37d7a7652d030ac4a9bc53d3a65\": container with ID starting with bf1e17f36d332fea126084189dbe19783227f37d7a7652d030ac4a9bc53d3a65 not found: ID does not exist" Feb 16 21:11:03.778455 master-0 kubenswrapper[7926]: I0216 21:11:03.778445 7926 scope.go:117] "RemoveContainer" containerID="22a66dda733f6d85ca94e09c8b616582df1ddc12912925a970247eeedc52dd16" Feb 16 21:11:03.779491 master-0 kubenswrapper[7926]: I0216 21:11:03.779426 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22a66dda733f6d85ca94e09c8b616582df1ddc12912925a970247eeedc52dd16"} err="failed to get container status 
\"22a66dda733f6d85ca94e09c8b616582df1ddc12912925a970247eeedc52dd16\": rpc error: code = NotFound desc = could not find container \"22a66dda733f6d85ca94e09c8b616582df1ddc12912925a970247eeedc52dd16\": container with ID starting with 22a66dda733f6d85ca94e09c8b616582df1ddc12912925a970247eeedc52dd16 not found: ID does not exist" Feb 16 21:11:03.779491 master-0 kubenswrapper[7926]: I0216 21:11:03.779485 7926 scope.go:117] "RemoveContainer" containerID="fe99a5fcebfb8da3c4941f2384bbff5b1b23f59e244ba0b79737c6bbbe01661d" Feb 16 21:11:03.780047 master-0 kubenswrapper[7926]: I0216 21:11:03.780011 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe99a5fcebfb8da3c4941f2384bbff5b1b23f59e244ba0b79737c6bbbe01661d"} err="failed to get container status \"fe99a5fcebfb8da3c4941f2384bbff5b1b23f59e244ba0b79737c6bbbe01661d\": rpc error: code = NotFound desc = could not find container \"fe99a5fcebfb8da3c4941f2384bbff5b1b23f59e244ba0b79737c6bbbe01661d\": container with ID starting with fe99a5fcebfb8da3c4941f2384bbff5b1b23f59e244ba0b79737c6bbbe01661d not found: ID does not exist" Feb 16 21:11:03.780047 master-0 kubenswrapper[7926]: I0216 21:11:03.780036 7926 scope.go:117] "RemoveContainer" containerID="bf1e17f36d332fea126084189dbe19783227f37d7a7652d030ac4a9bc53d3a65" Feb 16 21:11:03.780553 master-0 kubenswrapper[7926]: I0216 21:11:03.780514 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf1e17f36d332fea126084189dbe19783227f37d7a7652d030ac4a9bc53d3a65"} err="failed to get container status \"bf1e17f36d332fea126084189dbe19783227f37d7a7652d030ac4a9bc53d3a65\": rpc error: code = NotFound desc = could not find container \"bf1e17f36d332fea126084189dbe19783227f37d7a7652d030ac4a9bc53d3a65\": container with ID starting with bf1e17f36d332fea126084189dbe19783227f37d7a7652d030ac4a9bc53d3a65 not found: ID does not exist" Feb 16 21:11:03.780553 master-0 kubenswrapper[7926]: I0216 21:11:03.780545 7926 
scope.go:117] "RemoveContainer" containerID="22a66dda733f6d85ca94e09c8b616582df1ddc12912925a970247eeedc52dd16" Feb 16 21:11:03.781000 master-0 kubenswrapper[7926]: I0216 21:11:03.780969 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22a66dda733f6d85ca94e09c8b616582df1ddc12912925a970247eeedc52dd16"} err="failed to get container status \"22a66dda733f6d85ca94e09c8b616582df1ddc12912925a970247eeedc52dd16\": rpc error: code = NotFound desc = could not find container \"22a66dda733f6d85ca94e09c8b616582df1ddc12912925a970247eeedc52dd16\": container with ID starting with 22a66dda733f6d85ca94e09c8b616582df1ddc12912925a970247eeedc52dd16 not found: ID does not exist" Feb 16 21:11:03.781000 master-0 kubenswrapper[7926]: I0216 21:11:03.780993 7926 scope.go:117] "RemoveContainer" containerID="fe99a5fcebfb8da3c4941f2384bbff5b1b23f59e244ba0b79737c6bbbe01661d" Feb 16 21:11:03.781297 master-0 kubenswrapper[7926]: I0216 21:11:03.781266 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe99a5fcebfb8da3c4941f2384bbff5b1b23f59e244ba0b79737c6bbbe01661d"} err="failed to get container status \"fe99a5fcebfb8da3c4941f2384bbff5b1b23f59e244ba0b79737c6bbbe01661d\": rpc error: code = NotFound desc = could not find container \"fe99a5fcebfb8da3c4941f2384bbff5b1b23f59e244ba0b79737c6bbbe01661d\": container with ID starting with fe99a5fcebfb8da3c4941f2384bbff5b1b23f59e244ba0b79737c6bbbe01661d not found: ID does not exist" Feb 16 21:11:03.803891 master-0 kubenswrapper[7926]: I0216 21:11:03.803791 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn"] Feb 16 21:11:03.804233 master-0 kubenswrapper[7926]: E0216 21:11:03.804198 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="150fb1ff-8a9c-4360-8e41-cfbfb854d8bd" containerName="config-sync-controllers" Feb 16 21:11:03.804233 
master-0 kubenswrapper[7926]: I0216 21:11:03.804225 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="150fb1ff-8a9c-4360-8e41-cfbfb854d8bd" containerName="config-sync-controllers" Feb 16 21:11:03.804331 master-0 kubenswrapper[7926]: E0216 21:11:03.804254 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="150fb1ff-8a9c-4360-8e41-cfbfb854d8bd" containerName="cluster-cloud-controller-manager" Feb 16 21:11:03.804331 master-0 kubenswrapper[7926]: I0216 21:11:03.804265 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="150fb1ff-8a9c-4360-8e41-cfbfb854d8bd" containerName="cluster-cloud-controller-manager" Feb 16 21:11:03.804331 master-0 kubenswrapper[7926]: E0216 21:11:03.804280 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="150fb1ff-8a9c-4360-8e41-cfbfb854d8bd" containerName="kube-rbac-proxy" Feb 16 21:11:03.804331 master-0 kubenswrapper[7926]: I0216 21:11:03.804292 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="150fb1ff-8a9c-4360-8e41-cfbfb854d8bd" containerName="kube-rbac-proxy" Feb 16 21:11:03.804491 master-0 kubenswrapper[7926]: I0216 21:11:03.804462 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="150fb1ff-8a9c-4360-8e41-cfbfb854d8bd" containerName="cluster-cloud-controller-manager" Feb 16 21:11:03.804529 master-0 kubenswrapper[7926]: I0216 21:11:03.804492 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="150fb1ff-8a9c-4360-8e41-cfbfb854d8bd" containerName="config-sync-controllers" Feb 16 21:11:03.804529 master-0 kubenswrapper[7926]: I0216 21:11:03.804509 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="150fb1ff-8a9c-4360-8e41-cfbfb854d8bd" containerName="kube-rbac-proxy" Feb 16 21:11:03.805826 master-0 kubenswrapper[7926]: I0216 21:11:03.805787 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:11:03.808538 master-0 kubenswrapper[7926]: I0216 21:11:03.808495 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 16 21:11:03.808875 master-0 kubenswrapper[7926]: I0216 21:11:03.808852 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 16 21:11:03.810345 master-0 kubenswrapper[7926]: I0216 21:11:03.810304 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 16 21:11:03.810345 master-0 kubenswrapper[7926]: I0216 21:11:03.810329 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-jswsr" Feb 16 21:11:03.810536 master-0 kubenswrapper[7926]: I0216 21:11:03.810510 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 21:11:03.810575 master-0 kubenswrapper[7926]: I0216 21:11:03.810564 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 16 21:11:03.871681 master-0 kubenswrapper[7926]: I0216 21:11:03.871611 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/230d9624-2d9d-4036-967b-b530347f05d5-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 
21:11:03.871681 master-0 kubenswrapper[7926]: I0216 21:11:03.871688 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/230d9624-2d9d-4036-967b-b530347f05d5-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:11:03.871967 master-0 kubenswrapper[7926]: I0216 21:11:03.871778 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqkvs\" (UniqueName: \"kubernetes.io/projected/230d9624-2d9d-4036-967b-b530347f05d5-kube-api-access-vqkvs\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:11:03.871967 master-0 kubenswrapper[7926]: I0216 21:11:03.871922 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/230d9624-2d9d-4036-967b-b530347f05d5-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:11:03.872038 master-0 kubenswrapper[7926]: I0216 21:11:03.871984 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/230d9624-2d9d-4036-967b-b530347f05d5-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:11:03.973753 master-0 kubenswrapper[7926]: I0216 21:11:03.973609 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/230d9624-2d9d-4036-967b-b530347f05d5-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:11:03.973753 master-0 kubenswrapper[7926]: I0216 21:11:03.973699 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/230d9624-2d9d-4036-967b-b530347f05d5-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:11:03.973753 master-0 kubenswrapper[7926]: I0216 21:11:03.973741 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqkvs\" (UniqueName: \"kubernetes.io/projected/230d9624-2d9d-4036-967b-b530347f05d5-kube-api-access-vqkvs\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:11:03.974030 master-0 kubenswrapper[7926]: I0216 21:11:03.973773 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/230d9624-2d9d-4036-967b-b530347f05d5-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:11:03.974030 master-0 kubenswrapper[7926]: I0216 21:11:03.973975 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/230d9624-2d9d-4036-967b-b530347f05d5-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:11:03.974139 master-0 kubenswrapper[7926]: I0216 21:11:03.974109 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/230d9624-2d9d-4036-967b-b530347f05d5-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:11:03.974751 master-0 kubenswrapper[7926]: I0216 21:11:03.974712 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/230d9624-2d9d-4036-967b-b530347f05d5-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:11:03.974806 master-0 kubenswrapper[7926]: I0216 21:11:03.974783 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/230d9624-2d9d-4036-967b-b530347f05d5-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:11:03.977741 master-0 kubenswrapper[7926]: I0216 21:11:03.977710 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/230d9624-2d9d-4036-967b-b530347f05d5-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:11:04.027448 master-0 kubenswrapper[7926]: I0216 21:11:04.027337 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqkvs\" (UniqueName: \"kubernetes.io/projected/230d9624-2d9d-4036-967b-b530347f05d5-kube-api-access-vqkvs\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:11:04.135104 master-0 kubenswrapper[7926]: I0216 21:11:04.135039 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:11:04.155668 master-0 kubenswrapper[7926]: W0216 21:11:04.155585 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod230d9624_2d9d_4036_967b_b530347f05d5.slice/crio-edc9559c5a629f79661ac5fd3b656fc66e5b478f6eb97f32c266188a17c0e747 WatchSource:0}: Error finding container edc9559c5a629f79661ac5fd3b656fc66e5b478f6eb97f32c266188a17c0e747: Status 404 returned error can't find the container with id edc9559c5a629f79661ac5fd3b656fc66e5b478f6eb97f32c266188a17c0e747 Feb 16 21:11:04.185747 master-0 kubenswrapper[7926]: I0216 21:11:04.185677 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:11:04.185747 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:11:04.185747 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:11:04.185747 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:11:04.186020 master-0 kubenswrapper[7926]: I0216 21:11:04.185755 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:11:04.315365 master-0 kubenswrapper[7926]: I0216 21:11:04.315299 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c6548b89f-s8dv7"] Feb 16 21:11:04.334623 master-0 kubenswrapper[7926]: I0216 21:11:04.333266 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf"] Feb 16 
21:11:04.334623 master-0 kubenswrapper[7926]: I0216 21:11:04.333626 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" containerID="cri-o://76be2fb9017c6c391da7666ee8357be5d76c275a9752c228eacdc1c1d9610f90" gracePeriod=30 Feb 16 21:11:04.727122 master-0 kubenswrapper[7926]: I0216 21:11:04.727065 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-749ccd9c56-wzsnf_4db59450-da78-4879-ada8-ca3fc49fb7a7/route-controller-manager/4.log" Feb 16 21:11:04.727539 master-0 kubenswrapper[7926]: I0216 21:11:04.727136 7926 generic.go:334] "Generic (PLEG): container finished" podID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerID="76be2fb9017c6c391da7666ee8357be5d76c275a9752c228eacdc1c1d9610f90" exitCode=0 Feb 16 21:11:04.727539 master-0 kubenswrapper[7926]: I0216 21:11:04.727223 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" event={"ID":"4db59450-da78-4879-ada8-ca3fc49fb7a7","Type":"ContainerDied","Data":"76be2fb9017c6c391da7666ee8357be5d76c275a9752c228eacdc1c1d9610f90"} Feb 16 21:11:04.727539 master-0 kubenswrapper[7926]: I0216 21:11:04.727282 7926 scope.go:117] "RemoveContainer" containerID="8fdaced2e29680218985b0af6c01e1d1666c4413685a11533b854af5a3b4a954" Feb 16 21:11:04.731891 master-0 kubenswrapper[7926]: I0216 21:11:04.731842 7926 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" Feb 16 21:11:04.732259 master-0 kubenswrapper[7926]: I0216 21:11:04.731819 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" event={"ID":"230d9624-2d9d-4036-967b-b530347f05d5","Type":"ContainerStarted","Data":"c6a10327cd99b8e79080c80497f813f0d306c1ac1675a6ef75f827c739b664b0"} Feb 16 21:11:04.732259 master-0 kubenswrapper[7926]: I0216 21:11:04.732176 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" event={"ID":"230d9624-2d9d-4036-967b-b530347f05d5","Type":"ContainerStarted","Data":"e5e5cf205d35c77f7135aae32a2a2b5d93190fd24142a46403057a66617d7317"} Feb 16 21:11:04.732259 master-0 kubenswrapper[7926]: I0216 21:11:04.732207 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" event={"ID":"230d9624-2d9d-4036-967b-b530347f05d5","Type":"ContainerStarted","Data":"edc9559c5a629f79661ac5fd3b656fc66e5b478f6eb97f32c266188a17c0e747"} Feb 16 21:11:04.733771 master-0 kubenswrapper[7926]: I0216 21:11:04.733732 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" podUID="57b94ed4-8f0b-4223-bdaf-4316859d8ad3" containerName="controller-manager" containerID="cri-o://d68a6c7f7b51e7d79b8bb7156985004605d699d7600ac79943f3f38a1fcadff0" gracePeriod=30 Feb 16 21:11:04.757026 master-0 kubenswrapper[7926]: I0216 21:11:04.756928 7926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="150fb1ff-8a9c-4360-8e41-cfbfb854d8bd" path="/var/lib/kubelet/pods/150fb1ff-8a9c-4360-8e41-cfbfb854d8bd/volumes" Feb 16 21:11:04.788207 master-0 kubenswrapper[7926]: 
I0216 21:11:04.788044 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4db59450-da78-4879-ada8-ca3fc49fb7a7-client-ca\") pod \"4db59450-da78-4879-ada8-ca3fc49fb7a7\" (UID: \"4db59450-da78-4879-ada8-ca3fc49fb7a7\") " Feb 16 21:11:04.788207 master-0 kubenswrapper[7926]: I0216 21:11:04.788134 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4db59450-da78-4879-ada8-ca3fc49fb7a7-serving-cert\") pod \"4db59450-da78-4879-ada8-ca3fc49fb7a7\" (UID: \"4db59450-da78-4879-ada8-ca3fc49fb7a7\") " Feb 16 21:11:04.788207 master-0 kubenswrapper[7926]: I0216 21:11:04.788160 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4db59450-da78-4879-ada8-ca3fc49fb7a7-config\") pod \"4db59450-da78-4879-ada8-ca3fc49fb7a7\" (UID: \"4db59450-da78-4879-ada8-ca3fc49fb7a7\") " Feb 16 21:11:04.788207 master-0 kubenswrapper[7926]: I0216 21:11:04.788193 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67nzn\" (UniqueName: \"kubernetes.io/projected/4db59450-da78-4879-ada8-ca3fc49fb7a7-kube-api-access-67nzn\") pod \"4db59450-da78-4879-ada8-ca3fc49fb7a7\" (UID: \"4db59450-da78-4879-ada8-ca3fc49fb7a7\") " Feb 16 21:11:04.792766 master-0 kubenswrapper[7926]: I0216 21:11:04.789165 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4db59450-da78-4879-ada8-ca3fc49fb7a7-config" (OuterVolumeSpecName: "config") pod "4db59450-da78-4879-ada8-ca3fc49fb7a7" (UID: "4db59450-da78-4879-ada8-ca3fc49fb7a7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:11:04.792766 master-0 kubenswrapper[7926]: I0216 21:11:04.789137 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4db59450-da78-4879-ada8-ca3fc49fb7a7-client-ca" (OuterVolumeSpecName: "client-ca") pod "4db59450-da78-4879-ada8-ca3fc49fb7a7" (UID: "4db59450-da78-4879-ada8-ca3fc49fb7a7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:11:04.792766 master-0 kubenswrapper[7926]: I0216 21:11:04.792736 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4db59450-da78-4879-ada8-ca3fc49fb7a7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4db59450-da78-4879-ada8-ca3fc49fb7a7" (UID: "4db59450-da78-4879-ada8-ca3fc49fb7a7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:11:04.796187 master-0 kubenswrapper[7926]: I0216 21:11:04.796127 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4db59450-da78-4879-ada8-ca3fc49fb7a7-kube-api-access-67nzn" (OuterVolumeSpecName: "kube-api-access-67nzn") pod "4db59450-da78-4879-ada8-ca3fc49fb7a7" (UID: "4db59450-da78-4879-ada8-ca3fc49fb7a7"). InnerVolumeSpecName "kube-api-access-67nzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:11:04.890291 master-0 kubenswrapper[7926]: I0216 21:11:04.890228 7926 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4db59450-da78-4879-ada8-ca3fc49fb7a7-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 21:11:04.890291 master-0 kubenswrapper[7926]: I0216 21:11:04.890277 7926 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4db59450-da78-4879-ada8-ca3fc49fb7a7-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 21:11:04.890291 master-0 kubenswrapper[7926]: I0216 21:11:04.890290 7926 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4db59450-da78-4879-ada8-ca3fc49fb7a7-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:11:04.890291 master-0 kubenswrapper[7926]: I0216 21:11:04.890304 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67nzn\" (UniqueName: \"kubernetes.io/projected/4db59450-da78-4879-ada8-ca3fc49fb7a7-kube-api-access-67nzn\") on node \"master-0\" DevicePath \"\"" Feb 16 21:11:05.123701 master-0 kubenswrapper[7926]: I0216 21:11:05.123117 7926 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" Feb 16 21:11:05.190739 master-0 kubenswrapper[7926]: I0216 21:11:05.188051 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:11:05.190739 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:11:05.190739 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:11:05.190739 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:11:05.190739 master-0 kubenswrapper[7926]: I0216 21:11:05.188165 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:11:05.195577 master-0 kubenswrapper[7926]: I0216 21:11:05.195315 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-proxy-ca-bundles\") pod \"57b94ed4-8f0b-4223-bdaf-4316859d8ad3\" (UID: \"57b94ed4-8f0b-4223-bdaf-4316859d8ad3\") " Feb 16 21:11:05.195577 master-0 kubenswrapper[7926]: I0216 21:11:05.195410 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-client-ca\") pod \"57b94ed4-8f0b-4223-bdaf-4316859d8ad3\" (UID: \"57b94ed4-8f0b-4223-bdaf-4316859d8ad3\") " Feb 16 21:11:05.195577 master-0 kubenswrapper[7926]: I0216 21:11:05.195484 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-config\") pod 
\"57b94ed4-8f0b-4223-bdaf-4316859d8ad3\" (UID: \"57b94ed4-8f0b-4223-bdaf-4316859d8ad3\") " Feb 16 21:11:05.195577 master-0 kubenswrapper[7926]: I0216 21:11:05.195553 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zv2b\" (UniqueName: \"kubernetes.io/projected/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-kube-api-access-7zv2b\") pod \"57b94ed4-8f0b-4223-bdaf-4316859d8ad3\" (UID: \"57b94ed4-8f0b-4223-bdaf-4316859d8ad3\") " Feb 16 21:11:05.196037 master-0 kubenswrapper[7926]: I0216 21:11:05.195692 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-serving-cert\") pod \"57b94ed4-8f0b-4223-bdaf-4316859d8ad3\" (UID: \"57b94ed4-8f0b-4223-bdaf-4316859d8ad3\") " Feb 16 21:11:05.196883 master-0 kubenswrapper[7926]: I0216 21:11:05.196854 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-client-ca" (OuterVolumeSpecName: "client-ca") pod "57b94ed4-8f0b-4223-bdaf-4316859d8ad3" (UID: "57b94ed4-8f0b-4223-bdaf-4316859d8ad3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:11:05.197294 master-0 kubenswrapper[7926]: I0216 21:11:05.196879 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "57b94ed4-8f0b-4223-bdaf-4316859d8ad3" (UID: "57b94ed4-8f0b-4223-bdaf-4316859d8ad3"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:11:05.197455 master-0 kubenswrapper[7926]: I0216 21:11:05.197410 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-config" (OuterVolumeSpecName: "config") pod "57b94ed4-8f0b-4223-bdaf-4316859d8ad3" (UID: "57b94ed4-8f0b-4223-bdaf-4316859d8ad3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:11:05.199029 master-0 kubenswrapper[7926]: I0216 21:11:05.198972 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "57b94ed4-8f0b-4223-bdaf-4316859d8ad3" (UID: "57b94ed4-8f0b-4223-bdaf-4316859d8ad3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:11:05.199245 master-0 kubenswrapper[7926]: I0216 21:11:05.199201 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-kube-api-access-7zv2b" (OuterVolumeSpecName: "kube-api-access-7zv2b") pod "57b94ed4-8f0b-4223-bdaf-4316859d8ad3" (UID: "57b94ed4-8f0b-4223-bdaf-4316859d8ad3"). InnerVolumeSpecName "kube-api-access-7zv2b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:11:05.300130 master-0 kubenswrapper[7926]: I0216 21:11:05.300012 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zv2b\" (UniqueName: \"kubernetes.io/projected/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-kube-api-access-7zv2b\") on node \"master-0\" DevicePath \"\"" Feb 16 21:11:05.300130 master-0 kubenswrapper[7926]: I0216 21:11:05.300112 7926 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 21:11:05.300130 master-0 kubenswrapper[7926]: I0216 21:11:05.300140 7926 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 16 21:11:05.300805 master-0 kubenswrapper[7926]: I0216 21:11:05.300165 7926 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 21:11:05.300805 master-0 kubenswrapper[7926]: I0216 21:11:05.300194 7926 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57b94ed4-8f0b-4223-bdaf-4316859d8ad3-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:11:05.743226 master-0 kubenswrapper[7926]: I0216 21:11:05.743142 7926 generic.go:334] "Generic (PLEG): container finished" podID="57b94ed4-8f0b-4223-bdaf-4316859d8ad3" containerID="d68a6c7f7b51e7d79b8bb7156985004605d699d7600ac79943f3f38a1fcadff0" exitCode=0 Feb 16 21:11:05.743226 master-0 kubenswrapper[7926]: I0216 21:11:05.743228 7926 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" Feb 16 21:11:05.743714 master-0 kubenswrapper[7926]: I0216 21:11:05.743257 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" event={"ID":"57b94ed4-8f0b-4223-bdaf-4316859d8ad3","Type":"ContainerDied","Data":"d68a6c7f7b51e7d79b8bb7156985004605d699d7600ac79943f3f38a1fcadff0"} Feb 16 21:11:05.743714 master-0 kubenswrapper[7926]: I0216 21:11:05.743342 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c6548b89f-s8dv7" event={"ID":"57b94ed4-8f0b-4223-bdaf-4316859d8ad3","Type":"ContainerDied","Data":"b1181fe67b605ba3682cb72aadab485f579f30f6cec1251b516fac8e19f9c298"} Feb 16 21:11:05.743714 master-0 kubenswrapper[7926]: I0216 21:11:05.743378 7926 scope.go:117] "RemoveContainer" containerID="d68a6c7f7b51e7d79b8bb7156985004605d699d7600ac79943f3f38a1fcadff0" Feb 16 21:11:05.745625 master-0 kubenswrapper[7926]: I0216 21:11:05.745586 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" event={"ID":"4db59450-da78-4879-ada8-ca3fc49fb7a7","Type":"ContainerDied","Data":"ff3056d39fbc51a0db62d052e0051f801497ab64b4c704d9bed90917e0c30ddd"} Feb 16 21:11:05.745734 master-0 kubenswrapper[7926]: I0216 21:11:05.745701 7926 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf" Feb 16 21:11:05.749982 master-0 kubenswrapper[7926]: I0216 21:11:05.749935 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" event={"ID":"230d9624-2d9d-4036-967b-b530347f05d5","Type":"ContainerStarted","Data":"28726678c7ba973c7a8d12bd4e7dd23ac1f0cc7291e6d51f4f07e0ddb5f2952b"} Feb 16 21:11:05.776063 master-0 kubenswrapper[7926]: I0216 21:11:05.776005 7926 scope.go:117] "RemoveContainer" containerID="03a2959cd7d7099deb65fa1d96597cd3ebf6031635df4c580705d88b4f782bc3" Feb 16 21:11:05.796006 master-0 kubenswrapper[7926]: I0216 21:11:05.795924 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" podStartSLOduration=2.795903927 podStartE2EDuration="2.795903927s" podCreationTimestamp="2026-02-16 21:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:11:05.794799566 +0000 UTC m=+837.429699866" watchObservedRunningTime="2026-02-16 21:11:05.795903927 +0000 UTC m=+837.430804227" Feb 16 21:11:05.814623 master-0 kubenswrapper[7926]: I0216 21:11:05.814553 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf"] Feb 16 21:11:05.817066 master-0 kubenswrapper[7926]: I0216 21:11:05.816716 7926 scope.go:117] "RemoveContainer" containerID="d68a6c7f7b51e7d79b8bb7156985004605d699d7600ac79943f3f38a1fcadff0" Feb 16 21:11:05.817332 master-0 kubenswrapper[7926]: E0216 21:11:05.817268 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d68a6c7f7b51e7d79b8bb7156985004605d699d7600ac79943f3f38a1fcadff0\": container with ID starting with d68a6c7f7b51e7d79b8bb7156985004605d699d7600ac79943f3f38a1fcadff0 not found: ID does not exist" containerID="d68a6c7f7b51e7d79b8bb7156985004605d699d7600ac79943f3f38a1fcadff0" Feb 16 21:11:05.817429 master-0 kubenswrapper[7926]: I0216 21:11:05.817322 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d68a6c7f7b51e7d79b8bb7156985004605d699d7600ac79943f3f38a1fcadff0"} err="failed to get container status \"d68a6c7f7b51e7d79b8bb7156985004605d699d7600ac79943f3f38a1fcadff0\": rpc error: code = NotFound desc = could not find container \"d68a6c7f7b51e7d79b8bb7156985004605d699d7600ac79943f3f38a1fcadff0\": container with ID starting with d68a6c7f7b51e7d79b8bb7156985004605d699d7600ac79943f3f38a1fcadff0 not found: ID does not exist" Feb 16 21:11:05.817429 master-0 kubenswrapper[7926]: I0216 21:11:05.817356 7926 scope.go:117] "RemoveContainer" containerID="03a2959cd7d7099deb65fa1d96597cd3ebf6031635df4c580705d88b4f782bc3" Feb 16 21:11:05.818060 master-0 kubenswrapper[7926]: E0216 21:11:05.817986 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03a2959cd7d7099deb65fa1d96597cd3ebf6031635df4c580705d88b4f782bc3\": container with ID starting with 03a2959cd7d7099deb65fa1d96597cd3ebf6031635df4c580705d88b4f782bc3 not found: ID does not exist" containerID="03a2959cd7d7099deb65fa1d96597cd3ebf6031635df4c580705d88b4f782bc3" Feb 16 21:11:05.818168 master-0 kubenswrapper[7926]: I0216 21:11:05.818068 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03a2959cd7d7099deb65fa1d96597cd3ebf6031635df4c580705d88b4f782bc3"} err="failed to get container status \"03a2959cd7d7099deb65fa1d96597cd3ebf6031635df4c580705d88b4f782bc3\": rpc error: code = NotFound desc = could not find container 
\"03a2959cd7d7099deb65fa1d96597cd3ebf6031635df4c580705d88b4f782bc3\": container with ID starting with 03a2959cd7d7099deb65fa1d96597cd3ebf6031635df4c580705d88b4f782bc3 not found: ID does not exist" Feb 16 21:11:05.818168 master-0 kubenswrapper[7926]: I0216 21:11:05.818106 7926 scope.go:117] "RemoveContainer" containerID="76be2fb9017c6c391da7666ee8357be5d76c275a9752c228eacdc1c1d9610f90" Feb 16 21:11:05.819099 master-0 kubenswrapper[7926]: I0216 21:11:05.819034 7926 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf"] Feb 16 21:11:05.827968 master-0 kubenswrapper[7926]: I0216 21:11:05.827906 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c6548b89f-s8dv7"] Feb 16 21:11:05.834378 master-0 kubenswrapper[7926]: I0216 21:11:05.834310 7926 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7c6548b89f-s8dv7"] Feb 16 21:11:06.186592 master-0 kubenswrapper[7926]: I0216 21:11:06.186529 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:11:06.186592 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:11:06.186592 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:11:06.186592 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:11:06.186864 master-0 kubenswrapper[7926]: I0216 21:11:06.186600 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:11:06.351287 master-0 kubenswrapper[7926]: I0216 21:11:06.351203 7926 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6998cd96fb-bgcb2"] Feb 16 21:11:06.351633 master-0 kubenswrapper[7926]: E0216 21:11:06.351589 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" Feb 16 21:11:06.351633 master-0 kubenswrapper[7926]: I0216 21:11:06.351620 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" Feb 16 21:11:06.351821 master-0 kubenswrapper[7926]: E0216 21:11:06.351642 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" Feb 16 21:11:06.351821 master-0 kubenswrapper[7926]: I0216 21:11:06.351683 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" Feb 16 21:11:06.351821 master-0 kubenswrapper[7926]: E0216 21:11:06.351716 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" Feb 16 21:11:06.351821 master-0 kubenswrapper[7926]: I0216 21:11:06.351730 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" Feb 16 21:11:06.351821 master-0 kubenswrapper[7926]: E0216 21:11:06.351748 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" Feb 16 21:11:06.351821 master-0 kubenswrapper[7926]: I0216 21:11:06.351760 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" Feb 16 21:11:06.351821 master-0 kubenswrapper[7926]: E0216 21:11:06.351778 7926 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" Feb 16 21:11:06.351821 master-0 kubenswrapper[7926]: I0216 21:11:06.351790 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" Feb 16 21:11:06.351821 master-0 kubenswrapper[7926]: E0216 21:11:06.351814 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" Feb 16 21:11:06.351821 master-0 kubenswrapper[7926]: I0216 21:11:06.351826 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" Feb 16 21:11:06.352386 master-0 kubenswrapper[7926]: E0216 21:11:06.351862 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57b94ed4-8f0b-4223-bdaf-4316859d8ad3" containerName="controller-manager" Feb 16 21:11:06.352386 master-0 kubenswrapper[7926]: I0216 21:11:06.351875 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="57b94ed4-8f0b-4223-bdaf-4316859d8ad3" containerName="controller-manager" Feb 16 21:11:06.352386 master-0 kubenswrapper[7926]: E0216 21:11:06.351897 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57b94ed4-8f0b-4223-bdaf-4316859d8ad3" containerName="controller-manager" Feb 16 21:11:06.352386 master-0 kubenswrapper[7926]: I0216 21:11:06.351909 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="57b94ed4-8f0b-4223-bdaf-4316859d8ad3" containerName="controller-manager" Feb 16 21:11:06.352386 master-0 kubenswrapper[7926]: I0216 21:11:06.352146 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" Feb 16 21:11:06.352386 master-0 kubenswrapper[7926]: I0216 21:11:06.352172 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="57b94ed4-8f0b-4223-bdaf-4316859d8ad3" 
containerName="controller-manager" Feb 16 21:11:06.352386 master-0 kubenswrapper[7926]: I0216 21:11:06.352190 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" Feb 16 21:11:06.352386 master-0 kubenswrapper[7926]: I0216 21:11:06.352216 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" Feb 16 21:11:06.352386 master-0 kubenswrapper[7926]: I0216 21:11:06.352230 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" Feb 16 21:11:06.352928 master-0 kubenswrapper[7926]: I0216 21:11:06.352863 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" Feb 16 21:11:06.354274 master-0 kubenswrapper[7926]: I0216 21:11:06.354222 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24"] Feb 16 21:11:06.354948 master-0 kubenswrapper[7926]: I0216 21:11:06.354908 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" Feb 16 21:11:06.354948 master-0 kubenswrapper[7926]: I0216 21:11:06.354928 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" containerName="route-controller-manager" Feb 16 21:11:06.354948 master-0 kubenswrapper[7926]: I0216 21:11:06.354950 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="57b94ed4-8f0b-4223-bdaf-4316859d8ad3" containerName="controller-manager" Feb 16 21:11:06.355274 master-0 kubenswrapper[7926]: I0216 21:11:06.355238 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" Feb 16 21:11:06.356147 master-0 kubenswrapper[7926]: I0216 21:11:06.356097 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 21:11:06.356240 master-0 kubenswrapper[7926]: I0216 21:11:06.356118 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 21:11:06.356240 master-0 kubenswrapper[7926]: I0216 21:11:06.356097 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 21:11:06.356499 master-0 kubenswrapper[7926]: I0216 21:11:06.356415 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 21:11:06.359010 master-0 kubenswrapper[7926]: I0216 21:11:06.358966 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 21:11:06.359111 master-0 kubenswrapper[7926]: I0216 21:11:06.359051 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-zlh9q" Feb 16 21:11:06.359188 master-0 kubenswrapper[7926]: I0216 21:11:06.359148 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 21:11:06.359259 master-0 kubenswrapper[7926]: I0216 21:11:06.359231 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 21:11:06.359330 master-0 kubenswrapper[7926]: I0216 21:11:06.358978 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 21:11:06.360670 master-0 kubenswrapper[7926]: I0216 21:11:06.360602 7926 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-vvh6n" Feb 16 21:11:06.360764 master-0 kubenswrapper[7926]: I0216 21:11:06.360718 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 21:11:06.360844 master-0 kubenswrapper[7926]: I0216 21:11:06.360798 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 21:11:06.371741 master-0 kubenswrapper[7926]: I0216 21:11:06.371553 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6998cd96fb-bgcb2"] Feb 16 21:11:06.376027 master-0 kubenswrapper[7926]: I0216 21:11:06.375319 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 21:11:06.385166 master-0 kubenswrapper[7926]: I0216 21:11:06.385090 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24"] Feb 16 21:11:06.416509 master-0 kubenswrapper[7926]: I0216 21:11:06.416451 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-client-ca\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" Feb 16 21:11:06.416509 master-0 kubenswrapper[7926]: I0216 21:11:06.416503 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/408a9364-3730-4017-b1e4-c85d6a504168-serving-cert\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " 
pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" Feb 16 21:11:06.416746 master-0 kubenswrapper[7926]: I0216 21:11:06.416536 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1489d1b6-d8a1-453a-bff3-8adfd4335903-config\") pod \"route-controller-manager-85d99cfd66-kjw24\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") " pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" Feb 16 21:11:06.416746 master-0 kubenswrapper[7926]: I0216 21:11:06.416556 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-config\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" Feb 16 21:11:06.416746 master-0 kubenswrapper[7926]: I0216 21:11:06.416725 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvw2m\" (UniqueName: \"kubernetes.io/projected/408a9364-3730-4017-b1e4-c85d6a504168-kube-api-access-lvw2m\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" Feb 16 21:11:06.416842 master-0 kubenswrapper[7926]: I0216 21:11:06.416779 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1489d1b6-d8a1-453a-bff3-8adfd4335903-serving-cert\") pod \"route-controller-manager-85d99cfd66-kjw24\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") " pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" Feb 16 21:11:06.416842 master-0 kubenswrapper[7926]: I0216 21:11:06.416829 7926 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1489d1b6-d8a1-453a-bff3-8adfd4335903-client-ca\") pod \"route-controller-manager-85d99cfd66-kjw24\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") " pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24"
Feb 16 21:11:06.416906 master-0 kubenswrapper[7926]: I0216 21:11:06.416855 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc47v\" (UniqueName: \"kubernetes.io/projected/1489d1b6-d8a1-453a-bff3-8adfd4335903-kube-api-access-xc47v\") pod \"route-controller-manager-85d99cfd66-kjw24\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") " pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24"
Feb 16 21:11:06.416964 master-0 kubenswrapper[7926]: I0216 21:11:06.416936 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-proxy-ca-bundles\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2"
Feb 16 21:11:06.518947 master-0 kubenswrapper[7926]: I0216 21:11:06.518868 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-proxy-ca-bundles\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2"
Feb 16 21:11:06.519183 master-0 kubenswrapper[7926]: I0216 21:11:06.519002 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-client-ca\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2"
Feb 16 21:11:06.519183 master-0 kubenswrapper[7926]: I0216 21:11:06.519139 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/408a9364-3730-4017-b1e4-c85d6a504168-serving-cert\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2"
Feb 16 21:11:06.519253 master-0 kubenswrapper[7926]: I0216 21:11:06.519187 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1489d1b6-d8a1-453a-bff3-8adfd4335903-config\") pod \"route-controller-manager-85d99cfd66-kjw24\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") " pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24"
Feb 16 21:11:06.519253 master-0 kubenswrapper[7926]: I0216 21:11:06.519205 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-config\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2"
Feb 16 21:11:06.519253 master-0 kubenswrapper[7926]: I0216 21:11:06.519233 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvw2m\" (UniqueName: \"kubernetes.io/projected/408a9364-3730-4017-b1e4-c85d6a504168-kube-api-access-lvw2m\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2"
Feb 16 21:11:06.519871 master-0 kubenswrapper[7926]: I0216 21:11:06.519838 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1489d1b6-d8a1-453a-bff3-8adfd4335903-serving-cert\") pod \"route-controller-manager-85d99cfd66-kjw24\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") " pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24"
Feb 16 21:11:06.520060 master-0 kubenswrapper[7926]: I0216 21:11:06.520038 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1489d1b6-d8a1-453a-bff3-8adfd4335903-client-ca\") pod \"route-controller-manager-85d99cfd66-kjw24\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") " pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24"
Feb 16 21:11:06.520121 master-0 kubenswrapper[7926]: I0216 21:11:06.520080 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc47v\" (UniqueName: \"kubernetes.io/projected/1489d1b6-d8a1-453a-bff3-8adfd4335903-kube-api-access-xc47v\") pod \"route-controller-manager-85d99cfd66-kjw24\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") " pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24"
Feb 16 21:11:06.521259 master-0 kubenswrapper[7926]: I0216 21:11:06.521209 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1489d1b6-d8a1-453a-bff3-8adfd4335903-client-ca\") pod \"route-controller-manager-85d99cfd66-kjw24\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") " pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24"
Feb 16 21:11:06.521322 master-0 kubenswrapper[7926]: I0216 21:11:06.521274 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-config\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2"
Feb 16 21:11:06.521395 master-0 kubenswrapper[7926]: I0216 21:11:06.521364 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1489d1b6-d8a1-453a-bff3-8adfd4335903-config\") pod \"route-controller-manager-85d99cfd66-kjw24\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") " pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24"
Feb 16 21:11:06.521440 master-0 kubenswrapper[7926]: I0216 21:11:06.521365 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-client-ca\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2"
Feb 16 21:11:06.523019 master-0 kubenswrapper[7926]: I0216 21:11:06.522938 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-proxy-ca-bundles\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2"
Feb 16 21:11:06.523221 master-0 kubenswrapper[7926]: I0216 21:11:06.523183 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1489d1b6-d8a1-453a-bff3-8adfd4335903-serving-cert\") pod \"route-controller-manager-85d99cfd66-kjw24\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") " pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24"
Feb 16 21:11:06.523686 master-0 kubenswrapper[7926]: I0216 21:11:06.523630 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/408a9364-3730-4017-b1e4-c85d6a504168-serving-cert\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2"
Feb 16 21:11:06.538442 master-0 kubenswrapper[7926]: I0216 21:11:06.538357 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvw2m\" (UniqueName: \"kubernetes.io/projected/408a9364-3730-4017-b1e4-c85d6a504168-kube-api-access-lvw2m\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2"
Feb 16 21:11:06.538606 master-0 kubenswrapper[7926]: I0216 21:11:06.538540 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc47v\" (UniqueName: \"kubernetes.io/projected/1489d1b6-d8a1-453a-bff3-8adfd4335903-kube-api-access-xc47v\") pod \"route-controller-manager-85d99cfd66-kjw24\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") " pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24"
Feb 16 21:11:06.684930 master-0 kubenswrapper[7926]: I0216 21:11:06.684837 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2"
Feb 16 21:11:06.709015 master-0 kubenswrapper[7926]: I0216 21:11:06.708943 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24"
Feb 16 21:11:06.746574 master-0 kubenswrapper[7926]: I0216 21:11:06.746506 7926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4db59450-da78-4879-ada8-ca3fc49fb7a7" path="/var/lib/kubelet/pods/4db59450-da78-4879-ada8-ca3fc49fb7a7/volumes"
Feb 16 21:11:06.748073 master-0 kubenswrapper[7926]: I0216 21:11:06.748032 7926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57b94ed4-8f0b-4223-bdaf-4316859d8ad3" path="/var/lib/kubelet/pods/57b94ed4-8f0b-4223-bdaf-4316859d8ad3/volumes"
Feb 16 21:11:07.150329 master-0 kubenswrapper[7926]: I0216 21:11:07.150168 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6998cd96fb-bgcb2"]
Feb 16 21:11:07.162434 master-0 kubenswrapper[7926]: W0216 21:11:07.162350 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod408a9364_3730_4017_b1e4_c85d6a504168.slice/crio-f6ba9fbde2ec0f2099ab53176d9410c4bf53a78507ca46eeb7e91c2f36c118ed WatchSource:0}: Error finding container f6ba9fbde2ec0f2099ab53176d9410c4bf53a78507ca46eeb7e91c2f36c118ed: Status 404 returned error can't find the container with id f6ba9fbde2ec0f2099ab53176d9410c4bf53a78507ca46eeb7e91c2f36c118ed
Feb 16 21:11:07.191778 master-0 kubenswrapper[7926]: I0216 21:11:07.191721 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:11:07.191778 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:11:07.191778 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:11:07.191778 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:11:07.192032 master-0 kubenswrapper[7926]: I0216 21:11:07.191798 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:11:07.216705 master-0 kubenswrapper[7926]: I0216 21:11:07.216617 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24"]
Feb 16 21:11:07.222591 master-0 kubenswrapper[7926]: W0216 21:11:07.222555 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1489d1b6_d8a1_453a_bff3_8adfd4335903.slice/crio-d3122711a170f449cbae155070984deb894c3febeb5926b33f03b31158614e34 WatchSource:0}: Error finding container d3122711a170f449cbae155070984deb894c3febeb5926b33f03b31158614e34: Status 404 returned error can't find the container with id d3122711a170f449cbae155070984deb894c3febeb5926b33f03b31158614e34
Feb 16 21:11:07.781623 master-0 kubenswrapper[7926]: I0216 21:11:07.781541 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" event={"ID":"408a9364-3730-4017-b1e4-c85d6a504168","Type":"ContainerStarted","Data":"ec8ce2b77f9d3d1712f1d9e5d59ca2196200eb54635d01b0d1caf94494809751"}
Feb 16 21:11:07.781874 master-0 kubenswrapper[7926]: I0216 21:11:07.781694 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2"
Feb 16 21:11:07.781874 master-0 kubenswrapper[7926]: I0216 21:11:07.781720 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" event={"ID":"408a9364-3730-4017-b1e4-c85d6a504168","Type":"ContainerStarted","Data":"f6ba9fbde2ec0f2099ab53176d9410c4bf53a78507ca46eeb7e91c2f36c118ed"}
Feb 16 21:11:07.783174 master-0 kubenswrapper[7926]: I0216 21:11:07.782947 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" event={"ID":"1489d1b6-d8a1-453a-bff3-8adfd4335903","Type":"ContainerStarted","Data":"25ee620a91a11cdfcf10f317458e9833777a7250c9af0cd0962ed366c5d07a92"}
Feb 16 21:11:07.783174 master-0 kubenswrapper[7926]: I0216 21:11:07.783008 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" event={"ID":"1489d1b6-d8a1-453a-bff3-8adfd4335903","Type":"ContainerStarted","Data":"d3122711a170f449cbae155070984deb894c3febeb5926b33f03b31158614e34"}
Feb 16 21:11:07.783301 master-0 kubenswrapper[7926]: I0216 21:11:07.783285 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24"
Feb 16 21:11:07.786909 master-0 kubenswrapper[7926]: I0216 21:11:07.786876 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2"
Feb 16 21:11:07.798918 master-0 kubenswrapper[7926]: I0216 21:11:07.798860 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" podStartSLOduration=3.798838374 podStartE2EDuration="3.798838374s" podCreationTimestamp="2026-02-16 21:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:11:07.798096284 +0000 UTC m=+839.432996574" watchObservedRunningTime="2026-02-16 21:11:07.798838374 +0000 UTC m=+839.433738664"
Feb 16 21:11:07.845371 master-0 kubenswrapper[7926]: I0216 21:11:07.845204 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" podStartSLOduration=3.845181934 podStartE2EDuration="3.845181934s" podCreationTimestamp="2026-02-16 21:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:11:07.843705233 +0000 UTC m=+839.478605523" watchObservedRunningTime="2026-02-16 21:11:07.845181934 +0000 UTC m=+839.480082234"
Feb 16 21:11:07.850460 master-0 kubenswrapper[7926]: I0216 21:11:07.850289 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24"
Feb 16 21:11:08.185199 master-0 kubenswrapper[7926]: I0216 21:11:08.185041 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:11:08.185199 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:11:08.185199 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:11:08.185199 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:11:08.185199 master-0 kubenswrapper[7926]: I0216 21:11:08.185118 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:11:08.298265 master-0 kubenswrapper[7926]: I0216 21:11:08.298131 7926 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"]
Feb 16 21:11:08.298961 master-0 kubenswrapper[7926]: I0216 21:11:08.298893 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcdctl" containerID="cri-o://9ce83587f89564053d65e499eb053c5a968bf50fe44edcf704a3f564f2872da4" gracePeriod=30
Feb 16 21:11:08.298961 master-0 kubenswrapper[7926]: I0216 21:11:08.298943 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd" containerID="cri-o://dc3bdb2a8bb5b307357d9efc772993cd3c2bd4dc109a42b135a10a430b790809" gracePeriod=30
Feb 16 21:11:08.299129 master-0 kubenswrapper[7926]: I0216 21:11:08.298889 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-readyz" containerID="cri-o://6ad2010c95be4c9f2fa28ed52b05973b2b48bc9db8a6e7134941e0ed2ebcaa21" gracePeriod=30
Feb 16 21:11:08.299129 master-0 kubenswrapper[7926]: I0216 21:11:08.298932 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-rev" containerID="cri-o://5c6e80046b275f770bc256074b43bbe1b3c4f6774535b0d65b124406c5160f0a" gracePeriod=30
Feb 16 21:11:08.299338 master-0 kubenswrapper[7926]: I0216 21:11:08.298906 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-metrics" containerID="cri-o://23d9477d22a2c28e4a6024fc5b51d1b2e8b1bea2df627714860f39a7a51c3861" gracePeriod=30
Feb 16 21:11:08.300933 master-0 kubenswrapper[7926]: I0216 21:11:08.300874 7926 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"]
Feb 16 21:11:08.301202 master-0 kubenswrapper[7926]: E0216 21:11:08.301131 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcdctl"
Feb 16 21:11:08.301202 master-0 kubenswrapper[7926]: I0216 21:11:08.301152 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcdctl"
Feb 16 21:11:08.301202 master-0 kubenswrapper[7926]: E0216 21:11:08.301172 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-rev"
Feb 16 21:11:08.301202 master-0 kubenswrapper[7926]: I0216 21:11:08.301179 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-rev"
Feb 16 21:11:08.301202 master-0 kubenswrapper[7926]: E0216 21:11:08.301190 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-ensure-env-vars"
Feb 16 21:11:08.301202 master-0 kubenswrapper[7926]: I0216 21:11:08.301198 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-ensure-env-vars"
Feb 16 21:11:08.301601 master-0 kubenswrapper[7926]: E0216 21:11:08.301222 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-resources-copy"
Feb 16 21:11:08.301601 master-0 kubenswrapper[7926]: I0216 21:11:08.301230 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-resources-copy"
Feb 16 21:11:08.301601 master-0 kubenswrapper[7926]: E0216 21:11:08.301241 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-metrics"
Feb 16 21:11:08.301601 master-0 kubenswrapper[7926]: I0216 21:11:08.301249 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-metrics"
Feb 16 21:11:08.301601 master-0 kubenswrapper[7926]: E0216 21:11:08.301260 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401699cb53e7098157e808a83125b0e4" containerName="setup"
Feb 16 21:11:08.301601 master-0 kubenswrapper[7926]: I0216 21:11:08.301267 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="401699cb53e7098157e808a83125b0e4" containerName="setup"
Feb 16 21:11:08.301601 master-0 kubenswrapper[7926]: E0216 21:11:08.301278 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd"
Feb 16 21:11:08.301601 master-0 kubenswrapper[7926]: I0216 21:11:08.301285 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd"
Feb 16 21:11:08.301601 master-0 kubenswrapper[7926]: E0216 21:11:08.301294 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-readyz"
Feb 16 21:11:08.301601 master-0 kubenswrapper[7926]: I0216 21:11:08.301304 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-readyz"
Feb 16 21:11:08.301601 master-0 kubenswrapper[7926]: I0216 21:11:08.301423 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-readyz"
Feb 16 21:11:08.301601 master-0 kubenswrapper[7926]: I0216 21:11:08.301446 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcdctl"
Feb 16 21:11:08.301601 master-0 kubenswrapper[7926]: I0216 21:11:08.301466 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-metrics"
Feb 16 21:11:08.301601 master-0 kubenswrapper[7926]: I0216 21:11:08.301478 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd-rev"
Feb 16 21:11:08.301601 master-0 kubenswrapper[7926]: I0216 21:11:08.301487 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="401699cb53e7098157e808a83125b0e4" containerName="etcd"
Feb 16 21:11:08.350958 master-0 kubenswrapper[7926]: I0216 21:11:08.350875 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 21:11:08.350958 master-0 kubenswrapper[7926]: I0216 21:11:08.350951 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 21:11:08.351131 master-0 kubenswrapper[7926]: I0216 21:11:08.350979 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 21:11:08.351131 master-0 kubenswrapper[7926]: I0216 21:11:08.351042 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 21:11:08.351221 master-0 kubenswrapper[7926]: I0216 21:11:08.351150 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 21:11:08.351430 master-0 kubenswrapper[7926]: I0216 21:11:08.351394 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 21:11:08.452945 master-0 kubenswrapper[7926]: I0216 21:11:08.452780 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 21:11:08.452945 master-0 kubenswrapper[7926]: I0216 21:11:08.452850 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 21:11:08.452945 master-0 kubenswrapper[7926]: I0216 21:11:08.452909 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 21:11:08.453279 master-0 kubenswrapper[7926]: I0216 21:11:08.452963 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 21:11:08.453279 master-0 kubenswrapper[7926]: I0216 21:11:08.453054 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 21:11:08.453279 master-0 kubenswrapper[7926]: I0216 21:11:08.453058 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 21:11:08.453279 master-0 kubenswrapper[7926]: I0216 21:11:08.453079 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 21:11:08.453279 master-0 kubenswrapper[7926]: I0216 21:11:08.453079 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 21:11:08.453279 master-0 kubenswrapper[7926]: I0216 21:11:08.453118 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 21:11:08.453279 master-0 kubenswrapper[7926]: I0216 21:11:08.453122 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 21:11:08.453732 master-0 kubenswrapper[7926]: I0216 21:11:08.453322 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 21:11:08.453732 master-0 kubenswrapper[7926]: I0216 21:11:08.453399 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 21:11:08.793544 master-0 kubenswrapper[7926]: I0216 21:11:08.793459 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-rev/0.log"
Feb 16 21:11:08.794903 master-0 kubenswrapper[7926]: I0216 21:11:08.794854 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-metrics/0.log"
Feb 16 21:11:08.797282 master-0 kubenswrapper[7926]: I0216 21:11:08.797230 7926 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="5c6e80046b275f770bc256074b43bbe1b3c4f6774535b0d65b124406c5160f0a" exitCode=2
Feb 16 21:11:08.797282 master-0 kubenswrapper[7926]: I0216 21:11:08.797269 7926 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="6ad2010c95be4c9f2fa28ed52b05973b2b48bc9db8a6e7134941e0ed2ebcaa21" exitCode=0
Feb 16 21:11:08.797282 master-0 kubenswrapper[7926]: I0216 21:11:08.797280 7926 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="23d9477d22a2c28e4a6024fc5b51d1b2e8b1bea2df627714860f39a7a51c3861" exitCode=2
Feb 16 21:11:09.184780 master-0 kubenswrapper[7926]: I0216 21:11:09.184714 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:11:09.184780 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:11:09.184780 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:11:09.184780 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:11:09.185078 master-0 kubenswrapper[7926]: I0216 21:11:09.184788 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:11:10.185469 master-0 kubenswrapper[7926]: I0216 21:11:10.185354 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:11:10.185469 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:11:10.185469 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:11:10.185469 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:11:10.187044 master-0 kubenswrapper[7926]: I0216 21:11:10.185503 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:11:11.185313 master-0 kubenswrapper[7926]: I0216 21:11:11.185240 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:11:11.185313 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:11:11.185313 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:11:11.185313 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:11:11.185313 master-0 kubenswrapper[7926]: I0216 21:11:11.185302 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:11:12.185407 master-0 kubenswrapper[7926]: I0216 21:11:12.185327 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:11:12.185407 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:11:12.185407 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:11:12.185407 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:11:12.185986 master-0 kubenswrapper[7926]: I0216 21:11:12.185418 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:11:13.184708 master-0 kubenswrapper[7926]: I0216 21:11:13.184616 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:11:13.184708 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:11:13.184708 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:11:13.184708 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:11:13.184708 master-0 kubenswrapper[7926]: I0216 21:11:13.184701 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:11:14.185485 master-0 kubenswrapper[7926]: I0216 21:11:14.185359 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:11:14.185485 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:11:14.185485 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:11:14.185485 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:11:14.185485 master-0 kubenswrapper[7926]: I0216 21:11:14.185474 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:11:14.186834 master-0 kubenswrapper[7926]: I0216 21:11:14.185554 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-z4bnk"
Feb 16 21:11:14.186834 master-0 kubenswrapper[7926]: I0216 21:11:14.186404 7926 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"922b3b9a2ab72ca8bb93946974e3710fc89f41db642b5f99391c37114b12712f"} pod="openshift-ingress/router-default-864ddd5f56-z4bnk" containerMessage="Container router failed startup probe, will be restarted"
Feb 16 21:11:14.186834 master-0 kubenswrapper[7926]: I0216 21:11:14.186461 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" containerID="cri-o://922b3b9a2ab72ca8bb93946974e3710fc89f41db642b5f99391c37114b12712f" gracePeriod=3600
Feb 16 21:11:14.642553 master-0 kubenswrapper[7926]: I0216 21:11:14.642486 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert\") pod \"ingress-canary-l44qd\" (UID: \"0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b\") " pod="openshift-ingress-canary/ingress-canary-l44qd"
Feb 16 21:11:14.642802 master-0 kubenswrapper[7926]: E0216 21:11:14.642713 7926 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Feb 16 21:11:14.643056 master-0 kubenswrapper[7926]: E0216 21:11:14.642952 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert podName:0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b nodeName:}" failed. No retries permitted until 2026-02-16 21:12:18.642865308 +0000 UTC m=+910.277765628 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert") pod "ingress-canary-l44qd" (UID: "0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b") : secret "canary-serving-cert" not found
Feb 16 21:11:20.881074 master-0 kubenswrapper[7926]: I0216 21:11:20.880978 7926 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="b62e91fd80c5fe5b3e86f231592d3a6b2b476717e7f1ec56b415d7521e1bb557" exitCode=1
Feb 16 21:11:20.881776 master-0 kubenswrapper[7926]: I0216 21:11:20.881086 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"b62e91fd80c5fe5b3e86f231592d3a6b2b476717e7f1ec56b415d7521e1bb557"}
Feb 16 21:11:20.881776 master-0 kubenswrapper[7926]: I0216 21:11:20.881181 7926 scope.go:117] "RemoveContainer" containerID="9fbb3907b0a8154eba20d3a15a9c76d94a18ad3525cb12a7e4937b8969c5cb0d"
Feb 16 21:11:20.881988 master-0 kubenswrapper[7926]: I0216 21:11:20.881936 7926 scope.go:117] "RemoveContainer" containerID="b62e91fd80c5fe5b3e86f231592d3a6b2b476717e7f1ec56b415d7521e1bb557"
Feb 16 21:11:20.882737 master-0 kubenswrapper[7926]: E0216 21:11:20.882302 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3"
Feb 16 21:11:22.908500 master-0 kubenswrapper[7926]: I0216 21:11:22.908422 7926 generic.go:334] "Generic (PLEG): container finished" podID="1677883f-bae2-4b6e-9dfe-683a6d26f2c5" containerID="b251b8636a6a11ccf532a9af9a8852c95e1a7cdd48031754c8a88d40620a2450" exitCode=0
Feb 16
21:11:22.908500 master-0 kubenswrapper[7926]: I0216 21:11:22.908506 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"1677883f-bae2-4b6e-9dfe-683a6d26f2c5","Type":"ContainerDied","Data":"b251b8636a6a11ccf532a9af9a8852c95e1a7cdd48031754c8a88d40620a2450"} Feb 16 21:11:24.224939 master-0 kubenswrapper[7926]: I0216 21:11:24.224885 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 16 21:11:24.288068 master-0 kubenswrapper[7926]: I0216 21:11:24.287951 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1677883f-bae2-4b6e-9dfe-683a6d26f2c5-var-lock\") pod \"1677883f-bae2-4b6e-9dfe-683a6d26f2c5\" (UID: \"1677883f-bae2-4b6e-9dfe-683a6d26f2c5\") " Feb 16 21:11:24.288329 master-0 kubenswrapper[7926]: I0216 21:11:24.288108 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1677883f-bae2-4b6e-9dfe-683a6d26f2c5-kubelet-dir\") pod \"1677883f-bae2-4b6e-9dfe-683a6d26f2c5\" (UID: \"1677883f-bae2-4b6e-9dfe-683a6d26f2c5\") " Feb 16 21:11:24.288329 master-0 kubenswrapper[7926]: I0216 21:11:24.288118 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1677883f-bae2-4b6e-9dfe-683a6d26f2c5-var-lock" (OuterVolumeSpecName: "var-lock") pod "1677883f-bae2-4b6e-9dfe-683a6d26f2c5" (UID: "1677883f-bae2-4b6e-9dfe-683a6d26f2c5"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:11:24.288329 master-0 kubenswrapper[7926]: I0216 21:11:24.288178 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1677883f-bae2-4b6e-9dfe-683a6d26f2c5-kube-api-access\") pod \"1677883f-bae2-4b6e-9dfe-683a6d26f2c5\" (UID: \"1677883f-bae2-4b6e-9dfe-683a6d26f2c5\") " Feb 16 21:11:24.288329 master-0 kubenswrapper[7926]: I0216 21:11:24.288238 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1677883f-bae2-4b6e-9dfe-683a6d26f2c5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1677883f-bae2-4b6e-9dfe-683a6d26f2c5" (UID: "1677883f-bae2-4b6e-9dfe-683a6d26f2c5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:11:24.289166 master-0 kubenswrapper[7926]: I0216 21:11:24.289066 7926 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1677883f-bae2-4b6e-9dfe-683a6d26f2c5-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 21:11:24.289252 master-0 kubenswrapper[7926]: I0216 21:11:24.289171 7926 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1677883f-bae2-4b6e-9dfe-683a6d26f2c5-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:11:24.292809 master-0 kubenswrapper[7926]: I0216 21:11:24.292770 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1677883f-bae2-4b6e-9dfe-683a6d26f2c5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1677883f-bae2-4b6e-9dfe-683a6d26f2c5" (UID: "1677883f-bae2-4b6e-9dfe-683a6d26f2c5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:11:24.391217 master-0 kubenswrapper[7926]: I0216 21:11:24.391105 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1677883f-bae2-4b6e-9dfe-683a6d26f2c5-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 21:11:24.921592 master-0 kubenswrapper[7926]: I0216 21:11:24.921505 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"1677883f-bae2-4b6e-9dfe-683a6d26f2c5","Type":"ContainerDied","Data":"7f9adda37238ede86f88cbac2c999b2aa463809256c6a93ac9e769608706a215"} Feb 16 21:11:24.921592 master-0 kubenswrapper[7926]: I0216 21:11:24.921579 7926 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f9adda37238ede86f88cbac2c999b2aa463809256c6a93ac9e769608706a215" Feb 16 21:11:24.922246 master-0 kubenswrapper[7926]: I0216 21:11:24.922201 7926 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 16 21:11:25.159157 master-0 kubenswrapper[7926]: E0216 21:11:25.158975 7926 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" Feb 16 21:11:25.929901 master-0 kubenswrapper[7926]: I0216 21:11:25.929827 7926 generic.go:334] "Generic (PLEG): container finished" podID="9460ca0802075a8a6a10d7b3e6052c4d" containerID="a4951420ea2a6ae5237e8e58e639f3add1c70cf81012c329517f161ec6dde67e" exitCode=1 Feb 16 21:11:25.929901 master-0 kubenswrapper[7926]: I0216 21:11:25.929892 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerDied","Data":"a4951420ea2a6ae5237e8e58e639f3add1c70cf81012c329517f161ec6dde67e"} Feb 16 21:11:25.930564 master-0 kubenswrapper[7926]: I0216 21:11:25.929936 7926 scope.go:117] "RemoveContainer" containerID="f06b93dc1f7853f1547eea454f40e687d56a498fbbe7a281e785547401b0538b" Feb 16 21:11:25.930615 master-0 kubenswrapper[7926]: I0216 21:11:25.930598 7926 scope.go:117] "RemoveContainer" containerID="a4951420ea2a6ae5237e8e58e639f3add1c70cf81012c329517f161ec6dde67e" Feb 16 21:11:25.980422 master-0 kubenswrapper[7926]: I0216 21:11:25.980378 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:11:25.981015 master-0 kubenswrapper[7926]: I0216 21:11:25.980983 7926 scope.go:117] "RemoveContainer" containerID="b62e91fd80c5fe5b3e86f231592d3a6b2b476717e7f1ec56b415d7521e1bb557" Feb 16 21:11:25.981241 master-0 kubenswrapper[7926]: E0216 21:11:25.981210 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed 
container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:11:26.711002 master-0 kubenswrapper[7926]: I0216 21:11:26.710928 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:11:26.937182 master-0 kubenswrapper[7926]: I0216 21:11:26.937060 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerStarted","Data":"16f68b9d2d936745ee39377c29765fd45b722575ceb5d39a9c83e458b48f4547"} Feb 16 21:11:26.937684 master-0 kubenswrapper[7926]: I0216 21:11:26.937422 7926 scope.go:117] "RemoveContainer" containerID="b62e91fd80c5fe5b3e86f231592d3a6b2b476717e7f1ec56b415d7521e1bb557" Feb 16 21:11:26.937684 master-0 kubenswrapper[7926]: E0216 21:11:26.937631 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:11:26.988584 master-0 kubenswrapper[7926]: I0216 21:11:26.988447 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:11:27.949031 master-0 kubenswrapper[7926]: I0216 21:11:27.948965 7926 scope.go:117] "RemoveContainer" containerID="b62e91fd80c5fe5b3e86f231592d3a6b2b476717e7f1ec56b415d7521e1bb557" Feb 16 21:11:27.951039 master-0 kubenswrapper[7926]: E0216 21:11:27.950485 7926 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:11:32.983137 master-0 kubenswrapper[7926]: I0216 21:11:32.983006 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_7fc3abc9-3012-43bd-af84-fc65baf82801/installer/0.log" Feb 16 21:11:32.983137 master-0 kubenswrapper[7926]: I0216 21:11:32.983088 7926 generic.go:334] "Generic (PLEG): container finished" podID="7fc3abc9-3012-43bd-af84-fc65baf82801" containerID="7705ab1783cfe260a257da3d99d4c43b8aa6602286bbd8b5854c2a525ae4f204" exitCode=1 Feb 16 21:11:32.983137 master-0 kubenswrapper[7926]: I0216 21:11:32.983141 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"7fc3abc9-3012-43bd-af84-fc65baf82801","Type":"ContainerDied","Data":"7705ab1783cfe260a257da3d99d4c43b8aa6602286bbd8b5854c2a525ae4f204"} Feb 16 21:11:34.283889 master-0 kubenswrapper[7926]: I0216 21:11:34.283807 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_7fc3abc9-3012-43bd-af84-fc65baf82801/installer/0.log" Feb 16 21:11:34.283889 master-0 kubenswrapper[7926]: I0216 21:11:34.283882 7926 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 21:11:34.400078 master-0 kubenswrapper[7926]: I0216 21:11:34.399984 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7fc3abc9-3012-43bd-af84-fc65baf82801-var-lock\") pod \"7fc3abc9-3012-43bd-af84-fc65baf82801\" (UID: \"7fc3abc9-3012-43bd-af84-fc65baf82801\") " Feb 16 21:11:34.400078 master-0 kubenswrapper[7926]: I0216 21:11:34.400078 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7fc3abc9-3012-43bd-af84-fc65baf82801-kube-api-access\") pod \"7fc3abc9-3012-43bd-af84-fc65baf82801\" (UID: \"7fc3abc9-3012-43bd-af84-fc65baf82801\") " Feb 16 21:11:34.400438 master-0 kubenswrapper[7926]: I0216 21:11:34.400111 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fc3abc9-3012-43bd-af84-fc65baf82801-var-lock" (OuterVolumeSpecName: "var-lock") pod "7fc3abc9-3012-43bd-af84-fc65baf82801" (UID: "7fc3abc9-3012-43bd-af84-fc65baf82801"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:11:34.400438 master-0 kubenswrapper[7926]: I0216 21:11:34.400122 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7fc3abc9-3012-43bd-af84-fc65baf82801-kubelet-dir\") pod \"7fc3abc9-3012-43bd-af84-fc65baf82801\" (UID: \"7fc3abc9-3012-43bd-af84-fc65baf82801\") " Feb 16 21:11:34.400438 master-0 kubenswrapper[7926]: I0216 21:11:34.400328 7926 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7fc3abc9-3012-43bd-af84-fc65baf82801-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 21:11:34.400438 master-0 kubenswrapper[7926]: I0216 21:11:34.400383 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fc3abc9-3012-43bd-af84-fc65baf82801-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7fc3abc9-3012-43bd-af84-fc65baf82801" (UID: "7fc3abc9-3012-43bd-af84-fc65baf82801"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:11:34.402786 master-0 kubenswrapper[7926]: I0216 21:11:34.402704 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fc3abc9-3012-43bd-af84-fc65baf82801-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7fc3abc9-3012-43bd-af84-fc65baf82801" (UID: "7fc3abc9-3012-43bd-af84-fc65baf82801"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:11:34.502061 master-0 kubenswrapper[7926]: I0216 21:11:34.501895 7926 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7fc3abc9-3012-43bd-af84-fc65baf82801-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:11:34.502061 master-0 kubenswrapper[7926]: I0216 21:11:34.501963 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7fc3abc9-3012-43bd-af84-fc65baf82801-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 21:11:34.997202 master-0 kubenswrapper[7926]: I0216 21:11:34.997145 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_7fc3abc9-3012-43bd-af84-fc65baf82801/installer/0.log" Feb 16 21:11:34.997417 master-0 kubenswrapper[7926]: I0216 21:11:34.997237 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"7fc3abc9-3012-43bd-af84-fc65baf82801","Type":"ContainerDied","Data":"e18212da3ba9255cc13862af9e868f85f8caf8c7478800353ac7a39fbc390fa8"} Feb 16 21:11:34.997417 master-0 kubenswrapper[7926]: I0216 21:11:34.997283 7926 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e18212da3ba9255cc13862af9e868f85f8caf8c7478800353ac7a39fbc390fa8" Feb 16 21:11:34.997417 master-0 kubenswrapper[7926]: I0216 21:11:34.997343 7926 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 21:11:35.159535 master-0 kubenswrapper[7926]: E0216 21:11:35.159397 7926 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:11:38.738458 master-0 kubenswrapper[7926]: I0216 21:11:38.738334 7926 scope.go:117] "RemoveContainer" containerID="b62e91fd80c5fe5b3e86f231592d3a6b2b476717e7f1ec56b415d7521e1bb557" Feb 16 21:11:38.738993 master-0 kubenswrapper[7926]: E0216 21:11:38.738625 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:11:38.872574 master-0 kubenswrapper[7926]: I0216 21:11:38.872497 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-rev/0.log" Feb 16 21:11:38.874018 master-0 kubenswrapper[7926]: I0216 21:11:38.873263 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-metrics/0.log" Feb 16 21:11:38.874018 master-0 kubenswrapper[7926]: I0216 21:11:38.873750 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd/0.log" Feb 16 21:11:38.874093 master-0 kubenswrapper[7926]: I0216 21:11:38.874067 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcdctl/0.log" Feb 16 21:11:38.874930 master-0 
kubenswrapper[7926]: I0216 21:11:38.874902 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 16 21:11:38.999745 master-0 kubenswrapper[7926]: I0216 21:11:38.999619 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-data-dir\") pod \"401699cb53e7098157e808a83125b0e4\" (UID: \"401699cb53e7098157e808a83125b0e4\") " Feb 16 21:11:38.999967 master-0 kubenswrapper[7926]: I0216 21:11:38.999823 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-cert-dir\") pod \"401699cb53e7098157e808a83125b0e4\" (UID: \"401699cb53e7098157e808a83125b0e4\") " Feb 16 21:11:38.999967 master-0 kubenswrapper[7926]: I0216 21:11:38.999860 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-static-pod-dir\") pod \"401699cb53e7098157e808a83125b0e4\" (UID: \"401699cb53e7098157e808a83125b0e4\") " Feb 16 21:11:38.999967 master-0 kubenswrapper[7926]: I0216 21:11:38.999888 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-log-dir\") pod \"401699cb53e7098157e808a83125b0e4\" (UID: \"401699cb53e7098157e808a83125b0e4\") " Feb 16 21:11:38.999967 master-0 kubenswrapper[7926]: I0216 21:11:38.999911 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-usr-local-bin\") pod \"401699cb53e7098157e808a83125b0e4\" (UID: \"401699cb53e7098157e808a83125b0e4\") " Feb 16 21:11:38.999967 master-0 kubenswrapper[7926]: I0216 21:11:38.999882 7926 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-data-dir" (OuterVolumeSpecName: "data-dir") pod "401699cb53e7098157e808a83125b0e4" (UID: "401699cb53e7098157e808a83125b0e4"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:11:39.000125 master-0 kubenswrapper[7926]: I0216 21:11:38.999932 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-resource-dir\") pod \"401699cb53e7098157e808a83125b0e4\" (UID: \"401699cb53e7098157e808a83125b0e4\") " Feb 16 21:11:39.000125 master-0 kubenswrapper[7926]: I0216 21:11:38.999997 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "401699cb53e7098157e808a83125b0e4" (UID: "401699cb53e7098157e808a83125b0e4"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:11:39.000125 master-0 kubenswrapper[7926]: I0216 21:11:39.000031 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "401699cb53e7098157e808a83125b0e4" (UID: "401699cb53e7098157e808a83125b0e4"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:11:39.000125 master-0 kubenswrapper[7926]: I0216 21:11:39.000038 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-log-dir" (OuterVolumeSpecName: "log-dir") pod "401699cb53e7098157e808a83125b0e4" (UID: "401699cb53e7098157e808a83125b0e4"). InnerVolumeSpecName "log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:11:39.000125 master-0 kubenswrapper[7926]: I0216 21:11:39.000055 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "401699cb53e7098157e808a83125b0e4" (UID: "401699cb53e7098157e808a83125b0e4"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:11:39.000125 master-0 kubenswrapper[7926]: I0216 21:11:39.000074 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "401699cb53e7098157e808a83125b0e4" (UID: "401699cb53e7098157e808a83125b0e4"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:11:39.000953 master-0 kubenswrapper[7926]: I0216 21:11:39.000901 7926 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-static-pod-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:11:39.000953 master-0 kubenswrapper[7926]: I0216 21:11:39.000942 7926 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-log-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:11:39.000953 master-0 kubenswrapper[7926]: I0216 21:11:39.000958 7926 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-usr-local-bin\") on node \"master-0\" DevicePath \"\"" Feb 16 21:11:39.001099 master-0 kubenswrapper[7926]: I0216 21:11:39.000971 7926 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-resource-dir\") on node 
\"master-0\" DevicePath \"\"" Feb 16 21:11:39.001099 master-0 kubenswrapper[7926]: I0216 21:11:39.000984 7926 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-data-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:11:39.001099 master-0 kubenswrapper[7926]: I0216 21:11:39.000997 7926 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/401699cb53e7098157e808a83125b0e4-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:11:39.025468 master-0 kubenswrapper[7926]: I0216 21:11:39.025382 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-rev/0.log" Feb 16 21:11:39.027484 master-0 kubenswrapper[7926]: I0216 21:11:39.027405 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd-metrics/0.log" Feb 16 21:11:39.028769 master-0 kubenswrapper[7926]: I0216 21:11:39.028733 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcd/0.log" Feb 16 21:11:39.029943 master-0 kubenswrapper[7926]: I0216 21:11:39.029780 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_401699cb53e7098157e808a83125b0e4/etcdctl/0.log" Feb 16 21:11:39.032821 master-0 kubenswrapper[7926]: I0216 21:11:39.032100 7926 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="dc3bdb2a8bb5b307357d9efc772993cd3c2bd4dc109a42b135a10a430b790809" exitCode=137 Feb 16 21:11:39.032821 master-0 kubenswrapper[7926]: I0216 21:11:39.032169 7926 generic.go:334] "Generic (PLEG): container finished" podID="401699cb53e7098157e808a83125b0e4" containerID="9ce83587f89564053d65e499eb053c5a968bf50fe44edcf704a3f564f2872da4" exitCode=137 Feb 16 21:11:39.032821 master-0 
kubenswrapper[7926]: I0216 21:11:39.032235 7926 scope.go:117] "RemoveContainer" containerID="5c6e80046b275f770bc256074b43bbe1b3c4f6774535b0d65b124406c5160f0a" Feb 16 21:11:39.032821 master-0 kubenswrapper[7926]: I0216 21:11:39.032267 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 16 21:11:39.060345 master-0 kubenswrapper[7926]: I0216 21:11:39.060273 7926 scope.go:117] "RemoveContainer" containerID="6ad2010c95be4c9f2fa28ed52b05973b2b48bc9db8a6e7134941e0ed2ebcaa21" Feb 16 21:11:39.085184 master-0 kubenswrapper[7926]: I0216 21:11:39.085131 7926 scope.go:117] "RemoveContainer" containerID="23d9477d22a2c28e4a6024fc5b51d1b2e8b1bea2df627714860f39a7a51c3861" Feb 16 21:11:39.112474 master-0 kubenswrapper[7926]: I0216 21:11:39.112416 7926 scope.go:117] "RemoveContainer" containerID="dc3bdb2a8bb5b307357d9efc772993cd3c2bd4dc109a42b135a10a430b790809" Feb 16 21:11:39.140765 master-0 kubenswrapper[7926]: I0216 21:11:39.140709 7926 scope.go:117] "RemoveContainer" containerID="9ce83587f89564053d65e499eb053c5a968bf50fe44edcf704a3f564f2872da4" Feb 16 21:11:39.169129 master-0 kubenswrapper[7926]: I0216 21:11:39.169054 7926 scope.go:117] "RemoveContainer" containerID="3066c42f5ef5c95f3661c05c7da3598358a0986a6a070d0d54c575cd6a3f75f0" Feb 16 21:11:39.205561 master-0 kubenswrapper[7926]: I0216 21:11:39.205495 7926 scope.go:117] "RemoveContainer" containerID="2c898903534a5f988f1749dcd6c1e5b9207da73639c9cd5e05f502774c7b05c3" Feb 16 21:11:39.229781 master-0 kubenswrapper[7926]: I0216 21:11:39.229706 7926 scope.go:117] "RemoveContainer" containerID="1f09bc4164b16ef8a6fca51ee723083d342d68f035a16887f27e064b58ed2ed8" Feb 16 21:11:39.262387 master-0 kubenswrapper[7926]: I0216 21:11:39.262312 7926 scope.go:117] "RemoveContainer" containerID="5c6e80046b275f770bc256074b43bbe1b3c4f6774535b0d65b124406c5160f0a" Feb 16 21:11:39.263040 master-0 kubenswrapper[7926]: E0216 21:11:39.262962 7926 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"5c6e80046b275f770bc256074b43bbe1b3c4f6774535b0d65b124406c5160f0a\": container with ID starting with 5c6e80046b275f770bc256074b43bbe1b3c4f6774535b0d65b124406c5160f0a not found: ID does not exist" containerID="5c6e80046b275f770bc256074b43bbe1b3c4f6774535b0d65b124406c5160f0a" Feb 16 21:11:39.263173 master-0 kubenswrapper[7926]: I0216 21:11:39.263034 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c6e80046b275f770bc256074b43bbe1b3c4f6774535b0d65b124406c5160f0a"} err="failed to get container status \"5c6e80046b275f770bc256074b43bbe1b3c4f6774535b0d65b124406c5160f0a\": rpc error: code = NotFound desc = could not find container \"5c6e80046b275f770bc256074b43bbe1b3c4f6774535b0d65b124406c5160f0a\": container with ID starting with 5c6e80046b275f770bc256074b43bbe1b3c4f6774535b0d65b124406c5160f0a not found: ID does not exist" Feb 16 21:11:39.263173 master-0 kubenswrapper[7926]: I0216 21:11:39.263072 7926 scope.go:117] "RemoveContainer" containerID="6ad2010c95be4c9f2fa28ed52b05973b2b48bc9db8a6e7134941e0ed2ebcaa21" Feb 16 21:11:39.264090 master-0 kubenswrapper[7926]: E0216 21:11:39.264036 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ad2010c95be4c9f2fa28ed52b05973b2b48bc9db8a6e7134941e0ed2ebcaa21\": container with ID starting with 6ad2010c95be4c9f2fa28ed52b05973b2b48bc9db8a6e7134941e0ed2ebcaa21 not found: ID does not exist" containerID="6ad2010c95be4c9f2fa28ed52b05973b2b48bc9db8a6e7134941e0ed2ebcaa21" Feb 16 21:11:39.264090 master-0 kubenswrapper[7926]: I0216 21:11:39.264072 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ad2010c95be4c9f2fa28ed52b05973b2b48bc9db8a6e7134941e0ed2ebcaa21"} err="failed to get container status \"6ad2010c95be4c9f2fa28ed52b05973b2b48bc9db8a6e7134941e0ed2ebcaa21\": rpc error: code = 
NotFound desc = could not find container \"6ad2010c95be4c9f2fa28ed52b05973b2b48bc9db8a6e7134941e0ed2ebcaa21\": container with ID starting with 6ad2010c95be4c9f2fa28ed52b05973b2b48bc9db8a6e7134941e0ed2ebcaa21 not found: ID does not exist" Feb 16 21:11:39.264090 master-0 kubenswrapper[7926]: I0216 21:11:39.264095 7926 scope.go:117] "RemoveContainer" containerID="23d9477d22a2c28e4a6024fc5b51d1b2e8b1bea2df627714860f39a7a51c3861" Feb 16 21:11:39.264595 master-0 kubenswrapper[7926]: E0216 21:11:39.264541 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23d9477d22a2c28e4a6024fc5b51d1b2e8b1bea2df627714860f39a7a51c3861\": container with ID starting with 23d9477d22a2c28e4a6024fc5b51d1b2e8b1bea2df627714860f39a7a51c3861 not found: ID does not exist" containerID="23d9477d22a2c28e4a6024fc5b51d1b2e8b1bea2df627714860f39a7a51c3861" Feb 16 21:11:39.264712 master-0 kubenswrapper[7926]: I0216 21:11:39.264586 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23d9477d22a2c28e4a6024fc5b51d1b2e8b1bea2df627714860f39a7a51c3861"} err="failed to get container status \"23d9477d22a2c28e4a6024fc5b51d1b2e8b1bea2df627714860f39a7a51c3861\": rpc error: code = NotFound desc = could not find container \"23d9477d22a2c28e4a6024fc5b51d1b2e8b1bea2df627714860f39a7a51c3861\": container with ID starting with 23d9477d22a2c28e4a6024fc5b51d1b2e8b1bea2df627714860f39a7a51c3861 not found: ID does not exist" Feb 16 21:11:39.264712 master-0 kubenswrapper[7926]: I0216 21:11:39.264614 7926 scope.go:117] "RemoveContainer" containerID="dc3bdb2a8bb5b307357d9efc772993cd3c2bd4dc109a42b135a10a430b790809" Feb 16 21:11:39.265056 master-0 kubenswrapper[7926]: E0216 21:11:39.264995 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc3bdb2a8bb5b307357d9efc772993cd3c2bd4dc109a42b135a10a430b790809\": container with ID starting 
with dc3bdb2a8bb5b307357d9efc772993cd3c2bd4dc109a42b135a10a430b790809 not found: ID does not exist" containerID="dc3bdb2a8bb5b307357d9efc772993cd3c2bd4dc109a42b135a10a430b790809" Feb 16 21:11:39.265056 master-0 kubenswrapper[7926]: I0216 21:11:39.265031 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc3bdb2a8bb5b307357d9efc772993cd3c2bd4dc109a42b135a10a430b790809"} err="failed to get container status \"dc3bdb2a8bb5b307357d9efc772993cd3c2bd4dc109a42b135a10a430b790809\": rpc error: code = NotFound desc = could not find container \"dc3bdb2a8bb5b307357d9efc772993cd3c2bd4dc109a42b135a10a430b790809\": container with ID starting with dc3bdb2a8bb5b307357d9efc772993cd3c2bd4dc109a42b135a10a430b790809 not found: ID does not exist" Feb 16 21:11:39.265056 master-0 kubenswrapper[7926]: I0216 21:11:39.265045 7926 scope.go:117] "RemoveContainer" containerID="9ce83587f89564053d65e499eb053c5a968bf50fe44edcf704a3f564f2872da4" Feb 16 21:11:39.265645 master-0 kubenswrapper[7926]: E0216 21:11:39.265551 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ce83587f89564053d65e499eb053c5a968bf50fe44edcf704a3f564f2872da4\": container with ID starting with 9ce83587f89564053d65e499eb053c5a968bf50fe44edcf704a3f564f2872da4 not found: ID does not exist" containerID="9ce83587f89564053d65e499eb053c5a968bf50fe44edcf704a3f564f2872da4" Feb 16 21:11:39.265772 master-0 kubenswrapper[7926]: I0216 21:11:39.265693 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ce83587f89564053d65e499eb053c5a968bf50fe44edcf704a3f564f2872da4"} err="failed to get container status \"9ce83587f89564053d65e499eb053c5a968bf50fe44edcf704a3f564f2872da4\": rpc error: code = NotFound desc = could not find container \"9ce83587f89564053d65e499eb053c5a968bf50fe44edcf704a3f564f2872da4\": container with ID starting with 
9ce83587f89564053d65e499eb053c5a968bf50fe44edcf704a3f564f2872da4 not found: ID does not exist" Feb 16 21:11:39.265772 master-0 kubenswrapper[7926]: I0216 21:11:39.265757 7926 scope.go:117] "RemoveContainer" containerID="3066c42f5ef5c95f3661c05c7da3598358a0986a6a070d0d54c575cd6a3f75f0" Feb 16 21:11:39.266448 master-0 kubenswrapper[7926]: E0216 21:11:39.266348 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3066c42f5ef5c95f3661c05c7da3598358a0986a6a070d0d54c575cd6a3f75f0\": container with ID starting with 3066c42f5ef5c95f3661c05c7da3598358a0986a6a070d0d54c575cd6a3f75f0 not found: ID does not exist" containerID="3066c42f5ef5c95f3661c05c7da3598358a0986a6a070d0d54c575cd6a3f75f0" Feb 16 21:11:39.266448 master-0 kubenswrapper[7926]: I0216 21:11:39.266381 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3066c42f5ef5c95f3661c05c7da3598358a0986a6a070d0d54c575cd6a3f75f0"} err="failed to get container status \"3066c42f5ef5c95f3661c05c7da3598358a0986a6a070d0d54c575cd6a3f75f0\": rpc error: code = NotFound desc = could not find container \"3066c42f5ef5c95f3661c05c7da3598358a0986a6a070d0d54c575cd6a3f75f0\": container with ID starting with 3066c42f5ef5c95f3661c05c7da3598358a0986a6a070d0d54c575cd6a3f75f0 not found: ID does not exist" Feb 16 21:11:39.266448 master-0 kubenswrapper[7926]: I0216 21:11:39.266398 7926 scope.go:117] "RemoveContainer" containerID="2c898903534a5f988f1749dcd6c1e5b9207da73639c9cd5e05f502774c7b05c3" Feb 16 21:11:39.267301 master-0 kubenswrapper[7926]: E0216 21:11:39.267243 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c898903534a5f988f1749dcd6c1e5b9207da73639c9cd5e05f502774c7b05c3\": container with ID starting with 2c898903534a5f988f1749dcd6c1e5b9207da73639c9cd5e05f502774c7b05c3 not found: ID does not exist" 
containerID="2c898903534a5f988f1749dcd6c1e5b9207da73639c9cd5e05f502774c7b05c3" Feb 16 21:11:39.267301 master-0 kubenswrapper[7926]: I0216 21:11:39.267288 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c898903534a5f988f1749dcd6c1e5b9207da73639c9cd5e05f502774c7b05c3"} err="failed to get container status \"2c898903534a5f988f1749dcd6c1e5b9207da73639c9cd5e05f502774c7b05c3\": rpc error: code = NotFound desc = could not find container \"2c898903534a5f988f1749dcd6c1e5b9207da73639c9cd5e05f502774c7b05c3\": container with ID starting with 2c898903534a5f988f1749dcd6c1e5b9207da73639c9cd5e05f502774c7b05c3 not found: ID does not exist" Feb 16 21:11:39.267466 master-0 kubenswrapper[7926]: I0216 21:11:39.267309 7926 scope.go:117] "RemoveContainer" containerID="1f09bc4164b16ef8a6fca51ee723083d342d68f035a16887f27e064b58ed2ed8" Feb 16 21:11:39.267807 master-0 kubenswrapper[7926]: E0216 21:11:39.267746 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f09bc4164b16ef8a6fca51ee723083d342d68f035a16887f27e064b58ed2ed8\": container with ID starting with 1f09bc4164b16ef8a6fca51ee723083d342d68f035a16887f27e064b58ed2ed8 not found: ID does not exist" containerID="1f09bc4164b16ef8a6fca51ee723083d342d68f035a16887f27e064b58ed2ed8" Feb 16 21:11:39.267906 master-0 kubenswrapper[7926]: I0216 21:11:39.267804 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f09bc4164b16ef8a6fca51ee723083d342d68f035a16887f27e064b58ed2ed8"} err="failed to get container status \"1f09bc4164b16ef8a6fca51ee723083d342d68f035a16887f27e064b58ed2ed8\": rpc error: code = NotFound desc = could not find container \"1f09bc4164b16ef8a6fca51ee723083d342d68f035a16887f27e064b58ed2ed8\": container with ID starting with 1f09bc4164b16ef8a6fca51ee723083d342d68f035a16887f27e064b58ed2ed8 not found: ID does not exist" Feb 16 21:11:39.267906 master-0 
kubenswrapper[7926]: I0216 21:11:39.267840 7926 scope.go:117] "RemoveContainer" containerID="5c6e80046b275f770bc256074b43bbe1b3c4f6774535b0d65b124406c5160f0a" Feb 16 21:11:39.268343 master-0 kubenswrapper[7926]: I0216 21:11:39.268286 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c6e80046b275f770bc256074b43bbe1b3c4f6774535b0d65b124406c5160f0a"} err="failed to get container status \"5c6e80046b275f770bc256074b43bbe1b3c4f6774535b0d65b124406c5160f0a\": rpc error: code = NotFound desc = could not find container \"5c6e80046b275f770bc256074b43bbe1b3c4f6774535b0d65b124406c5160f0a\": container with ID starting with 5c6e80046b275f770bc256074b43bbe1b3c4f6774535b0d65b124406c5160f0a not found: ID does not exist" Feb 16 21:11:39.268343 master-0 kubenswrapper[7926]: I0216 21:11:39.268322 7926 scope.go:117] "RemoveContainer" containerID="6ad2010c95be4c9f2fa28ed52b05973b2b48bc9db8a6e7134941e0ed2ebcaa21" Feb 16 21:11:39.268756 master-0 kubenswrapper[7926]: I0216 21:11:39.268701 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ad2010c95be4c9f2fa28ed52b05973b2b48bc9db8a6e7134941e0ed2ebcaa21"} err="failed to get container status \"6ad2010c95be4c9f2fa28ed52b05973b2b48bc9db8a6e7134941e0ed2ebcaa21\": rpc error: code = NotFound desc = could not find container \"6ad2010c95be4c9f2fa28ed52b05973b2b48bc9db8a6e7134941e0ed2ebcaa21\": container with ID starting with 6ad2010c95be4c9f2fa28ed52b05973b2b48bc9db8a6e7134941e0ed2ebcaa21 not found: ID does not exist" Feb 16 21:11:39.268756 master-0 kubenswrapper[7926]: I0216 21:11:39.268743 7926 scope.go:117] "RemoveContainer" containerID="23d9477d22a2c28e4a6024fc5b51d1b2e8b1bea2df627714860f39a7a51c3861" Feb 16 21:11:39.269446 master-0 kubenswrapper[7926]: I0216 21:11:39.269347 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23d9477d22a2c28e4a6024fc5b51d1b2e8b1bea2df627714860f39a7a51c3861"} 
err="failed to get container status \"23d9477d22a2c28e4a6024fc5b51d1b2e8b1bea2df627714860f39a7a51c3861\": rpc error: code = NotFound desc = could not find container \"23d9477d22a2c28e4a6024fc5b51d1b2e8b1bea2df627714860f39a7a51c3861\": container with ID starting with 23d9477d22a2c28e4a6024fc5b51d1b2e8b1bea2df627714860f39a7a51c3861 not found: ID does not exist" Feb 16 21:11:39.269533 master-0 kubenswrapper[7926]: I0216 21:11:39.269451 7926 scope.go:117] "RemoveContainer" containerID="dc3bdb2a8bb5b307357d9efc772993cd3c2bd4dc109a42b135a10a430b790809" Feb 16 21:11:39.269989 master-0 kubenswrapper[7926]: I0216 21:11:39.269932 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc3bdb2a8bb5b307357d9efc772993cd3c2bd4dc109a42b135a10a430b790809"} err="failed to get container status \"dc3bdb2a8bb5b307357d9efc772993cd3c2bd4dc109a42b135a10a430b790809\": rpc error: code = NotFound desc = could not find container \"dc3bdb2a8bb5b307357d9efc772993cd3c2bd4dc109a42b135a10a430b790809\": container with ID starting with dc3bdb2a8bb5b307357d9efc772993cd3c2bd4dc109a42b135a10a430b790809 not found: ID does not exist" Feb 16 21:11:39.269989 master-0 kubenswrapper[7926]: I0216 21:11:39.269982 7926 scope.go:117] "RemoveContainer" containerID="9ce83587f89564053d65e499eb053c5a968bf50fe44edcf704a3f564f2872da4" Feb 16 21:11:39.270565 master-0 kubenswrapper[7926]: I0216 21:11:39.270511 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ce83587f89564053d65e499eb053c5a968bf50fe44edcf704a3f564f2872da4"} err="failed to get container status \"9ce83587f89564053d65e499eb053c5a968bf50fe44edcf704a3f564f2872da4\": rpc error: code = NotFound desc = could not find container \"9ce83587f89564053d65e499eb053c5a968bf50fe44edcf704a3f564f2872da4\": container with ID starting with 9ce83587f89564053d65e499eb053c5a968bf50fe44edcf704a3f564f2872da4 not found: ID does not exist" Feb 16 21:11:39.270565 master-0 
kubenswrapper[7926]: I0216 21:11:39.270548 7926 scope.go:117] "RemoveContainer" containerID="3066c42f5ef5c95f3661c05c7da3598358a0986a6a070d0d54c575cd6a3f75f0" Feb 16 21:11:39.271153 master-0 kubenswrapper[7926]: I0216 21:11:39.271079 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3066c42f5ef5c95f3661c05c7da3598358a0986a6a070d0d54c575cd6a3f75f0"} err="failed to get container status \"3066c42f5ef5c95f3661c05c7da3598358a0986a6a070d0d54c575cd6a3f75f0\": rpc error: code = NotFound desc = could not find container \"3066c42f5ef5c95f3661c05c7da3598358a0986a6a070d0d54c575cd6a3f75f0\": container with ID starting with 3066c42f5ef5c95f3661c05c7da3598358a0986a6a070d0d54c575cd6a3f75f0 not found: ID does not exist" Feb 16 21:11:39.271153 master-0 kubenswrapper[7926]: I0216 21:11:39.271138 7926 scope.go:117] "RemoveContainer" containerID="2c898903534a5f988f1749dcd6c1e5b9207da73639c9cd5e05f502774c7b05c3" Feb 16 21:11:39.271714 master-0 kubenswrapper[7926]: I0216 21:11:39.271625 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c898903534a5f988f1749dcd6c1e5b9207da73639c9cd5e05f502774c7b05c3"} err="failed to get container status \"2c898903534a5f988f1749dcd6c1e5b9207da73639c9cd5e05f502774c7b05c3\": rpc error: code = NotFound desc = could not find container \"2c898903534a5f988f1749dcd6c1e5b9207da73639c9cd5e05f502774c7b05c3\": container with ID starting with 2c898903534a5f988f1749dcd6c1e5b9207da73639c9cd5e05f502774c7b05c3 not found: ID does not exist" Feb 16 21:11:39.271806 master-0 kubenswrapper[7926]: I0216 21:11:39.271707 7926 scope.go:117] "RemoveContainer" containerID="1f09bc4164b16ef8a6fca51ee723083d342d68f035a16887f27e064b58ed2ed8" Feb 16 21:11:39.272161 master-0 kubenswrapper[7926]: I0216 21:11:39.272112 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f09bc4164b16ef8a6fca51ee723083d342d68f035a16887f27e064b58ed2ed8"} 
err="failed to get container status \"1f09bc4164b16ef8a6fca51ee723083d342d68f035a16887f27e064b58ed2ed8\": rpc error: code = NotFound desc = could not find container \"1f09bc4164b16ef8a6fca51ee723083d342d68f035a16887f27e064b58ed2ed8\": container with ID starting with 1f09bc4164b16ef8a6fca51ee723083d342d68f035a16887f27e064b58ed2ed8 not found: ID does not exist" Feb 16 21:11:40.753998 master-0 kubenswrapper[7926]: I0216 21:11:40.753874 7926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="401699cb53e7098157e808a83125b0e4" path="/var/lib/kubelet/pods/401699cb53e7098157e808a83125b0e4/volumes" Feb 16 21:11:45.160903 master-0 kubenswrapper[7926]: E0216 21:11:45.160798 7926 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:11:47.738091 master-0 kubenswrapper[7926]: I0216 21:11:47.737978 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 16 21:11:47.759988 master-0 kubenswrapper[7926]: I0216 21:11:47.759887 7926 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="dc5b0952-527e-40f6-84fa-362aa0d5b6f8" Feb 16 21:11:47.759988 master-0 kubenswrapper[7926]: I0216 21:11:47.759973 7926 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="dc5b0952-527e-40f6-84fa-362aa0d5b6f8" Feb 16 21:11:48.646677 master-0 kubenswrapper[7926]: E0216 21:11:48.646375 7926 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{ingress-canary-l44qd.1894d65445f2c0eb openshift-ingress-canary 11370 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress-canary,Name:ingress-canary-l44qd,UID:0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b,APIVersion:v1,ResourceVersion:11221,FieldPath:,},Reason:FailedMount,Message:MountVolume.SetUp failed for volume \"cert\" : secret \"canary-serving-cert\" not found,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 21:10:10 +0000 UTC,LastTimestamp:2026-02-16 21:11:14.642765166 +0000 UTC m=+846.277665486,Count:8,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 21:11:51.738575 master-0 kubenswrapper[7926]: I0216 21:11:51.738465 7926 scope.go:117] "RemoveContainer" containerID="b62e91fd80c5fe5b3e86f231592d3a6b2b476717e7f1ec56b415d7521e1bb557" Feb 16 21:11:51.740103 master-0 kubenswrapper[7926]: E0216 21:11:51.738923 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager 
pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:11:55.161832 master-0 kubenswrapper[7926]: E0216 21:11:55.161712 7926 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:11:58.189984 master-0 kubenswrapper[7926]: I0216 21:11:58.189924 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-tpj6f_88f19cea-60ed-4977-a906-75deec51fc3d/approver/1.log" Feb 16 21:11:58.191137 master-0 kubenswrapper[7926]: I0216 21:11:58.190758 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-tpj6f_88f19cea-60ed-4977-a906-75deec51fc3d/approver/0.log" Feb 16 21:11:58.191307 master-0 kubenswrapper[7926]: I0216 21:11:58.191266 7926 generic.go:334] "Generic (PLEG): container finished" podID="88f19cea-60ed-4977-a906-75deec51fc3d" containerID="035e7d01b329ab00b5fb0dd3b6a5b55ee6bd504dee86517456bdcc1b06cd6e19" exitCode=1 Feb 16 21:11:58.191382 master-0 kubenswrapper[7926]: I0216 21:11:58.191313 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-tpj6f" event={"ID":"88f19cea-60ed-4977-a906-75deec51fc3d","Type":"ContainerDied","Data":"035e7d01b329ab00b5fb0dd3b6a5b55ee6bd504dee86517456bdcc1b06cd6e19"} Feb 16 21:11:58.191382 master-0 kubenswrapper[7926]: I0216 21:11:58.191352 7926 scope.go:117] "RemoveContainer" containerID="d0734d0596c43a54e8c5763783b157c38da058f6ee7d80add1702898fd0efe5d" Feb 16 21:11:58.192395 master-0 kubenswrapper[7926]: I0216 21:11:58.192340 7926 scope.go:117] "RemoveContainer" 
containerID="035e7d01b329ab00b5fb0dd3b6a5b55ee6bd504dee86517456bdcc1b06cd6e19" Feb 16 21:11:59.201525 master-0 kubenswrapper[7926]: I0216 21:11:59.201495 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-tpj6f_88f19cea-60ed-4977-a906-75deec51fc3d/approver/1.log" Feb 16 21:11:59.202693 master-0 kubenswrapper[7926]: I0216 21:11:59.202662 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-tpj6f" event={"ID":"88f19cea-60ed-4977-a906-75deec51fc3d","Type":"ContainerStarted","Data":"d9983e5644ba5577e1eefab6fb7488cd7e2a9580d6b33554cb3e17eb89d03fd5"} Feb 16 21:12:01.216221 master-0 kubenswrapper[7926]: I0216 21:12:01.216148 7926 generic.go:334] "Generic (PLEG): container finished" podID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerID="922b3b9a2ab72ca8bb93946974e3710fc89f41db642b5f99391c37114b12712f" exitCode=0 Feb 16 21:12:01.216221 master-0 kubenswrapper[7926]: I0216 21:12:01.216208 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" event={"ID":"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee","Type":"ContainerDied","Data":"922b3b9a2ab72ca8bb93946974e3710fc89f41db642b5f99391c37114b12712f"} Feb 16 21:12:01.217020 master-0 kubenswrapper[7926]: I0216 21:12:01.216249 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" event={"ID":"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee","Type":"ContainerStarted","Data":"f1ed58b2ccf00425ebf16fa5a6dffc055e3422108b96a5f2732ff92f9613603a"} Feb 16 21:12:01.217020 master-0 kubenswrapper[7926]: I0216 21:12:01.216269 7926 scope.go:117] "RemoveContainer" containerID="822e5a1c9a45bb991d7b382a67465c6dbc014dbe9cfde42d7e3116d883653d76" Feb 16 21:12:02.182565 master-0 kubenswrapper[7926]: I0216 21:12:02.182466 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:12:02.186058 master-0 kubenswrapper[7926]: I0216 21:12:02.185978 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:02.186058 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:02.186058 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:02.186058 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:02.186388 master-0 kubenswrapper[7926]: I0216 21:12:02.186074 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:03.185800 master-0 kubenswrapper[7926]: I0216 21:12:03.185731 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:03.185800 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:03.185800 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:03.185800 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:03.185800 master-0 kubenswrapper[7926]: I0216 21:12:03.185790 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:04.185494 master-0 kubenswrapper[7926]: I0216 21:12:04.185391 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:04.185494 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:04.185494 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:04.185494 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:04.186257 master-0 kubenswrapper[7926]: I0216 21:12:04.185505 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:05.162907 master-0 kubenswrapper[7926]: E0216 21:12:05.162795 7926 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:12:05.163421 master-0 kubenswrapper[7926]: I0216 21:12:05.163377 7926 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 16 21:12:05.182394 master-0 kubenswrapper[7926]: I0216 21:12:05.182329 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:12:05.184989 master-0 kubenswrapper[7926]: I0216 21:12:05.184916 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:05.184989 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:05.184989 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 
21:12:05.184989 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:05.185191 master-0 kubenswrapper[7926]: I0216 21:12:05.185001 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:06.185687 master-0 kubenswrapper[7926]: I0216 21:12:06.185602 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:06.185687 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:06.185687 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:06.185687 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:06.186750 master-0 kubenswrapper[7926]: I0216 21:12:06.186702 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:06.738721 master-0 kubenswrapper[7926]: I0216 21:12:06.738624 7926 scope.go:117] "RemoveContainer" containerID="b62e91fd80c5fe5b3e86f231592d3a6b2b476717e7f1ec56b415d7521e1bb557" Feb 16 21:12:06.739126 master-0 kubenswrapper[7926]: E0216 21:12:06.739084 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:12:07.185931 master-0 
kubenswrapper[7926]: I0216 21:12:07.185859 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:07.185931 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:07.185931 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:07.185931 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:07.186992 master-0 kubenswrapper[7926]: I0216 21:12:07.185940 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:08.185436 master-0 kubenswrapper[7926]: I0216 21:12:08.185374 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:08.185436 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:08.185436 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:08.185436 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:08.185821 master-0 kubenswrapper[7926]: I0216 21:12:08.185439 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:08.743912 master-0 kubenswrapper[7926]: I0216 21:12:08.743800 7926 status_manager.go:851] "Failed to get status for pod" podUID="401699cb53e7098157e808a83125b0e4" pod="openshift-etcd/etcd-master-0" err="the server was unable to return a 
response in the time allotted, but may still be processing the request (get pods etcd-master-0)" Feb 16 21:12:09.185067 master-0 kubenswrapper[7926]: I0216 21:12:09.184956 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:09.185067 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:09.185067 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:09.185067 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:09.185277 master-0 kubenswrapper[7926]: I0216 21:12:09.185132 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:10.186524 master-0 kubenswrapper[7926]: I0216 21:12:10.186425 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:10.186524 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:10.186524 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:10.186524 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:10.187725 master-0 kubenswrapper[7926]: I0216 21:12:10.186534 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:11.186398 master-0 kubenswrapper[7926]: I0216 21:12:11.186218 7926 patch_prober.go:28] interesting 
pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:11.186398 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:11.186398 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:11.186398 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:11.186398 master-0 kubenswrapper[7926]: I0216 21:12:11.186392 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:12.185550 master-0 kubenswrapper[7926]: I0216 21:12:12.185467 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:12.185550 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:12.185550 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:12.185550 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:12.185550 master-0 kubenswrapper[7926]: I0216 21:12:12.185548 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:13.185999 master-0 kubenswrapper[7926]: I0216 21:12:13.185912 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:13.185999 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:13.185999 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:13.185999 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:13.185999 master-0 kubenswrapper[7926]: I0216 21:12:13.185982 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:13.687007 master-0 kubenswrapper[7926]: E0216 21:12:13.686915 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-ingress-canary/ingress-canary-l44qd" podUID="0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b"
Feb 16 21:12:14.185289 master-0 kubenswrapper[7926]: I0216 21:12:14.185215 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:14.185289 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:14.185289 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:14.185289 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:14.185544 master-0 kubenswrapper[7926]: I0216 21:12:14.185319 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:14.311686 master-0 kubenswrapper[7926]: I0216 21:12:14.311559 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-l44qd"
Feb 16 21:12:15.164044 master-0 kubenswrapper[7926]: E0216 21:12:15.163930 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
Feb 16 21:12:15.184892 master-0 kubenswrapper[7926]: I0216 21:12:15.184817 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:15.184892 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:15.184892 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:15.184892 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:15.185261 master-0 kubenswrapper[7926]: I0216 21:12:15.184914 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:16.185751 master-0 kubenswrapper[7926]: I0216 21:12:16.185674 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:16.185751 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:16.185751 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:16.185751 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:16.186934 master-0 kubenswrapper[7926]: I0216 21:12:16.186848 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:17.185303 master-0 kubenswrapper[7926]: I0216 21:12:17.185196 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:17.185303 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:17.185303 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:17.185303 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:17.185761 master-0 kubenswrapper[7926]: I0216 21:12:17.185339 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:18.186196 master-0 kubenswrapper[7926]: I0216 21:12:18.186072 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:18.186196 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:18.186196 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:18.186196 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:18.187093 master-0 kubenswrapper[7926]: I0216 21:12:18.186213 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:18.742291 master-0 kubenswrapper[7926]: I0216 21:12:18.742203 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert\") pod \"ingress-canary-l44qd\" (UID: \"0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b\") " pod="openshift-ingress-canary/ingress-canary-l44qd"
Feb 16 21:12:18.742921 master-0 kubenswrapper[7926]: E0216 21:12:18.742474 7926 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Feb 16 21:12:18.742921 master-0 kubenswrapper[7926]: E0216 21:12:18.742578 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert podName:0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b nodeName:}" failed. No retries permitted until 2026-02-16 21:14:20.742550768 +0000 UTC m=+1032.377451108 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert") pod "ingress-canary-l44qd" (UID: "0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b") : secret "canary-serving-cert" not found
Feb 16 21:12:19.186209 master-0 kubenswrapper[7926]: I0216 21:12:19.186131 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:19.186209 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:19.186209 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:19.186209 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:19.187063 master-0 kubenswrapper[7926]: I0216 21:12:19.186229 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:20.185165 master-0 kubenswrapper[7926]: I0216 21:12:20.185056 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:20.185165 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:20.185165 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:20.185165 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:20.185495 master-0 kubenswrapper[7926]: I0216 21:12:20.185188 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:20.738411 master-0 kubenswrapper[7926]: I0216 21:12:20.738351 7926 scope.go:117] "RemoveContainer" containerID="b62e91fd80c5fe5b3e86f231592d3a6b2b476717e7f1ec56b415d7521e1bb557"
Feb 16 21:12:20.739126 master-0 kubenswrapper[7926]: E0216 21:12:20.738583 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3"
Feb 16 21:12:21.187002 master-0 kubenswrapper[7926]: I0216 21:12:21.186898 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:21.187002 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:21.187002 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:21.187002 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:21.187521 master-0 kubenswrapper[7926]: I0216 21:12:21.187030 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:21.763905 master-0 kubenswrapper[7926]: E0216 21:12:21.763858 7926 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Feb 16 21:12:21.764732 master-0 kubenswrapper[7926]: I0216 21:12:21.764395 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Feb 16 21:12:21.785051 master-0 kubenswrapper[7926]: W0216 21:12:21.784981 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7adecad495595c43c57c30abd350e987.slice/crio-611833cac10a2c7b92f524745bb3d40c37badfe83dfcc13e97aefe053823dfb9 WatchSource:0}: Error finding container 611833cac10a2c7b92f524745bb3d40c37badfe83dfcc13e97aefe053823dfb9: Status 404 returned error can't find the container with id 611833cac10a2c7b92f524745bb3d40c37badfe83dfcc13e97aefe053823dfb9
Feb 16 21:12:22.184891 master-0 kubenswrapper[7926]: I0216 21:12:22.184829 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:22.184891 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:22.184891 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:22.184891 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:22.185174 master-0 kubenswrapper[7926]: I0216 21:12:22.184895 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:22.372745 master-0 kubenswrapper[7926]: I0216 21:12:22.372576 7926 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="c4633b0b299cd40e037bf321ae06c8806fedc4001bb393b919fc921dc3fe2902" exitCode=0
Feb 16 21:12:22.372940 master-0 kubenswrapper[7926]: I0216 21:12:22.372732 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"c4633b0b299cd40e037bf321ae06c8806fedc4001bb393b919fc921dc3fe2902"}
Feb 16 21:12:22.372940 master-0 kubenswrapper[7926]: I0216 21:12:22.372835 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"611833cac10a2c7b92f524745bb3d40c37badfe83dfcc13e97aefe053823dfb9"}
Feb 16 21:12:22.373491 master-0 kubenswrapper[7926]: I0216 21:12:22.373441 7926 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="dc5b0952-527e-40f6-84fa-362aa0d5b6f8"
Feb 16 21:12:22.373534 master-0 kubenswrapper[7926]: I0216 21:12:22.373494 7926 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="dc5b0952-527e-40f6-84fa-362aa0d5b6f8"
Feb 16 21:12:22.649469 master-0 kubenswrapper[7926]: E0216 21:12:22.649181 7926 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1894d5ea17a6f087 kube-system 9666 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 21:02:34 +0000 UTC,LastTimestamp:2026-02-16 21:11:20.882260944 +0000 UTC m=+852.517161254,Count:13,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 21:12:23.186002 master-0 kubenswrapper[7926]: I0216 21:12:23.185932 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:23.186002 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:23.186002 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:23.186002 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:23.186923 master-0 kubenswrapper[7926]: I0216 21:12:23.186848 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:24.186944 master-0 kubenswrapper[7926]: I0216 21:12:24.186845 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:24.186944 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:24.186944 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:24.186944 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:24.187720 master-0 kubenswrapper[7926]: I0216 21:12:24.186963 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:25.186348 master-0 kubenswrapper[7926]: I0216 21:12:25.186224 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:25.186348 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:25.186348 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:25.186348 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:25.186974 master-0 kubenswrapper[7926]: I0216 21:12:25.186371 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:25.365119 master-0 kubenswrapper[7926]: E0216 21:12:25.364976 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
Feb 16 21:12:26.186550 master-0 kubenswrapper[7926]: I0216 21:12:26.186324 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:26.186550 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:26.186550 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:26.186550 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:26.186550 master-0 kubenswrapper[7926]: I0216 21:12:26.186445 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:26.410238 master-0 kubenswrapper[7926]: I0216 21:12:26.410128 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/4.log"
Feb 16 21:12:26.411614 master-0 kubenswrapper[7926]: I0216 21:12:26.411552 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/3.log"
Feb 16 21:12:26.412487 master-0 kubenswrapper[7926]: I0216 21:12:26.412407 7926 generic.go:334] "Generic (PLEG): container finished" podID="cef33294-81fb-41a2-811d-2565f94514d1" containerID="4007378c35279e107179280f5b478a33e451c6d5ec64c7c97a91228d94179cd2" exitCode=1
Feb 16 21:12:26.412598 master-0 kubenswrapper[7926]: I0216 21:12:26.412477 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" event={"ID":"cef33294-81fb-41a2-811d-2565f94514d1","Type":"ContainerDied","Data":"4007378c35279e107179280f5b478a33e451c6d5ec64c7c97a91228d94179cd2"}
Feb 16 21:12:26.412841 master-0 kubenswrapper[7926]: I0216 21:12:26.412603 7926 scope.go:117] "RemoveContainer" containerID="653d95653081a7f3f8351ba7eaf8e2a8cf9f5394f19ac7bd13b4a971322691eb"
Feb 16 21:12:26.413855 master-0 kubenswrapper[7926]: I0216 21:12:26.413790 7926 scope.go:117] "RemoveContainer" containerID="4007378c35279e107179280f5b478a33e451c6d5ec64c7c97a91228d94179cd2"
Feb 16 21:12:26.415098 master-0 kubenswrapper[7926]: E0216 21:12:26.415024 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1"
Feb 16 21:12:27.185494 master-0 kubenswrapper[7926]: I0216 21:12:27.185392 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:27.185494 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:27.185494 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:27.185494 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:27.185494 master-0 kubenswrapper[7926]: I0216 21:12:27.185468 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:27.425104 master-0 kubenswrapper[7926]: I0216 21:12:27.425028 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/4.log"
Feb 16 21:12:28.185989 master-0 kubenswrapper[7926]: I0216 21:12:28.185879 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:28.185989 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:28.185989 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:28.185989 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:28.185989 master-0 kubenswrapper[7926]: I0216 21:12:28.185972 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:29.186198 master-0 kubenswrapper[7926]: I0216 21:12:29.186140 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:29.186198 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:29.186198 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:29.186198 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:29.186759 master-0 kubenswrapper[7926]: I0216 21:12:29.186238 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:30.186819 master-0 kubenswrapper[7926]: I0216 21:12:30.186732 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:30.186819 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:30.186819 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:30.186819 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:30.187429 master-0 kubenswrapper[7926]: I0216 21:12:30.186829 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:31.186847 master-0 kubenswrapper[7926]: I0216 21:12:31.186728 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:31.186847 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:31.186847 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:31.186847 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:31.187915 master-0 kubenswrapper[7926]: I0216 21:12:31.186853 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:32.185885 master-0 kubenswrapper[7926]: I0216 21:12:32.185802 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:32.185885 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:32.185885 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:32.185885 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:32.186162 master-0 kubenswrapper[7926]: I0216 21:12:32.185891 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:33.185284 master-0 kubenswrapper[7926]: I0216 21:12:33.185175 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:33.185284 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:33.185284 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:33.185284 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:33.186193 master-0 kubenswrapper[7926]: I0216 21:12:33.185313 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:34.186225 master-0 kubenswrapper[7926]: I0216 21:12:34.186058 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:34.186225 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:34.186225 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:34.186225 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:34.187268 master-0 kubenswrapper[7926]: I0216 21:12:34.186252 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:35.186296 master-0 kubenswrapper[7926]: I0216 21:12:35.186200 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:35.186296 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:35.186296 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:35.186296 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:35.187021 master-0 kubenswrapper[7926]: I0216 21:12:35.186292 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:35.739036 master-0 kubenswrapper[7926]: I0216 21:12:35.738892 7926 scope.go:117] "RemoveContainer" containerID="b62e91fd80c5fe5b3e86f231592d3a6b2b476717e7f1ec56b415d7521e1bb557"
Feb 16 21:12:35.739375 master-0 kubenswrapper[7926]: E0216 21:12:35.739344 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3"
Feb 16 21:12:35.767367 master-0 kubenswrapper[7926]: E0216 21:12:35.767211 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
Feb 16 21:12:36.186284 master-0 kubenswrapper[7926]: I0216 21:12:36.186204 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:36.186284 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:36.186284 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:36.186284 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:36.186892 master-0 kubenswrapper[7926]: I0216 21:12:36.186302 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:37.185701 master-0 kubenswrapper[7926]: I0216 21:12:37.185598 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:37.185701 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:37.185701 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:37.185701 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:37.185701 master-0 kubenswrapper[7926]: I0216 21:12:37.185710 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:38.185019 master-0 kubenswrapper[7926]: I0216 21:12:38.184944 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:38.185019 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:38.185019 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:38.185019 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:38.185019 master-0 kubenswrapper[7926]: I0216 21:12:38.184996 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:38.498255 master-0 kubenswrapper[7926]: E0216 21:12:38.498111 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:12:28Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:12:28Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:12:28Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:12:28Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 21:12:39.186335 master-0 kubenswrapper[7926]: I0216 21:12:39.185599 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:39.186335 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:39.186335 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:39.186335 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:39.186335 master-0 kubenswrapper[7926]: I0216 21:12:39.185682 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:39.506537 master-0 kubenswrapper[7926]: E0216 21:12:39.506395 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[prometheus-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" podUID="a0b7a368-1408-4fc3-ae25-4613b74e7fca"
Feb 16 21:12:39.586475 master-0 kubenswrapper[7926]: I0216 21:12:39.586387 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n"
Feb 16 21:12:40.186170 master-0 kubenswrapper[7926]: I0216 21:12:40.186074 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:12:40.186170 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:12:40.186170 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:12:40.186170 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:12:40.186170 master-0 kubenswrapper[7926]: I0216 21:12:40.186145 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:12:41.184636 master-0 kubenswrapper[7926]: I0216 21:12:41.184558 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb
16 21:12:41.184636 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:41.184636 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:41.184636 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:41.184959 master-0 kubenswrapper[7926]: I0216 21:12:41.184688 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:41.318551 master-0 kubenswrapper[7926]: I0216 21:12:41.318462 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:12:41.319349 master-0 kubenswrapper[7926]: E0216 21:12:41.318714 7926 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 16 21:12:41.319349 master-0 kubenswrapper[7926]: E0216 21:12:41.318809 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls podName:a0b7a368-1408-4fc3-ae25-4613b74e7fca nodeName:}" failed. No retries permitted until 2026-02-16 21:14:43.318775996 +0000 UTC m=+1054.953676336 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-9xc4n" (UID: "a0b7a368-1408-4fc3-ae25-4613b74e7fca") : secret "prometheus-operator-tls" not found Feb 16 21:12:41.738804 master-0 kubenswrapper[7926]: I0216 21:12:41.738617 7926 scope.go:117] "RemoveContainer" containerID="4007378c35279e107179280f5b478a33e451c6d5ec64c7c97a91228d94179cd2" Feb 16 21:12:41.739241 master-0 kubenswrapper[7926]: E0216 21:12:41.739168 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:12:42.184970 master-0 kubenswrapper[7926]: I0216 21:12:42.184882 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:42.184970 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:42.184970 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:42.184970 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:42.185323 master-0 kubenswrapper[7926]: I0216 21:12:42.184983 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:43.184710 master-0 kubenswrapper[7926]: I0216 21:12:43.184565 7926 patch_prober.go:28] interesting 
pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:43.184710 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:43.184710 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:43.184710 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:43.185634 master-0 kubenswrapper[7926]: I0216 21:12:43.184700 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:44.186113 master-0 kubenswrapper[7926]: I0216 21:12:44.186006 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:44.186113 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:44.186113 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:44.186113 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:44.186113 master-0 kubenswrapper[7926]: I0216 21:12:44.186107 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:45.185390 master-0 kubenswrapper[7926]: I0216 21:12:45.185329 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 
21:12:45.185390 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:45.185390 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:45.185390 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:45.186001 master-0 kubenswrapper[7926]: I0216 21:12:45.185956 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:46.185951 master-0 kubenswrapper[7926]: I0216 21:12:46.185850 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:46.185951 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:46.185951 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:46.185951 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:46.185951 master-0 kubenswrapper[7926]: I0216 21:12:46.185926 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:46.569280 master-0 kubenswrapper[7926]: E0216 21:12:46.569080 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Feb 16 21:12:47.185700 master-0 kubenswrapper[7926]: I0216 21:12:47.185597 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:47.185700 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:47.185700 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:47.185700 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:47.186508 master-0 kubenswrapper[7926]: I0216 21:12:47.185758 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:48.186428 master-0 kubenswrapper[7926]: I0216 21:12:48.186352 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:48.186428 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:48.186428 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:48.186428 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:48.187385 master-0 kubenswrapper[7926]: I0216 21:12:48.186432 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:48.498584 master-0 kubenswrapper[7926]: E0216 21:12:48.498483 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 21:12:49.186079 master-0 
kubenswrapper[7926]: I0216 21:12:49.185970 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:49.186079 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:49.186079 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:49.186079 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:49.186079 master-0 kubenswrapper[7926]: I0216 21:12:49.186057 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:50.186078 master-0 kubenswrapper[7926]: I0216 21:12:50.185960 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:50.186078 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:50.186078 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:50.186078 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:50.186078 master-0 kubenswrapper[7926]: I0216 21:12:50.186071 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:50.739046 master-0 kubenswrapper[7926]: I0216 21:12:50.738969 7926 scope.go:117] "RemoveContainer" containerID="b62e91fd80c5fe5b3e86f231592d3a6b2b476717e7f1ec56b415d7521e1bb557" Feb 16 21:12:50.739452 master-0 kubenswrapper[7926]: E0216 
21:12:50.739394 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:12:51.185536 master-0 kubenswrapper[7926]: I0216 21:12:51.185484 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:51.185536 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:51.185536 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:51.185536 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:51.186304 master-0 kubenswrapper[7926]: I0216 21:12:51.186245 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:52.185688 master-0 kubenswrapper[7926]: I0216 21:12:52.185081 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:52.185688 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:52.185688 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:52.185688 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:52.185688 master-0 kubenswrapper[7926]: I0216 21:12:52.185171 7926 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:53.185404 master-0 kubenswrapper[7926]: I0216 21:12:53.185338 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:53.185404 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:53.185404 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:53.185404 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:53.185758 master-0 kubenswrapper[7926]: I0216 21:12:53.185440 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:54.185008 master-0 kubenswrapper[7926]: I0216 21:12:54.184933 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:54.185008 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:54.185008 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:54.185008 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:54.185513 master-0 kubenswrapper[7926]: I0216 21:12:54.185122 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 
16 21:12:55.185880 master-0 kubenswrapper[7926]: I0216 21:12:55.185747 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:55.185880 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:55.185880 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:55.185880 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:55.185880 master-0 kubenswrapper[7926]: I0216 21:12:55.185878 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:55.738342 master-0 kubenswrapper[7926]: I0216 21:12:55.738286 7926 scope.go:117] "RemoveContainer" containerID="4007378c35279e107179280f5b478a33e451c6d5ec64c7c97a91228d94179cd2" Feb 16 21:12:55.738689 master-0 kubenswrapper[7926]: E0216 21:12:55.738505 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:12:56.185587 master-0 kubenswrapper[7926]: I0216 21:12:56.185518 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:56.185587 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 
21:12:56.185587 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:56.185587 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:56.186259 master-0 kubenswrapper[7926]: I0216 21:12:56.185608 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:56.376842 master-0 kubenswrapper[7926]: E0216 21:12:56.376766 7926 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 16 21:12:56.652260 master-0 kubenswrapper[7926]: E0216 21:12:56.652109 7926 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1894d5af6553ccfd kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:9460ca0802075a8a6a10d7b3e6052c4d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:58:22.759431421 +0000 UTC m=+74.394331751,LastTimestamp:2026-02-16 21:11:25.932216491 +0000 UTC m=+857.567116791,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 21:12:56.726587 master-0 kubenswrapper[7926]: I0216 21:12:56.726499 7926 generic.go:334] "Generic (PLEG): container finished" 
podID="7adecad495595c43c57c30abd350e987" containerID="3478db789e9371b7e1a20de102750814fbff190dbf9776351e2f462d389fbe58" exitCode=0 Feb 16 21:12:56.726587 master-0 kubenswrapper[7926]: I0216 21:12:56.726566 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"3478db789e9371b7e1a20de102750814fbff190dbf9776351e2f462d389fbe58"} Feb 16 21:12:56.726964 master-0 kubenswrapper[7926]: I0216 21:12:56.726932 7926 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="dc5b0952-527e-40f6-84fa-362aa0d5b6f8" Feb 16 21:12:56.726964 master-0 kubenswrapper[7926]: I0216 21:12:56.726956 7926 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="dc5b0952-527e-40f6-84fa-362aa0d5b6f8" Feb 16 21:12:57.184952 master-0 kubenswrapper[7926]: I0216 21:12:57.184806 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:57.184952 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:57.184952 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:57.184952 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:57.184952 master-0 kubenswrapper[7926]: I0216 21:12:57.184888 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:58.170859 master-0 kubenswrapper[7926]: E0216 21:12:58.170455 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Feb 16 21:12:58.189754 master-0 kubenswrapper[7926]: I0216 21:12:58.189663 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:58.189754 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:58.189754 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:58.189754 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:12:58.189754 master-0 kubenswrapper[7926]: I0216 21:12:58.189747 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:58.499257 master-0 kubenswrapper[7926]: E0216 21:12:58.499169 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 21:12:59.185774 master-0 kubenswrapper[7926]: I0216 21:12:59.185627 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:12:59.185774 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:12:59.185774 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:12:59.185774 master-0 kubenswrapper[7926]: 
healthz check failed Feb 16 21:12:59.185774 master-0 kubenswrapper[7926]: I0216 21:12:59.185723 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:12:59.748130 master-0 kubenswrapper[7926]: I0216 21:12:59.748060 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-85c9b89969-qzs2g_1a986ba3-2aea-4133-a05b-f69d4e0d8d3b/manager/1.log" Feb 16 21:12:59.749245 master-0 kubenswrapper[7926]: I0216 21:12:59.749219 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-85c9b89969-qzs2g_1a986ba3-2aea-4133-a05b-f69d4e0d8d3b/manager/0.log" Feb 16 21:12:59.749300 master-0 kubenswrapper[7926]: I0216 21:12:59.749277 7926 generic.go:334] "Generic (PLEG): container finished" podID="1a986ba3-2aea-4133-a05b-f69d4e0d8d3b" containerID="073bfd97b3802cf7e422558b7f0d96ac1c7a887d6a785fb5000fa99850a0b06e" exitCode=1 Feb 16 21:12:59.749337 master-0 kubenswrapper[7926]: I0216 21:12:59.749322 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" event={"ID":"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b","Type":"ContainerDied","Data":"073bfd97b3802cf7e422558b7f0d96ac1c7a887d6a785fb5000fa99850a0b06e"} Feb 16 21:12:59.749414 master-0 kubenswrapper[7926]: I0216 21:12:59.749368 7926 scope.go:117] "RemoveContainer" containerID="b1ac78292de0a544c15af274111c4e933c90f41d601dad32fc19d3dacdb54345" Feb 16 21:12:59.750572 master-0 kubenswrapper[7926]: I0216 21:12:59.750292 7926 scope.go:117] "RemoveContainer" containerID="073bfd97b3802cf7e422558b7f0d96ac1c7a887d6a785fb5000fa99850a0b06e" Feb 16 21:13:00.185178 master-0 kubenswrapper[7926]: I0216 21:13:00.185102 
7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:00.185178 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:00.185178 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:00.185178 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:00.185178 master-0 kubenswrapper[7926]: I0216 21:13:00.185166 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:00.755332 master-0 kubenswrapper[7926]: I0216 21:13:00.755249 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-85c9b89969-qzs2g_1a986ba3-2aea-4133-a05b-f69d4e0d8d3b/manager/1.log" Feb 16 21:13:00.756003 master-0 kubenswrapper[7926]: I0216 21:13:00.755610 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" event={"ID":"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b","Type":"ContainerStarted","Data":"cfd3ff2ce35aabfb3b796de6fbfb52e6ac44fbba7a139e8b846a35594c70ba5c"} Feb 16 21:13:00.756003 master-0 kubenswrapper[7926]: I0216 21:13:00.755838 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 21:13:01.184937 master-0 kubenswrapper[7926]: I0216 21:13:01.184783 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:01.184937 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:01.184937 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:01.184937 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:01.184937 master-0 kubenswrapper[7926]: I0216 21:13:01.184881 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:02.184891 master-0 kubenswrapper[7926]: I0216 21:13:02.184816 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:02.184891 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:02.184891 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:02.184891 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:02.184891 master-0 kubenswrapper[7926]: I0216 21:13:02.184886 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:02.740191 master-0 kubenswrapper[7926]: I0216 21:13:02.740062 7926 scope.go:117] "RemoveContainer" containerID="b62e91fd80c5fe5b3e86f231592d3a6b2b476717e7f1ec56b415d7521e1bb557" Feb 16 21:13:02.740827 master-0 kubenswrapper[7926]: E0216 21:13:02.740756 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager 
pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:13:02.772036 master-0 kubenswrapper[7926]: I0216 21:13:02.771881 7926 generic.go:334] "Generic (PLEG): container finished" podID="b28234d1-1d9a-4d9f-9ad1-e3c682bed492" containerID="4255d701755ee16eefc4f64ff2a1d87789d35c023038a0daf9f7cd0b69fb26a7" exitCode=0 Feb 16 21:13:02.772036 master-0 kubenswrapper[7926]: I0216 21:13:02.771946 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" event={"ID":"b28234d1-1d9a-4d9f-9ad1-e3c682bed492","Type":"ContainerDied","Data":"4255d701755ee16eefc4f64ff2a1d87789d35c023038a0daf9f7cd0b69fb26a7"} Feb 16 21:13:02.772036 master-0 kubenswrapper[7926]: I0216 21:13:02.772000 7926 scope.go:117] "RemoveContainer" containerID="1fdce62d33ee01800252ab5e608745339a8f0dbc0ccac60559c706daa3409f0f" Feb 16 21:13:02.773181 master-0 kubenswrapper[7926]: I0216 21:13:02.773111 7926 scope.go:117] "RemoveContainer" containerID="4255d701755ee16eefc4f64ff2a1d87789d35c023038a0daf9f7cd0b69fb26a7" Feb 16 21:13:03.185567 master-0 kubenswrapper[7926]: I0216 21:13:03.185433 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:03.185567 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:03.185567 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:03.185567 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:03.185567 master-0 kubenswrapper[7926]: I0216 21:13:03.185526 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:03.783191 master-0 kubenswrapper[7926]: I0216 21:13:03.783150 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" event={"ID":"b28234d1-1d9a-4d9f-9ad1-e3c682bed492","Type":"ContainerStarted","Data":"ad82b639a997ed0e5d8b2861e9f7c244d5b1a24c830d1de71432866846084c10"} Feb 16 21:13:03.783452 master-0 kubenswrapper[7926]: I0216 21:13:03.783428 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 21:13:03.785279 master-0 kubenswrapper[7926]: I0216 21:13:03.785246 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 21:13:04.185799 master-0 kubenswrapper[7926]: I0216 21:13:04.185608 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:04.185799 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:04.185799 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:04.185799 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:04.185799 master-0 kubenswrapper[7926]: I0216 21:13:04.185700 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:05.185135 master-0 kubenswrapper[7926]: I0216 21:13:05.185063 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:05.185135 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:05.185135 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:05.185135 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:05.185135 master-0 kubenswrapper[7926]: I0216 21:13:05.185121 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:06.185585 master-0 kubenswrapper[7926]: I0216 21:13:06.185489 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:06.185585 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:06.185585 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:06.185585 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:06.186359 master-0 kubenswrapper[7926]: I0216 21:13:06.185585 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:06.920238 master-0 kubenswrapper[7926]: I0216 21:13:06.920187 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 21:13:07.186251 master-0 kubenswrapper[7926]: I0216 21:13:07.186046 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:07.186251 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:07.186251 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:07.186251 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:07.186251 master-0 kubenswrapper[7926]: I0216 21:13:07.186207 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:08.185078 master-0 kubenswrapper[7926]: I0216 21:13:08.184986 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:08.185078 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:08.185078 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:08.185078 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:08.185078 master-0 kubenswrapper[7926]: I0216 21:13:08.185044 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:08.500837 master-0 kubenswrapper[7926]: E0216 21:13:08.500766 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded" Feb 16 21:13:08.746298 master-0 kubenswrapper[7926]: I0216 21:13:08.746174 7926 status_manager.go:851] "Failed to get status for 
pod" podUID="7fc3abc9-3012-43bd-af84-fc65baf82801" pod="openshift-kube-scheduler/installer-4-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-4-master-0)" Feb 16 21:13:09.186074 master-0 kubenswrapper[7926]: I0216 21:13:09.185967 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:09.186074 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:09.186074 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:09.186074 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:09.186554 master-0 kubenswrapper[7926]: I0216 21:13:09.186092 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:10.186542 master-0 kubenswrapper[7926]: I0216 21:13:10.186432 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:10.186542 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:10.186542 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:10.186542 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:10.186542 master-0 kubenswrapper[7926]: I0216 21:13:10.186532 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 16 21:13:10.739576 master-0 kubenswrapper[7926]: I0216 21:13:10.739479 7926 scope.go:117] "RemoveContainer" containerID="4007378c35279e107179280f5b478a33e451c6d5ec64c7c97a91228d94179cd2" Feb 16 21:13:10.740134 master-0 kubenswrapper[7926]: E0216 21:13:10.739946 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:13:11.186444 master-0 kubenswrapper[7926]: I0216 21:13:11.186352 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:11.186444 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:11.186444 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:11.186444 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:11.187162 master-0 kubenswrapper[7926]: I0216 21:13:11.186458 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:11.372586 master-0 kubenswrapper[7926]: E0216 21:13:11.372473 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Feb 16 21:13:12.185988 
master-0 kubenswrapper[7926]: I0216 21:13:12.185904 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:12.185988 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:12.185988 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:12.185988 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:12.186447 master-0 kubenswrapper[7926]: I0216 21:13:12.186022 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:12.556331 master-0 kubenswrapper[7926]: E0216 21:13:12.556268 7926 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8194cdc_3133_49e2_9579_a747c0bf2b16.slice/crio-4f5444c17822db01691b9d03f3dd6a819e814eea7a63f23ec45ece42ea5fba62.scope\": RecentStats: unable to find data in memory cache]" Feb 16 21:13:12.857419 master-0 kubenswrapper[7926]: I0216 21:13:12.857355 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-67bc7c997f-8kdgg_e8194cdc-3133-49e2-9579-a747c0bf2b16/manager/1.log" Feb 16 21:13:12.858447 master-0 kubenswrapper[7926]: I0216 21:13:12.858392 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-67bc7c997f-8kdgg_e8194cdc-3133-49e2-9579-a747c0bf2b16/manager/0.log" Feb 16 21:13:12.859116 master-0 kubenswrapper[7926]: I0216 21:13:12.859066 7926 generic.go:334] "Generic (PLEG): container finished" podID="e8194cdc-3133-49e2-9579-a747c0bf2b16" 
containerID="4f5444c17822db01691b9d03f3dd6a819e814eea7a63f23ec45ece42ea5fba62" exitCode=1 Feb 16 21:13:12.859203 master-0 kubenswrapper[7926]: I0216 21:13:12.859114 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" event={"ID":"e8194cdc-3133-49e2-9579-a747c0bf2b16","Type":"ContainerDied","Data":"4f5444c17822db01691b9d03f3dd6a819e814eea7a63f23ec45ece42ea5fba62"} Feb 16 21:13:12.859203 master-0 kubenswrapper[7926]: I0216 21:13:12.859155 7926 scope.go:117] "RemoveContainer" containerID="a76963335874f22d97778041d73ee6a0a7e3ffd325f9fb8a457626be3c8e5238" Feb 16 21:13:12.860002 master-0 kubenswrapper[7926]: I0216 21:13:12.859955 7926 scope.go:117] "RemoveContainer" containerID="4f5444c17822db01691b9d03f3dd6a819e814eea7a63f23ec45ece42ea5fba62" Feb 16 21:13:13.185691 master-0 kubenswrapper[7926]: I0216 21:13:13.185580 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:13.185691 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:13.185691 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:13.185691 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:13.186004 master-0 kubenswrapper[7926]: I0216 21:13:13.185729 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:13.868778 master-0 kubenswrapper[7926]: I0216 21:13:13.868739 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-67bc7c997f-8kdgg_e8194cdc-3133-49e2-9579-a747c0bf2b16/manager/1.log" Feb 16 21:13:13.869543 master-0 
kubenswrapper[7926]: I0216 21:13:13.869515 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" event={"ID":"e8194cdc-3133-49e2-9579-a747c0bf2b16","Type":"ContainerStarted","Data":"e04872ea2c764c93d171f84352e60786a5be1d211e2a3194644c313a82c96c0c"} Feb 16 21:13:13.869815 master-0 kubenswrapper[7926]: I0216 21:13:13.869784 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 21:13:14.185428 master-0 kubenswrapper[7926]: I0216 21:13:14.185296 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:14.185428 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:14.185428 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:14.185428 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:14.185428 master-0 kubenswrapper[7926]: I0216 21:13:14.185354 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:15.184046 master-0 kubenswrapper[7926]: I0216 21:13:15.183994 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:15.184046 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:15.184046 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:15.184046 master-0 kubenswrapper[7926]: healthz check failed Feb 16 
21:13:15.184609 master-0 kubenswrapper[7926]: I0216 21:13:15.184050 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:15.738497 master-0 kubenswrapper[7926]: I0216 21:13:15.738426 7926 scope.go:117] "RemoveContainer" containerID="b62e91fd80c5fe5b3e86f231592d3a6b2b476717e7f1ec56b415d7521e1bb557" Feb 16 21:13:15.738790 master-0 kubenswrapper[7926]: E0216 21:13:15.738676 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:13:15.881172 master-0 kubenswrapper[7926]: I0216 21:13:15.880995 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/4.log" Feb 16 21:13:15.881758 master-0 kubenswrapper[7926]: I0216 21:13:15.881717 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/3.log" Feb 16 21:13:15.881837 master-0 kubenswrapper[7926]: I0216 21:13:15.881764 7926 generic.go:334] "Generic (PLEG): container finished" podID="b1ac9776-54c4-46ce-b898-01c8cf35e593" containerID="67a3e9d9b5f56d4ee0c0f00f8a41a1f28f49d33cce601ce8e280273be299fa4f" exitCode=1 Feb 16 21:13:15.881837 master-0 kubenswrapper[7926]: I0216 21:13:15.881795 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" event={"ID":"b1ac9776-54c4-46ce-b898-01c8cf35e593","Type":"ContainerDied","Data":"67a3e9d9b5f56d4ee0c0f00f8a41a1f28f49d33cce601ce8e280273be299fa4f"} Feb 16 21:13:15.881837 master-0 kubenswrapper[7926]: I0216 21:13:15.881827 7926 scope.go:117] "RemoveContainer" containerID="9ef3c9bb3006ad6560cc5f0bdef3d88ed02120a2aaa21f57602a6395354cc9ab" Feb 16 21:13:15.882457 master-0 kubenswrapper[7926]: I0216 21:13:15.882414 7926 scope.go:117] "RemoveContainer" containerID="67a3e9d9b5f56d4ee0c0f00f8a41a1f28f49d33cce601ce8e280273be299fa4f" Feb 16 21:13:15.882714 master-0 kubenswrapper[7926]: E0216 21:13:15.882681 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-pc6x9_openshift-cluster-storage-operator(b1ac9776-54c4-46ce-b898-01c8cf35e593)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" podUID="b1ac9776-54c4-46ce-b898-01c8cf35e593" Feb 16 21:13:16.184623 master-0 kubenswrapper[7926]: I0216 21:13:16.184518 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:16.184623 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:16.184623 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:16.184623 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:16.184623 master-0 kubenswrapper[7926]: I0216 21:13:16.184596 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 16 21:13:16.891698 master-0 kubenswrapper[7926]: I0216 21:13:16.891624 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/4.log" Feb 16 21:13:17.186004 master-0 kubenswrapper[7926]: I0216 21:13:17.185842 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:17.186004 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:17.186004 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:17.186004 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:17.186004 master-0 kubenswrapper[7926]: I0216 21:13:17.185908 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:18.186060 master-0 kubenswrapper[7926]: I0216 21:13:18.185982 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:18.186060 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:18.186060 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:18.186060 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:18.186060 master-0 kubenswrapper[7926]: I0216 21:13:18.186046 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" 
podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:18.502229 master-0 kubenswrapper[7926]: E0216 21:13:18.502156 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded" Feb 16 21:13:18.502229 master-0 kubenswrapper[7926]: E0216 21:13:18.502216 7926 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 21:13:19.185701 master-0 kubenswrapper[7926]: I0216 21:13:19.185586 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:19.185701 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:19.185701 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:19.185701 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:19.185701 master-0 kubenswrapper[7926]: I0216 21:13:19.185695 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:20.187236 master-0 kubenswrapper[7926]: I0216 21:13:20.187109 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:20.187236 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:20.187236 master-0 kubenswrapper[7926]: [+]process-running ok 
Feb 16 21:13:20.187236 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:20.188379 master-0 kubenswrapper[7926]: I0216 21:13:20.187247 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:21.186407 master-0 kubenswrapper[7926]: I0216 21:13:21.186297 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:21.186407 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:21.186407 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:21.186407 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:21.186407 master-0 kubenswrapper[7926]: I0216 21:13:21.186375 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:22.186519 master-0 kubenswrapper[7926]: I0216 21:13:22.186399 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:22.186519 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:22.186519 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:22.186519 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:22.187689 master-0 kubenswrapper[7926]: I0216 21:13:22.186520 7926 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:23.186101 master-0 kubenswrapper[7926]: I0216 21:13:23.185985 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:23.186101 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:23.186101 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:23.186101 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:23.186101 master-0 kubenswrapper[7926]: I0216 21:13:23.186102 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:24.187061 master-0 kubenswrapper[7926]: I0216 21:13:24.186947 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:24.187061 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:24.187061 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:24.187061 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:24.187061 master-0 kubenswrapper[7926]: I0216 21:13:24.187038 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:24.740070 
master-0 kubenswrapper[7926]: I0216 21:13:24.739962 7926 scope.go:117] "RemoveContainer" containerID="4007378c35279e107179280f5b478a33e451c6d5ec64c7c97a91228d94179cd2" Feb 16 21:13:24.740555 master-0 kubenswrapper[7926]: E0216 21:13:24.740493 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:13:25.185822 master-0 kubenswrapper[7926]: I0216 21:13:25.185727 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:25.185822 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:25.185822 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:25.185822 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:25.186286 master-0 kubenswrapper[7926]: I0216 21:13:25.185843 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:26.185212 master-0 kubenswrapper[7926]: I0216 21:13:26.185134 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:26.185212 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:26.185212 master-0 
kubenswrapper[7926]: [+]process-running ok
Feb 16 21:13:26.185212 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:13:26.185212 master-0 kubenswrapper[7926]: I0216 21:13:26.185187 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:13:26.836126 master-0 kubenswrapper[7926]: I0216 21:13:26.836066 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg"
Feb 16 21:13:27.185686 master-0 kubenswrapper[7926]: I0216 21:13:27.185555 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:13:27.185686 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:13:27.185686 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:13:27.185686 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:13:27.185686 master-0 kubenswrapper[7926]: I0216 21:13:27.185640 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:13:27.773783 master-0 kubenswrapper[7926]: E0216 21:13:27.773635 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s"
Feb 16 21:13:28.186123 master-0 kubenswrapper[7926]: I0216 21:13:28.185956 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:13:28.186123 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:13:28.186123 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:13:28.186123 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:13:28.186123 master-0 kubenswrapper[7926]: I0216 21:13:28.186060 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:13:28.739125 master-0 kubenswrapper[7926]: I0216 21:13:28.739042 7926 scope.go:117] "RemoveContainer" containerID="b62e91fd80c5fe5b3e86f231592d3a6b2b476717e7f1ec56b415d7521e1bb557"
Feb 16 21:13:28.739595 master-0 kubenswrapper[7926]: E0216 21:13:28.739530 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3"
Feb 16 21:13:29.184700 master-0 kubenswrapper[7926]: I0216 21:13:29.184624 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:13:29.184700 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:13:29.184700 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:13:29.184700 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:13:29.185052 master-0 kubenswrapper[7926]: I0216 21:13:29.184715 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:13:29.739157 master-0 kubenswrapper[7926]: I0216 21:13:29.738956 7926 scope.go:117] "RemoveContainer" containerID="67a3e9d9b5f56d4ee0c0f00f8a41a1f28f49d33cce601ce8e280273be299fa4f"
Feb 16 21:13:29.739960 master-0 kubenswrapper[7926]: E0216 21:13:29.739326 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-pc6x9_openshift-cluster-storage-operator(b1ac9776-54c4-46ce-b898-01c8cf35e593)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" podUID="b1ac9776-54c4-46ce-b898-01c8cf35e593"
Feb 16 21:13:30.185135 master-0 kubenswrapper[7926]: I0216 21:13:30.185070 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:13:30.185135 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:13:30.185135 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:13:30.185135 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:13:30.185496 master-0 kubenswrapper[7926]: I0216 21:13:30.185147 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:13:30.655341 master-0 kubenswrapper[7926]: E0216 21:13:30.655103 7926 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1894d5ea17a6f087 kube-system 9666 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 21:02:34 +0000 UTC,LastTimestamp:2026-02-16 21:11:25.981180357 +0000 UTC m=+857.616080657,Count:14,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 21:13:30.731001 master-0 kubenswrapper[7926]: E0216 21:13:30.730896 7926 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Feb 16 21:13:30.995331 master-0 kubenswrapper[7926]: I0216 21:13:30.995276 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-9m94g_4b035e85-b2b0-4dee-bb86-3465fc4b98a8/package-server-manager/1.log"
Feb 16 21:13:30.996060 master-0 kubenswrapper[7926]: I0216 21:13:30.996021 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-9m94g_4b035e85-b2b0-4dee-bb86-3465fc4b98a8/package-server-manager/0.log"
Feb 16 21:13:30.996622 master-0 kubenswrapper[7926]: I0216 21:13:30.996587 7926 generic.go:334] "Generic (PLEG): container finished" podID="4b035e85-b2b0-4dee-bb86-3465fc4b98a8" containerID="fa5e5b86ee6d022e914514c6e1b9bc40b0ded23b4d78a78dbc84ca8df5d3a2bd" exitCode=1
Feb 16 21:13:30.996717 master-0 kubenswrapper[7926]: I0216 21:13:30.996666 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" event={"ID":"4b035e85-b2b0-4dee-bb86-3465fc4b98a8","Type":"ContainerDied","Data":"fa5e5b86ee6d022e914514c6e1b9bc40b0ded23b4d78a78dbc84ca8df5d3a2bd"}
Feb 16 21:13:30.996801 master-0 kubenswrapper[7926]: I0216 21:13:30.996773 7926 scope.go:117] "RemoveContainer" containerID="95cb75164641c9de6a0109a60c606bf650f57a11a7796ffdbcb05ca7aa385e4c"
Feb 16 21:13:30.997430 master-0 kubenswrapper[7926]: I0216 21:13:30.997401 7926 scope.go:117] "RemoveContainer" containerID="fa5e5b86ee6d022e914514c6e1b9bc40b0ded23b4d78a78dbc84ca8df5d3a2bd"
Feb 16 21:13:30.997733 master-0 kubenswrapper[7926]: E0216 21:13:30.997689 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"package-server-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=package-server-manager pod=package-server-manager-5c696dbdcd-9m94g_openshift-operator-lifecycle-manager(4b035e85-b2b0-4dee-bb86-3465fc4b98a8)\"" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" podUID="4b035e85-b2b0-4dee-bb86-3465fc4b98a8"
Feb 16 21:13:30.999755 master-0 kubenswrapper[7926]: I0216 21:13:30.999715 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-d8bf84b88-8pqbl_302156cc-9dca-4a66-9e6a-ba2c7e738c92/control-plane-machine-set-operator/1.log"
Feb 16 21:13:31.000504 master-0 kubenswrapper[7926]: I0216 21:13:31.000470 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-d8bf84b88-8pqbl_302156cc-9dca-4a66-9e6a-ba2c7e738c92/control-plane-machine-set-operator/0.log"
Feb 16 21:13:31.000589 master-0 kubenswrapper[7926]: I0216 21:13:31.000513 7926 generic.go:334] "Generic (PLEG): container finished" podID="302156cc-9dca-4a66-9e6a-ba2c7e738c92" containerID="cf5bd07d44ef1049857af620840ed7780e94db377ae50a689034fcd0589dd325" exitCode=1
Feb 16 21:13:31.000589 master-0 kubenswrapper[7926]: I0216 21:13:31.000545 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl" event={"ID":"302156cc-9dca-4a66-9e6a-ba2c7e738c92","Type":"ContainerDied","Data":"cf5bd07d44ef1049857af620840ed7780e94db377ae50a689034fcd0589dd325"}
Feb 16 21:13:31.001025 master-0 kubenswrapper[7926]: I0216 21:13:31.000988 7926 scope.go:117] "RemoveContainer" containerID="cf5bd07d44ef1049857af620840ed7780e94db377ae50a689034fcd0589dd325"
Feb 16 21:13:31.076369 master-0 kubenswrapper[7926]: I0216 21:13:31.076332 7926 scope.go:117] "RemoveContainer" containerID="03d8daaa264d52b607ef3a2e1ee4da18d94e4e7433715288335ef0a92bd90db1"
Feb 16 21:13:31.184962 master-0 kubenswrapper[7926]: I0216 21:13:31.184778 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:13:31.184962 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:13:31.184962 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:13:31.184962 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:13:31.184962 master-0 kubenswrapper[7926]: I0216 21:13:31.184838 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:13:31.658845 master-0 kubenswrapper[7926]: I0216 21:13:31.658687 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g"
Feb 16 21:13:31.658845 master-0 kubenswrapper[7926]: I0216 21:13:31.658818 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g"
Feb 16 21:13:32.008579 master-0 kubenswrapper[7926]: I0216 21:13:32.008520 7926 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="30c3311ac2594f90ee07f133990bc2e498e9439d4db71f3e17a8742c175c7b4f" exitCode=0
Feb 16 21:13:32.008579 master-0 kubenswrapper[7926]: I0216 21:13:32.008574 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"30c3311ac2594f90ee07f133990bc2e498e9439d4db71f3e17a8742c175c7b4f"}
Feb 16 21:13:32.009177 master-0 kubenswrapper[7926]: I0216 21:13:32.008817 7926 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="dc5b0952-527e-40f6-84fa-362aa0d5b6f8"
Feb 16 21:13:32.009177 master-0 kubenswrapper[7926]: I0216 21:13:32.008829 7926 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="dc5b0952-527e-40f6-84fa-362aa0d5b6f8"
Feb 16 21:13:32.010610 master-0 kubenswrapper[7926]: I0216 21:13:32.010580 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-d8bf84b88-8pqbl_302156cc-9dca-4a66-9e6a-ba2c7e738c92/control-plane-machine-set-operator/1.log"
Feb 16 21:13:32.010703 master-0 kubenswrapper[7926]: I0216 21:13:32.010623 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl" event={"ID":"302156cc-9dca-4a66-9e6a-ba2c7e738c92","Type":"ContainerStarted","Data":"f78d754f1df309b0cad8a0e20f5eb08891911c8e6d19e1d3fa298a8f6933a83c"}
Feb 16 21:13:32.012254 master-0 kubenswrapper[7926]: I0216 21:13:32.012226 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-9m94g_4b035e85-b2b0-4dee-bb86-3465fc4b98a8/package-server-manager/1.log"
Feb 16 21:13:32.012779 master-0 kubenswrapper[7926]: I0216 21:13:32.012752 7926 scope.go:117] "RemoveContainer" containerID="fa5e5b86ee6d022e914514c6e1b9bc40b0ded23b4d78a78dbc84ca8df5d3a2bd"
Feb 16 21:13:32.012947 master-0 kubenswrapper[7926]: E0216 21:13:32.012917 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"package-server-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=package-server-manager pod=package-server-manager-5c696dbdcd-9m94g_openshift-operator-lifecycle-manager(4b035e85-b2b0-4dee-bb86-3465fc4b98a8)\"" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" podUID="4b035e85-b2b0-4dee-bb86-3465fc4b98a8"
Feb 16 21:13:32.185694 master-0 kubenswrapper[7926]: I0216 21:13:32.185592 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:13:32.185694 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:13:32.185694 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:13:32.185694 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:13:32.186142 master-0 kubenswrapper[7926]: I0216 21:13:32.185710 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:13:33.020145 master-0 kubenswrapper[7926]: I0216 21:13:33.020073 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb_ba294358-051a-4f09-b182-710d3d6778c5/machine-api-operator/0.log"
Feb 16 21:13:33.021374 master-0 kubenswrapper[7926]: I0216 21:13:33.021322 7926 generic.go:334] "Generic (PLEG): container finished" podID="ba294358-051a-4f09-b182-710d3d6778c5" containerID="c7880afa219acb0ac5e4138682f8fc8b3e3931790fad2a804808d6e2f5933f3f" exitCode=255
Feb 16 21:13:33.021549 master-0 kubenswrapper[7926]: I0216 21:13:33.021397 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" event={"ID":"ba294358-051a-4f09-b182-710d3d6778c5","Type":"ContainerDied","Data":"c7880afa219acb0ac5e4138682f8fc8b3e3931790fad2a804808d6e2f5933f3f"}
Feb 16 21:13:33.022313 master-0 kubenswrapper[7926]: I0216 21:13:33.022285 7926 scope.go:117] "RemoveContainer" containerID="c7880afa219acb0ac5e4138682f8fc8b3e3931790fad2a804808d6e2f5933f3f"
Feb 16 21:13:33.185884 master-0 kubenswrapper[7926]: I0216 21:13:33.185817 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:13:33.185884 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:13:33.185884 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:13:33.185884 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:13:33.185884 master-0 kubenswrapper[7926]: I0216 21:13:33.185879 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:13:34.028491 master-0 kubenswrapper[7926]: I0216 21:13:34.028405 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb_ba294358-051a-4f09-b182-710d3d6778c5/machine-api-operator/0.log"
Feb 16 21:13:34.029403 master-0 kubenswrapper[7926]: I0216 21:13:34.028835 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" event={"ID":"ba294358-051a-4f09-b182-710d3d6778c5","Type":"ContainerStarted","Data":"7e9f03ac4e3d4bf6f1a92c87252a343c03624e9e2d9c4c0aa92f759bfcd3bf24"}
Feb 16 21:13:34.185118 master-0 kubenswrapper[7926]: I0216 21:13:34.185088 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:13:34.185118 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:13:34.185118 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:13:34.185118 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:13:34.185418 master-0 kubenswrapper[7926]: I0216 21:13:34.185395 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:13:35.185762 master-0 kubenswrapper[7926]: I0216 21:13:35.185685 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:13:35.185762 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:13:35.185762 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:13:35.185762 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:13:35.186791 master-0 kubenswrapper[7926]: I0216 21:13:35.185769 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:13:36.185873 master-0 kubenswrapper[7926]: I0216 21:13:36.185734 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:13:36.185873 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:13:36.185873 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:13:36.185873 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:13:36.187149 master-0 kubenswrapper[7926]: I0216 21:13:36.185882 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:13:37.186273 master-0 kubenswrapper[7926]: I0216 21:13:37.186128 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:13:37.186273 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:13:37.186273 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:13:37.186273 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:13:37.187403 master-0 kubenswrapper[7926]: I0216 21:13:37.186275 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:13:38.184971 master-0 kubenswrapper[7926]: I0216 21:13:38.184889 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:13:38.184971 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:13:38.184971 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:13:38.184971 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:13:38.184971 master-0 kubenswrapper[7926]: I0216 21:13:38.184951 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:13:38.652409 master-0 kubenswrapper[7926]: E0216 21:13:38.652200 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:13:28Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:13:28Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:13:28Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T21:13:28Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 21:13:39.185951 master-0 kubenswrapper[7926]: I0216 21:13:39.185866 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:13:39.185951 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:13:39.185951 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:13:39.185951 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:13:39.186415 master-0 kubenswrapper[7926]: I0216 21:13:39.185957 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:13:39.738245 master-0 kubenswrapper[7926]: I0216 21:13:39.738180 7926 scope.go:117] "RemoveContainer" containerID="b62e91fd80c5fe5b3e86f231592d3a6b2b476717e7f1ec56b415d7521e1bb557"
Feb 16 21:13:39.738936 master-0 kubenswrapper[7926]: I0216 21:13:39.738358 7926 scope.go:117] "RemoveContainer" containerID="4007378c35279e107179280f5b478a33e451c6d5ec64c7c97a91228d94179cd2"
Feb 16 21:13:39.738936 master-0 kubenswrapper[7926]: E0216 21:13:39.738398 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3"
Feb 16 21:13:39.738936 master-0 kubenswrapper[7926]: E0216 21:13:39.738620 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1"
Feb 16 21:13:40.186271 master-0 kubenswrapper[7926]: I0216 21:13:40.186188 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:13:40.186271 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:13:40.186271 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:13:40.186271 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:13:40.186765 master-0 kubenswrapper[7926]: I0216 21:13:40.186273 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:13:41.082679 master-0 kubenswrapper[7926]: I0216 21:13:41.082579 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-ff6c9b66-kh4d4_2506c282-0b37-4ece-8a0c-885d0b7f7901/cluster-node-tuning-operator/1.log"
Feb 16 21:13:41.083550 master-0 kubenswrapper[7926]: I0216 21:13:41.083035 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-ff6c9b66-kh4d4_2506c282-0b37-4ece-8a0c-885d0b7f7901/cluster-node-tuning-operator/0.log"
Feb 16 21:13:41.083550 master-0 kubenswrapper[7926]: I0216 21:13:41.083072 7926 generic.go:334] "Generic (PLEG): container finished" podID="2506c282-0b37-4ece-8a0c-885d0b7f7901" containerID="c78e5502c7df20a63c6e359691ad6478f7f26c7822d2c31d3780654e26b107fb" exitCode=1
Feb 16 21:13:41.083550 master-0 kubenswrapper[7926]: I0216 21:13:41.083103 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" event={"ID":"2506c282-0b37-4ece-8a0c-885d0b7f7901","Type":"ContainerDied","Data":"c78e5502c7df20a63c6e359691ad6478f7f26c7822d2c31d3780654e26b107fb"}
Feb 16 21:13:41.083550 master-0 kubenswrapper[7926]: I0216 21:13:41.083132 7926 scope.go:117] "RemoveContainer" containerID="24435a7f63a96b1a49a7d14efbc7fac8f5f69a776a662db4bff0a9f0d5933f6b"
Feb 16 21:13:41.083957 master-0 kubenswrapper[7926]: I0216 21:13:41.083897 7926 scope.go:117] "RemoveContainer" containerID="c78e5502c7df20a63c6e359691ad6478f7f26c7822d2c31d3780654e26b107fb"
Feb 16 21:13:41.185209 master-0 kubenswrapper[7926]: I0216 21:13:41.185145 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:13:41.185209 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:13:41.185209 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:13:41.185209 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:13:41.185847 master-0 kubenswrapper[7926]: I0216 21:13:41.185800 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:13:42.093693 master-0 kubenswrapper[7926]: I0216 21:13:42.093573 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-ff6c9b66-kh4d4_2506c282-0b37-4ece-8a0c-885d0b7f7901/cluster-node-tuning-operator/1.log"
Feb 16 21:13:42.094685 master-0 kubenswrapper[7926]: I0216 21:13:42.093718 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" event={"ID":"2506c282-0b37-4ece-8a0c-885d0b7f7901","Type":"ContainerStarted","Data":"9f90d50c443b02c7e534aaa4189343a67e0f379619e2d5c07740a2f0b49e9999"}
Feb 16 21:13:42.185428 master-0 kubenswrapper[7926]: I0216 21:13:42.185336 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:13:42.185428 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:13:42.185428 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:13:42.185428 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:13:42.185428 master-0 kubenswrapper[7926]: I0216 21:13:42.185418 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:13:42.738801 master-0 kubenswrapper[7926]: I0216 21:13:42.738725 7926 scope.go:117] "RemoveContainer" containerID="67a3e9d9b5f56d4ee0c0f00f8a41a1f28f49d33cce601ce8e280273be299fa4f"
Feb 16 21:13:42.739174 master-0 kubenswrapper[7926]: E0216 21:13:42.739127 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-pc6x9_openshift-cluster-storage-operator(b1ac9776-54c4-46ce-b898-01c8cf35e593)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" podUID="b1ac9776-54c4-46ce-b898-01c8cf35e593"
Feb 16 21:13:43.186037 master-0 kubenswrapper[7926]: I0216 21:13:43.185987 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:13:43.186037 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:13:43.186037 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:13:43.186037 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:13:43.187025 master-0 kubenswrapper[7926]: I0216 21:13:43.186944 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:13:44.108903 master-0 kubenswrapper[7926]: I0216 21:13:44.108790 7926 generic.go:334] "Generic (PLEG): container finished" podID="484154d0-66c8-4d0e-bf1b-f48d0abfe628" containerID="784108aeefea86df821b8787cc4aa96e0a0d0b443e8ed52de36e36ad7f22bb5e" exitCode=0
Feb 16 21:13:44.108903 master-0 kubenswrapper[7926]: I0216 21:13:44.108850 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd" event={"ID":"484154d0-66c8-4d0e-bf1b-f48d0abfe628","Type":"ContainerDied","Data":"784108aeefea86df821b8787cc4aa96e0a0d0b443e8ed52de36e36ad7f22bb5e"}
Feb 16 21:13:44.108903 master-0 kubenswrapper[7926]: I0216 21:13:44.108883 7926 scope.go:117] "RemoveContainer" containerID="fd75cc94a5c6af861419130cf9adb9c00eea8b412cbb5bebb25e798a841c1376"
Feb 16 21:13:44.109835 master-0 kubenswrapper[7926]: I0216 21:13:44.109770 7926 scope.go:117] "RemoveContainer" containerID="784108aeefea86df821b8787cc4aa96e0a0d0b443e8ed52de36e36ad7f22bb5e"
Feb 16 21:13:44.184882 master-0 kubenswrapper[7926]: I0216 21:13:44.184810 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:13:44.184882 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:13:44.184882 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:13:44.184882 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:13:44.185244 master-0 kubenswrapper[7926]: I0216 21:13:44.184893 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:13:44.775884 master-0 kubenswrapper[7926]: E0216 21:13:44.775744 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 16 21:13:45.125342 master-0 kubenswrapper[7926]: I0216 21:13:45.125156 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-8569dd85ff-kvhs4_065fcd43-1572-4152-b77b-a6b7ab52a081/machine-approver-controller/0.log"
Feb 16 21:13:45.125766 master-0 kubenswrapper[7926]: I0216 21:13:45.125706 7926 generic.go:334] "Generic (PLEG): container finished" podID="065fcd43-1572-4152-b77b-a6b7ab52a081" containerID="577a19cb609733c40b24d16a4cfb15f4698079667a2b3110eeef59cec7643dff" exitCode=255
Feb 16 21:13:45.125874 master-0 kubenswrapper[7926]: I0216 21:13:45.125807 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" event={"ID":"065fcd43-1572-4152-b77b-a6b7ab52a081","Type":"ContainerDied","Data":"577a19cb609733c40b24d16a4cfb15f4698079667a2b3110eeef59cec7643dff"}
Feb 16 21:13:45.126590 master-0 kubenswrapper[7926]: I0216 21:13:45.126536 7926 scope.go:117] "RemoveContainer" containerID="577a19cb609733c40b24d16a4cfb15f4698079667a2b3110eeef59cec7643dff"
Feb 16 21:13:45.130118 master-0 kubenswrapper[7926]: I0216 21:13:45.130033 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd" event={"ID":"484154d0-66c8-4d0e-bf1b-f48d0abfe628","Type":"ContainerStarted","Data":"51a19c0d4f3c8ae263edbdd5efb421daa153d0d3395961b41e2e334207be4195"}
Feb 16 21:13:45.184237 master-0 kubenswrapper[7926]: I0216 21:13:45.184156 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:13:45.184237 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:13:45.184237 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:13:45.184237 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:13:45.184776 master-0 kubenswrapper[7926]: I0216 21:13:45.184255 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:13:46.143151 master-0 kubenswrapper[7926]: I0216 21:13:46.143050 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-8569dd85ff-kvhs4_065fcd43-1572-4152-b77b-a6b7ab52a081/machine-approver-controller/0.log"
Feb 16 21:13:46.144101 master-0 kubenswrapper[7926]: I0216 21:13:46.143844 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" event={"ID":"065fcd43-1572-4152-b77b-a6b7ab52a081","Type":"ContainerStarted","Data":"6fa5335e554ef3afb4d68268a5f6f2e23524b3ac6a1926bda3c2a121662cce25"}
Feb 16 21:13:46.185932 master-0 kubenswrapper[7926]: I0216 21:13:46.185826 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:13:46.185932 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:13:46.185932 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:13:46.185932 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:13:46.186302 master-0 kubenswrapper[7926]: I0216 21:13:46.185937 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee"
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:47.185889 master-0 kubenswrapper[7926]: I0216 21:13:47.185793 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:47.185889 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:47.185889 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:47.185889 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:47.185889 master-0 kubenswrapper[7926]: I0216 21:13:47.185871 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:47.738416 master-0 kubenswrapper[7926]: I0216 21:13:47.738334 7926 scope.go:117] "RemoveContainer" containerID="fa5e5b86ee6d022e914514c6e1b9bc40b0ded23b4d78a78dbc84ca8df5d3a2bd" Feb 16 21:13:48.159579 master-0 kubenswrapper[7926]: I0216 21:13:48.159497 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-9m94g_4b035e85-b2b0-4dee-bb86-3465fc4b98a8/package-server-manager/1.log" Feb 16 21:13:48.160193 master-0 kubenswrapper[7926]: I0216 21:13:48.160110 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" event={"ID":"4b035e85-b2b0-4dee-bb86-3465fc4b98a8","Type":"ContainerStarted","Data":"a6a2fb20def4cbde7b9bb47cdfdc79049f26b1950e4d47cb988ac8e11854652c"} Feb 16 21:13:48.160428 master-0 kubenswrapper[7926]: I0216 21:13:48.160381 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 21:13:48.185188 master-0 kubenswrapper[7926]: I0216 21:13:48.185094 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:48.185188 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:48.185188 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:48.185188 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:48.185188 master-0 kubenswrapper[7926]: I0216 21:13:48.185174 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:48.652684 master-0 kubenswrapper[7926]: E0216 21:13:48.652579 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:13:49.186099 master-0 kubenswrapper[7926]: I0216 21:13:49.186022 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:49.186099 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:49.186099 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:49.186099 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:49.186378 master-0 kubenswrapper[7926]: I0216 21:13:49.186122 7926 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:50.184965 master-0 kubenswrapper[7926]: I0216 21:13:50.184900 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:50.184965 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:50.184965 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:50.184965 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:50.185943 master-0 kubenswrapper[7926]: I0216 21:13:50.184989 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:51.185763 master-0 kubenswrapper[7926]: I0216 21:13:51.185711 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:51.185763 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:51.185763 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:51.185763 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:51.186888 master-0 kubenswrapper[7926]: I0216 21:13:51.185781 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 
16 21:13:51.739769 master-0 kubenswrapper[7926]: I0216 21:13:51.739686 7926 scope.go:117] "RemoveContainer" containerID="b62e91fd80c5fe5b3e86f231592d3a6b2b476717e7f1ec56b415d7521e1bb557" Feb 16 21:13:51.740165 master-0 kubenswrapper[7926]: E0216 21:13:51.740096 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:13:52.185925 master-0 kubenswrapper[7926]: I0216 21:13:52.185823 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:52.185925 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:52.185925 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:52.185925 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:52.186945 master-0 kubenswrapper[7926]: I0216 21:13:52.185921 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:53.185235 master-0 kubenswrapper[7926]: I0216 21:13:53.185082 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:53.185235 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 
21:13:53.185235 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:53.185235 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:53.185235 master-0 kubenswrapper[7926]: I0216 21:13:53.185174 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:53.202617 master-0 kubenswrapper[7926]: I0216 21:13:53.202569 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/4.log" Feb 16 21:13:53.203626 master-0 kubenswrapper[7926]: I0216 21:13:53.203593 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/3.log" Feb 16 21:13:53.204042 master-0 kubenswrapper[7926]: I0216 21:13:53.204009 7926 generic.go:334] "Generic (PLEG): container finished" podID="8b648d9e-a892-4951-b0e2-fed6b16273d4" containerID="6774523bbae3d7abd16dc2e39c9e808fff70ea7aaf2e57c4f294e7c707bbf785" exitCode=1 Feb 16 21:13:53.204106 master-0 kubenswrapper[7926]: I0216 21:13:53.204054 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" event={"ID":"8b648d9e-a892-4951-b0e2-fed6b16273d4","Type":"ContainerDied","Data":"6774523bbae3d7abd16dc2e39c9e808fff70ea7aaf2e57c4f294e7c707bbf785"} Feb 16 21:13:53.204106 master-0 kubenswrapper[7926]: I0216 21:13:53.204091 7926 scope.go:117] "RemoveContainer" containerID="41ef5f9abc41605ba4f43759411cc04f3fe23add167a10d83f8a22bd50eade97" Feb 16 21:13:53.204738 master-0 kubenswrapper[7926]: I0216 21:13:53.204703 7926 scope.go:117] "RemoveContainer" 
containerID="6774523bbae3d7abd16dc2e39c9e808fff70ea7aaf2e57c4f294e7c707bbf785" Feb 16 21:13:53.205038 master-0 kubenswrapper[7926]: E0216 21:13:53.205007 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-7bc947fc7d-xwptz_openshift-machine-api(8b648d9e-a892-4951-b0e2-fed6b16273d4)\"" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" podUID="8b648d9e-a892-4951-b0e2-fed6b16273d4" Feb 16 21:13:54.185392 master-0 kubenswrapper[7926]: I0216 21:13:54.185234 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:54.185392 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:54.185392 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:54.185392 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:54.185392 master-0 kubenswrapper[7926]: I0216 21:13:54.185304 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:54.212938 master-0 kubenswrapper[7926]: I0216 21:13:54.212890 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-67fd9768b5-557vd_1d7d0416-5f50-42bd-826b-92eecf9adcec/cluster-autoscaler-operator/0.log" Feb 16 21:13:54.213593 master-0 kubenswrapper[7926]: I0216 21:13:54.213543 7926 generic.go:334] "Generic (PLEG): container finished" podID="1d7d0416-5f50-42bd-826b-92eecf9adcec" 
containerID="2805492f11ff17f7e51a6fba30471dee89ec93e40bd6ce6db4b158be70c75964" exitCode=255 Feb 16 21:13:54.213691 master-0 kubenswrapper[7926]: I0216 21:13:54.213634 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" event={"ID":"1d7d0416-5f50-42bd-826b-92eecf9adcec","Type":"ContainerDied","Data":"2805492f11ff17f7e51a6fba30471dee89ec93e40bd6ce6db4b158be70c75964"} Feb 16 21:13:54.214367 master-0 kubenswrapper[7926]: I0216 21:13:54.214344 7926 scope.go:117] "RemoveContainer" containerID="2805492f11ff17f7e51a6fba30471dee89ec93e40bd6ce6db4b158be70c75964" Feb 16 21:13:54.215882 master-0 kubenswrapper[7926]: I0216 21:13:54.215836 7926 generic.go:334] "Generic (PLEG): container finished" podID="ff193060-a272-4e4e-990a-83ac410f523d" containerID="f5d1b2f95d0f407ab1fdd5eb9fe9deae1b8e8d536d017cfe9a03861815d4f96a" exitCode=0 Feb 16 21:13:54.215954 master-0 kubenswrapper[7926]: I0216 21:13:54.215903 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" event={"ID":"ff193060-a272-4e4e-990a-83ac410f523d","Type":"ContainerDied","Data":"f5d1b2f95d0f407ab1fdd5eb9fe9deae1b8e8d536d017cfe9a03861815d4f96a"} Feb 16 21:13:54.216401 master-0 kubenswrapper[7926]: I0216 21:13:54.216374 7926 scope.go:117] "RemoveContainer" containerID="f5d1b2f95d0f407ab1fdd5eb9fe9deae1b8e8d536d017cfe9a03861815d4f96a" Feb 16 21:13:54.218612 master-0 kubenswrapper[7926]: I0216 21:13:54.218587 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/4.log" Feb 16 21:13:54.739344 master-0 kubenswrapper[7926]: I0216 21:13:54.739229 7926 scope.go:117] "RemoveContainer" containerID="4007378c35279e107179280f5b478a33e451c6d5ec64c7c97a91228d94179cd2" Feb 16 21:13:55.184855 master-0 kubenswrapper[7926]: I0216 21:13:55.184697 
7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:55.184855 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:55.184855 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:55.184855 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:55.184855 master-0 kubenswrapper[7926]: I0216 21:13:55.184809 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:55.227009 master-0 kubenswrapper[7926]: I0216 21:13:55.226926 7926 generic.go:334] "Generic (PLEG): container finished" podID="408a9364-3730-4017-b1e4-c85d6a504168" containerID="ec8ce2b77f9d3d1712f1d9e5d59ca2196200eb54635d01b0d1caf94494809751" exitCode=0 Feb 16 21:13:55.227882 master-0 kubenswrapper[7926]: I0216 21:13:55.227033 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" event={"ID":"408a9364-3730-4017-b1e4-c85d6a504168","Type":"ContainerDied","Data":"ec8ce2b77f9d3d1712f1d9e5d59ca2196200eb54635d01b0d1caf94494809751"} Feb 16 21:13:55.227986 master-0 kubenswrapper[7926]: I0216 21:13:55.227953 7926 scope.go:117] "RemoveContainer" containerID="ec8ce2b77f9d3d1712f1d9e5d59ca2196200eb54635d01b0d1caf94494809751" Feb 16 21:13:55.229108 master-0 kubenswrapper[7926]: I0216 21:13:55.229063 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-67fd9768b5-557vd_1d7d0416-5f50-42bd-826b-92eecf9adcec/cluster-autoscaler-operator/0.log" Feb 16 21:13:55.229512 master-0 kubenswrapper[7926]: I0216 21:13:55.229448 7926 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" event={"ID":"1d7d0416-5f50-42bd-826b-92eecf9adcec","Type":"ContainerStarted","Data":"62b487940e9059c7edfccc46f4b46f6733b0bfea4f437b53500d0c8a0ca74fd9"} Feb 16 21:13:55.232022 master-0 kubenswrapper[7926]: I0216 21:13:55.231876 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/4.log" Feb 16 21:13:55.232805 master-0 kubenswrapper[7926]: I0216 21:13:55.232243 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" event={"ID":"cef33294-81fb-41a2-811d-2565f94514d1","Type":"ContainerStarted","Data":"a536172006966fa7da41ae7ff0c679f29f5343cacc6f612c4fa109bc18f3bbce"} Feb 16 21:13:55.235109 master-0 kubenswrapper[7926]: I0216 21:13:55.234720 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" event={"ID":"ff193060-a272-4e4e-990a-83ac410f523d","Type":"ContainerStarted","Data":"ef43fbfc945aa678d642581bba1ac8119a0675069fc72b0537960c8e21934061"} Feb 16 21:13:56.186630 master-0 kubenswrapper[7926]: I0216 21:13:56.186532 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:56.186630 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:56.186630 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:56.186630 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:56.186630 master-0 kubenswrapper[7926]: I0216 21:13:56.186625 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" 
podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:56.244865 master-0 kubenswrapper[7926]: I0216 21:13:56.244763 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" event={"ID":"408a9364-3730-4017-b1e4-c85d6a504168","Type":"ContainerStarted","Data":"998c9ae589b8ae43e110fa0bf1929dd53f4179a605ee219bd9e74970ce1b2465"} Feb 16 21:13:56.246239 master-0 kubenswrapper[7926]: I0216 21:13:56.245569 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" Feb 16 21:13:56.249538 master-0 kubenswrapper[7926]: I0216 21:13:56.249454 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" Feb 16 21:13:56.739012 master-0 kubenswrapper[7926]: I0216 21:13:56.738856 7926 scope.go:117] "RemoveContainer" containerID="67a3e9d9b5f56d4ee0c0f00f8a41a1f28f49d33cce601ce8e280273be299fa4f" Feb 16 21:13:56.739439 master-0 kubenswrapper[7926]: E0216 21:13:56.739360 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-pc6x9_openshift-cluster-storage-operator(b1ac9776-54c4-46ce-b898-01c8cf35e593)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" podUID="b1ac9776-54c4-46ce-b898-01c8cf35e593" Feb 16 21:13:57.185270 master-0 kubenswrapper[7926]: I0216 21:13:57.185199 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:57.185270 
master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:57.185270 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:57.185270 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:57.185270 master-0 kubenswrapper[7926]: I0216 21:13:57.185269 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:58.185307 master-0 kubenswrapper[7926]: I0216 21:13:58.185186 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:58.185307 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:58.185307 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:58.185307 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:58.185862 master-0 kubenswrapper[7926]: I0216 21:13:58.185333 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:13:58.653328 master-0 kubenswrapper[7926]: E0216 21:13:58.653230 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:13:59.185953 master-0 kubenswrapper[7926]: I0216 21:13:59.185867 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:13:59.185953 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:13:59.185953 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:13:59.185953 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:13:59.186716 master-0 kubenswrapper[7926]: I0216 21:13:59.185952 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:14:00.185815 master-0 kubenswrapper[7926]: I0216 21:14:00.185104 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:14:00.185815 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:14:00.185815 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:14:00.185815 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:14:00.185815 master-0 kubenswrapper[7926]: I0216 21:14:00.185168 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:14:01.185346 master-0 kubenswrapper[7926]: I0216 21:14:01.185288 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:14:01.185346 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 
21:14:01.185346 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:14:01.185346 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:14:01.185782 master-0 kubenswrapper[7926]: I0216 21:14:01.185360 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:14:01.185782 master-0 kubenswrapper[7926]: I0216 21:14:01.185413 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:14:01.186157 master-0 kubenswrapper[7926]: I0216 21:14:01.186114 7926 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"f1ed58b2ccf00425ebf16fa5a6dffc055e3422108b96a5f2732ff92f9613603a"} pod="openshift-ingress/router-default-864ddd5f56-z4bnk" containerMessage="Container router failed startup probe, will be restarted" Feb 16 21:14:01.186562 master-0 kubenswrapper[7926]: I0216 21:14:01.186165 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" containerID="cri-o://f1ed58b2ccf00425ebf16fa5a6dffc055e3422108b96a5f2732ff92f9613603a" gracePeriod=3600 Feb 16 21:14:01.777103 master-0 kubenswrapper[7926]: E0216 21:14:01.776993 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 16 21:14:03.739254 master-0 kubenswrapper[7926]: I0216 21:14:03.739157 7926 scope.go:117] "RemoveContainer" 
containerID="6774523bbae3d7abd16dc2e39c9e808fff70ea7aaf2e57c4f294e7c707bbf785" Feb 16 21:14:03.740545 master-0 kubenswrapper[7926]: E0216 21:14:03.739555 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-7bc947fc7d-xwptz_openshift-machine-api(8b648d9e-a892-4951-b0e2-fed6b16273d4)\"" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" podUID="8b648d9e-a892-4951-b0e2-fed6b16273d4" Feb 16 21:14:04.658853 master-0 kubenswrapper[7926]: E0216 21:14:04.658559 7926 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1894d5b0c584e6d3 kube-system 9394 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:9460ca0802075a8a6a10d7b3e6052c4d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:58:28 +0000 UTC,LastTimestamp:2026-02-16 21:11:26.210440269 +0000 UTC m=+857.845340569,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 21:14:05.738426 master-0 kubenswrapper[7926]: I0216 21:14:05.738250 7926 scope.go:117] "RemoveContainer" containerID="b62e91fd80c5fe5b3e86f231592d3a6b2b476717e7f1ec56b415d7521e1bb557" Feb 16 21:14:06.011768 master-0 kubenswrapper[7926]: E0216 21:14:06.011700 7926 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" 
pod="openshift-etcd/etcd-master-0" Feb 16 21:14:06.319696 master-0 kubenswrapper[7926]: I0216 21:14:06.319570 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406"} Feb 16 21:14:06.322306 master-0 kubenswrapper[7926]: I0216 21:14:06.322284 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"f05e9f801d429c919b941187b2782d4308239d42ccb37b0311a3c95f1e719297"} Feb 16 21:14:06.710204 master-0 kubenswrapper[7926]: I0216 21:14:06.710148 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:14:07.336687 master-0 kubenswrapper[7926]: I0216 21:14:07.336577 7926 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="dc5b0952-527e-40f6-84fa-362aa0d5b6f8" Feb 16 21:14:07.336687 master-0 kubenswrapper[7926]: I0216 21:14:07.336620 7926 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="dc5b0952-527e-40f6-84fa-362aa0d5b6f8" Feb 16 21:14:07.337899 master-0 kubenswrapper[7926]: I0216 21:14:07.336795 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"e441fffa8b12dc73c314b5893d29a697010cb53854ce90d32eb7b68a2f5ca29e"} Feb 16 21:14:07.337899 master-0 kubenswrapper[7926]: I0216 21:14:07.336831 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"863a190d51b525e4103773cf5a7867cf67cf97e7a4a1ede81363f11e4c1dd6b7"} Feb 16 21:14:07.337899 master-0 kubenswrapper[7926]: I0216 
21:14:07.336840 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"d8eac33db7a92bab03def14450dd1750a954d1d9b9cc124c7deead003bb6996a"} Feb 16 21:14:07.337899 master-0 kubenswrapper[7926]: I0216 21:14:07.336851 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"8fc7fdd0d480b1fd68681ee30d8785c154cbf24f0c4e8319840eb7818ec82950"} Feb 16 21:14:08.654360 master-0 kubenswrapper[7926]: E0216 21:14:08.654281 7926 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:14:08.748321 master-0 kubenswrapper[7926]: I0216 21:14:08.748234 7926 status_manager.go:851] "Failed to get status for pod" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods bootstrap-kube-controller-manager-master-0)" Feb 16 21:14:09.711288 master-0 kubenswrapper[7926]: I0216 21:14:09.711189 7926 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:14:11.739740 master-0 kubenswrapper[7926]: I0216 21:14:11.739602 7926 scope.go:117] "RemoveContainer" containerID="67a3e9d9b5f56d4ee0c0f00f8a41a1f28f49d33cce601ce8e280273be299fa4f" Feb 16 21:14:11.740402 master-0 
kubenswrapper[7926]: E0216 21:14:11.740103 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-pc6x9_openshift-cluster-storage-operator(b1ac9776-54c4-46ce-b898-01c8cf35e593)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" podUID="b1ac9776-54c4-46ce-b898-01c8cf35e593" Feb 16 21:14:11.767931 master-0 kubenswrapper[7926]: I0216 21:14:11.767811 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Feb 16 21:14:11.767931 master-0 kubenswrapper[7926]: I0216 21:14:11.767940 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Feb 16 21:14:15.979965 master-0 kubenswrapper[7926]: I0216 21:14:15.979901 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:14:17.313689 master-0 kubenswrapper[7926]: E0216 21:14:17.313577 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-ingress-canary/ingress-canary-l44qd" podUID="0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b" Feb 16 21:14:17.409785 master-0 kubenswrapper[7926]: I0216 21:14:17.409688 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-l44qd" Feb 16 21:14:17.739201 master-0 kubenswrapper[7926]: I0216 21:14:17.739023 7926 scope.go:117] "RemoveContainer" containerID="6774523bbae3d7abd16dc2e39c9e808fff70ea7aaf2e57c4f294e7c707bbf785" Feb 16 21:14:17.739501 master-0 kubenswrapper[7926]: E0216 21:14:17.739453 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-7bc947fc7d-xwptz_openshift-machine-api(8b648d9e-a892-4951-b0e2-fed6b16273d4)\"" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" podUID="8b648d9e-a892-4951-b0e2-fed6b16273d4" Feb 16 21:14:18.778196 master-0 kubenswrapper[7926]: E0216 21:14:18.778079 7926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 16 21:14:19.710949 master-0 kubenswrapper[7926]: I0216 21:14:19.710820 7926 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 21:14:20.812023 master-0 kubenswrapper[7926]: I0216 21:14:20.811948 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert\") pod \"ingress-canary-l44qd\" (UID: \"0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b\") " pod="openshift-ingress-canary/ingress-canary-l44qd" Feb 16 
21:14:20.812726 master-0 kubenswrapper[7926]: E0216 21:14:20.812139 7926 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 16 21:14:20.812726 master-0 kubenswrapper[7926]: E0216 21:14:20.812235 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert podName:0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b nodeName:}" failed. No retries permitted until 2026-02-16 21:16:22.812212115 +0000 UTC m=+1154.447112505 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert") pod "ingress-canary-l44qd" (UID: "0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b") : secret "canary-serving-cert" not found Feb 16 21:14:21.664884 master-0 kubenswrapper[7926]: I0216 21:14:21.664781 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 21:14:21.787102 master-0 kubenswrapper[7926]: I0216 21:14:21.787039 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Feb 16 21:14:25.738642 master-0 kubenswrapper[7926]: I0216 21:14:25.738555 7926 scope.go:117] "RemoveContainer" containerID="67a3e9d9b5f56d4ee0c0f00f8a41a1f28f49d33cce601ce8e280273be299fa4f" Feb 16 21:14:25.739941 master-0 kubenswrapper[7926]: E0216 21:14:25.738825 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-pc6x9_openshift-cluster-storage-operator(b1ac9776-54c4-46ce-b898-01c8cf35e593)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" podUID="b1ac9776-54c4-46ce-b898-01c8cf35e593" Feb 16 21:14:26.798304 master-0 
kubenswrapper[7926]: I0216 21:14:26.798205 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Feb 16 21:14:29.711200 master-0 kubenswrapper[7926]: I0216 21:14:29.711051 7926 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": context deadline exceeded" Feb 16 21:14:29.712062 master-0 kubenswrapper[7926]: I0216 21:14:29.711228 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:14:29.712342 master-0 kubenswrapper[7926]: I0216 21:14:29.712272 7926 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 16 21:14:29.712491 master-0 kubenswrapper[7926]: I0216 21:14:29.712414 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" containerID="cri-o://a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406" gracePeriod=30 Feb 16 21:14:29.739937 master-0 kubenswrapper[7926]: I0216 21:14:29.739856 7926 scope.go:117] "RemoveContainer" containerID="6774523bbae3d7abd16dc2e39c9e808fff70ea7aaf2e57c4f294e7c707bbf785" Feb 16 21:14:29.740428 master-0 kubenswrapper[7926]: E0216 21:14:29.740371 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 
1m20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-7bc947fc7d-xwptz_openshift-machine-api(8b648d9e-a892-4951-b0e2-fed6b16273d4)\"" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" podUID="8b648d9e-a892-4951-b0e2-fed6b16273d4" Feb 16 21:14:29.835879 master-0 kubenswrapper[7926]: E0216 21:14:29.835780 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:14:30.522147 master-0 kubenswrapper[7926]: I0216 21:14:30.522081 7926 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406" exitCode=2 Feb 16 21:14:30.522447 master-0 kubenswrapper[7926]: I0216 21:14:30.522207 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406"} Feb 16 21:14:30.522447 master-0 kubenswrapper[7926]: I0216 21:14:30.522331 7926 scope.go:117] "RemoveContainer" containerID="b62e91fd80c5fe5b3e86f231592d3a6b2b476717e7f1ec56b415d7521e1bb557" Feb 16 21:14:30.523169 master-0 kubenswrapper[7926]: I0216 21:14:30.523115 7926 scope.go:117] "RemoveContainer" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406" Feb 16 21:14:30.523511 master-0 kubenswrapper[7926]: E0216 21:14:30.523462 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:14:36.738871 master-0 kubenswrapper[7926]: I0216 21:14:36.738778 7926 scope.go:117] "RemoveContainer" containerID="67a3e9d9b5f56d4ee0c0f00f8a41a1f28f49d33cce601ce8e280273be299fa4f" Feb 16 21:14:36.988645 master-0 kubenswrapper[7926]: I0216 21:14:36.988566 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:14:36.989398 master-0 kubenswrapper[7926]: I0216 21:14:36.989295 7926 scope.go:117] "RemoveContainer" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406" Feb 16 21:14:36.989593 master-0 kubenswrapper[7926]: E0216 21:14:36.989551 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:14:37.583645 master-0 kubenswrapper[7926]: I0216 21:14:37.583575 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/4.log" Feb 16 21:14:37.583645 master-0 kubenswrapper[7926]: I0216 21:14:37.583667 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" event={"ID":"b1ac9776-54c4-46ce-b898-01c8cf35e593","Type":"ContainerStarted","Data":"473abb156ae2a59c96465c39d4a668c4215a0ddadc4067a2a5c3edc0e671f3a6"} Feb 16 21:14:38.663051 
master-0 kubenswrapper[7926]: E0216 21:14:38.662806 7926 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1894d5b0c674492b kube-system 9396 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:9460ca0802075a8a6a10d7b3e6052c4d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:58:28 +0000 UTC,LastTimestamp:2026-02-16 21:11:26.22301743 +0000 UTC m=+857.857917730,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 21:14:40.738840 master-0 kubenswrapper[7926]: I0216 21:14:40.738781 7926 scope.go:117] "RemoveContainer" containerID="6774523bbae3d7abd16dc2e39c9e808fff70ea7aaf2e57c4f294e7c707bbf785" Feb 16 21:14:40.740218 master-0 kubenswrapper[7926]: E0216 21:14:40.740148 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-7bc947fc7d-xwptz_openshift-machine-api(8b648d9e-a892-4951-b0e2-fed6b16273d4)\"" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" podUID="8b648d9e-a892-4951-b0e2-fed6b16273d4" Feb 16 21:14:41.339812 master-0 kubenswrapper[7926]: E0216 21:14:41.339718 7926 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 16 21:14:41.620113 master-0 kubenswrapper[7926]: I0216 21:14:41.619953 7926 
kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="dc5b0952-527e-40f6-84fa-362aa0d5b6f8" Feb 16 21:14:41.620113 master-0 kubenswrapper[7926]: I0216 21:14:41.620024 7926 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="dc5b0952-527e-40f6-84fa-362aa0d5b6f8" Feb 16 21:14:42.588192 master-0 kubenswrapper[7926]: E0216 21:14:42.588044 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[prometheus-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" podUID="a0b7a368-1408-4fc3-ae25-4613b74e7fca" Feb 16 21:14:42.626211 master-0 kubenswrapper[7926]: I0216 21:14:42.626117 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:14:43.366762 master-0 kubenswrapper[7926]: I0216 21:14:43.366618 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:14:43.367089 master-0 kubenswrapper[7926]: E0216 21:14:43.367000 7926 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 16 21:14:43.367262 master-0 kubenswrapper[7926]: E0216 21:14:43.367212 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls podName:a0b7a368-1408-4fc3-ae25-4613b74e7fca nodeName:}" failed. No retries permitted until 2026-02-16 21:16:45.367175186 +0000 UTC m=+1177.002075526 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-9xc4n" (UID: "a0b7a368-1408-4fc3-ae25-4613b74e7fca") : secret "prometheus-operator-tls" not found Feb 16 21:14:47.673476 master-0 kubenswrapper[7926]: I0216 21:14:47.673348 7926 generic.go:334] "Generic (PLEG): container finished" podID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerID="f1ed58b2ccf00425ebf16fa5a6dffc055e3422108b96a5f2732ff92f9613603a" exitCode=0 Feb 16 21:14:47.674530 master-0 kubenswrapper[7926]: I0216 21:14:47.673464 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" event={"ID":"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee","Type":"ContainerDied","Data":"f1ed58b2ccf00425ebf16fa5a6dffc055e3422108b96a5f2732ff92f9613603a"} Feb 16 21:14:47.674530 master-0 kubenswrapper[7926]: I0216 21:14:47.673587 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" event={"ID":"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee","Type":"ContainerStarted","Data":"998a9ae2beb3b1a75e1664da2f38a4c4498101aa5035a2ceca565eb8eafef20a"} Feb 16 21:14:47.674530 master-0 kubenswrapper[7926]: I0216 21:14:47.673676 7926 scope.go:117] "RemoveContainer" containerID="922b3b9a2ab72ca8bb93946974e3710fc89f41db642b5f99391c37114b12712f" Feb 16 21:14:47.740773 master-0 kubenswrapper[7926]: I0216 21:14:47.740626 7926 scope.go:117] "RemoveContainer" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406" Feb 16 21:14:47.741312 master-0 kubenswrapper[7926]: E0216 21:14:47.741233 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager 
pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:14:48.183071 master-0 kubenswrapper[7926]: I0216 21:14:48.182950 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:14:48.187184 master-0 kubenswrapper[7926]: I0216 21:14:48.187105 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:14:48.187184 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:14:48.187184 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:14:48.187184 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:14:48.187564 master-0 kubenswrapper[7926]: I0216 21:14:48.187190 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:14:49.188207 master-0 kubenswrapper[7926]: I0216 21:14:49.188052 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:14:49.188207 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:14:49.188207 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:14:49.188207 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:14:49.189260 master-0 kubenswrapper[7926]: I0216 21:14:49.188232 7926 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:14:50.186173 master-0 kubenswrapper[7926]: I0216 21:14:50.186060 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:14:50.186173 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:14:50.186173 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:14:50.186173 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:14:50.186733 master-0 kubenswrapper[7926]: I0216 21:14:50.186190 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:14:51.186299 master-0 kubenswrapper[7926]: I0216 21:14:51.186208 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:14:51.186299 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:14:51.186299 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:14:51.186299 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:14:51.187121 master-0 kubenswrapper[7926]: I0216 21:14:51.186301 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:14:51.739378 
master-0 kubenswrapper[7926]: I0216 21:14:51.739218 7926 scope.go:117] "RemoveContainer" containerID="6774523bbae3d7abd16dc2e39c9e808fff70ea7aaf2e57c4f294e7c707bbf785" Feb 16 21:14:51.739927 master-0 kubenswrapper[7926]: E0216 21:14:51.739836 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-7bc947fc7d-xwptz_openshift-machine-api(8b648d9e-a892-4951-b0e2-fed6b16273d4)\"" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" podUID="8b648d9e-a892-4951-b0e2-fed6b16273d4" Feb 16 21:14:52.185765 master-0 kubenswrapper[7926]: I0216 21:14:52.185625 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:14:52.185765 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:14:52.185765 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:14:52.185765 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:14:52.186102 master-0 kubenswrapper[7926]: I0216 21:14:52.185824 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:14:53.187117 master-0 kubenswrapper[7926]: I0216 21:14:53.187005 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:14:53.187117 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld 
Feb 16 21:14:53.187117 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:14:53.187117 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:14:53.188357 master-0 kubenswrapper[7926]: I0216 21:14:53.187137 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:14:54.186880 master-0 kubenswrapper[7926]: I0216 21:14:54.186752 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:14:54.186880 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:14:54.186880 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:14:54.186880 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:14:54.188063 master-0 kubenswrapper[7926]: I0216 21:14:54.186923 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:14:55.182413 master-0 kubenswrapper[7926]: I0216 21:14:55.182288 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:14:55.184892 master-0 kubenswrapper[7926]: I0216 21:14:55.184840 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:14:55.184892 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 
21:14:55.184892 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:14:55.184892 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:14:55.185091 master-0 kubenswrapper[7926]: I0216 21:14:55.184908 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:14:56.187411 master-0 kubenswrapper[7926]: I0216 21:14:56.187320 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:14:56.187411 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:14:56.187411 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:14:56.187411 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:14:56.189385 master-0 kubenswrapper[7926]: I0216 21:14:56.187442 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:14:57.185208 master-0 kubenswrapper[7926]: I0216 21:14:57.185083 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:14:57.185208 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:14:57.185208 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:14:57.185208 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:14:57.185949 master-0 kubenswrapper[7926]: I0216 21:14:57.185229 
7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:14:58.185535 master-0 kubenswrapper[7926]: I0216 21:14:58.185452 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:14:58.185535 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:14:58.185535 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:14:58.185535 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:14:58.186372 master-0 kubenswrapper[7926]: I0216 21:14:58.185563 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:14:59.186412 master-0 kubenswrapper[7926]: I0216 21:14:59.186327 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:14:59.186412 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:14:59.186412 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:14:59.186412 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:14:59.187395 master-0 kubenswrapper[7926]: I0216 21:14:59.186436 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 16 21:14:59.738877 master-0 kubenswrapper[7926]: I0216 21:14:59.738801 7926 scope.go:117] "RemoveContainer" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406" Feb 16 21:14:59.739462 master-0 kubenswrapper[7926]: E0216 21:14:59.739088 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:15:00.185016 master-0 kubenswrapper[7926]: I0216 21:15:00.184947 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:00.185016 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:00.185016 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:00.185016 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:00.185016 master-0 kubenswrapper[7926]: I0216 21:15:00.185011 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:01.185106 master-0 kubenswrapper[7926]: I0216 21:15:01.185030 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:01.185106 master-0 kubenswrapper[7926]: [-]has-synced 
failed: reason withheld
Feb 16 21:15:01.185106 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:01.185106 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:01.185106 master-0 kubenswrapper[7926]: I0216 21:15:01.185090 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:02.184721 master-0 kubenswrapper[7926]: I0216 21:15:02.184667 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:15:02.184721 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:15:02.184721 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:02.184721 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:02.184993 master-0 kubenswrapper[7926]: I0216 21:15:02.184737 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:02.738811 master-0 kubenswrapper[7926]: I0216 21:15:02.738754 7926 scope.go:117] "RemoveContainer" containerID="6774523bbae3d7abd16dc2e39c9e808fff70ea7aaf2e57c4f294e7c707bbf785"
Feb 16 21:15:03.185069 master-0 kubenswrapper[7926]: I0216 21:15:03.184988 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:15:03.185069 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:15:03.185069 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:03.185069 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:03.185069 master-0 kubenswrapper[7926]: I0216 21:15:03.185060 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:03.806915 master-0 kubenswrapper[7926]: I0216 21:15:03.806852 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/4.log"
Feb 16 21:15:03.807513 master-0 kubenswrapper[7926]: I0216 21:15:03.807350 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" event={"ID":"8b648d9e-a892-4951-b0e2-fed6b16273d4","Type":"ContainerStarted","Data":"6a46714853e2a885d7f0ea06667526f3f7b240b0bd635da8d5cae43fd1dadc87"}
Feb 16 21:15:04.186369 master-0 kubenswrapper[7926]: I0216 21:15:04.186232 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:15:04.186369 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:15:04.186369 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:04.186369 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:04.186369 master-0 kubenswrapper[7926]: I0216 21:15:04.186289 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:05.185435 master-0 kubenswrapper[7926]: I0216 21:15:05.185344 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:15:05.185435 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:15:05.185435 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:05.185435 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:05.185435 master-0 kubenswrapper[7926]: I0216 21:15:05.185443 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:06.186784 master-0 kubenswrapper[7926]: I0216 21:15:06.186690 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:15:06.186784 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:15:06.186784 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:06.186784 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:06.188449 master-0 kubenswrapper[7926]: I0216 21:15:06.186809 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:07.186956 master-0 kubenswrapper[7926]: I0216 21:15:07.186493 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:15:07.186956 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:15:07.186956 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:07.186956 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:07.186956 master-0 kubenswrapper[7926]: I0216 21:15:07.186632 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:07.848128 master-0 kubenswrapper[7926]: I0216 21:15:07.848022 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/5.log"
Feb 16 21:15:07.848782 master-0 kubenswrapper[7926]: I0216 21:15:07.848731 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/4.log"
Feb 16 21:15:07.848907 master-0 kubenswrapper[7926]: I0216 21:15:07.848794 7926 generic.go:334] "Generic (PLEG): container finished" podID="b1ac9776-54c4-46ce-b898-01c8cf35e593" containerID="473abb156ae2a59c96465c39d4a668c4215a0ddadc4067a2a5c3edc0e671f3a6" exitCode=1
Feb 16 21:15:07.848907 master-0 kubenswrapper[7926]: I0216 21:15:07.848830 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" event={"ID":"b1ac9776-54c4-46ce-b898-01c8cf35e593","Type":"ContainerDied","Data":"473abb156ae2a59c96465c39d4a668c4215a0ddadc4067a2a5c3edc0e671f3a6"}
Feb 16 21:15:07.848907 master-0 kubenswrapper[7926]: I0216 21:15:07.848865 7926 scope.go:117] "RemoveContainer" containerID="67a3e9d9b5f56d4ee0c0f00f8a41a1f28f49d33cce601ce8e280273be299fa4f"
Feb 16 21:15:07.849840 master-0 kubenswrapper[7926]: I0216 21:15:07.849785 7926 scope.go:117] "RemoveContainer" containerID="473abb156ae2a59c96465c39d4a668c4215a0ddadc4067a2a5c3edc0e671f3a6"
Feb 16 21:15:07.850235 master-0 kubenswrapper[7926]: E0216 21:15:07.850170 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-pc6x9_openshift-cluster-storage-operator(b1ac9776-54c4-46ce-b898-01c8cf35e593)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" podUID="b1ac9776-54c4-46ce-b898-01c8cf35e593"
Feb 16 21:15:08.186005 master-0 kubenswrapper[7926]: I0216 21:15:08.185733 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:15:08.186005 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:15:08.186005 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:08.186005 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:08.186005 master-0 kubenswrapper[7926]: I0216 21:15:08.185842 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:08.749751 master-0 kubenswrapper[7926]: I0216 21:15:08.749584 7926 status_manager.go:851] "Failed to get status for pod" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods router-default-864ddd5f56-z4bnk)"
Feb 16 21:15:08.861384 master-0 kubenswrapper[7926]: I0216 21:15:08.861252 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/5.log"
Feb 16 21:15:09.186466 master-0 kubenswrapper[7926]: I0216 21:15:09.186243 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:15:09.186466 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:15:09.186466 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:09.186466 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:09.186466 master-0 kubenswrapper[7926]: I0216 21:15:09.186356 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:10.186243 master-0 kubenswrapper[7926]: I0216 21:15:10.186097 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:15:10.186243 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:15:10.186243 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:10.186243 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:10.187462 master-0 kubenswrapper[7926]: I0216 21:15:10.186240 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:11.186573 master-0 kubenswrapper[7926]: I0216 21:15:11.186449 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:15:11.186573 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:15:11.186573 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:11.186573 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:11.187887 master-0 kubenswrapper[7926]: I0216 21:15:11.186599 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:12.186781 master-0 kubenswrapper[7926]: I0216 21:15:12.186439 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:15:12.186781 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:15:12.186781 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:12.186781 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:12.187939 master-0 kubenswrapper[7926]: I0216 21:15:12.186811 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:12.666943 master-0 kubenswrapper[7926]: E0216 21:15:12.666682 7926 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{network-node-identity-tpj6f.1894d5c27d632ff5 openshift-network-node-identity 9499 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-network-node-identity,Name:network-node-identity-tpj6f,UID:88f19cea-60ed-4977-a906-75deec51fc3d,APIVersion:v1,ResourceVersion:3333,FieldPath:spec.containers{approver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 20:59:44 +0000 UTC,LastTimestamp:2026-02-16 21:11:58.199129195 +0000 UTC m=+889.834029525,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 21:15:12.739697 master-0 kubenswrapper[7926]: I0216 21:15:12.739571 7926 scope.go:117] "RemoveContainer" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406"
Feb 16 21:15:12.740164 master-0 kubenswrapper[7926]: E0216 21:15:12.739963 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3"
Feb 16 21:15:13.186898 master-0 kubenswrapper[7926]: I0216 21:15:13.186637 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:15:13.186898 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:15:13.186898 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:13.186898 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:13.186898 master-0 kubenswrapper[7926]: I0216 21:15:13.186882 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:14.186115 master-0 kubenswrapper[7926]: I0216 21:15:14.186010 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:15:14.186115 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:15:14.186115 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:14.186115 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:14.186115 master-0 kubenswrapper[7926]: I0216 21:15:14.186115 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:15.185016 master-0 kubenswrapper[7926]: I0216 21:15:15.184911 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:15:15.185016 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:15:15.185016 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:15.185016 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:15.185618 master-0 kubenswrapper[7926]: I0216 21:15:15.185058 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:15.623807 master-0 kubenswrapper[7926]: E0216 21:15:15.623684 7926 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Feb 16 21:15:16.185852 master-0 kubenswrapper[7926]: I0216 21:15:16.185774 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:15:16.185852 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:15:16.185852 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:16.185852 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:16.186610 master-0 kubenswrapper[7926]: I0216 21:15:16.185863 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:17.185391 master-0 kubenswrapper[7926]: I0216 21:15:17.185273 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:15:17.185391 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:15:17.185391 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:17.185391 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:17.185982 master-0 kubenswrapper[7926]: I0216 21:15:17.185396 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:17.949505 master-0 kubenswrapper[7926]: I0216 21:15:17.949404 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/5.log"
Feb 16 21:15:17.950224 master-0 kubenswrapper[7926]: I0216 21:15:17.950171 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/4.log"
Feb 16 21:15:17.951054 master-0 kubenswrapper[7926]: I0216 21:15:17.950963 7926 generic.go:334] "Generic (PLEG): container finished" podID="cef33294-81fb-41a2-811d-2565f94514d1" containerID="a536172006966fa7da41ae7ff0c679f29f5343cacc6f612c4fa109bc18f3bbce" exitCode=1
Feb 16 21:15:17.951054 master-0 kubenswrapper[7926]: I0216 21:15:17.951026 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" event={"ID":"cef33294-81fb-41a2-811d-2565f94514d1","Type":"ContainerDied","Data":"a536172006966fa7da41ae7ff0c679f29f5343cacc6f612c4fa109bc18f3bbce"}
Feb 16 21:15:17.951268 master-0 kubenswrapper[7926]: I0216 21:15:17.951093 7926 scope.go:117] "RemoveContainer" containerID="4007378c35279e107179280f5b478a33e451c6d5ec64c7c97a91228d94179cd2"
Feb 16 21:15:17.951905 master-0 kubenswrapper[7926]: I0216 21:15:17.951852 7926 scope.go:117] "RemoveContainer" containerID="a536172006966fa7da41ae7ff0c679f29f5343cacc6f612c4fa109bc18f3bbce"
Feb 16 21:15:17.952994 master-0 kubenswrapper[7926]: E0216 21:15:17.952378 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1"
Feb 16 21:15:18.186532 master-0 kubenswrapper[7926]: I0216 21:15:18.186410 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:15:18.186532 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:15:18.186532 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:18.186532 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:18.187643 master-0 kubenswrapper[7926]: I0216 21:15:18.186545 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:18.961612 master-0 kubenswrapper[7926]: I0216 21:15:18.961510 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/5.log"
Feb 16 21:15:19.186052 master-0 kubenswrapper[7926]: I0216 21:15:19.185971 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:15:19.186052 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:15:19.186052 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:19.186052 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:19.186333 master-0 kubenswrapper[7926]: I0216 21:15:19.186063 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:20.186726 master-0 kubenswrapper[7926]: I0216 21:15:20.186601 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:15:20.186726 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:15:20.186726 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:20.186726 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:20.187758 master-0 kubenswrapper[7926]: I0216 21:15:20.186740 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:21.185667 master-0 kubenswrapper[7926]: I0216 21:15:21.185585 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:15:21.185667 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:15:21.185667 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:21.185667 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:21.185964 master-0 kubenswrapper[7926]: I0216 21:15:21.185687 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:21.985107 master-0 kubenswrapper[7926]: I0216 21:15:21.984983 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-7p9ft_7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/kube-controller-manager-operator/5.log"
Feb 16 21:15:21.986313 master-0 kubenswrapper[7926]: I0216 21:15:21.985779 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-7p9ft_7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/kube-controller-manager-operator/4.log"
Feb 16 21:15:21.986313 master-0 kubenswrapper[7926]: I0216 21:15:21.985866 7926 generic.go:334] "Generic (PLEG): container finished" podID="7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e" containerID="4b9eed56cd9de27df8732f0bf589198f3bec398bab1ee5d8d5d4047198bdc2b3" exitCode=1
Feb 16 21:15:21.986313 master-0 kubenswrapper[7926]: I0216 21:15:21.985957 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" event={"ID":"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e","Type":"ContainerDied","Data":"4b9eed56cd9de27df8732f0bf589198f3bec398bab1ee5d8d5d4047198bdc2b3"}
Feb 16 21:15:21.986313 master-0 kubenswrapper[7926]: I0216 21:15:21.986006 7926 scope.go:117] "RemoveContainer" containerID="35ed53f7c30fa9921f8cd975c0172c21b8f110abc5d358e84c90a7ea7b1226a7"
Feb 16 21:15:21.987932 master-0 kubenswrapper[7926]: I0216 21:15:21.987170 7926 scope.go:117] "RemoveContainer" containerID="4b9eed56cd9de27df8732f0bf589198f3bec398bab1ee5d8d5d4047198bdc2b3"
Feb 16 21:15:21.987932 master-0 kubenswrapper[7926]: E0216 21:15:21.987773 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager-operator pod=kube-controller-manager-operator-78ff47c7c5-7p9ft_openshift-kube-controller-manager-operator(7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" podUID="7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e"
Feb 16 21:15:21.987932 master-0 kubenswrapper[7926]: I0216 21:15:21.987872 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-n4hfs_1b61063e-775e-421d-bf73-a6ef134293a0/network-operator/4.log"
Feb 16 21:15:21.988865 master-0 kubenswrapper[7926]: I0216 21:15:21.988828 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-n4hfs_1b61063e-775e-421d-bf73-a6ef134293a0/network-operator/3.log"
Feb 16 21:15:21.988966 master-0 kubenswrapper[7926]: I0216 21:15:21.988871 7926 generic.go:334] "Generic (PLEG): container finished" podID="1b61063e-775e-421d-bf73-a6ef134293a0" containerID="aab44606d671f216ff3793ef915c84f815301082904e4bc4a12b70d23d7c13c3" exitCode=1
Feb 16 21:15:21.988966 master-0 kubenswrapper[7926]: I0216 21:15:21.988895 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" event={"ID":"1b61063e-775e-421d-bf73-a6ef134293a0","Type":"ContainerDied","Data":"aab44606d671f216ff3793ef915c84f815301082904e4bc4a12b70d23d7c13c3"}
Feb 16 21:15:21.989407 master-0 kubenswrapper[7926]: I0216 21:15:21.989361 7926 scope.go:117] "RemoveContainer" containerID="aab44606d671f216ff3793ef915c84f815301082904e4bc4a12b70d23d7c13c3"
Feb 16 21:15:21.989617 master-0 kubenswrapper[7926]: E0216 21:15:21.989580 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=network-operator pod=network-operator-6fcf4c966-n4hfs_openshift-network-operator(1b61063e-775e-421d-bf73-a6ef134293a0)\"" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" podUID="1b61063e-775e-421d-bf73-a6ef134293a0"
Feb 16 21:15:22.021093 master-0 kubenswrapper[7926]: I0216 21:15:22.021051 7926 scope.go:117] "RemoveContainer" containerID="98437a21e834f809a7d3a2fcc7ab7ac439c7d9370d526734b7d11f63840cb92d"
Feb 16 21:15:22.186688 master-0 kubenswrapper[7926]: I0216 21:15:22.186584 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:15:22.186688 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:15:22.186688 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:22.186688 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:22.186688 master-0 kubenswrapper[7926]: I0216 21:15:22.186688 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:22.739178 master-0 kubenswrapper[7926]: I0216 21:15:22.739092 7926 scope.go:117] "RemoveContainer" containerID="473abb156ae2a59c96465c39d4a668c4215a0ddadc4067a2a5c3edc0e671f3a6"
Feb 16 21:15:22.739411 master-0 kubenswrapper[7926]: E0216 21:15:22.739335 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-pc6x9_openshift-cluster-storage-operator(b1ac9776-54c4-46ce-b898-01c8cf35e593)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" podUID="b1ac9776-54c4-46ce-b898-01c8cf35e593"
Feb 16 21:15:23.004380 master-0 kubenswrapper[7926]: I0216 21:15:23.004312 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-56v4p_c7333319-3fe6-4b3f-b600-6b6df49fcaff/kube-storage-version-migrator-operator/5.log"
Feb 16 21:15:23.005301 master-0 kubenswrapper[7926]: I0216 21:15:23.005229 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-56v4p_c7333319-3fe6-4b3f-b600-6b6df49fcaff/kube-storage-version-migrator-operator/4.log"
Feb 16 21:15:23.005414 master-0 kubenswrapper[7926]: I0216 21:15:23.005355 7926 generic.go:334] "Generic (PLEG): container finished" podID="c7333319-3fe6-4b3f-b600-6b6df49fcaff" containerID="220f76e0bb64fd419313cb573cd97bbb54f9d2b5998f9525c7d9045abc13cfb5" exitCode=1
Feb 16 21:15:23.005556 master-0 kubenswrapper[7926]: I0216 21:15:23.005491 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" event={"ID":"c7333319-3fe6-4b3f-b600-6b6df49fcaff","Type":"ContainerDied","Data":"220f76e0bb64fd419313cb573cd97bbb54f9d2b5998f9525c7d9045abc13cfb5"}
Feb 16 21:15:23.005896 master-0 kubenswrapper[7926]: I0216 21:15:23.005849 7926 scope.go:117] "RemoveContainer" containerID="08b199e651bbf31337e0e421513ddb4e42db3e1be0a3d07452f74ea9c1f46046"
Feb 16 21:15:23.006287 master-0 kubenswrapper[7926]: I0216 21:15:23.006252 7926 scope.go:117] "RemoveContainer" containerID="220f76e0bb64fd419313cb573cd97bbb54f9d2b5998f9525c7d9045abc13cfb5"
Feb 16 21:15:23.006639 master-0 kubenswrapper[7926]: E0216 21:15:23.006588 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-storage-version-migrator-operator pod=kube-storage-version-migrator-operator-cd5474998-56v4p_openshift-kube-storage-version-migrator-operator(c7333319-3fe6-4b3f-b600-6b6df49fcaff)\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" podUID="c7333319-3fe6-4b3f-b600-6b6df49fcaff"
Feb 16 21:15:23.008491 master-0 kubenswrapper[7926]: I0216 21:15:23.008455 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-n4hfs_1b61063e-775e-421d-bf73-a6ef134293a0/network-operator/4.log"
Feb 16 21:15:23.011403 master-0 kubenswrapper[7926]: I0216 21:15:23.011367 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-7p9ft_7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/kube-controller-manager-operator/5.log"
Feb 16 21:15:23.185623 master-0 kubenswrapper[7926]: I0216 21:15:23.185522 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:15:23.185623 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:15:23.185623 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:23.185623 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:23.186262 master-0 kubenswrapper[7926]: I0216 21:15:23.185644 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:24.024197 master-0 kubenswrapper[7926]: I0216 21:15:24.024092 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-56v4p_c7333319-3fe6-4b3f-b600-6b6df49fcaff/kube-storage-version-migrator-operator/5.log"
Feb 16 21:15:24.185836 master-0 kubenswrapper[7926]: I0216 21:15:24.185489 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:15:24.185836 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:15:24.185836 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:24.185836 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:24.185836 master-0 kubenswrapper[7926]: I0216 21:15:24.185621 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:25.186708 master-0 kubenswrapper[7926]: I0216 21:15:25.186608 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:15:25.186708 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:15:25.186708 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:25.186708 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:25.187932 master-0 kubenswrapper[7926]: I0216 21:15:25.187607 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:26.185192 master-0 kubenswrapper[7926]: I0216 21:15:26.185089 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:15:26.185192 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:15:26.185192 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:15:26.185192 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:15:26.185192 master-0 kubenswrapper[7926]: I0216 21:15:26.185163 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:15:26.739641 master-0 kubenswrapper[7926]: I0216 21:15:26.739554 7926 scope.go:117] "RemoveContainer" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406"
Feb 16 21:15:26.740227 master-0 kubenswrapper[7926]: E0216 21:15:26.739916 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3"
Feb 16 21:15:27.047025 master-0 kubenswrapper[7926]: I0216 21:15:27.046970 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-8gnq5_27c20f63-9bfb-4703-94d5-0c65475e08d1/authentication-operator/6.log"
Feb 16
21:15:27.047485 master-0 kubenswrapper[7926]: I0216 21:15:27.047455 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-8gnq5_27c20f63-9bfb-4703-94d5-0c65475e08d1/authentication-operator/5.log" Feb 16 21:15:27.047533 master-0 kubenswrapper[7926]: I0216 21:15:27.047493 7926 generic.go:334] "Generic (PLEG): container finished" podID="27c20f63-9bfb-4703-94d5-0c65475e08d1" containerID="cbff59f9a87f22154ac16be0a1fd4153598047d145747da8c5ad418b6de5b9ba" exitCode=1 Feb 16 21:15:27.047569 master-0 kubenswrapper[7926]: I0216 21:15:27.047520 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" event={"ID":"27c20f63-9bfb-4703-94d5-0c65475e08d1","Type":"ContainerDied","Data":"cbff59f9a87f22154ac16be0a1fd4153598047d145747da8c5ad418b6de5b9ba"} Feb 16 21:15:27.047601 master-0 kubenswrapper[7926]: I0216 21:15:27.047571 7926 scope.go:117] "RemoveContainer" containerID="1280026270fafbe7904a661cf88a10d4f267040cb7cc3fb07ffaa22fce0b7d32" Feb 16 21:15:27.048582 master-0 kubenswrapper[7926]: I0216 21:15:27.048547 7926 scope.go:117] "RemoveContainer" containerID="cbff59f9a87f22154ac16be0a1fd4153598047d145747da8c5ad418b6de5b9ba" Feb 16 21:15:27.049224 master-0 kubenswrapper[7926]: E0216 21:15:27.048841 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=authentication-operator pod=authentication-operator-755d954778-8gnq5_openshift-authentication-operator(27c20f63-9bfb-4703-94d5-0c65475e08d1)\"" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1" Feb 16 21:15:27.185722 master-0 kubenswrapper[7926]: I0216 21:15:27.185547 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:27.185722 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:27.185722 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:27.185722 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:27.185722 master-0 kubenswrapper[7926]: I0216 21:15:27.185676 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:28.065003 master-0 kubenswrapper[7926]: I0216 21:15:28.064936 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-8gnq5_27c20f63-9bfb-4703-94d5-0c65475e08d1/authentication-operator/6.log" Feb 16 21:15:28.185134 master-0 kubenswrapper[7926]: I0216 21:15:28.185048 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:28.185134 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:28.185134 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:28.185134 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:28.185534 master-0 kubenswrapper[7926]: I0216 21:15:28.185144 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:29.184460 master-0 kubenswrapper[7926]: I0216 21:15:29.184386 7926 patch_prober.go:28] 
interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:29.184460 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:29.184460 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:29.184460 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:29.185117 master-0 kubenswrapper[7926]: I0216 21:15:29.184493 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:30.185830 master-0 kubenswrapper[7926]: I0216 21:15:30.185709 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:30.185830 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:30.185830 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:30.185830 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:30.186394 master-0 kubenswrapper[7926]: I0216 21:15:30.185843 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:31.184762 master-0 kubenswrapper[7926]: I0216 21:15:31.184678 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Feb 16 21:15:31.184762 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:31.184762 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:31.184762 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:31.185093 master-0 kubenswrapper[7926]: I0216 21:15:31.184769 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:31.739109 master-0 kubenswrapper[7926]: I0216 21:15:31.739016 7926 scope.go:117] "RemoveContainer" containerID="a536172006966fa7da41ae7ff0c679f29f5343cacc6f612c4fa109bc18f3bbce" Feb 16 21:15:31.740177 master-0 kubenswrapper[7926]: E0216 21:15:31.739248 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:15:32.185032 master-0 kubenswrapper[7926]: I0216 21:15:32.184964 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:32.185032 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:32.185032 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:32.185032 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:32.185032 master-0 kubenswrapper[7926]: I0216 21:15:32.185031 7926 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:33.185445 master-0 kubenswrapper[7926]: I0216 21:15:33.185356 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:33.185445 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:33.185445 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:33.185445 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:33.185445 master-0 kubenswrapper[7926]: I0216 21:15:33.185436 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:33.738219 master-0 kubenswrapper[7926]: I0216 21:15:33.738161 7926 scope.go:117] "RemoveContainer" containerID="473abb156ae2a59c96465c39d4a668c4215a0ddadc4067a2a5c3edc0e671f3a6" Feb 16 21:15:33.738406 master-0 kubenswrapper[7926]: I0216 21:15:33.738246 7926 scope.go:117] "RemoveContainer" containerID="aab44606d671f216ff3793ef915c84f815301082904e4bc4a12b70d23d7c13c3" Feb 16 21:15:33.738577 master-0 kubenswrapper[7926]: E0216 21:15:33.738531 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=network-operator pod=network-operator-6fcf4c966-n4hfs_openshift-network-operator(1b61063e-775e-421d-bf73-a6ef134293a0)\"" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" podUID="1b61063e-775e-421d-bf73-a6ef134293a0" Feb 16 21:15:33.738614 master-0 
kubenswrapper[7926]: E0216 21:15:33.738579 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-pc6x9_openshift-cluster-storage-operator(b1ac9776-54c4-46ce-b898-01c8cf35e593)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" podUID="b1ac9776-54c4-46ce-b898-01c8cf35e593" Feb 16 21:15:34.184630 master-0 kubenswrapper[7926]: I0216 21:15:34.184531 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:34.184630 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:34.184630 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:34.184630 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:34.184630 master-0 kubenswrapper[7926]: I0216 21:15:34.184596 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:35.185687 master-0 kubenswrapper[7926]: I0216 21:15:35.185553 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:35.185687 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:35.185687 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:35.185687 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:35.185687 master-0 
kubenswrapper[7926]: I0216 21:15:35.185615 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:35.738603 master-0 kubenswrapper[7926]: I0216 21:15:35.738540 7926 scope.go:117] "RemoveContainer" containerID="220f76e0bb64fd419313cb573cd97bbb54f9d2b5998f9525c7d9045abc13cfb5" Feb 16 21:15:35.738904 master-0 kubenswrapper[7926]: E0216 21:15:35.738848 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-storage-version-migrator-operator pod=kube-storage-version-migrator-operator-cd5474998-56v4p_openshift-kube-storage-version-migrator-operator(c7333319-3fe6-4b3f-b600-6b6df49fcaff)\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" podUID="c7333319-3fe6-4b3f-b600-6b6df49fcaff" Feb 16 21:15:36.185245 master-0 kubenswrapper[7926]: I0216 21:15:36.185181 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:36.185245 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:36.185245 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:36.185245 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:36.185538 master-0 kubenswrapper[7926]: I0216 21:15:36.185283 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
Feb 16 21:15:36.739232 master-0 kubenswrapper[7926]: I0216 21:15:36.739145 7926 scope.go:117] "RemoveContainer" containerID="4b9eed56cd9de27df8732f0bf589198f3bec398bab1ee5d8d5d4047198bdc2b3" Feb 16 21:15:36.739990 master-0 kubenswrapper[7926]: E0216 21:15:36.739520 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager-operator pod=kube-controller-manager-operator-78ff47c7c5-7p9ft_openshift-kube-controller-manager-operator(7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" podUID="7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e" Feb 16 21:15:36.859201 master-0 kubenswrapper[7926]: I0216 21:15:36.859047 7926 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 21:15:36.860242 master-0 kubenswrapper[7926]: I0216 21:15:36.860204 7926 scope.go:117] "RemoveContainer" containerID="cbff59f9a87f22154ac16be0a1fd4153598047d145747da8c5ad418b6de5b9ba" Feb 16 21:15:36.860626 master-0 kubenswrapper[7926]: E0216 21:15:36.860581 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=authentication-operator pod=authentication-operator-755d954778-8gnq5_openshift-authentication-operator(27c20f63-9bfb-4703-94d5-0c65475e08d1)\"" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1" Feb 16 21:15:37.185763 master-0 kubenswrapper[7926]: I0216 21:15:37.185605 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:37.185763 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:37.185763 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:37.185763 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:37.186196 master-0 kubenswrapper[7926]: I0216 21:15:37.185780 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:38.185931 master-0 kubenswrapper[7926]: I0216 21:15:38.185822 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:38.185931 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:38.185931 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:38.185931 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:38.187236 master-0 kubenswrapper[7926]: I0216 21:15:38.185939 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:39.185417 master-0 kubenswrapper[7926]: I0216 21:15:39.185350 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:39.185417 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:39.185417 master-0 kubenswrapper[7926]: 
[+]process-running ok Feb 16 21:15:39.185417 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:39.185744 master-0 kubenswrapper[7926]: I0216 21:15:39.185418 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:39.739174 master-0 kubenswrapper[7926]: I0216 21:15:39.739057 7926 scope.go:117] "RemoveContainer" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406" Feb 16 21:15:39.740188 master-0 kubenswrapper[7926]: E0216 21:15:39.739554 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:15:40.185378 master-0 kubenswrapper[7926]: I0216 21:15:40.185287 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:40.185378 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:40.185378 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:40.185378 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:40.185839 master-0 kubenswrapper[7926]: I0216 21:15:40.185385 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 
21:15:41.185698 master-0 kubenswrapper[7926]: I0216 21:15:41.185560 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:41.185698 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:41.185698 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:41.185698 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:41.185698 master-0 kubenswrapper[7926]: I0216 21:15:41.185638 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:42.186380 master-0 kubenswrapper[7926]: I0216 21:15:42.186279 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:42.186380 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:42.186380 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:42.186380 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:42.186991 master-0 kubenswrapper[7926]: I0216 21:15:42.186393 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:43.186733 master-0 kubenswrapper[7926]: I0216 21:15:43.186564 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:43.186733 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:43.186733 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:43.186733 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:43.188106 master-0 kubenswrapper[7926]: I0216 21:15:43.186760 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:44.186174 master-0 kubenswrapper[7926]: I0216 21:15:44.186085 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:44.186174 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:44.186174 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:44.186174 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:44.186470 master-0 kubenswrapper[7926]: I0216 21:15:44.186200 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:44.738866 master-0 kubenswrapper[7926]: I0216 21:15:44.738803 7926 scope.go:117] "RemoveContainer" containerID="473abb156ae2a59c96465c39d4a668c4215a0ddadc4067a2a5c3edc0e671f3a6" Feb 16 21:15:44.739781 master-0 kubenswrapper[7926]: E0216 21:15:44.739008 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed 
container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-pc6x9_openshift-cluster-storage-operator(b1ac9776-54c4-46ce-b898-01c8cf35e593)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" podUID="b1ac9776-54c4-46ce-b898-01c8cf35e593" Feb 16 21:15:45.184495 master-0 kubenswrapper[7926]: I0216 21:15:45.184441 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:45.184495 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:45.184495 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:45.184495 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:45.184495 master-0 kubenswrapper[7926]: I0216 21:15:45.184498 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:45.738609 master-0 kubenswrapper[7926]: I0216 21:15:45.738545 7926 scope.go:117] "RemoveContainer" containerID="aab44606d671f216ff3793ef915c84f815301082904e4bc4a12b70d23d7c13c3" Feb 16 21:15:45.739061 master-0 kubenswrapper[7926]: E0216 21:15:45.739013 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=network-operator pod=network-operator-6fcf4c966-n4hfs_openshift-network-operator(1b61063e-775e-421d-bf73-a6ef134293a0)\"" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" podUID="1b61063e-775e-421d-bf73-a6ef134293a0" Feb 16 21:15:46.186071 master-0 kubenswrapper[7926]: I0216 21:15:46.185955 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:46.186071 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:46.186071 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:46.186071 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:46.186714 master-0 kubenswrapper[7926]: I0216 21:15:46.186077 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:46.739210 master-0 kubenswrapper[7926]: I0216 21:15:46.739155 7926 scope.go:117] "RemoveContainer" containerID="a536172006966fa7da41ae7ff0c679f29f5343cacc6f612c4fa109bc18f3bbce" Feb 16 21:15:46.739801 master-0 kubenswrapper[7926]: I0216 21:15:46.739393 7926 scope.go:117] "RemoveContainer" containerID="220f76e0bb64fd419313cb573cd97bbb54f9d2b5998f9525c7d9045abc13cfb5" Feb 16 21:15:46.739801 master-0 kubenswrapper[7926]: E0216 21:15:46.739594 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:15:46.739801 master-0 kubenswrapper[7926]: E0216 21:15:46.739675 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-storage-version-migrator-operator 
pod=kube-storage-version-migrator-operator-cd5474998-56v4p_openshift-kube-storage-version-migrator-operator(c7333319-3fe6-4b3f-b600-6b6df49fcaff)\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" podUID="c7333319-3fe6-4b3f-b600-6b6df49fcaff" Feb 16 21:15:47.185709 master-0 kubenswrapper[7926]: I0216 21:15:47.185295 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:47.185709 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:47.185709 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:47.185709 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:47.185709 master-0 kubenswrapper[7926]: I0216 21:15:47.185380 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:48.185970 master-0 kubenswrapper[7926]: I0216 21:15:48.185893 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:48.185970 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:48.185970 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:48.185970 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:48.187266 master-0 kubenswrapper[7926]: I0216 21:15:48.186846 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:48.745959 master-0 kubenswrapper[7926]: I0216 21:15:48.745845 7926 scope.go:117] "RemoveContainer" containerID="cbff59f9a87f22154ac16be0a1fd4153598047d145747da8c5ad418b6de5b9ba" Feb 16 21:15:48.746312 master-0 kubenswrapper[7926]: E0216 21:15:48.746244 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=authentication-operator pod=authentication-operator-755d954778-8gnq5_openshift-authentication-operator(27c20f63-9bfb-4703-94d5-0c65475e08d1)\"" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1" Feb 16 21:15:49.186614 master-0 kubenswrapper[7926]: I0216 21:15:49.186486 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:49.186614 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:49.186614 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:49.186614 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:49.186614 master-0 kubenswrapper[7926]: I0216 21:15:49.186595 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:49.738912 master-0 kubenswrapper[7926]: I0216 21:15:49.738810 7926 scope.go:117] "RemoveContainer" containerID="4b9eed56cd9de27df8732f0bf589198f3bec398bab1ee5d8d5d4047198bdc2b3" Feb 16 21:15:49.739217 master-0 kubenswrapper[7926]: E0216 21:15:49.739140 7926 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager-operator pod=kube-controller-manager-operator-78ff47c7c5-7p9ft_openshift-kube-controller-manager-operator(7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" podUID="7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e" Feb 16 21:15:50.185492 master-0 kubenswrapper[7926]: I0216 21:15:50.185383 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:50.185492 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:50.185492 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:50.185492 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:50.185492 master-0 kubenswrapper[7926]: I0216 21:15:50.185458 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:51.185169 master-0 kubenswrapper[7926]: I0216 21:15:51.185131 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:51.185169 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:51.185169 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:51.185169 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:51.185789 master-0 
kubenswrapper[7926]: I0216 21:15:51.185765 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:52.184529 master-0 kubenswrapper[7926]: I0216 21:15:52.184472 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:52.184529 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:52.184529 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:52.184529 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:52.184940 master-0 kubenswrapper[7926]: I0216 21:15:52.184548 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:52.249297 master-0 kubenswrapper[7926]: I0216 21:15:52.249244 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-cl5ld_0b02b740-5698-4e9a-90fe-2873bd0b0958/kube-apiserver-operator/4.log" Feb 16 21:15:52.250253 master-0 kubenswrapper[7926]: I0216 21:15:52.250228 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-cl5ld_0b02b740-5698-4e9a-90fe-2873bd0b0958/kube-apiserver-operator/3.log" Feb 16 21:15:52.250374 master-0 kubenswrapper[7926]: I0216 21:15:52.250354 7926 generic.go:334] "Generic (PLEG): container finished" podID="0b02b740-5698-4e9a-90fe-2873bd0b0958" 
containerID="71d2f873a3383c5d4e4ea361c9b4723201e4600cb1f7ea3ef5cecd7778b39d86" exitCode=1 Feb 16 21:15:52.250498 master-0 kubenswrapper[7926]: I0216 21:15:52.250458 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" event={"ID":"0b02b740-5698-4e9a-90fe-2873bd0b0958","Type":"ContainerDied","Data":"71d2f873a3383c5d4e4ea361c9b4723201e4600cb1f7ea3ef5cecd7778b39d86"} Feb 16 21:15:52.250553 master-0 kubenswrapper[7926]: I0216 21:15:52.250534 7926 scope.go:117] "RemoveContainer" containerID="9aebe89f00ace7757c9f12dc1f4359a915f84e8eb395e1cdeae0962c4475a4af" Feb 16 21:15:52.251257 master-0 kubenswrapper[7926]: I0216 21:15:52.251222 7926 scope.go:117] "RemoveContainer" containerID="71d2f873a3383c5d4e4ea361c9b4723201e4600cb1f7ea3ef5cecd7778b39d86" Feb 16 21:15:52.251502 master-0 kubenswrapper[7926]: E0216 21:15:52.251473 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-54984b6678-cl5ld_openshift-kube-apiserver-operator(0b02b740-5698-4e9a-90fe-2873bd0b0958)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" podUID="0b02b740-5698-4e9a-90fe-2873bd0b0958" Feb 16 21:15:52.738622 master-0 kubenswrapper[7926]: I0216 21:15:52.738584 7926 scope.go:117] "RemoveContainer" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406" Feb 16 21:15:52.739178 master-0 kubenswrapper[7926]: E0216 21:15:52.739155 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:15:53.185296 master-0 kubenswrapper[7926]: I0216 21:15:53.185233 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:53.185296 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:53.185296 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:53.185296 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:53.185843 master-0 kubenswrapper[7926]: I0216 21:15:53.185291 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:53.258155 master-0 kubenswrapper[7926]: I0216 21:15:53.258078 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-cl5ld_0b02b740-5698-4e9a-90fe-2873bd0b0958/kube-apiserver-operator/4.log" Feb 16 21:15:54.185715 master-0 kubenswrapper[7926]: I0216 21:15:54.185557 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:54.185715 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:54.185715 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:54.185715 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:54.185715 master-0 kubenswrapper[7926]: I0216 21:15:54.185652 7926 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:55.185169 master-0 kubenswrapper[7926]: I0216 21:15:55.185087 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:55.185169 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:55.185169 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:55.185169 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:55.185169 master-0 kubenswrapper[7926]: I0216 21:15:55.185160 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:56.184421 master-0 kubenswrapper[7926]: I0216 21:15:56.184369 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:56.184421 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:56.184421 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:56.184421 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:56.184805 master-0 kubenswrapper[7926]: I0216 21:15:56.184444 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:57.186293 
master-0 kubenswrapper[7926]: I0216 21:15:57.186205 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:57.186293 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:57.186293 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:57.186293 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:57.186956 master-0 kubenswrapper[7926]: I0216 21:15:57.186319 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:57.738520 master-0 kubenswrapper[7926]: I0216 21:15:57.738460 7926 scope.go:117] "RemoveContainer" containerID="aab44606d671f216ff3793ef915c84f815301082904e4bc4a12b70d23d7c13c3" Feb 16 21:15:57.739265 master-0 kubenswrapper[7926]: I0216 21:15:57.739188 7926 scope.go:117] "RemoveContainer" containerID="a536172006966fa7da41ae7ff0c679f29f5343cacc6f612c4fa109bc18f3bbce" Feb 16 21:15:57.739433 master-0 kubenswrapper[7926]: I0216 21:15:57.739382 7926 scope.go:117] "RemoveContainer" containerID="473abb156ae2a59c96465c39d4a668c4215a0ddadc4067a2a5c3edc0e671f3a6" Feb 16 21:15:57.739859 master-0 kubenswrapper[7926]: E0216 21:15:57.739815 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=network-operator pod=network-operator-6fcf4c966-n4hfs_openshift-network-operator(1b61063e-775e-421d-bf73-a6ef134293a0)\"" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" podUID="1b61063e-775e-421d-bf73-a6ef134293a0" Feb 16 21:15:57.740049 master-0 kubenswrapper[7926]: E0216 
21:15:57.739819 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:15:57.740192 master-0 kubenswrapper[7926]: E0216 21:15:57.739819 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-pc6x9_openshift-cluster-storage-operator(b1ac9776-54c4-46ce-b898-01c8cf35e593)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" podUID="b1ac9776-54c4-46ce-b898-01c8cf35e593" Feb 16 21:15:58.023910 master-0 kubenswrapper[7926]: I0216 21:15:58.023516 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-retry-1-master-0"] Feb 16 21:15:58.024241 master-0 kubenswrapper[7926]: E0216 21:15:58.024018 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fc3abc9-3012-43bd-af84-fc65baf82801" containerName="installer" Feb 16 21:15:58.024241 master-0 kubenswrapper[7926]: I0216 21:15:58.024039 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc3abc9-3012-43bd-af84-fc65baf82801" containerName="installer" Feb 16 21:15:58.024241 master-0 kubenswrapper[7926]: E0216 21:15:58.024087 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1677883f-bae2-4b6e-9dfe-683a6d26f2c5" containerName="installer" Feb 16 21:15:58.024241 master-0 kubenswrapper[7926]: I0216 21:15:58.024095 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="1677883f-bae2-4b6e-9dfe-683a6d26f2c5" containerName="installer" Feb 16 21:15:58.024763 
master-0 kubenswrapper[7926]: I0216 21:15:58.024309 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fc3abc9-3012-43bd-af84-fc65baf82801" containerName="installer" Feb 16 21:15:58.024763 master-0 kubenswrapper[7926]: I0216 21:15:58.024549 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="1677883f-bae2-4b6e-9dfe-683a6d26f2c5" containerName="installer" Feb 16 21:15:58.025379 master-0 kubenswrapper[7926]: I0216 21:15:58.025328 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 16 21:15:58.029424 master-0 kubenswrapper[7926]: I0216 21:15:58.029374 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Feb 16 21:15:58.029734 master-0 kubenswrapper[7926]: I0216 21:15:58.029697 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-sdjl5" Feb 16 21:15:58.043485 master-0 kubenswrapper[7926]: I0216 21:15:58.043442 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-retry-1-master-0"] Feb 16 21:15:58.155460 master-0 kubenswrapper[7926]: I0216 21:15:58.155350 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2cf5e26c-84a2-45c6-b7dc-ee96dad23175-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"2cf5e26c-84a2-45c6-b7dc-ee96dad23175\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 16 21:15:58.155460 master-0 kubenswrapper[7926]: I0216 21:15:58.155436 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2cf5e26c-84a2-45c6-b7dc-ee96dad23175-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"2cf5e26c-84a2-45c6-b7dc-ee96dad23175\") " 
pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 16 21:15:58.155974 master-0 kubenswrapper[7926]: I0216 21:15:58.155728 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2cf5e26c-84a2-45c6-b7dc-ee96dad23175-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"2cf5e26c-84a2-45c6-b7dc-ee96dad23175\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 16 21:15:58.186106 master-0 kubenswrapper[7926]: I0216 21:15:58.186034 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:58.186106 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:58.186106 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:58.186106 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:58.186106 master-0 kubenswrapper[7926]: I0216 21:15:58.186107 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:58.257710 master-0 kubenswrapper[7926]: I0216 21:15:58.257592 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2cf5e26c-84a2-45c6-b7dc-ee96dad23175-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"2cf5e26c-84a2-45c6-b7dc-ee96dad23175\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 16 21:15:58.258349 master-0 kubenswrapper[7926]: I0216 21:15:58.258316 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/2cf5e26c-84a2-45c6-b7dc-ee96dad23175-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"2cf5e26c-84a2-45c6-b7dc-ee96dad23175\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 16 21:15:58.258551 master-0 kubenswrapper[7926]: I0216 21:15:58.258521 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2cf5e26c-84a2-45c6-b7dc-ee96dad23175-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"2cf5e26c-84a2-45c6-b7dc-ee96dad23175\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 16 21:15:58.258808 master-0 kubenswrapper[7926]: I0216 21:15:58.258391 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2cf5e26c-84a2-45c6-b7dc-ee96dad23175-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"2cf5e26c-84a2-45c6-b7dc-ee96dad23175\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 16 21:15:58.259007 master-0 kubenswrapper[7926]: I0216 21:15:58.258623 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2cf5e26c-84a2-45c6-b7dc-ee96dad23175-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"2cf5e26c-84a2-45c6-b7dc-ee96dad23175\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 16 21:15:58.288706 master-0 kubenswrapper[7926]: I0216 21:15:58.288535 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2cf5e26c-84a2-45c6-b7dc-ee96dad23175-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"2cf5e26c-84a2-45c6-b7dc-ee96dad23175\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 16 21:15:58.343933 master-0 kubenswrapper[7926]: I0216 21:15:58.343841 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 16 21:15:58.842982 master-0 kubenswrapper[7926]: I0216 21:15:58.842887 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-retry-1-master-0"] Feb 16 21:15:59.184526 master-0 kubenswrapper[7926]: I0216 21:15:59.184423 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:15:59.184526 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:15:59.184526 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:15:59.184526 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:15:59.184823 master-0 kubenswrapper[7926]: I0216 21:15:59.184581 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:15:59.303479 master-0 kubenswrapper[7926]: I0216 21:15:59.303438 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"2cf5e26c-84a2-45c6-b7dc-ee96dad23175","Type":"ContainerStarted","Data":"912bdb89c47c0c84a626b5915d0082c84d6ad6cfcb759d646e64bf4849456d1f"} Feb 16 21:15:59.304036 master-0 kubenswrapper[7926]: I0216 21:15:59.304018 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"2cf5e26c-84a2-45c6-b7dc-ee96dad23175","Type":"ContainerStarted","Data":"530378b0633d960adbb9dbb3d961b5d62ae93d6f5ce44d7b8788383b67a4c0a0"} Feb 16 21:15:59.327511 master-0 kubenswrapper[7926]: I0216 21:15:59.327419 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler/installer-4-retry-1-master-0" podStartSLOduration=1.3274003429999999 podStartE2EDuration="1.327400343s" podCreationTimestamp="2026-02-16 21:15:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:15:59.325712278 +0000 UTC m=+1130.960612618" watchObservedRunningTime="2026-02-16 21:15:59.327400343 +0000 UTC m=+1130.962300653" Feb 16 21:15:59.738866 master-0 kubenswrapper[7926]: I0216 21:15:59.738755 7926 scope.go:117] "RemoveContainer" containerID="220f76e0bb64fd419313cb573cd97bbb54f9d2b5998f9525c7d9045abc13cfb5" Feb 16 21:15:59.739124 master-0 kubenswrapper[7926]: E0216 21:15:59.739019 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-storage-version-migrator-operator pod=kube-storage-version-migrator-operator-cd5474998-56v4p_openshift-kube-storage-version-migrator-operator(c7333319-3fe6-4b3f-b600-6b6df49fcaff)\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" podUID="c7333319-3fe6-4b3f-b600-6b6df49fcaff" Feb 16 21:16:00.186240 master-0 kubenswrapper[7926]: I0216 21:16:00.186163 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:00.186240 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:00.186240 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:00.186240 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:00.186788 master-0 kubenswrapper[7926]: I0216 21:16:00.186282 7926 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:01.185125 master-0 kubenswrapper[7926]: I0216 21:16:01.185049 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:01.185125 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:01.185125 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:01.185125 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:01.185125 master-0 kubenswrapper[7926]: I0216 21:16:01.185116 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:02.185067 master-0 kubenswrapper[7926]: I0216 21:16:02.185014 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:02.185067 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:02.185067 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:02.185067 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:02.185067 master-0 kubenswrapper[7926]: I0216 21:16:02.185064 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:02.739021 
master-0 kubenswrapper[7926]: I0216 21:16:02.738912 7926 scope.go:117] "RemoveContainer" containerID="cbff59f9a87f22154ac16be0a1fd4153598047d145747da8c5ad418b6de5b9ba" Feb 16 21:16:02.739414 master-0 kubenswrapper[7926]: E0216 21:16:02.739358 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=authentication-operator pod=authentication-operator-755d954778-8gnq5_openshift-authentication-operator(27c20f63-9bfb-4703-94d5-0c65475e08d1)\"" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1" Feb 16 21:16:03.186418 master-0 kubenswrapper[7926]: I0216 21:16:03.186324 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:03.186418 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:03.186418 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:03.186418 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:03.188005 master-0 kubenswrapper[7926]: I0216 21:16:03.187944 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:03.339419 master-0 kubenswrapper[7926]: I0216 21:16:03.339359 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/5.log" Feb 16 21:16:03.341004 master-0 kubenswrapper[7926]: I0216 21:16:03.340939 7926 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/4.log" Feb 16 21:16:03.341795 master-0 kubenswrapper[7926]: I0216 21:16:03.341710 7926 generic.go:334] "Generic (PLEG): container finished" podID="8b648d9e-a892-4951-b0e2-fed6b16273d4" containerID="6a46714853e2a885d7f0ea06667526f3f7b240b0bd635da8d5cae43fd1dadc87" exitCode=1 Feb 16 21:16:03.341951 master-0 kubenswrapper[7926]: I0216 21:16:03.341796 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" event={"ID":"8b648d9e-a892-4951-b0e2-fed6b16273d4","Type":"ContainerDied","Data":"6a46714853e2a885d7f0ea06667526f3f7b240b0bd635da8d5cae43fd1dadc87"} Feb 16 21:16:03.341951 master-0 kubenswrapper[7926]: I0216 21:16:03.341857 7926 scope.go:117] "RemoveContainer" containerID="6774523bbae3d7abd16dc2e39c9e808fff70ea7aaf2e57c4f294e7c707bbf785" Feb 16 21:16:03.342645 master-0 kubenswrapper[7926]: I0216 21:16:03.342569 7926 scope.go:117] "RemoveContainer" containerID="6a46714853e2a885d7f0ea06667526f3f7b240b0bd635da8d5cae43fd1dadc87" Feb 16 21:16:03.343266 master-0 kubenswrapper[7926]: E0216 21:16:03.343155 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-7bc947fc7d-xwptz_openshift-machine-api(8b648d9e-a892-4951-b0e2-fed6b16273d4)\"" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" podUID="8b648d9e-a892-4951-b0e2-fed6b16273d4" Feb 16 21:16:03.739493 master-0 kubenswrapper[7926]: I0216 21:16:03.739418 7926 scope.go:117] "RemoveContainer" containerID="4b9eed56cd9de27df8732f0bf589198f3bec398bab1ee5d8d5d4047198bdc2b3" Feb 16 21:16:03.740008 master-0 kubenswrapper[7926]: E0216 21:16:03.739914 7926 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager-operator pod=kube-controller-manager-operator-78ff47c7c5-7p9ft_openshift-kube-controller-manager-operator(7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" podUID="7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e" Feb 16 21:16:04.185613 master-0 kubenswrapper[7926]: I0216 21:16:04.185505 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:04.185613 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:04.185613 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:04.185613 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:04.186018 master-0 kubenswrapper[7926]: I0216 21:16:04.185679 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:04.353340 master-0 kubenswrapper[7926]: I0216 21:16:04.353197 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/5.log" Feb 16 21:16:05.184472 master-0 kubenswrapper[7926]: I0216 21:16:05.184420 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:05.184472 master-0 
kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:05.184472 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:05.184472 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:05.184792 master-0 kubenswrapper[7926]: I0216 21:16:05.184491 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:06.185985 master-0 kubenswrapper[7926]: I0216 21:16:06.185921 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:06.185985 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:06.185985 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:06.185985 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:06.186694 master-0 kubenswrapper[7926]: I0216 21:16:06.185997 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:06.738983 master-0 kubenswrapper[7926]: I0216 21:16:06.738922 7926 scope.go:117] "RemoveContainer" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406" Feb 16 21:16:06.739255 master-0 kubenswrapper[7926]: E0216 21:16:06.739187 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:16:06.740256 master-0 kubenswrapper[7926]: I0216 21:16:06.740211 7926 scope.go:117] "RemoveContainer" containerID="71d2f873a3383c5d4e4ea361c9b4723201e4600cb1f7ea3ef5cecd7778b39d86" Feb 16 21:16:06.740535 master-0 kubenswrapper[7926]: E0216 21:16:06.740499 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-54984b6678-cl5ld_openshift-kube-apiserver-operator(0b02b740-5698-4e9a-90fe-2873bd0b0958)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" podUID="0b02b740-5698-4e9a-90fe-2873bd0b0958" Feb 16 21:16:07.186445 master-0 kubenswrapper[7926]: I0216 21:16:07.186362 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:07.186445 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:07.186445 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:07.186445 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:07.186445 master-0 kubenswrapper[7926]: I0216 21:16:07.186435 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:08.186353 master-0 kubenswrapper[7926]: I0216 21:16:08.186275 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:08.186353 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:08.186353 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:08.186353 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:08.187326 master-0 kubenswrapper[7926]: I0216 21:16:08.186360 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:08.740819 master-0 kubenswrapper[7926]: I0216 21:16:08.740715 7926 scope.go:117] "RemoveContainer" containerID="aab44606d671f216ff3793ef915c84f815301082904e4bc4a12b70d23d7c13c3" Feb 16 21:16:08.741523 master-0 kubenswrapper[7926]: E0216 21:16:08.740938 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=network-operator pod=network-operator-6fcf4c966-n4hfs_openshift-network-operator(1b61063e-775e-421d-bf73-a6ef134293a0)\"" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" podUID="1b61063e-775e-421d-bf73-a6ef134293a0" Feb 16 21:16:09.185792 master-0 kubenswrapper[7926]: I0216 21:16:09.185720 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:09.185792 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:09.185792 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:09.185792 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:09.186189 master-0 kubenswrapper[7926]: I0216 21:16:09.185827 7926 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:09.739484 master-0 kubenswrapper[7926]: I0216 21:16:09.739415 7926 scope.go:117] "RemoveContainer" containerID="a536172006966fa7da41ae7ff0c679f29f5343cacc6f612c4fa109bc18f3bbce" Feb 16 21:16:09.740399 master-0 kubenswrapper[7926]: I0216 21:16:09.739586 7926 scope.go:117] "RemoveContainer" containerID="473abb156ae2a59c96465c39d4a668c4215a0ddadc4067a2a5c3edc0e671f3a6" Feb 16 21:16:09.740399 master-0 kubenswrapper[7926]: E0216 21:16:09.739731 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:16:09.740399 master-0 kubenswrapper[7926]: E0216 21:16:09.739950 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-pc6x9_openshift-cluster-storage-operator(b1ac9776-54c4-46ce-b898-01c8cf35e593)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" podUID="b1ac9776-54c4-46ce-b898-01c8cf35e593" Feb 16 21:16:10.185905 master-0 kubenswrapper[7926]: I0216 21:16:10.185783 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:10.185905 master-0 kubenswrapper[7926]: [-]has-synced failed: 
reason withheld Feb 16 21:16:10.185905 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:10.185905 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:10.186420 master-0 kubenswrapper[7926]: I0216 21:16:10.185939 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:10.739706 master-0 kubenswrapper[7926]: I0216 21:16:10.739585 7926 scope.go:117] "RemoveContainer" containerID="220f76e0bb64fd419313cb573cd97bbb54f9d2b5998f9525c7d9045abc13cfb5" Feb 16 21:16:10.741309 master-0 kubenswrapper[7926]: E0216 21:16:10.741203 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-storage-version-migrator-operator pod=kube-storage-version-migrator-operator-cd5474998-56v4p_openshift-kube-storage-version-migrator-operator(c7333319-3fe6-4b3f-b600-6b6df49fcaff)\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" podUID="c7333319-3fe6-4b3f-b600-6b6df49fcaff" Feb 16 21:16:11.185612 master-0 kubenswrapper[7926]: I0216 21:16:11.185543 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:11.185612 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:11.185612 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:11.185612 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:11.186310 master-0 kubenswrapper[7926]: I0216 21:16:11.185637 7926 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:12.187067 master-0 kubenswrapper[7926]: I0216 21:16:12.186983 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:12.187067 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:12.187067 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:12.187067 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:12.188032 master-0 kubenswrapper[7926]: I0216 21:16:12.187080 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:13.187143 master-0 kubenswrapper[7926]: I0216 21:16:13.187044 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:13.187143 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:13.187143 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:13.187143 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:13.188186 master-0 kubenswrapper[7926]: I0216 21:16:13.187154 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:13.739139 
master-0 kubenswrapper[7926]: I0216 21:16:13.739063 7926 scope.go:117] "RemoveContainer" containerID="cbff59f9a87f22154ac16be0a1fd4153598047d145747da8c5ad418b6de5b9ba" Feb 16 21:16:13.739467 master-0 kubenswrapper[7926]: E0216 21:16:13.739316 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=authentication-operator pod=authentication-operator-755d954778-8gnq5_openshift-authentication-operator(27c20f63-9bfb-4703-94d5-0c65475e08d1)\"" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1" Feb 16 21:16:14.184709 master-0 kubenswrapper[7926]: I0216 21:16:14.184621 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:14.184709 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:14.184709 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:14.184709 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:14.185115 master-0 kubenswrapper[7926]: I0216 21:16:14.184711 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:14.738787 master-0 kubenswrapper[7926]: I0216 21:16:14.738754 7926 scope.go:117] "RemoveContainer" containerID="4b9eed56cd9de27df8732f0bf589198f3bec398bab1ee5d8d5d4047198bdc2b3" Feb 16 21:16:14.739483 master-0 kubenswrapper[7926]: E0216 21:16:14.739459 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with 
CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager-operator pod=kube-controller-manager-operator-78ff47c7c5-7p9ft_openshift-kube-controller-manager-operator(7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" podUID="7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e" Feb 16 21:16:15.185122 master-0 kubenswrapper[7926]: I0216 21:16:15.185029 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:15.185122 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:15.185122 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:15.185122 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:15.185420 master-0 kubenswrapper[7926]: I0216 21:16:15.185139 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:15.739072 master-0 kubenswrapper[7926]: I0216 21:16:15.738997 7926 scope.go:117] "RemoveContainer" containerID="6a46714853e2a885d7f0ea06667526f3f7b240b0bd635da8d5cae43fd1dadc87" Feb 16 21:16:16.186303 master-0 kubenswrapper[7926]: I0216 21:16:16.186188 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:16.186303 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:16.186303 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:16.186303 master-0 
kubenswrapper[7926]: healthz check failed Feb 16 21:16:16.186629 master-0 kubenswrapper[7926]: I0216 21:16:16.186338 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:16.446737 master-0 kubenswrapper[7926]: I0216 21:16:16.446404 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/5.log" Feb 16 21:16:16.447069 master-0 kubenswrapper[7926]: I0216 21:16:16.447014 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" event={"ID":"8b648d9e-a892-4951-b0e2-fed6b16273d4","Type":"ContainerStarted","Data":"127d340a22fe8099cebc2264bacf3eeab221a7653bb8d4c8d30630cf81318a3f"} Feb 16 21:16:17.185425 master-0 kubenswrapper[7926]: I0216 21:16:17.185321 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:17.185425 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:17.185425 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:17.185425 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:17.185425 master-0 kubenswrapper[7926]: I0216 21:16:17.185406 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:18.185206 master-0 kubenswrapper[7926]: I0216 21:16:18.185111 7926 patch_prober.go:28] interesting 
pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:18.185206 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:18.185206 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:18.185206 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:18.185206 master-0 kubenswrapper[7926]: I0216 21:16:18.185205 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:19.185227 master-0 kubenswrapper[7926]: I0216 21:16:19.185121 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:19.185227 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:19.185227 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:19.185227 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:19.186335 master-0 kubenswrapper[7926]: I0216 21:16:19.185227 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:20.185018 master-0 kubenswrapper[7926]: I0216 21:16:20.184925 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 
21:16:20.185018 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:20.185018 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:20.185018 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:20.186247 master-0 kubenswrapper[7926]: I0216 21:16:20.185022 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:20.411702 master-0 kubenswrapper[7926]: E0216 21:16:20.411584 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-ingress-canary/ingress-canary-l44qd" podUID="0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b" Feb 16 21:16:20.485303 master-0 kubenswrapper[7926]: I0216 21:16:20.485113 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-l44qd" Feb 16 21:16:20.738415 master-0 kubenswrapper[7926]: I0216 21:16:20.738286 7926 scope.go:117] "RemoveContainer" containerID="71d2f873a3383c5d4e4ea361c9b4723201e4600cb1f7ea3ef5cecd7778b39d86" Feb 16 21:16:20.738606 master-0 kubenswrapper[7926]: E0216 21:16:20.738476 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-54984b6678-cl5ld_openshift-kube-apiserver-operator(0b02b740-5698-4e9a-90fe-2873bd0b0958)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" podUID="0b02b740-5698-4e9a-90fe-2873bd0b0958" Feb 16 21:16:20.738911 master-0 kubenswrapper[7926]: I0216 21:16:20.738853 7926 scope.go:117] "RemoveContainer" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406" Feb 16 21:16:20.739430 master-0 kubenswrapper[7926]: E0216 21:16:20.739370 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:16:21.185571 master-0 kubenswrapper[7926]: I0216 21:16:21.185493 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:21.185571 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:21.185571 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:21.185571 
master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:21.186306 master-0 kubenswrapper[7926]: I0216 21:16:21.185576 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:21.739603 master-0 kubenswrapper[7926]: I0216 21:16:21.739518 7926 scope.go:117] "RemoveContainer" containerID="473abb156ae2a59c96465c39d4a668c4215a0ddadc4067a2a5c3edc0e671f3a6" Feb 16 21:16:21.740085 master-0 kubenswrapper[7926]: E0216 21:16:21.740018 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-pc6x9_openshift-cluster-storage-operator(b1ac9776-54c4-46ce-b898-01c8cf35e593)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" podUID="b1ac9776-54c4-46ce-b898-01c8cf35e593" Feb 16 21:16:22.184571 master-0 kubenswrapper[7926]: I0216 21:16:22.184512 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:22.184571 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:22.184571 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:22.184571 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:22.184861 master-0 kubenswrapper[7926]: I0216 21:16:22.184580 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 
21:16:22.738905 master-0 kubenswrapper[7926]: I0216 21:16:22.738790 7926 scope.go:117] "RemoveContainer" containerID="aab44606d671f216ff3793ef915c84f815301082904e4bc4a12b70d23d7c13c3" Feb 16 21:16:22.738905 master-0 kubenswrapper[7926]: I0216 21:16:22.738886 7926 scope.go:117] "RemoveContainer" containerID="a536172006966fa7da41ae7ff0c679f29f5343cacc6f612c4fa109bc18f3bbce" Feb 16 21:16:22.739876 master-0 kubenswrapper[7926]: E0216 21:16:22.739179 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:16:22.739876 master-0 kubenswrapper[7926]: I0216 21:16:22.739372 7926 scope.go:117] "RemoveContainer" containerID="220f76e0bb64fd419313cb573cd97bbb54f9d2b5998f9525c7d9045abc13cfb5" Feb 16 21:16:22.828927 master-0 kubenswrapper[7926]: I0216 21:16:22.828840 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert\") pod \"ingress-canary-l44qd\" (UID: \"0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b\") " pod="openshift-ingress-canary/ingress-canary-l44qd" Feb 16 21:16:22.835497 master-0 kubenswrapper[7926]: I0216 21:16:22.835435 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert\") pod \"ingress-canary-l44qd\" (UID: \"0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b\") " pod="openshift-ingress-canary/ingress-canary-l44qd" Feb 16 21:16:22.889153 master-0 kubenswrapper[7926]: I0216 21:16:22.889094 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-gpdzh" 
Feb 16 21:16:22.897220 master-0 kubenswrapper[7926]: I0216 21:16:22.897154 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-l44qd" Feb 16 21:16:23.185134 master-0 kubenswrapper[7926]: I0216 21:16:23.185066 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:23.185134 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:23.185134 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:23.185134 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:23.185134 master-0 kubenswrapper[7926]: I0216 21:16:23.185122 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:23.326939 master-0 kubenswrapper[7926]: I0216 21:16:23.326881 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-l44qd"] Feb 16 21:16:23.508622 master-0 kubenswrapper[7926]: I0216 21:16:23.508569 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-n4hfs_1b61063e-775e-421d-bf73-a6ef134293a0/network-operator/4.log" Feb 16 21:16:23.508887 master-0 kubenswrapper[7926]: I0216 21:16:23.508754 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" event={"ID":"1b61063e-775e-421d-bf73-a6ef134293a0","Type":"ContainerStarted","Data":"8fc2cca192f72b63cdb1729b01edf727b51348c41be1bedd0f2a185d025ba61f"} Feb 16 21:16:23.516692 master-0 kubenswrapper[7926]: I0216 21:16:23.516611 7926 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-56v4p_c7333319-3fe6-4b3f-b600-6b6df49fcaff/kube-storage-version-migrator-operator/5.log" Feb 16 21:16:23.516869 master-0 kubenswrapper[7926]: I0216 21:16:23.516737 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" event={"ID":"c7333319-3fe6-4b3f-b600-6b6df49fcaff","Type":"ContainerStarted","Data":"d5f7b3fcfb5c9f94add7386a8d0fa1915b7e46a3ef046408fb3358fa3cd8f9a5"} Feb 16 21:16:23.520188 master-0 kubenswrapper[7926]: I0216 21:16:23.520126 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-l44qd" event={"ID":"0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b","Type":"ContainerStarted","Data":"8d552db0837fc540893f8ec713b54b574ad04cadc36ab9823266c8e56b9e7a86"} Feb 16 21:16:23.520348 master-0 kubenswrapper[7926]: I0216 21:16:23.520215 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-l44qd" event={"ID":"0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b","Type":"ContainerStarted","Data":"9b7b734a04c19ca82d24b6113d7260320b0a9c95bbc6375cd7e4100f7054eb3f"} Feb 16 21:16:23.557018 master-0 kubenswrapper[7926]: I0216 21:16:23.556928 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-l44qd" podStartSLOduration=373.556908106 podStartE2EDuration="6m13.556908106s" podCreationTimestamp="2026-02-16 21:10:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:16:23.556141195 +0000 UTC m=+1155.191041495" watchObservedRunningTime="2026-02-16 21:16:23.556908106 +0000 UTC m=+1155.191808406" Feb 16 21:16:24.186192 master-0 kubenswrapper[7926]: I0216 21:16:24.186068 7926 patch_prober.go:28] interesting 
pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:24.186192 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:24.186192 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:24.186192 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:24.186742 master-0 kubenswrapper[7926]: I0216 21:16:24.186235 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:24.738986 master-0 kubenswrapper[7926]: I0216 21:16:24.738902 7926 scope.go:117] "RemoveContainer" containerID="cbff59f9a87f22154ac16be0a1fd4153598047d145747da8c5ad418b6de5b9ba" Feb 16 21:16:24.739276 master-0 kubenswrapper[7926]: E0216 21:16:24.739146 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=authentication-operator pod=authentication-operator-755d954778-8gnq5_openshift-authentication-operator(27c20f63-9bfb-4703-94d5-0c65475e08d1)\"" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1" Feb 16 21:16:25.184285 master-0 kubenswrapper[7926]: I0216 21:16:25.184220 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:25.184285 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:25.184285 master-0 kubenswrapper[7926]: [+]process-running ok Feb 
16 21:16:25.184285 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:25.184285 master-0 kubenswrapper[7926]: I0216 21:16:25.184280 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:26.185137 master-0 kubenswrapper[7926]: I0216 21:16:26.185070 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:26.185137 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:26.185137 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:26.185137 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:26.185837 master-0 kubenswrapper[7926]: I0216 21:16:26.185145 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:27.185351 master-0 kubenswrapper[7926]: I0216 21:16:27.185247 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:27.185351 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:27.185351 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:27.185351 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:27.186519 master-0 kubenswrapper[7926]: I0216 21:16:27.185385 7926 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:28.185089 master-0 kubenswrapper[7926]: I0216 21:16:28.185031 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:28.185089 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:28.185089 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:28.185089 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:28.185089 master-0 kubenswrapper[7926]: I0216 21:16:28.185089 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:28.747358 master-0 kubenswrapper[7926]: I0216 21:16:28.747232 7926 scope.go:117] "RemoveContainer" containerID="4b9eed56cd9de27df8732f0bf589198f3bec398bab1ee5d8d5d4047198bdc2b3" Feb 16 21:16:29.185422 master-0 kubenswrapper[7926]: I0216 21:16:29.185377 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:29.185422 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:29.185422 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:29.185422 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:29.185996 master-0 kubenswrapper[7926]: I0216 21:16:29.185452 7926 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:29.566165 master-0 kubenswrapper[7926]: I0216 21:16:29.565492 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-7p9ft_7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/kube-controller-manager-operator/5.log" Feb 16 21:16:29.566165 master-0 kubenswrapper[7926]: I0216 21:16:29.565546 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" event={"ID":"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e","Type":"ContainerStarted","Data":"f967af0fcd187eeafd04691b96ae014e22fb86716fe0ba66d9ce5f55dd5c8b91"} Feb 16 21:16:30.184551 master-0 kubenswrapper[7926]: I0216 21:16:30.184472 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:30.184551 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:30.184551 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:30.184551 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:30.184551 master-0 kubenswrapper[7926]: I0216 21:16:30.184541 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:30.609784 master-0 kubenswrapper[7926]: I0216 21:16:30.609705 7926 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 16 21:16:30.610317 master-0 
kubenswrapper[7926]: I0216 21:16:30.610042 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler" containerID="cri-o://16f68b9d2d936745ee39377c29765fd45b722575ceb5d39a9c83e458b48f4547" gracePeriod=30 Feb 16 21:16:30.610862 master-0 kubenswrapper[7926]: I0216 21:16:30.610754 7926 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 16 21:16:30.611128 master-0 kubenswrapper[7926]: E0216 21:16:30.611084 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler" Feb 16 21:16:30.611128 master-0 kubenswrapper[7926]: I0216 21:16:30.611116 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler" Feb 16 21:16:30.611231 master-0 kubenswrapper[7926]: E0216 21:16:30.611135 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler" Feb 16 21:16:30.611231 master-0 kubenswrapper[7926]: I0216 21:16:30.611150 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler" Feb 16 21:16:30.611231 master-0 kubenswrapper[7926]: E0216 21:16:30.611176 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler" Feb 16 21:16:30.611231 master-0 kubenswrapper[7926]: I0216 21:16:30.611189 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler" Feb 16 21:16:30.611492 master-0 kubenswrapper[7926]: I0216 21:16:30.611459 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler" Feb 16 
21:16:30.611533 master-0 kubenswrapper[7926]: I0216 21:16:30.611516 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler" Feb 16 21:16:30.611966 master-0 kubenswrapper[7926]: I0216 21:16:30.611935 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler" Feb 16 21:16:30.613986 master-0 kubenswrapper[7926]: I0216 21:16:30.613919 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:16:30.736351 master-0 kubenswrapper[7926]: I0216 21:16:30.736285 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 16 21:16:30.780668 master-0 kubenswrapper[7926]: I0216 21:16:30.780544 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 21:16:30.806845 master-0 kubenswrapper[7926]: I0216 21:16:30.806783 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:16:30.807042 master-0 kubenswrapper[7926]: I0216 21:16:30.806897 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:16:30.908536 master-0 kubenswrapper[7926]: I0216 21:16:30.908403 7926 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"9460ca0802075a8a6a10d7b3e6052c4d\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " Feb 16 21:16:30.908536 master-0 kubenswrapper[7926]: I0216 21:16:30.908474 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"9460ca0802075a8a6a10d7b3e6052c4d\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " Feb 16 21:16:30.908536 master-0 kubenswrapper[7926]: I0216 21:16:30.908493 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets" (OuterVolumeSpecName: "secrets") pod "9460ca0802075a8a6a10d7b3e6052c4d" (UID: "9460ca0802075a8a6a10d7b3e6052c4d"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:16:30.908840 master-0 kubenswrapper[7926]: I0216 21:16:30.908607 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs" (OuterVolumeSpecName: "logs") pod "9460ca0802075a8a6a10d7b3e6052c4d" (UID: "9460ca0802075a8a6a10d7b3e6052c4d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:16:30.908882 master-0 kubenswrapper[7926]: I0216 21:16:30.908859 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:16:30.908972 master-0 kubenswrapper[7926]: I0216 21:16:30.908936 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:16:30.909032 master-0 kubenswrapper[7926]: I0216 21:16:30.909004 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:16:30.909087 master-0 kubenswrapper[7926]: I0216 21:16:30.909061 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:16:30.909121 master-0 kubenswrapper[7926]: I0216 21:16:30.909073 7926 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") on node \"master-0\" DevicePath \"\"" Feb 16 21:16:30.909158 master-0 kubenswrapper[7926]: 
I0216 21:16:30.909119 7926 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:16:31.034237 master-0 kubenswrapper[7926]: I0216 21:16:31.034137 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:16:31.063256 master-0 kubenswrapper[7926]: W0216 21:16:31.063194 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8fa563c7331931f00ce0006e522f0f1.slice/crio-401dbdafe44d87ba9ccf2adf090a2c537b4f84058eb049f0f6795c6752a1a8d0 WatchSource:0}: Error finding container 401dbdafe44d87ba9ccf2adf090a2c537b4f84058eb049f0f6795c6752a1a8d0: Status 404 returned error can't find the container with id 401dbdafe44d87ba9ccf2adf090a2c537b4f84058eb049f0f6795c6752a1a8d0 Feb 16 21:16:31.186337 master-0 kubenswrapper[7926]: I0216 21:16:31.186247 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:31.186337 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:31.186337 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:31.186337 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:31.186501 master-0 kubenswrapper[7926]: I0216 21:16:31.186368 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:31.576992 master-0 kubenswrapper[7926]: I0216 21:16:31.576942 7926 generic.go:334] "Generic (PLEG): container finished" 
podID="2cf5e26c-84a2-45c6-b7dc-ee96dad23175" containerID="912bdb89c47c0c84a626b5915d0082c84d6ad6cfcb759d646e64bf4849456d1f" exitCode=0 Feb 16 21:16:31.577440 master-0 kubenswrapper[7926]: I0216 21:16:31.577379 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"2cf5e26c-84a2-45c6-b7dc-ee96dad23175","Type":"ContainerDied","Data":"912bdb89c47c0c84a626b5915d0082c84d6ad6cfcb759d646e64bf4849456d1f"} Feb 16 21:16:31.578679 master-0 kubenswrapper[7926]: I0216 21:16:31.578617 7926 generic.go:334] "Generic (PLEG): container finished" podID="b8fa563c7331931f00ce0006e522f0f1" containerID="432794b20c117ef5563701790110e26447eca7921c053c44497fb8bd396c6901" exitCode=0 Feb 16 21:16:31.578870 master-0 kubenswrapper[7926]: I0216 21:16:31.578855 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerDied","Data":"432794b20c117ef5563701790110e26447eca7921c053c44497fb8bd396c6901"} Feb 16 21:16:31.579007 master-0 kubenswrapper[7926]: I0216 21:16:31.578993 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"401dbdafe44d87ba9ccf2adf090a2c537b4f84058eb049f0f6795c6752a1a8d0"} Feb 16 21:16:31.581168 master-0 kubenswrapper[7926]: I0216 21:16:31.581132 7926 generic.go:334] "Generic (PLEG): container finished" podID="9460ca0802075a8a6a10d7b3e6052c4d" containerID="16f68b9d2d936745ee39377c29765fd45b722575ceb5d39a9c83e458b48f4547" exitCode=0 Feb 16 21:16:31.581257 master-0 kubenswrapper[7926]: I0216 21:16:31.581196 7926 scope.go:117] "RemoveContainer" containerID="16f68b9d2d936745ee39377c29765fd45b722575ceb5d39a9c83e458b48f4547" Feb 16 21:16:31.581321 master-0 kubenswrapper[7926]: I0216 21:16:31.581308 7926 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 21:16:31.608541 master-0 kubenswrapper[7926]: I0216 21:16:31.608050 7926 scope.go:117] "RemoveContainer" containerID="a4951420ea2a6ae5237e8e58e639f3add1c70cf81012c329517f161ec6dde67e" Feb 16 21:16:31.644636 master-0 kubenswrapper[7926]: I0216 21:16:31.644329 7926 scope.go:117] "RemoveContainer" containerID="16f68b9d2d936745ee39377c29765fd45b722575ceb5d39a9c83e458b48f4547" Feb 16 21:16:31.644974 master-0 kubenswrapper[7926]: E0216 21:16:31.644812 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16f68b9d2d936745ee39377c29765fd45b722575ceb5d39a9c83e458b48f4547\": container with ID starting with 16f68b9d2d936745ee39377c29765fd45b722575ceb5d39a9c83e458b48f4547 not found: ID does not exist" containerID="16f68b9d2d936745ee39377c29765fd45b722575ceb5d39a9c83e458b48f4547" Feb 16 21:16:31.644974 master-0 kubenswrapper[7926]: I0216 21:16:31.644843 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16f68b9d2d936745ee39377c29765fd45b722575ceb5d39a9c83e458b48f4547"} err="failed to get container status \"16f68b9d2d936745ee39377c29765fd45b722575ceb5d39a9c83e458b48f4547\": rpc error: code = NotFound desc = could not find container \"16f68b9d2d936745ee39377c29765fd45b722575ceb5d39a9c83e458b48f4547\": container with ID starting with 16f68b9d2d936745ee39377c29765fd45b722575ceb5d39a9c83e458b48f4547 not found: ID does not exist" Feb 16 21:16:31.644974 master-0 kubenswrapper[7926]: I0216 21:16:31.644861 7926 scope.go:117] "RemoveContainer" containerID="a4951420ea2a6ae5237e8e58e639f3add1c70cf81012c329517f161ec6dde67e" Feb 16 21:16:31.645220 master-0 kubenswrapper[7926]: E0216 21:16:31.645170 7926 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a4951420ea2a6ae5237e8e58e639f3add1c70cf81012c329517f161ec6dde67e\": container with ID starting with a4951420ea2a6ae5237e8e58e639f3add1c70cf81012c329517f161ec6dde67e not found: ID does not exist" containerID="a4951420ea2a6ae5237e8e58e639f3add1c70cf81012c329517f161ec6dde67e" Feb 16 21:16:31.645282 master-0 kubenswrapper[7926]: I0216 21:16:31.645227 7926 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4951420ea2a6ae5237e8e58e639f3add1c70cf81012c329517f161ec6dde67e"} err="failed to get container status \"a4951420ea2a6ae5237e8e58e639f3add1c70cf81012c329517f161ec6dde67e\": rpc error: code = NotFound desc = could not find container \"a4951420ea2a6ae5237e8e58e639f3add1c70cf81012c329517f161ec6dde67e\": container with ID starting with a4951420ea2a6ae5237e8e58e639f3add1c70cf81012c329517f161ec6dde67e not found: ID does not exist" Feb 16 21:16:31.738772 master-0 kubenswrapper[7926]: I0216 21:16:31.738744 7926 scope.go:117] "RemoveContainer" containerID="71d2f873a3383c5d4e4ea361c9b4723201e4600cb1f7ea3ef5cecd7778b39d86" Feb 16 21:16:32.185049 master-0 kubenswrapper[7926]: I0216 21:16:32.184995 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:32.185049 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:32.185049 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:32.185049 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:32.185345 master-0 kubenswrapper[7926]: I0216 21:16:32.185055 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:32.589739 master-0 
kubenswrapper[7926]: I0216 21:16:32.589695 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-cl5ld_0b02b740-5698-4e9a-90fe-2873bd0b0958/kube-apiserver-operator/4.log" Feb 16 21:16:32.589953 master-0 kubenswrapper[7926]: I0216 21:16:32.589771 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" event={"ID":"0b02b740-5698-4e9a-90fe-2873bd0b0958","Type":"ContainerStarted","Data":"9a4fbebd80c93d723f4b6793cf7b0ccb622b9b9b4616c52e1479c9e9afb211d0"} Feb 16 21:16:32.593071 master-0 kubenswrapper[7926]: I0216 21:16:32.593040 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"3ebe9b7d8ce03b2c6ab5c8d3215470f47595c89ae74952d5865ce15e1874a8ee"} Feb 16 21:16:32.593071 master-0 kubenswrapper[7926]: I0216 21:16:32.593071 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"d608a5d9652a3c6ba32e1dcd56710fee04c37ee22144db45ecd5fe5c524c9a31"} Feb 16 21:16:32.593168 master-0 kubenswrapper[7926]: I0216 21:16:32.593082 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"6435ebb5f02081a9ce4ce936a293eb7bb3bd2de40c50e78a8a1e337141307f75"} Feb 16 21:16:32.593168 master-0 kubenswrapper[7926]: I0216 21:16:32.593094 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:16:32.633685 master-0 kubenswrapper[7926]: I0216 21:16:32.630412 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.630316636 podStartE2EDuration="2.630316636s" podCreationTimestamp="2026-02-16 21:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:16:32.628406433 +0000 UTC m=+1164.263306733" watchObservedRunningTime="2026-02-16 21:16:32.630316636 +0000 UTC m=+1164.265216926" Feb 16 21:16:32.738275 master-0 kubenswrapper[7926]: I0216 21:16:32.738192 7926 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="dc5b0952-527e-40f6-84fa-362aa0d5b6f8" Feb 16 21:16:32.738275 master-0 kubenswrapper[7926]: I0216 21:16:32.738237 7926 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="dc5b0952-527e-40f6-84fa-362aa0d5b6f8" Feb 16 21:16:32.738845 master-0 kubenswrapper[7926]: I0216 21:16:32.738507 7926 scope.go:117] "RemoveContainer" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406" Feb 16 21:16:32.738845 master-0 kubenswrapper[7926]: E0216 21:16:32.738719 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:16:32.738919 master-0 kubenswrapper[7926]: I0216 21:16:32.738892 7926 scope.go:117] "RemoveContainer" containerID="473abb156ae2a59c96465c39d4a668c4215a0ddadc4067a2a5c3edc0e671f3a6" Feb 16 21:16:32.739165 master-0 kubenswrapper[7926]: E0216 21:16:32.739130 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed 
container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-pc6x9_openshift-cluster-storage-operator(b1ac9776-54c4-46ce-b898-01c8cf35e593)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" podUID="b1ac9776-54c4-46ce-b898-01c8cf35e593" Feb 16 21:16:32.750719 master-0 kubenswrapper[7926]: I0216 21:16:32.750641 7926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9460ca0802075a8a6a10d7b3e6052c4d" path="/var/lib/kubelet/pods/9460ca0802075a8a6a10d7b3e6052c4d/volumes" Feb 16 21:16:32.751075 master-0 kubenswrapper[7926]: I0216 21:16:32.751054 7926 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 16 21:16:32.763523 master-0 kubenswrapper[7926]: I0216 21:16:32.763461 7926 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0" Feb 16 21:16:32.774926 master-0 kubenswrapper[7926]: I0216 21:16:32.774864 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 16 21:16:32.774926 master-0 kubenswrapper[7926]: I0216 21:16:32.774914 7926 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 16 21:16:32.775545 master-0 kubenswrapper[7926]: I0216 21:16:32.775491 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 16 21:16:32.775545 master-0 kubenswrapper[7926]: I0216 21:16:32.775541 7926 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="91c0d96b-3812-4dfc-af7f-749b4cccc314" Feb 16 21:16:32.781309 master-0 kubenswrapper[7926]: I0216 21:16:32.781268 7926 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 16 21:16:32.781309 master-0 kubenswrapper[7926]: I0216 21:16:32.781298 7926 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" 
mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="91c0d96b-3812-4dfc-af7f-749b4cccc314" Feb 16 21:16:32.784577 master-0 kubenswrapper[7926]: I0216 21:16:32.784532 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 16 21:16:32.854844 master-0 kubenswrapper[7926]: I0216 21:16:32.854737 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 16 21:16:32.904331 master-0 kubenswrapper[7926]: I0216 21:16:32.904245 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=0.904225467 podStartE2EDuration="904.225467ms" podCreationTimestamp="2026-02-16 21:16:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:16:32.898751708 +0000 UTC m=+1164.533652018" watchObservedRunningTime="2026-02-16 21:16:32.904225467 +0000 UTC m=+1164.539125767" Feb 16 21:16:33.042018 master-0 kubenswrapper[7926]: I0216 21:16:33.041969 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2cf5e26c-84a2-45c6-b7dc-ee96dad23175-kubelet-dir\") pod \"2cf5e26c-84a2-45c6-b7dc-ee96dad23175\" (UID: \"2cf5e26c-84a2-45c6-b7dc-ee96dad23175\") " Feb 16 21:16:33.042211 master-0 kubenswrapper[7926]: I0216 21:16:33.042061 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2cf5e26c-84a2-45c6-b7dc-ee96dad23175-kube-api-access\") pod \"2cf5e26c-84a2-45c6-b7dc-ee96dad23175\" (UID: \"2cf5e26c-84a2-45c6-b7dc-ee96dad23175\") " Feb 16 21:16:33.042211 master-0 kubenswrapper[7926]: I0216 21:16:33.042114 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/2cf5e26c-84a2-45c6-b7dc-ee96dad23175-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2cf5e26c-84a2-45c6-b7dc-ee96dad23175" (UID: "2cf5e26c-84a2-45c6-b7dc-ee96dad23175"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:16:33.042278 master-0 kubenswrapper[7926]: I0216 21:16:33.042225 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2cf5e26c-84a2-45c6-b7dc-ee96dad23175-var-lock\") pod \"2cf5e26c-84a2-45c6-b7dc-ee96dad23175\" (UID: \"2cf5e26c-84a2-45c6-b7dc-ee96dad23175\") " Feb 16 21:16:33.042390 master-0 kubenswrapper[7926]: I0216 21:16:33.042353 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cf5e26c-84a2-45c6-b7dc-ee96dad23175-var-lock" (OuterVolumeSpecName: "var-lock") pod "2cf5e26c-84a2-45c6-b7dc-ee96dad23175" (UID: "2cf5e26c-84a2-45c6-b7dc-ee96dad23175"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:16:33.042610 master-0 kubenswrapper[7926]: I0216 21:16:33.042580 7926 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2cf5e26c-84a2-45c6-b7dc-ee96dad23175-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 21:16:33.042674 master-0 kubenswrapper[7926]: I0216 21:16:33.042618 7926 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2cf5e26c-84a2-45c6-b7dc-ee96dad23175-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:16:33.045184 master-0 kubenswrapper[7926]: I0216 21:16:33.044897 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cf5e26c-84a2-45c6-b7dc-ee96dad23175-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2cf5e26c-84a2-45c6-b7dc-ee96dad23175" (UID: "2cf5e26c-84a2-45c6-b7dc-ee96dad23175"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:16:33.144460 master-0 kubenswrapper[7926]: I0216 21:16:33.144307 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2cf5e26c-84a2-45c6-b7dc-ee96dad23175-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 21:16:33.185980 master-0 kubenswrapper[7926]: I0216 21:16:33.185932 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:33.185980 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:33.185980 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:33.185980 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:33.186515 master-0 kubenswrapper[7926]: I0216 21:16:33.185994 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:33.600010 master-0 kubenswrapper[7926]: I0216 21:16:33.599960 7926 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 16 21:16:33.601088 master-0 kubenswrapper[7926]: I0216 21:16:33.601038 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"2cf5e26c-84a2-45c6-b7dc-ee96dad23175","Type":"ContainerDied","Data":"530378b0633d960adbb9dbb3d961b5d62ae93d6f5ce44d7b8788383b67a4c0a0"} Feb 16 21:16:33.601142 master-0 kubenswrapper[7926]: I0216 21:16:33.601096 7926 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="530378b0633d960adbb9dbb3d961b5d62ae93d6f5ce44d7b8788383b67a4c0a0" Feb 16 21:16:33.601557 master-0 kubenswrapper[7926]: I0216 21:16:33.601510 7926 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="dc5b0952-527e-40f6-84fa-362aa0d5b6f8" Feb 16 21:16:33.601557 master-0 kubenswrapper[7926]: I0216 21:16:33.601552 7926 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="dc5b0952-527e-40f6-84fa-362aa0d5b6f8" Feb 16 21:16:34.185451 master-0 kubenswrapper[7926]: I0216 21:16:34.185363 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:34.185451 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:34.185451 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:34.185451 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:34.186203 master-0 kubenswrapper[7926]: I0216 21:16:34.185466 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:35.184847 master-0 
kubenswrapper[7926]: I0216 21:16:35.184775 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:35.184847 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:35.184847 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:35.184847 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:35.184847 master-0 kubenswrapper[7926]: I0216 21:16:35.184832 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:35.738710 master-0 kubenswrapper[7926]: I0216 21:16:35.738614 7926 scope.go:117] "RemoveContainer" containerID="cbff59f9a87f22154ac16be0a1fd4153598047d145747da8c5ad418b6de5b9ba" Feb 16 21:16:35.739738 master-0 kubenswrapper[7926]: E0216 21:16:35.738838 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=authentication-operator pod=authentication-operator-755d954778-8gnq5_openshift-authentication-operator(27c20f63-9bfb-4703-94d5-0c65475e08d1)\"" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1" Feb 16 21:16:36.185645 master-0 kubenswrapper[7926]: I0216 21:16:36.185595 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:36.185645 master-0 kubenswrapper[7926]: [-]has-synced failed: reason 
withheld Feb 16 21:16:36.185645 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:36.185645 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:36.186077 master-0 kubenswrapper[7926]: I0216 21:16:36.185701 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:37.185247 master-0 kubenswrapper[7926]: I0216 21:16:37.185168 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:37.185247 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:37.185247 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:37.185247 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:37.185247 master-0 kubenswrapper[7926]: I0216 21:16:37.185224 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:37.738320 master-0 kubenswrapper[7926]: I0216 21:16:37.738272 7926 scope.go:117] "RemoveContainer" containerID="a536172006966fa7da41ae7ff0c679f29f5343cacc6f612c4fa109bc18f3bbce" Feb 16 21:16:37.738549 master-0 kubenswrapper[7926]: E0216 21:16:37.738502 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" 
pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:16:38.185187 master-0 kubenswrapper[7926]: I0216 21:16:38.185102 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:38.185187 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:38.185187 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:38.185187 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:38.185187 master-0 kubenswrapper[7926]: I0216 21:16:38.185180 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:39.185471 master-0 kubenswrapper[7926]: I0216 21:16:39.185389 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:39.185471 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:39.185471 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:39.185471 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:39.186480 master-0 kubenswrapper[7926]: I0216 21:16:39.185489 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:40.184448 master-0 kubenswrapper[7926]: I0216 21:16:40.184344 7926 patch_prober.go:28] 
interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:40.184448 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:40.184448 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:40.184448 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:40.184995 master-0 kubenswrapper[7926]: I0216 21:16:40.184455 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:41.185346 master-0 kubenswrapper[7926]: I0216 21:16:41.185280 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:41.185346 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:41.185346 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:41.185346 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:41.185346 master-0 kubenswrapper[7926]: I0216 21:16:41.185345 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:42.186423 master-0 kubenswrapper[7926]: I0216 21:16:42.186318 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Feb 16 21:16:42.186423 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:42.186423 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:42.186423 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:42.187461 master-0 kubenswrapper[7926]: I0216 21:16:42.186441 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:43.185580 master-0 kubenswrapper[7926]: I0216 21:16:43.185518 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:43.185580 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:43.185580 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:43.185580 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:43.185957 master-0 kubenswrapper[7926]: I0216 21:16:43.185605 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:43.739480 master-0 kubenswrapper[7926]: I0216 21:16:43.739398 7926 scope.go:117] "RemoveContainer" containerID="473abb156ae2a59c96465c39d4a668c4215a0ddadc4067a2a5c3edc0e671f3a6" Feb 16 21:16:43.740377 master-0 kubenswrapper[7926]: E0216 21:16:43.739855 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller 
pod=csi-snapshot-controller-74b6595c6d-pc6x9_openshift-cluster-storage-operator(b1ac9776-54c4-46ce-b898-01c8cf35e593)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" podUID="b1ac9776-54c4-46ce-b898-01c8cf35e593" Feb 16 21:16:44.185858 master-0 kubenswrapper[7926]: I0216 21:16:44.185757 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:44.185858 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:44.185858 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:44.185858 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:44.186288 master-0 kubenswrapper[7926]: I0216 21:16:44.185879 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:44.738776 master-0 kubenswrapper[7926]: I0216 21:16:44.738699 7926 scope.go:117] "RemoveContainer" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406" Feb 16 21:16:44.739191 master-0 kubenswrapper[7926]: E0216 21:16:44.738987 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:16:45.185248 master-0 kubenswrapper[7926]: I0216 21:16:45.185162 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:45.185248 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:45.185248 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:45.185248 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:45.185248 master-0 kubenswrapper[7926]: I0216 21:16:45.185241 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:45.428925 master-0 kubenswrapper[7926]: I0216 21:16:45.428804 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:16:45.433570 master-0 kubenswrapper[7926]: I0216 21:16:45.433525 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:16:45.630573 master-0 kubenswrapper[7926]: I0216 21:16:45.630479 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-pt7pr" Feb 16 21:16:45.638827 master-0 kubenswrapper[7926]: I0216 21:16:45.638740 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:16:46.110692 master-0 kubenswrapper[7926]: I0216 21:16:46.110613 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-7485d645b8-9xc4n"] Feb 16 21:16:46.122276 master-0 kubenswrapper[7926]: W0216 21:16:46.122165 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0b7a368_1408_4fc3_ae25_4613b74e7fca.slice/crio-a99765f7253d989ecd2ebab9422f8bd50f36c587e8b7eca1057d0e88a540b814 WatchSource:0}: Error finding container a99765f7253d989ecd2ebab9422f8bd50f36c587e8b7eca1057d0e88a540b814: Status 404 returned error can't find the container with id a99765f7253d989ecd2ebab9422f8bd50f36c587e8b7eca1057d0e88a540b814 Feb 16 21:16:46.125813 master-0 kubenswrapper[7926]: I0216 21:16:46.125752 7926 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:16:46.186961 master-0 kubenswrapper[7926]: I0216 21:16:46.186881 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:46.186961 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:46.186961 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:46.186961 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:46.187566 master-0 kubenswrapper[7926]: I0216 21:16:46.186990 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:46.694762 master-0 kubenswrapper[7926]: I0216 21:16:46.694689 7926 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" event={"ID":"a0b7a368-1408-4fc3-ae25-4613b74e7fca","Type":"ContainerStarted","Data":"a99765f7253d989ecd2ebab9422f8bd50f36c587e8b7eca1057d0e88a540b814"} Feb 16 21:16:47.186405 master-0 kubenswrapper[7926]: I0216 21:16:47.186322 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:16:47.186405 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:16:47.186405 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:16:47.186405 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:16:47.186405 master-0 kubenswrapper[7926]: I0216 21:16:47.186395 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:16:47.186860 master-0 kubenswrapper[7926]: I0216 21:16:47.186451 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:16:47.187132 master-0 kubenswrapper[7926]: I0216 21:16:47.187094 7926 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"998a9ae2beb3b1a75e1664da2f38a4c4498101aa5035a2ceca565eb8eafef20a"} pod="openshift-ingress/router-default-864ddd5f56-z4bnk" containerMessage="Container router failed startup probe, will be restarted" Feb 16 21:16:47.187708 master-0 kubenswrapper[7926]: I0216 21:16:47.187134 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" 
containerName="router" containerID="cri-o://998a9ae2beb3b1a75e1664da2f38a4c4498101aa5035a2ceca565eb8eafef20a" gracePeriod=3600 Feb 16 21:16:47.738733 master-0 kubenswrapper[7926]: I0216 21:16:47.738690 7926 scope.go:117] "RemoveContainer" containerID="cbff59f9a87f22154ac16be0a1fd4153598047d145747da8c5ad418b6de5b9ba" Feb 16 21:16:47.739279 master-0 kubenswrapper[7926]: E0216 21:16:47.739243 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=authentication-operator pod=authentication-operator-755d954778-8gnq5_openshift-authentication-operator(27c20f63-9bfb-4703-94d5-0c65475e08d1)\"" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" podUID="27c20f63-9bfb-4703-94d5-0c65475e08d1" Feb 16 21:16:48.717703 master-0 kubenswrapper[7926]: I0216 21:16:48.717531 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" event={"ID":"a0b7a368-1408-4fc3-ae25-4613b74e7fca","Type":"ContainerStarted","Data":"bdd8652a441643f0683ae4b00f3e1deedc584be862f8396218f05b664f2dabba"} Feb 16 21:16:48.717703 master-0 kubenswrapper[7926]: I0216 21:16:48.717640 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" event={"ID":"a0b7a368-1408-4fc3-ae25-4613b74e7fca","Type":"ContainerStarted","Data":"0072cc6faa68db02c6729fe365e61ad88f628eb88cc1288a9c6b0491a85473a4"} Feb 16 21:16:48.755217 master-0 kubenswrapper[7926]: I0216 21:16:48.755040 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" podStartSLOduration=618.252740654 podStartE2EDuration="10m19.75499472s" podCreationTimestamp="2026-02-16 21:06:29 +0000 UTC" firstStartedPulling="2026-02-16 21:16:46.125573356 +0000 UTC m=+1177.760473676" lastFinishedPulling="2026-02-16 
21:16:47.627827442 +0000 UTC m=+1179.262727742" observedRunningTime="2026-02-16 21:16:48.750425611 +0000 UTC m=+1180.385325991" watchObservedRunningTime="2026-02-16 21:16:48.75499472 +0000 UTC m=+1180.389895060" Feb 16 21:16:52.738692 master-0 kubenswrapper[7926]: I0216 21:16:52.738593 7926 scope.go:117] "RemoveContainer" containerID="a536172006966fa7da41ae7ff0c679f29f5343cacc6f612c4fa109bc18f3bbce" Feb 16 21:16:52.739592 master-0 kubenswrapper[7926]: E0216 21:16:52.739117 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:16:58.742169 master-0 kubenswrapper[7926]: I0216 21:16:58.742134 7926 scope.go:117] "RemoveContainer" containerID="473abb156ae2a59c96465c39d4a668c4215a0ddadc4067a2a5c3edc0e671f3a6" Feb 16 21:16:58.742927 master-0 kubenswrapper[7926]: E0216 21:16:58.742903 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-pc6x9_openshift-cluster-storage-operator(b1ac9776-54c4-46ce-b898-01c8cf35e593)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" podUID="b1ac9776-54c4-46ce-b898-01c8cf35e593" Feb 16 21:16:59.738740 master-0 kubenswrapper[7926]: I0216 21:16:59.738614 7926 scope.go:117] "RemoveContainer" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406" Feb 16 21:16:59.738919 master-0 kubenswrapper[7926]: E0216 21:16:59.738889 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:17:01.739809 master-0 kubenswrapper[7926]: I0216 21:17:01.739716 7926 scope.go:117] "RemoveContainer" containerID="cbff59f9a87f22154ac16be0a1fd4153598047d145747da8c5ad418b6de5b9ba" Feb 16 21:17:02.809723 master-0 kubenswrapper[7926]: I0216 21:17:02.809638 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-8gnq5_27c20f63-9bfb-4703-94d5-0c65475e08d1/authentication-operator/6.log" Feb 16 21:17:02.809723 master-0 kubenswrapper[7926]: I0216 21:17:02.809715 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" event={"ID":"27c20f63-9bfb-4703-94d5-0c65475e08d1","Type":"ContainerStarted","Data":"472d6ea4b832d6dda5b947964aa6ee6e541f575109f7f54f510a3c8f6075fe63"} Feb 16 21:17:05.737997 master-0 kubenswrapper[7926]: I0216 21:17:05.737859 7926 scope.go:117] "RemoveContainer" containerID="a536172006966fa7da41ae7ff0c679f29f5343cacc6f612c4fa109bc18f3bbce" Feb 16 21:17:05.738597 master-0 kubenswrapper[7926]: E0216 21:17:05.738108 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:17:10.738954 master-0 kubenswrapper[7926]: I0216 21:17:10.738863 7926 scope.go:117] "RemoveContainer" 
containerID="473abb156ae2a59c96465c39d4a668c4215a0ddadc4067a2a5c3edc0e671f3a6" Feb 16 21:17:10.739970 master-0 kubenswrapper[7926]: E0216 21:17:10.739248 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-pc6x9_openshift-cluster-storage-operator(b1ac9776-54c4-46ce-b898-01c8cf35e593)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" podUID="b1ac9776-54c4-46ce-b898-01c8cf35e593" Feb 16 21:17:11.743881 master-0 kubenswrapper[7926]: I0216 21:17:11.743792 7926 scope.go:117] "RemoveContainer" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406" Feb 16 21:17:11.745001 master-0 kubenswrapper[7926]: E0216 21:17:11.744216 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:17:16.739282 master-0 kubenswrapper[7926]: I0216 21:17:16.739218 7926 scope.go:117] "RemoveContainer" containerID="a536172006966fa7da41ae7ff0c679f29f5343cacc6f612c4fa109bc18f3bbce" Feb 16 21:17:16.739931 master-0 kubenswrapper[7926]: E0216 21:17:16.739544 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:17:21.042497 
master-0 kubenswrapper[7926]: I0216 21:17:21.042425 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:17:24.738463 master-0 kubenswrapper[7926]: I0216 21:17:24.738382 7926 scope.go:117] "RemoveContainer" containerID="473abb156ae2a59c96465c39d4a668c4215a0ddadc4067a2a5c3edc0e671f3a6" Feb 16 21:17:24.739334 master-0 kubenswrapper[7926]: E0216 21:17:24.738755 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-pc6x9_openshift-cluster-storage-operator(b1ac9776-54c4-46ce-b898-01c8cf35e593)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" podUID="b1ac9776-54c4-46ce-b898-01c8cf35e593" Feb 16 21:17:26.739955 master-0 kubenswrapper[7926]: I0216 21:17:26.739805 7926 scope.go:117] "RemoveContainer" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406" Feb 16 21:17:26.740541 master-0 kubenswrapper[7926]: E0216 21:17:26.740068 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:17:31.738800 master-0 kubenswrapper[7926]: I0216 21:17:31.738726 7926 scope.go:117] "RemoveContainer" containerID="a536172006966fa7da41ae7ff0c679f29f5343cacc6f612c4fa109bc18f3bbce" Feb 16 21:17:31.739479 master-0 kubenswrapper[7926]: E0216 21:17:31.739114 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: 
\"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:17:34.031490 master-0 kubenswrapper[7926]: I0216 21:17:34.031384 7926 generic.go:334] "Generic (PLEG): container finished" podID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerID="998a9ae2beb3b1a75e1664da2f38a4c4498101aa5035a2ceca565eb8eafef20a" exitCode=0 Feb 16 21:17:34.032281 master-0 kubenswrapper[7926]: I0216 21:17:34.031501 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" event={"ID":"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee","Type":"ContainerDied","Data":"998a9ae2beb3b1a75e1664da2f38a4c4498101aa5035a2ceca565eb8eafef20a"} Feb 16 21:17:34.032281 master-0 kubenswrapper[7926]: I0216 21:17:34.031576 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" event={"ID":"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee","Type":"ContainerStarted","Data":"2d8a3bac5bc14187e5d2a390ac77e494ae47030d02fa35967ecd1bb1934d32e8"} Feb 16 21:17:34.032281 master-0 kubenswrapper[7926]: I0216 21:17:34.031600 7926 scope.go:117] "RemoveContainer" containerID="f1ed58b2ccf00425ebf16fa5a6dffc055e3422108b96a5f2732ff92f9613603a" Feb 16 21:17:34.183279 master-0 kubenswrapper[7926]: I0216 21:17:34.183221 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:17:34.188385 master-0 kubenswrapper[7926]: I0216 21:17:34.188347 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:17:34.188385 master-0 kubenswrapper[7926]: 
[-]has-synced failed: reason withheld Feb 16 21:17:34.188385 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:17:34.188385 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:17:34.188555 master-0 kubenswrapper[7926]: I0216 21:17:34.188410 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:17:35.183438 master-0 kubenswrapper[7926]: I0216 21:17:35.183323 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:17:35.186783 master-0 kubenswrapper[7926]: I0216 21:17:35.186705 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:17:35.186783 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:17:35.186783 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:17:35.186783 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:17:35.187070 master-0 kubenswrapper[7926]: I0216 21:17:35.186794 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:17:35.738753 master-0 kubenswrapper[7926]: I0216 21:17:35.738615 7926 scope.go:117] "RemoveContainer" containerID="473abb156ae2a59c96465c39d4a668c4215a0ddadc4067a2a5c3edc0e671f3a6" Feb 16 21:17:35.739255 master-0 kubenswrapper[7926]: E0216 21:17:35.739103 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with 
CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-pc6x9_openshift-cluster-storage-operator(b1ac9776-54c4-46ce-b898-01c8cf35e593)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" podUID="b1ac9776-54c4-46ce-b898-01c8cf35e593" Feb 16 21:17:36.185390 master-0 kubenswrapper[7926]: I0216 21:17:36.185316 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:17:36.185390 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:17:36.185390 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:17:36.185390 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:17:36.186158 master-0 kubenswrapper[7926]: I0216 21:17:36.185397 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:17:37.186166 master-0 kubenswrapper[7926]: I0216 21:17:37.186059 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:17:37.186166 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:17:37.186166 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:17:37.186166 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:17:37.187538 master-0 kubenswrapper[7926]: I0216 21:17:37.186198 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" 
podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:17:37.738993 master-0 kubenswrapper[7926]: I0216 21:17:37.738920 7926 scope.go:117] "RemoveContainer" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406" Feb 16 21:17:37.739374 master-0 kubenswrapper[7926]: E0216 21:17:37.739335 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:17:38.188533 master-0 kubenswrapper[7926]: I0216 21:17:38.188442 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:17:38.188533 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:17:38.188533 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:17:38.188533 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:17:38.189349 master-0 kubenswrapper[7926]: I0216 21:17:38.188552 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:17:39.185252 master-0 kubenswrapper[7926]: I0216 21:17:39.185111 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 16 21:17:39.185252 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:17:39.185252 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:17:39.185252 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:17:39.185863 master-0 kubenswrapper[7926]: I0216 21:17:39.185280 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:17:40.184477 master-0 kubenswrapper[7926]: I0216 21:17:40.184382 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:17:40.184477 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:17:40.184477 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:17:40.184477 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:17:40.184477 master-0 kubenswrapper[7926]: I0216 21:17:40.184447 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:17:41.185840 master-0 kubenswrapper[7926]: I0216 21:17:41.185747 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:17:41.185840 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:17:41.185840 master-0 kubenswrapper[7926]: [+]process-running ok 
Feb 16 21:17:41.185840 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:17:41.186819 master-0 kubenswrapper[7926]: I0216 21:17:41.185842 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:17:42.187792 master-0 kubenswrapper[7926]: I0216 21:17:42.187629 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:17:42.187792 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:17:42.187792 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:17:42.187792 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:17:42.188600 master-0 kubenswrapper[7926]: I0216 21:17:42.187797 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:17:43.186704 master-0 kubenswrapper[7926]: I0216 21:17:43.186518 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:17:43.186704 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:17:43.186704 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:17:43.186704 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:17:43.186704 master-0 kubenswrapper[7926]: I0216 21:17:43.186639 7926 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:17:44.186109 master-0 kubenswrapper[7926]: I0216 21:17:44.185994 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:17:44.186109 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:17:44.186109 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:17:44.186109 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:17:44.186109 master-0 kubenswrapper[7926]: I0216 21:17:44.186098 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:17:44.738243 master-0 kubenswrapper[7926]: I0216 21:17:44.738147 7926 scope.go:117] "RemoveContainer" containerID="a536172006966fa7da41ae7ff0c679f29f5343cacc6f612c4fa109bc18f3bbce" Feb 16 21:17:44.738518 master-0 kubenswrapper[7926]: E0216 21:17:44.738391 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:17:45.185113 master-0 kubenswrapper[7926]: I0216 21:17:45.185006 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:17:45.185113 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:17:45.185113 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:17:45.185113 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:17:45.185113 master-0 kubenswrapper[7926]: I0216 21:17:45.185070 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:17:46.186977 master-0 kubenswrapper[7926]: I0216 21:17:46.186839 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:17:46.186977 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:17:46.186977 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:17:46.186977 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:17:46.187972 master-0 kubenswrapper[7926]: I0216 21:17:46.186998 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:17:47.185769 master-0 kubenswrapper[7926]: I0216 21:17:47.185673 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:17:47.185769 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:17:47.185769 
master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:17:47.185769 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:17:47.186209 master-0 kubenswrapper[7926]: I0216 21:17:47.185768 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:17:48.187428 master-0 kubenswrapper[7926]: I0216 21:17:48.187359 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:17:48.187428 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:17:48.187428 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:17:48.187428 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:17:48.188537 master-0 kubenswrapper[7926]: I0216 21:17:48.187439 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:17:49.186356 master-0 kubenswrapper[7926]: I0216 21:17:49.186272 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:17:49.186356 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:17:49.186356 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:17:49.186356 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:17:49.186984 master-0 kubenswrapper[7926]: I0216 21:17:49.186379 7926 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:17:49.739836 master-0 kubenswrapper[7926]: I0216 21:17:49.739731 7926 scope.go:117] "RemoveContainer" containerID="473abb156ae2a59c96465c39d4a668c4215a0ddadc4067a2a5c3edc0e671f3a6"
Feb 16 21:17:50.164778 master-0 kubenswrapper[7926]: I0216 21:17:50.164697 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/5.log"
Feb 16 21:17:50.165006 master-0 kubenswrapper[7926]: I0216 21:17:50.164833 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" event={"ID":"b1ac9776-54c4-46ce-b898-01c8cf35e593","Type":"ContainerStarted","Data":"dbf27d3e8d5c7c62e35cbb6423a4806befc25edd2d78c5f0092f98b1bff2b619"}
Feb 16 21:17:50.185920 master-0 kubenswrapper[7926]: I0216 21:17:50.185840 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:17:50.185920 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:17:50.185920 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:17:50.185920 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:17:50.186176 master-0 kubenswrapper[7926]: I0216 21:17:50.185934 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:17:51.185902
master-0 kubenswrapper[7926]: I0216 21:17:51.185787 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:17:51.185902 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:17:51.185902 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:17:51.185902 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:17:51.186605 master-0 kubenswrapper[7926]: I0216 21:17:51.185900 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:17:52.185089 master-0 kubenswrapper[7926]: I0216 21:17:52.184960 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:17:52.185089 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:17:52.185089 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:17:52.185089 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:17:52.185089 master-0 kubenswrapper[7926]: I0216 21:17:52.185056 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:17:52.739058 master-0 kubenswrapper[7926]: I0216 21:17:52.738958 7926 scope.go:117] "RemoveContainer" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406" Feb 16 21:17:52.740020 master-0 
kubenswrapper[7926]: E0216 21:17:52.739412 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:17:53.186532 master-0 kubenswrapper[7926]: I0216 21:17:53.186368 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:17:53.186532 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:17:53.186532 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:17:53.186532 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:17:53.186532 master-0 kubenswrapper[7926]: I0216 21:17:53.186524 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:17:54.186064 master-0 kubenswrapper[7926]: I0216 21:17:54.185940 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:17:54.186064 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:17:54.186064 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:17:54.186064 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:17:54.187351 master-0 kubenswrapper[7926]: I0216 21:17:54.186238 7926 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:17:55.185709 master-0 kubenswrapper[7926]: I0216 21:17:55.185594 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:17:55.185709 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:17:55.185709 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:17:55.185709 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:17:55.187438 master-0 kubenswrapper[7926]: I0216 21:17:55.187373 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:17:55.739205 master-0 kubenswrapper[7926]: I0216 21:17:55.739117 7926 scope.go:117] "RemoveContainer" containerID="a536172006966fa7da41ae7ff0c679f29f5343cacc6f612c4fa109bc18f3bbce" Feb 16 21:17:55.739680 master-0 kubenswrapper[7926]: E0216 21:17:55.739589 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:17:56.186566 master-0 kubenswrapper[7926]: I0216 21:17:56.186425 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:17:56.186566 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:17:56.186566 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:17:56.186566 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:17:56.187432 master-0 kubenswrapper[7926]: I0216 21:17:56.186595 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:17:57.185399 master-0 kubenswrapper[7926]: I0216 21:17:57.185327 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:17:57.185399 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:17:57.185399 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:17:57.185399 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:17:57.185714 master-0 kubenswrapper[7926]: I0216 21:17:57.185417 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:17:58.186054 master-0 kubenswrapper[7926]: I0216 21:17:58.185938 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:17:58.186054 master-0 kubenswrapper[7926]: 
[-]has-synced failed: reason withheld Feb 16 21:17:58.186054 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:17:58.186054 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:17:58.186745 master-0 kubenswrapper[7926]: I0216 21:17:58.186126 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:17:59.185428 master-0 kubenswrapper[7926]: I0216 21:17:59.185345 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:17:59.185428 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:17:59.185428 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:17:59.185428 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:17:59.185810 master-0 kubenswrapper[7926]: I0216 21:17:59.185439 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:18:00.185878 master-0 kubenswrapper[7926]: I0216 21:18:00.185787 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:18:00.185878 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:18:00.185878 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:18:00.185878 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:18:00.185878 master-0 
kubenswrapper[7926]: I0216 21:18:00.185853 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:18:01.185550 master-0 kubenswrapper[7926]: I0216 21:18:01.185463 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:18:01.185550 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:18:01.185550 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:18:01.185550 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:18:01.186963 master-0 kubenswrapper[7926]: I0216 21:18:01.185555 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:18:02.185689 master-0 kubenswrapper[7926]: I0216 21:18:02.185563 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:18:02.185689 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:18:02.185689 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:18:02.185689 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:18:02.186302 master-0 kubenswrapper[7926]: I0216 21:18:02.185709 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:18:03.185570 master-0 kubenswrapper[7926]: I0216 21:18:03.185496 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:18:03.185570 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:18:03.185570 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:18:03.185570 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:18:03.185570 master-0 kubenswrapper[7926]: I0216 21:18:03.185562 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:18:04.185996 master-0 kubenswrapper[7926]: I0216 21:18:04.185919 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:18:04.185996 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:18:04.185996 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:18:04.185996 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:18:04.186937 master-0 kubenswrapper[7926]: I0216 21:18:04.186078 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:18:05.185143 master-0 kubenswrapper[7926]: I0216 21:18:05.185035 7926 patch_prober.go:28] interesting 
pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:05.185143 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:05.185143 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:05.185143 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:05.185143 master-0 kubenswrapper[7926]: I0216 21:18:05.185143 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:06.186531 master-0 kubenswrapper[7926]: I0216 21:18:06.186437 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:06.186531 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:06.186531 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:06.186531 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:06.186531 master-0 kubenswrapper[7926]: I0216 21:18:06.186534 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:06.738512 master-0 kubenswrapper[7926]: I0216 21:18:06.738400 7926 scope.go:117] "RemoveContainer" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406"
Feb 16 21:18:06.739158 master-0 kubenswrapper[7926]: E0216 21:18:06.738813 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3"
Feb 16 21:18:07.186105 master-0 kubenswrapper[7926]: I0216 21:18:07.186027 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:07.186105 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:07.186105 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:07.186105 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:07.186459 master-0 kubenswrapper[7926]: I0216 21:18:07.186129 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:08.186042 master-0 kubenswrapper[7926]: I0216 21:18:08.185917 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:08.186042 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:08.186042 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:08.186042 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:08.187367 master-0 kubenswrapper[7926]: I0216 21:18:08.186105 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:09.185360 master-0 kubenswrapper[7926]: I0216 21:18:09.185299 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:09.185360 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:09.185360 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:09.185360 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:09.185684 master-0 kubenswrapper[7926]: I0216 21:18:09.185373 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:09.738818 master-0 kubenswrapper[7926]: I0216 21:18:09.738742 7926 scope.go:117] "RemoveContainer" containerID="a536172006966fa7da41ae7ff0c679f29f5343cacc6f612c4fa109bc18f3bbce"
Feb 16 21:18:10.185255 master-0 kubenswrapper[7926]: I0216 21:18:10.185199 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:10.185255 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:10.185255 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:10.185255 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:10.185547 master-0 kubenswrapper[7926]: I0216 21:18:10.185265 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:10.310376 master-0 kubenswrapper[7926]: I0216 21:18:10.310339 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/5.log"
Feb 16 21:18:10.311096 master-0 kubenswrapper[7926]: I0216 21:18:10.311055 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" event={"ID":"cef33294-81fb-41a2-811d-2565f94514d1","Type":"ContainerStarted","Data":"cddc9c1d447dc5a0250ef24bddae48c93c58b480b6bca11a2ff7438d4148bf8f"}
Feb 16 21:18:11.184953 master-0 kubenswrapper[7926]: I0216 21:18:11.184866 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:11.184953 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:11.184953 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:11.184953 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:11.184953 master-0 kubenswrapper[7926]: I0216 21:18:11.184937 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:12.186290 master-0 kubenswrapper[7926]: I0216 21:18:12.186220 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:12.186290 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:12.186290 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:12.186290 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:12.187035 master-0 kubenswrapper[7926]: I0216 21:18:12.186950 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:13.187096 master-0 kubenswrapper[7926]: I0216 21:18:13.186882 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:13.187096 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:13.187096 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:13.187096 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:13.187857 master-0 kubenswrapper[7926]: I0216 21:18:13.187150 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:14.185438 master-0 kubenswrapper[7926]: I0216 21:18:14.185338 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:14.185438 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:14.185438 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:14.185438 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:14.185438 master-0 kubenswrapper[7926]: I0216 21:18:14.185421 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:15.184305 master-0 kubenswrapper[7926]: I0216 21:18:15.184236 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:15.184305 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:15.184305 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:15.184305 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:15.184305 master-0 kubenswrapper[7926]: I0216 21:18:15.184298 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:16.185707 master-0 kubenswrapper[7926]: I0216 21:18:16.185283 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:16.185707 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:16.185707 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:16.185707 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:16.185707 master-0 kubenswrapper[7926]: I0216 21:18:16.185353 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:17.186337 master-0 kubenswrapper[7926]: I0216 21:18:17.186240 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:17.186337 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:17.186337 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:17.186337 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:17.187390 master-0 kubenswrapper[7926]: I0216 21:18:17.186349 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:18.185541 master-0 kubenswrapper[7926]: I0216 21:18:18.185423 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:18.185541 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:18.185541 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:18.185541 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:18.186352 master-0 kubenswrapper[7926]: I0216 21:18:18.185554 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:19.186214 master-0 kubenswrapper[7926]: I0216 21:18:19.186079 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:19.186214 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:19.186214 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:19.186214 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:19.187584 master-0 kubenswrapper[7926]: I0216 21:18:19.186210 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:20.185572 master-0 kubenswrapper[7926]: I0216 21:18:20.185466 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:20.185572 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:20.185572 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:20.185572 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:20.186057 master-0 kubenswrapper[7926]: I0216 21:18:20.185619 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:20.739948 master-0 kubenswrapper[7926]: I0216 21:18:20.739831 7926 scope.go:117] "RemoveContainer" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406"
Feb 16 21:18:20.740800 master-0 kubenswrapper[7926]: E0216 21:18:20.740366 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3"
Feb 16 21:18:21.185193 master-0 kubenswrapper[7926]: I0216 21:18:21.185126 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:21.185193 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:21.185193 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:21.185193 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:21.185530 master-0 kubenswrapper[7926]: I0216 21:18:21.185210 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:22.185504 master-0 kubenswrapper[7926]: I0216 21:18:22.185437 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:22.185504 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:22.185504 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:22.185504 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:22.186070 master-0 kubenswrapper[7926]: I0216 21:18:22.185524 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:23.184678 master-0 kubenswrapper[7926]: I0216 21:18:23.184578 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:23.184678 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:23.184678 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:23.184678 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:23.184989 master-0 kubenswrapper[7926]: I0216 21:18:23.184708 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:24.185743 master-0 kubenswrapper[7926]: I0216 21:18:24.185679 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:24.185743 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:24.185743 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:24.185743 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:24.186967 master-0 kubenswrapper[7926]: I0216 21:18:24.185759 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:25.186116 master-0 kubenswrapper[7926]: I0216 21:18:25.186017 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:25.186116 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:25.186116 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:25.186116 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:25.186116 master-0 kubenswrapper[7926]: I0216 21:18:25.186120 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:26.185464 master-0 kubenswrapper[7926]: I0216 21:18:26.185363 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:26.185464 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:26.185464 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:26.185464 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:26.185464 master-0 kubenswrapper[7926]: I0216 21:18:26.185446 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:27.185447 master-0 kubenswrapper[7926]: I0216 21:18:27.185379 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:27.185447 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:27.185447 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:27.185447 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:27.185447 master-0 kubenswrapper[7926]: I0216 21:18:27.185444 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:28.185401 master-0 kubenswrapper[7926]: I0216 21:18:28.185295 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:28.185401 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:28.185401 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:28.185401 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:28.185996 master-0 kubenswrapper[7926]: I0216 21:18:28.185584 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:29.184692 master-0 kubenswrapper[7926]: I0216 21:18:29.184580 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:29.184692 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:29.184692 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:29.184692 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:29.184968 master-0 kubenswrapper[7926]: I0216 21:18:29.184725 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:30.184231 master-0 kubenswrapper[7926]: I0216 21:18:30.184173 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:30.184231 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:30.184231 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:30.184231 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:30.185117 master-0 kubenswrapper[7926]: I0216 21:18:30.184255 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:31.186590 master-0 kubenswrapper[7926]: I0216 21:18:31.186519 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:31.186590 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:31.186590 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:31.186590 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:31.186590 master-0 kubenswrapper[7926]: I0216 21:18:31.186594 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:32.184080 master-0 kubenswrapper[7926]: I0216 21:18:32.184020 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:32.184080 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:32.184080 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:32.184080 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:32.184384 master-0 kubenswrapper[7926]: I0216 21:18:32.184093 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:33.186216 master-0 kubenswrapper[7926]: I0216 21:18:33.186138 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:33.186216 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:33.186216 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:33.186216 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:33.187354 master-0 kubenswrapper[7926]: I0216 21:18:33.186246 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:34.185460 master-0 kubenswrapper[7926]: I0216 21:18:34.185307 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:34.185460 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:34.185460 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:34.185460 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:34.186045 master-0 kubenswrapper[7926]: I0216 21:18:34.185483 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:34.738687 master-0 kubenswrapper[7926]: I0216 21:18:34.738588 7926 scope.go:117] "RemoveContainer" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406"
Feb 16 21:18:34.739443 master-0 kubenswrapper[7926]: E0216 21:18:34.738824 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3"
Feb 16 21:18:35.185059 master-0 kubenswrapper[7926]: I0216 21:18:35.184998 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:35.185059 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:35.185059 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:35.185059 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:35.185059 master-0 kubenswrapper[7926]: I0216 21:18:35.185057 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:36.185226 master-0 kubenswrapper[7926]: I0216 21:18:36.185172 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:36.185226 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:36.185226 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:36.185226 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:36.186052 master-0 kubenswrapper[7926]: I0216 21:18:36.185252 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:37.184968 master-0 kubenswrapper[7926]: I0216 21:18:37.184899 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:37.184968 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:37.184968 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:37.184968 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:37.184968 master-0 kubenswrapper[7926]: I0216 21:18:37.184963 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:38.186205 master-0 kubenswrapper[7926]: I0216 21:18:38.186103 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:38.186205 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:38.186205 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:38.186205 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:38.187329 master-0 kubenswrapper[7926]: I0216 21:18:38.186260 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:39.186264 master-0 kubenswrapper[7926]: I0216 21:18:39.186148 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:39.186264 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:39.186264 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:39.186264 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:39.187363 master-0 kubenswrapper[7926]: I0216 21:18:39.186273 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:40.185560 master-0 kubenswrapper[7926]: I0216 21:18:40.185468 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:40.185560 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:40.185560 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:40.185560 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:40.185560 master-0 kubenswrapper[7926]: I0216 21:18:40.185541 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:41.186122 master-0 kubenswrapper[7926]: I0216 21:18:41.186048 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:41.186122 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:41.186122 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:41.186122 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:41.186856 master-0 kubenswrapper[7926]: I0216 21:18:41.186151 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:42.186689 master-0 kubenswrapper[7926]: I0216 21:18:42.186524 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:42.186689 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:42.186689 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:42.186689 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:42.188058 master-0 kubenswrapper[7926]: I0216 21:18:42.186720 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:43.186178 master-0 kubenswrapper[7926]: I0216 21:18:43.186067 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:43.186178 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:43.186178 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:43.186178 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:43.186714 master-0 kubenswrapper[7926]: I0216 21:18:43.186210 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:44.185186 master-0 kubenswrapper[7926]: I0216 21:18:44.185092 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:44.185186 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:44.185186 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:44.185186 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:44.186208 master-0 kubenswrapper[7926]: I0216 21:18:44.185194 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:45.185332 master-0 kubenswrapper[7926]: I0216 21:18:45.185244 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:45.185332 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:45.185332 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:45.185332 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:45.186014 master-0 kubenswrapper[7926]: I0216 21:18:45.185344 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:18:46.185934 master-0 kubenswrapper[7926]: I0216 21:18:46.185864 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:18:46.185934 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:18:46.185934 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:18:46.185934 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:18:46.186472 master-0
kubenswrapper[7926]: I0216 21:18:46.185950 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:18:47.184754 master-0 kubenswrapper[7926]: I0216 21:18:47.184630 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:18:47.184754 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:18:47.184754 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:18:47.184754 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:18:47.184754 master-0 kubenswrapper[7926]: I0216 21:18:47.184743 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:18:48.185544 master-0 kubenswrapper[7926]: I0216 21:18:48.185411 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:18:48.185544 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:18:48.185544 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:18:48.185544 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:18:48.186797 master-0 kubenswrapper[7926]: I0216 21:18:48.185576 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:18:49.185971 master-0 kubenswrapper[7926]: I0216 21:18:49.185831 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:18:49.185971 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:18:49.185971 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:18:49.185971 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:18:49.187077 master-0 kubenswrapper[7926]: I0216 21:18:49.185979 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:18:49.738173 master-0 kubenswrapper[7926]: I0216 21:18:49.738107 7926 scope.go:117] "RemoveContainer" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406" Feb 16 21:18:49.738448 master-0 kubenswrapper[7926]: E0216 21:18:49.738398 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:18:50.185644 master-0 kubenswrapper[7926]: I0216 21:18:50.185560 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 
21:18:50.185644 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:18:50.185644 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:18:50.185644 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:18:50.185644 master-0 kubenswrapper[7926]: I0216 21:18:50.185680 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:18:51.185772 master-0 kubenswrapper[7926]: I0216 21:18:51.185707 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:18:51.185772 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:18:51.185772 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:18:51.185772 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:18:51.186435 master-0 kubenswrapper[7926]: I0216 21:18:51.185794 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:18:52.185070 master-0 kubenswrapper[7926]: I0216 21:18:52.185009 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:18:52.185070 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:18:52.185070 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:18:52.185070 master-0 kubenswrapper[7926]: healthz 
check failed Feb 16 21:18:52.185535 master-0 kubenswrapper[7926]: I0216 21:18:52.185072 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:18:53.186032 master-0 kubenswrapper[7926]: I0216 21:18:53.185931 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:18:53.186032 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:18:53.186032 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:18:53.186032 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:18:53.187040 master-0 kubenswrapper[7926]: I0216 21:18:53.186046 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:18:54.185779 master-0 kubenswrapper[7926]: I0216 21:18:54.185602 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:18:54.185779 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:18:54.185779 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:18:54.185779 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:18:54.186978 master-0 kubenswrapper[7926]: I0216 21:18:54.185827 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" 
podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:18:55.185125 master-0 kubenswrapper[7926]: I0216 21:18:55.185039 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:18:55.185125 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:18:55.185125 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:18:55.185125 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:18:55.185125 master-0 kubenswrapper[7926]: I0216 21:18:55.185110 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:18:56.185138 master-0 kubenswrapper[7926]: I0216 21:18:56.185066 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:18:56.185138 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:18:56.185138 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:18:56.185138 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:18:56.185138 master-0 kubenswrapper[7926]: I0216 21:18:56.185141 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:18:57.185738 master-0 kubenswrapper[7926]: I0216 21:18:57.185688 7926 
patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:18:57.185738 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:18:57.185738 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:18:57.185738 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:18:57.186260 master-0 kubenswrapper[7926]: I0216 21:18:57.185751 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:18:58.185225 master-0 kubenswrapper[7926]: I0216 21:18:58.185113 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:18:58.185225 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:18:58.185225 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:18:58.185225 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:18:58.185225 master-0 kubenswrapper[7926]: I0216 21:18:58.185218 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:18:59.184678 master-0 kubenswrapper[7926]: I0216 21:18:59.184578 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 16 21:18:59.184678 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:18:59.184678 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:18:59.184678 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:18:59.184678 master-0 kubenswrapper[7926]: I0216 21:18:59.184668 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:00.186019 master-0 kubenswrapper[7926]: I0216 21:19:00.185907 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:00.186019 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:00.186019 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:00.186019 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:00.186019 master-0 kubenswrapper[7926]: I0216 21:19:00.186003 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:01.185407 master-0 kubenswrapper[7926]: I0216 21:19:01.185292 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:01.185407 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:01.185407 master-0 kubenswrapper[7926]: [+]process-running ok 
Feb 16 21:19:01.185407 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:01.185407 master-0 kubenswrapper[7926]: I0216 21:19:01.185369 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:02.186049 master-0 kubenswrapper[7926]: I0216 21:19:02.185958 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:02.186049 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:02.186049 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:02.186049 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:02.186049 master-0 kubenswrapper[7926]: I0216 21:19:02.186048 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:03.187172 master-0 kubenswrapper[7926]: I0216 21:19:03.187064 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:03.187172 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:03.187172 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:03.187172 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:03.188266 master-0 kubenswrapper[7926]: I0216 21:19:03.187217 7926 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:04.186506 master-0 kubenswrapper[7926]: I0216 21:19:04.186385 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:04.186506 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:04.186506 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:04.186506 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:04.187201 master-0 kubenswrapper[7926]: I0216 21:19:04.186532 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:04.742765 master-0 kubenswrapper[7926]: I0216 21:19:04.742659 7926 scope.go:117] "RemoveContainer" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406" Feb 16 21:19:04.744235 master-0 kubenswrapper[7926]: E0216 21:19:04.744182 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:19:05.184749 master-0 kubenswrapper[7926]: I0216 21:19:05.184539 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:05.184749 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:05.184749 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:05.184749 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:05.184749 master-0 kubenswrapper[7926]: I0216 21:19:05.184674 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:06.186371 master-0 kubenswrapper[7926]: I0216 21:19:06.186288 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:06.186371 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:06.186371 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:06.186371 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:06.186371 master-0 kubenswrapper[7926]: I0216 21:19:06.186359 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:07.185927 master-0 kubenswrapper[7926]: I0216 21:19:07.185794 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:07.185927 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:07.185927 master-0 
kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:07.185927 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:07.187132 master-0 kubenswrapper[7926]: I0216 21:19:07.185953 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:08.185170 master-0 kubenswrapper[7926]: I0216 21:19:08.185075 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:08.185170 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:08.185170 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:08.185170 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:08.185170 master-0 kubenswrapper[7926]: I0216 21:19:08.185152 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:09.185136 master-0 kubenswrapper[7926]: I0216 21:19:09.185021 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:09.185136 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:09.185136 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:09.185136 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:09.185136 master-0 kubenswrapper[7926]: I0216 21:19:09.185113 7926 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:10.184972 master-0 kubenswrapper[7926]: I0216 21:19:10.184907 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:10.184972 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:10.184972 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:10.184972 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:10.184972 master-0 kubenswrapper[7926]: I0216 21:19:10.184966 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:11.184777 master-0 kubenswrapper[7926]: I0216 21:19:11.184613 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:11.184777 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:11.184777 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:11.184777 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:11.185228 master-0 kubenswrapper[7926]: I0216 21:19:11.184823 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 16 21:19:12.185727 master-0 kubenswrapper[7926]: I0216 21:19:12.185679 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:12.185727 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:12.185727 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:12.185727 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:12.186303 master-0 kubenswrapper[7926]: I0216 21:19:12.185745 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:13.184967 master-0 kubenswrapper[7926]: I0216 21:19:13.184866 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:13.184967 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:13.184967 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:13.184967 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:13.184967 master-0 kubenswrapper[7926]: I0216 21:19:13.184953 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:14.185724 master-0 kubenswrapper[7926]: I0216 21:19:14.185506 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:14.185724 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:14.185724 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:14.185724 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:14.187246 master-0 kubenswrapper[7926]: I0216 21:19:14.185642 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:15.185437 master-0 kubenswrapper[7926]: I0216 21:19:15.185363 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:15.185437 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:15.185437 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:15.185437 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:15.185437 master-0 kubenswrapper[7926]: I0216 21:19:15.185427 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:15.738623 master-0 kubenswrapper[7926]: I0216 21:19:15.738558 7926 scope.go:117] "RemoveContainer" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406" Feb 16 21:19:15.739184 master-0 kubenswrapper[7926]: E0216 21:19:15.739120 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:19:16.186133 master-0 kubenswrapper[7926]: I0216 21:19:16.186065 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:16.186133 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:16.186133 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:16.186133 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:16.187389 master-0 kubenswrapper[7926]: I0216 21:19:16.187268 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:17.184712 master-0 kubenswrapper[7926]: I0216 21:19:17.184673 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:17.184712 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:17.184712 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:17.184712 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:17.185177 master-0 kubenswrapper[7926]: I0216 21:19:17.185152 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 16 21:19:18.186089 master-0 kubenswrapper[7926]: I0216 21:19:18.186008 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:18.186089 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:18.186089 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:18.186089 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:18.187227 master-0 kubenswrapper[7926]: I0216 21:19:18.186094 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:19.184613 master-0 kubenswrapper[7926]: I0216 21:19:19.184554 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:19.184613 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:19.184613 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:19.184613 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:19.184980 master-0 kubenswrapper[7926]: I0216 21:19:19.184627 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:20.185008 master-0 kubenswrapper[7926]: I0216 21:19:20.184719 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:20.185008 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:20.185008 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:20.185008 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:20.185748 master-0 kubenswrapper[7926]: I0216 21:19:20.185019 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:21.185986 master-0 kubenswrapper[7926]: I0216 21:19:21.185919 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:21.185986 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:21.185986 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:21.185986 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:21.187185 master-0 kubenswrapper[7926]: I0216 21:19:21.186887 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:22.185046 master-0 kubenswrapper[7926]: I0216 21:19:22.184957 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:22.185046 master-0 kubenswrapper[7926]: 
[-]has-synced failed: reason withheld Feb 16 21:19:22.185046 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:22.185046 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:22.185509 master-0 kubenswrapper[7926]: I0216 21:19:22.185060 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:23.185899 master-0 kubenswrapper[7926]: I0216 21:19:23.185848 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:23.185899 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:23.185899 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:23.185899 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:23.186861 master-0 kubenswrapper[7926]: I0216 21:19:23.185936 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:24.186307 master-0 kubenswrapper[7926]: I0216 21:19:24.186220 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:24.186307 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:24.186307 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:24.186307 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:24.186307 master-0 
kubenswrapper[7926]: I0216 21:19:24.186290 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:25.184892 master-0 kubenswrapper[7926]: I0216 21:19:25.184815 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:25.184892 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:25.184892 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:25.184892 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:25.185536 master-0 kubenswrapper[7926]: I0216 21:19:25.184906 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:26.185424 master-0 kubenswrapper[7926]: I0216 21:19:26.185342 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:26.185424 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:26.185424 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:26.185424 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:26.186505 master-0 kubenswrapper[7926]: I0216 21:19:26.185429 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:27.185111 master-0 kubenswrapper[7926]: I0216 21:19:27.185046 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:27.185111 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:27.185111 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:27.185111 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:27.185382 master-0 kubenswrapper[7926]: I0216 21:19:27.185150 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:28.185199 master-0 kubenswrapper[7926]: I0216 21:19:28.185122 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:28.185199 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:28.185199 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:28.185199 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:28.185199 master-0 kubenswrapper[7926]: I0216 21:19:28.185188 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:28.743536 master-0 kubenswrapper[7926]: I0216 21:19:28.743467 7926 scope.go:117] "RemoveContainer" 
containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406" Feb 16 21:19:28.744092 master-0 kubenswrapper[7926]: E0216 21:19:28.744016 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 21:19:29.185623 master-0 kubenswrapper[7926]: I0216 21:19:29.185558 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:29.185623 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:29.185623 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:29.185623 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:29.185623 master-0 kubenswrapper[7926]: I0216 21:19:29.185633 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:30.184514 master-0 kubenswrapper[7926]: I0216 21:19:30.184442 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:30.184514 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:30.184514 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:30.184514 master-0 kubenswrapper[7926]: 
healthz check failed Feb 16 21:19:30.184514 master-0 kubenswrapper[7926]: I0216 21:19:30.184512 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:31.185517 master-0 kubenswrapper[7926]: I0216 21:19:31.185454 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:31.185517 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:31.185517 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:31.185517 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:31.186174 master-0 kubenswrapper[7926]: I0216 21:19:31.185557 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:32.185033 master-0 kubenswrapper[7926]: I0216 21:19:32.184926 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:32.185033 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:32.185033 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:32.185033 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:32.185361 master-0 kubenswrapper[7926]: I0216 21:19:32.185066 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" 
podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:33.185079 master-0 kubenswrapper[7926]: I0216 21:19:33.185016 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:19:33.185079 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:19:33.185079 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:19:33.185079 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:19:33.185709 master-0 kubenswrapper[7926]: I0216 21:19:33.185097 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:19:33.185709 master-0 kubenswrapper[7926]: I0216 21:19:33.185150 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:19:33.185777 master-0 kubenswrapper[7926]: I0216 21:19:33.185753 7926 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"2d8a3bac5bc14187e5d2a390ac77e494ae47030d02fa35967ecd1bb1934d32e8"} pod="openshift-ingress/router-default-864ddd5f56-z4bnk" containerMessage="Container router failed startup probe, will be restarted" Feb 16 21:19:33.185810 master-0 kubenswrapper[7926]: I0216 21:19:33.185792 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" containerID="cri-o://2d8a3bac5bc14187e5d2a390ac77e494ae47030d02fa35967ecd1bb1934d32e8" 
gracePeriod=3600 Feb 16 21:19:40.738631 master-0 kubenswrapper[7926]: I0216 21:19:40.738559 7926 scope.go:117] "RemoveContainer" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406" Feb 16 21:19:41.954644 master-0 kubenswrapper[7926]: I0216 21:19:41.954555 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"6ae1597534c852a1aae5585dadba4c16b6d817d6984c35ca98940b0dfe1fcd77"} Feb 16 21:19:45.979982 master-0 kubenswrapper[7926]: I0216 21:19:45.979897 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:19:46.710381 master-0 kubenswrapper[7926]: I0216 21:19:46.710277 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:19:46.716061 master-0 kubenswrapper[7926]: I0216 21:19:46.716020 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:19:55.986859 master-0 kubenswrapper[7926]: I0216 21:19:55.986795 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:19:57.823843 master-0 kubenswrapper[7926]: I0216 21:19:57.823740 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b"] Feb 16 21:19:57.824436 master-0 kubenswrapper[7926]: E0216 21:19:57.824133 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf5e26c-84a2-45c6-b7dc-ee96dad23175" containerName="installer" Feb 16 21:19:57.824436 master-0 kubenswrapper[7926]: I0216 21:19:57.824157 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf5e26c-84a2-45c6-b7dc-ee96dad23175" containerName="installer" Feb 
16 21:19:57.824436 master-0 kubenswrapper[7926]: I0216 21:19:57.824387 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf5e26c-84a2-45c6-b7dc-ee96dad23175" containerName="installer" Feb 16 21:19:57.825095 master-0 kubenswrapper[7926]: I0216 21:19:57.825060 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b" Feb 16 21:19:57.826935 master-0 kubenswrapper[7926]: I0216 21:19:57.826881 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 21:19:57.827731 master-0 kubenswrapper[7926]: I0216 21:19:57.827700 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-r6wp5" Feb 16 21:19:57.841169 master-0 kubenswrapper[7926]: I0216 21:19:57.841109 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b"] Feb 16 21:19:57.912793 master-0 kubenswrapper[7926]: I0216 21:19:57.910837 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh2rn\" (UniqueName: \"kubernetes.io/projected/ebeb6876-0438-4961-a62a-68b41a676f17-kube-api-access-kh2rn\") pod \"collect-profiles-29521275-fl78b\" (UID: \"ebeb6876-0438-4961-a62a-68b41a676f17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b" Feb 16 21:19:57.912793 master-0 kubenswrapper[7926]: I0216 21:19:57.910965 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebeb6876-0438-4961-a62a-68b41a676f17-config-volume\") pod \"collect-profiles-29521275-fl78b\" (UID: \"ebeb6876-0438-4961-a62a-68b41a676f17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b" Feb 16 
21:19:57.912793 master-0 kubenswrapper[7926]: I0216 21:19:57.911026 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebeb6876-0438-4961-a62a-68b41a676f17-secret-volume\") pod \"collect-profiles-29521275-fl78b\" (UID: \"ebeb6876-0438-4961-a62a-68b41a676f17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b" Feb 16 21:19:58.013333 master-0 kubenswrapper[7926]: I0216 21:19:58.013236 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh2rn\" (UniqueName: \"kubernetes.io/projected/ebeb6876-0438-4961-a62a-68b41a676f17-kube-api-access-kh2rn\") pod \"collect-profiles-29521275-fl78b\" (UID: \"ebeb6876-0438-4961-a62a-68b41a676f17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b" Feb 16 21:19:58.013631 master-0 kubenswrapper[7926]: I0216 21:19:58.013362 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebeb6876-0438-4961-a62a-68b41a676f17-config-volume\") pod \"collect-profiles-29521275-fl78b\" (UID: \"ebeb6876-0438-4961-a62a-68b41a676f17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b" Feb 16 21:19:58.013631 master-0 kubenswrapper[7926]: I0216 21:19:58.013411 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebeb6876-0438-4961-a62a-68b41a676f17-secret-volume\") pod \"collect-profiles-29521275-fl78b\" (UID: \"ebeb6876-0438-4961-a62a-68b41a676f17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b" Feb 16 21:19:58.014904 master-0 kubenswrapper[7926]: I0216 21:19:58.014849 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ebeb6876-0438-4961-a62a-68b41a676f17-config-volume\") pod \"collect-profiles-29521275-fl78b\" (UID: \"ebeb6876-0438-4961-a62a-68b41a676f17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b" Feb 16 21:19:58.017570 master-0 kubenswrapper[7926]: I0216 21:19:58.017504 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebeb6876-0438-4961-a62a-68b41a676f17-secret-volume\") pod \"collect-profiles-29521275-fl78b\" (UID: \"ebeb6876-0438-4961-a62a-68b41a676f17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b" Feb 16 21:19:58.031091 master-0 kubenswrapper[7926]: I0216 21:19:58.030997 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh2rn\" (UniqueName: \"kubernetes.io/projected/ebeb6876-0438-4961-a62a-68b41a676f17-kube-api-access-kh2rn\") pod \"collect-profiles-29521275-fl78b\" (UID: \"ebeb6876-0438-4961-a62a-68b41a676f17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b" Feb 16 21:19:58.146304 master-0 kubenswrapper[7926]: I0216 21:19:58.146159 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b" Feb 16 21:19:58.562790 master-0 kubenswrapper[7926]: I0216 21:19:58.562715 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b"] Feb 16 21:19:58.570736 master-0 kubenswrapper[7926]: W0216 21:19:58.570684 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebeb6876_0438_4961_a62a_68b41a676f17.slice/crio-83b0c5c9f9e9a6aa803d0e80eca18b14e4ab78d1317a06af8dc1b57da3bbd755 WatchSource:0}: Error finding container 83b0c5c9f9e9a6aa803d0e80eca18b14e4ab78d1317a06af8dc1b57da3bbd755: Status 404 returned error can't find the container with id 83b0c5c9f9e9a6aa803d0e80eca18b14e4ab78d1317a06af8dc1b57da3bbd755 Feb 16 21:19:59.064367 master-0 kubenswrapper[7926]: I0216 21:19:59.064295 7926 generic.go:334] "Generic (PLEG): container finished" podID="ebeb6876-0438-4961-a62a-68b41a676f17" containerID="ba4091698915c4aa641aec2c8b4b82e0a58aec68f9f33e7955121f8e822a443d" exitCode=0 Feb 16 21:19:59.064367 master-0 kubenswrapper[7926]: I0216 21:19:59.064370 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b" event={"ID":"ebeb6876-0438-4961-a62a-68b41a676f17","Type":"ContainerDied","Data":"ba4091698915c4aa641aec2c8b4b82e0a58aec68f9f33e7955121f8e822a443d"} Feb 16 21:19:59.065057 master-0 kubenswrapper[7926]: I0216 21:19:59.064408 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b" event={"ID":"ebeb6876-0438-4961-a62a-68b41a676f17","Type":"ContainerStarted","Data":"83b0c5c9f9e9a6aa803d0e80eca18b14e4ab78d1317a06af8dc1b57da3bbd755"} Feb 16 21:19:59.858182 master-0 kubenswrapper[7926]: I0216 21:19:59.858109 7926 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z"] Feb 16 21:19:59.859741 master-0 kubenswrapper[7926]: I0216 21:19:59.859710 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 21:19:59.861928 master-0 kubenswrapper[7926]: I0216 21:19:59.861857 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 16 21:19:59.862763 master-0 kubenswrapper[7926]: I0216 21:19:59.862702 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-ctvb2"] Feb 16 21:19:59.864097 master-0 kubenswrapper[7926]: I0216 21:19:59.864060 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 16 21:19:59.864281 master-0 kubenswrapper[7926]: I0216 21:19:59.864250 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:19:59.865417 master-0 kubenswrapper[7926]: I0216 21:19:59.865386 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-2mlkm" Feb 16 21:19:59.866265 master-0 kubenswrapper[7926]: I0216 21:19:59.866207 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 16 21:19:59.866468 master-0 kubenswrapper[7926]: I0216 21:19:59.866439 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-lbttq" Feb 16 21:19:59.866525 master-0 kubenswrapper[7926]: I0216 21:19:59.866486 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 16 21:19:59.881074 master-0 kubenswrapper[7926]: I0216 21:19:59.881028 7926 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-monitoring/kube-state-metrics-7cc9598d54-n467n"] Feb 16 21:19:59.882350 master-0 kubenswrapper[7926]: I0216 21:19:59.882320 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:19:59.883967 master-0 kubenswrapper[7926]: I0216 21:19:59.883910 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-5tbmx" Feb 16 21:19:59.885743 master-0 kubenswrapper[7926]: I0216 21:19:59.885699 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 16 21:19:59.890497 master-0 kubenswrapper[7926]: I0216 21:19:59.890458 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z"] Feb 16 21:19:59.891324 master-0 kubenswrapper[7926]: I0216 21:19:59.891285 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 16 21:19:59.891573 master-0 kubenswrapper[7926]: I0216 21:19:59.891550 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 16 21:19:59.921666 master-0 kubenswrapper[7926]: I0216 21:19:59.917898 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7cc9598d54-n467n"] Feb 16 21:19:59.942665 master-0 kubenswrapper[7926]: I0216 21:19:59.942092 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbfdg\" (UniqueName: \"kubernetes.io/projected/f7b30888-5994-4968-9db6-9533ac60c92e-kube-api-access-fbfdg\") pod \"openshift-state-metrics-546cc7d765-s4j9z\" (UID: \"f7b30888-5994-4968-9db6-9533ac60c92e\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 21:19:59.942665 master-0 
kubenswrapper[7926]: I0216 21:19:59.942153 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-wtmp\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:19:59.942665 master-0 kubenswrapper[7926]: I0216 21:19:59.942174 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-metrics-client-ca\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:19:59.942665 master-0 kubenswrapper[7926]: I0216 21:19:59.942200 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:19:59.942665 master-0 kubenswrapper[7926]: I0216 21:19:59.942221 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-tls\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:19:59.942665 master-0 kubenswrapper[7926]: I0216 21:19:59.942247 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/e9bd1f48-6d45-4045-b18e-46ce3005d51d-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:19:59.942665 master-0 kubenswrapper[7926]: I0216 21:19:59.942266 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f7b30888-5994-4968-9db6-9533ac60c92e-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-s4j9z\" (UID: \"f7b30888-5994-4968-9db6-9533ac60c92e\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 21:19:59.942665 master-0 kubenswrapper[7926]: I0216 21:19:59.942292 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7b30888-5994-4968-9db6-9533ac60c92e-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-s4j9z\" (UID: \"f7b30888-5994-4968-9db6-9533ac60c92e\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 21:19:59.942665 master-0 kubenswrapper[7926]: I0216 21:19:59.942307 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-textfile\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:19:59.942665 master-0 kubenswrapper[7926]: I0216 21:19:59.942330 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-sys\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" 
Feb 16 21:19:59.942665 master-0 kubenswrapper[7926]: I0216 21:19:59.942350 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:19:59.942665 master-0 kubenswrapper[7926]: I0216 21:19:59.942376 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jh6l\" (UniqueName: \"kubernetes.io/projected/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-kube-api-access-6jh6l\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:19:59.942665 master-0 kubenswrapper[7926]: I0216 21:19:59.942396 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:19:59.942665 master-0 kubenswrapper[7926]: I0216 21:19:59.942413 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:19:59.942665 master-0 kubenswrapper[7926]: I0216 21:19:59.942430 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f7b30888-5994-4968-9db6-9533ac60c92e-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-s4j9z\" (UID: \"f7b30888-5994-4968-9db6-9533ac60c92e\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 21:19:59.942665 master-0 kubenswrapper[7926]: I0216 21:19:59.942446 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-root\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:19:59.942665 master-0 kubenswrapper[7926]: I0216 21:19:59.942462 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/e9bd1f48-6d45-4045-b18e-46ce3005d51d-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:19:59.942665 master-0 kubenswrapper[7926]: I0216 21:19:59.942481 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wckst\" (UniqueName: \"kubernetes.io/projected/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-api-access-wckst\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:20:00.043362 master-0 kubenswrapper[7926]: I0216 21:20:00.043308 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wckst\" (UniqueName: \"kubernetes.io/projected/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-api-access-wckst\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: 
\"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:20:00.043362 master-0 kubenswrapper[7926]: I0216 21:20:00.043365 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbfdg\" (UniqueName: \"kubernetes.io/projected/f7b30888-5994-4968-9db6-9533ac60c92e-kube-api-access-fbfdg\") pod \"openshift-state-metrics-546cc7d765-s4j9z\" (UID: \"f7b30888-5994-4968-9db6-9533ac60c92e\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 21:20:00.043638 master-0 kubenswrapper[7926]: I0216 21:20:00.043386 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-wtmp\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:20:00.043765 master-0 kubenswrapper[7926]: I0216 21:20:00.043686 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-metrics-client-ca\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:20:00.043838 master-0 kubenswrapper[7926]: I0216 21:20:00.043815 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-wtmp\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:20:00.043914 master-0 kubenswrapper[7926]: I0216 21:20:00.043877 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: 
\"kubernetes.io/configmap/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:20:00.043959 master-0 kubenswrapper[7926]: I0216 21:20:00.043935 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-tls\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:20:00.044072 master-0 kubenswrapper[7926]: I0216 21:20:00.044040 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e9bd1f48-6d45-4045-b18e-46ce3005d51d-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:20:00.044110 master-0 kubenswrapper[7926]: I0216 21:20:00.044082 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f7b30888-5994-4968-9db6-9533ac60c92e-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-s4j9z\" (UID: \"f7b30888-5994-4968-9db6-9533ac60c92e\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 21:20:00.044193 master-0 kubenswrapper[7926]: E0216 21:20:00.044157 7926 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Feb 16 21:20:00.044251 master-0 kubenswrapper[7926]: E0216 21:20:00.044230 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-tls podName:7d6eb694-9a3d-49d1-bbc1-74ba4450d673 
nodeName:}" failed. No retries permitted until 2026-02-16 21:20:00.544212605 +0000 UTC m=+1372.179112905 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-tls") pod "node-exporter-ctvb2" (UID: "7d6eb694-9a3d-49d1-bbc1-74ba4450d673") : secret "node-exporter-tls" not found Feb 16 21:20:00.044419 master-0 kubenswrapper[7926]: I0216 21:20:00.044392 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7b30888-5994-4968-9db6-9533ac60c92e-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-s4j9z\" (UID: \"f7b30888-5994-4968-9db6-9533ac60c92e\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 21:20:00.044458 master-0 kubenswrapper[7926]: I0216 21:20:00.044434 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-textfile\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:20:00.044493 master-0 kubenswrapper[7926]: I0216 21:20:00.044475 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-sys\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:20:00.044540 master-0 kubenswrapper[7926]: E0216 21:20:00.044510 7926 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found Feb 16 21:20:00.044594 master-0 kubenswrapper[7926]: E0216 21:20:00.044572 7926 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/f7b30888-5994-4968-9db6-9533ac60c92e-openshift-state-metrics-tls podName:f7b30888-5994-4968-9db6-9533ac60c92e nodeName:}" failed. No retries permitted until 2026-02-16 21:20:00.544555923 +0000 UTC m=+1372.179456433 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/f7b30888-5994-4968-9db6-9533ac60c92e-openshift-state-metrics-tls") pod "openshift-state-metrics-546cc7d765-s4j9z" (UID: "f7b30888-5994-4968-9db6-9533ac60c92e") : secret "openshift-state-metrics-tls" not found Feb 16 21:20:00.044660 master-0 kubenswrapper[7926]: I0216 21:20:00.044619 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:20:00.044705 master-0 kubenswrapper[7926]: I0216 21:20:00.044660 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-sys\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:20:00.044740 master-0 kubenswrapper[7926]: I0216 21:20:00.044702 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jh6l\" (UniqueName: \"kubernetes.io/projected/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-kube-api-access-6jh6l\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:20:00.044777 master-0 kubenswrapper[7926]: I0216 21:20:00.044730 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-metrics-client-ca\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:20:00.044777 master-0 kubenswrapper[7926]: I0216 21:20:00.044740 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:20:00.044844 master-0 kubenswrapper[7926]: I0216 21:20:00.044821 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:20:00.044877 master-0 kubenswrapper[7926]: I0216 21:20:00.044858 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f7b30888-5994-4968-9db6-9533ac60c92e-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-s4j9z\" (UID: \"f7b30888-5994-4968-9db6-9533ac60c92e\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 21:20:00.044910 master-0 kubenswrapper[7926]: I0216 21:20:00.044882 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-root\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 
21:20:00.044910 master-0 kubenswrapper[7926]: I0216 21:20:00.044904 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/e9bd1f48-6d45-4045-b18e-46ce3005d51d-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:20:00.045233 master-0 kubenswrapper[7926]: E0216 21:20:00.044986 7926 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found Feb 16 21:20:00.045233 master-0 kubenswrapper[7926]: E0216 21:20:00.045023 7926 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-tls podName:e9bd1f48-6d45-4045-b18e-46ce3005d51d nodeName:}" failed. No retries permitted until 2026-02-16 21:20:00.545013536 +0000 UTC m=+1372.179913836 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-tls") pod "kube-state-metrics-7cc9598d54-n467n" (UID: "e9bd1f48-6d45-4045-b18e-46ce3005d51d") : secret "kube-state-metrics-tls" not found Feb 16 21:20:00.045233 master-0 kubenswrapper[7926]: I0216 21:20:00.045074 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-root\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:20:00.045358 master-0 kubenswrapper[7926]: I0216 21:20:00.045246 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-textfile\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:20:00.045358 master-0 kubenswrapper[7926]: I0216 21:20:00.045294 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:20:00.047578 master-0 kubenswrapper[7926]: I0216 21:20:00.045472 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f7b30888-5994-4968-9db6-9533ac60c92e-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-s4j9z\" (UID: \"f7b30888-5994-4968-9db6-9533ac60c92e\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 
21:20:00.047578 master-0 kubenswrapper[7926]: I0216 21:20:00.045633 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e9bd1f48-6d45-4045-b18e-46ce3005d51d-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:20:00.047578 master-0 kubenswrapper[7926]: I0216 21:20:00.045637 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/e9bd1f48-6d45-4045-b18e-46ce3005d51d-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:20:00.048747 master-0 kubenswrapper[7926]: I0216 21:20:00.048123 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f7b30888-5994-4968-9db6-9533ac60c92e-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-s4j9z\" (UID: \"f7b30888-5994-4968-9db6-9533ac60c92e\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 21:20:00.049739 master-0 kubenswrapper[7926]: I0216 21:20:00.048994 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:20:00.050395 master-0 kubenswrapper[7926]: I0216 21:20:00.050340 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:20:00.064334 master-0 kubenswrapper[7926]: I0216 21:20:00.064258 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jh6l\" (UniqueName: \"kubernetes.io/projected/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-kube-api-access-6jh6l\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:20:00.069276 master-0 kubenswrapper[7926]: I0216 21:20:00.068392 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wckst\" (UniqueName: \"kubernetes.io/projected/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-api-access-wckst\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:20:00.069276 master-0 kubenswrapper[7926]: I0216 21:20:00.068798 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbfdg\" (UniqueName: \"kubernetes.io/projected/f7b30888-5994-4968-9db6-9533ac60c92e-kube-api-access-fbfdg\") pod \"openshift-state-metrics-546cc7d765-s4j9z\" (UID: \"f7b30888-5994-4968-9db6-9533ac60c92e\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 21:20:00.381980 master-0 kubenswrapper[7926]: I0216 21:20:00.381928 7926 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b" Feb 16 21:20:00.449987 master-0 kubenswrapper[7926]: I0216 21:20:00.449917 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kh2rn\" (UniqueName: \"kubernetes.io/projected/ebeb6876-0438-4961-a62a-68b41a676f17-kube-api-access-kh2rn\") pod \"ebeb6876-0438-4961-a62a-68b41a676f17\" (UID: \"ebeb6876-0438-4961-a62a-68b41a676f17\") " Feb 16 21:20:00.450179 master-0 kubenswrapper[7926]: I0216 21:20:00.450032 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebeb6876-0438-4961-a62a-68b41a676f17-secret-volume\") pod \"ebeb6876-0438-4961-a62a-68b41a676f17\" (UID: \"ebeb6876-0438-4961-a62a-68b41a676f17\") " Feb 16 21:20:00.450179 master-0 kubenswrapper[7926]: I0216 21:20:00.450090 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebeb6876-0438-4961-a62a-68b41a676f17-config-volume\") pod \"ebeb6876-0438-4961-a62a-68b41a676f17\" (UID: \"ebeb6876-0438-4961-a62a-68b41a676f17\") " Feb 16 21:20:00.451419 master-0 kubenswrapper[7926]: I0216 21:20:00.451378 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebeb6876-0438-4961-a62a-68b41a676f17-config-volume" (OuterVolumeSpecName: "config-volume") pod "ebeb6876-0438-4961-a62a-68b41a676f17" (UID: "ebeb6876-0438-4961-a62a-68b41a676f17"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:20:00.452964 master-0 kubenswrapper[7926]: I0216 21:20:00.452919 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebeb6876-0438-4961-a62a-68b41a676f17-kube-api-access-kh2rn" (OuterVolumeSpecName: "kube-api-access-kh2rn") pod "ebeb6876-0438-4961-a62a-68b41a676f17" (UID: "ebeb6876-0438-4961-a62a-68b41a676f17"). InnerVolumeSpecName "kube-api-access-kh2rn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:20:00.453118 master-0 kubenswrapper[7926]: I0216 21:20:00.453069 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebeb6876-0438-4961-a62a-68b41a676f17-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ebeb6876-0438-4961-a62a-68b41a676f17" (UID: "ebeb6876-0438-4961-a62a-68b41a676f17"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:20:00.553093 master-0 kubenswrapper[7926]: I0216 21:20:00.552743 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-tls\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:20:00.554032 master-0 kubenswrapper[7926]: I0216 21:20:00.553610 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7b30888-5994-4968-9db6-9533ac60c92e-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-s4j9z\" (UID: \"f7b30888-5994-4968-9db6-9533ac60c92e\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 21:20:00.554231 master-0 kubenswrapper[7926]: I0216 21:20:00.554197 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:20:00.554391 master-0 kubenswrapper[7926]: I0216 21:20:00.554362 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kh2rn\" (UniqueName: \"kubernetes.io/projected/ebeb6876-0438-4961-a62a-68b41a676f17-kube-api-access-kh2rn\") on node \"master-0\" DevicePath \"\"" Feb 16 21:20:00.554441 master-0 kubenswrapper[7926]: I0216 21:20:00.554400 7926 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebeb6876-0438-4961-a62a-68b41a676f17-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 16 21:20:00.554441 master-0 kubenswrapper[7926]: I0216 21:20:00.554421 7926 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebeb6876-0438-4961-a62a-68b41a676f17-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 16 21:20:00.555970 master-0 kubenswrapper[7926]: I0216 21:20:00.555916 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-tls\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:20:00.557311 master-0 kubenswrapper[7926]: I0216 21:20:00.557281 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7b30888-5994-4968-9db6-9533ac60c92e-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-s4j9z\" (UID: \"f7b30888-5994-4968-9db6-9533ac60c92e\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 
21:20:00.557982 master-0 kubenswrapper[7926]: I0216 21:20:00.557935 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:20:00.778833 master-0 kubenswrapper[7926]: I0216 21:20:00.778796 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 21:20:00.797468 master-0 kubenswrapper[7926]: I0216 21:20:00.797434 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:20:00.812560 master-0 kubenswrapper[7926]: I0216 21:20:00.812468 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:20:00.812560 master-0 kubenswrapper[7926]: W0216 21:20:00.812522 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d6eb694_9a3d_49d1_bbc1_74ba4450d673.slice/crio-aed3d22aa5c102de3c056d7b1148ad38dc8f06e42bff2232e153f1a44338819c WatchSource:0}: Error finding container aed3d22aa5c102de3c056d7b1148ad38dc8f06e42bff2232e153f1a44338819c: Status 404 returned error can't find the container with id aed3d22aa5c102de3c056d7b1148ad38dc8f06e42bff2232e153f1a44338819c Feb 16 21:20:01.077789 master-0 kubenswrapper[7926]: I0216 21:20:01.077682 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-ctvb2" event={"ID":"7d6eb694-9a3d-49d1-bbc1-74ba4450d673","Type":"ContainerStarted","Data":"aed3d22aa5c102de3c056d7b1148ad38dc8f06e42bff2232e153f1a44338819c"} Feb 16 21:20:01.079451 master-0 kubenswrapper[7926]: I0216 
21:20:01.079416 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b" event={"ID":"ebeb6876-0438-4961-a62a-68b41a676f17","Type":"ContainerDied","Data":"83b0c5c9f9e9a6aa803d0e80eca18b14e4ab78d1317a06af8dc1b57da3bbd755"} Feb 16 21:20:01.079523 master-0 kubenswrapper[7926]: I0216 21:20:01.079456 7926 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83b0c5c9f9e9a6aa803d0e80eca18b14e4ab78d1317a06af8dc1b57da3bbd755" Feb 16 21:20:01.079523 master-0 kubenswrapper[7926]: I0216 21:20:01.079508 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b" Feb 16 21:20:01.202676 master-0 kubenswrapper[7926]: W0216 21:20:01.202580 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7b30888_5994_4968_9db6_9533ac60c92e.slice/crio-98ea530a3e85a55d27f014bb670a7b7e4444aedc192a8b2618c4f1830394b65c WatchSource:0}: Error finding container 98ea530a3e85a55d27f014bb670a7b7e4444aedc192a8b2618c4f1830394b65c: Status 404 returned error can't find the container with id 98ea530a3e85a55d27f014bb670a7b7e4444aedc192a8b2618c4f1830394b65c Feb 16 21:20:01.217586 master-0 kubenswrapper[7926]: I0216 21:20:01.217506 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z"] Feb 16 21:20:01.273557 master-0 kubenswrapper[7926]: I0216 21:20:01.273146 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7cc9598d54-n467n"] Feb 16 21:20:01.287896 master-0 kubenswrapper[7926]: W0216 21:20:01.287739 7926 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9bd1f48_6d45_4045_b18e_46ce3005d51d.slice/crio-cb99eaa7ceffb734068bb188738c361f8400867f02f0acef09f3dcc317540b0e WatchSource:0}: Error finding container cb99eaa7ceffb734068bb188738c361f8400867f02f0acef09f3dcc317540b0e: Status 404 returned error can't find the container with id cb99eaa7ceffb734068bb188738c361f8400867f02f0acef09f3dcc317540b0e Feb 16 21:20:02.088852 master-0 kubenswrapper[7926]: I0216 21:20:02.088773 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" event={"ID":"e9bd1f48-6d45-4045-b18e-46ce3005d51d","Type":"ContainerStarted","Data":"cb99eaa7ceffb734068bb188738c361f8400867f02f0acef09f3dcc317540b0e"} Feb 16 21:20:02.091729 master-0 kubenswrapper[7926]: I0216 21:20:02.091679 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" event={"ID":"f7b30888-5994-4968-9db6-9533ac60c92e","Type":"ContainerStarted","Data":"9022c7d25901706a3a4753f177445a986f505ff90538968ff9843de9d6c65ab8"} Feb 16 21:20:02.091729 master-0 kubenswrapper[7926]: I0216 21:20:02.091713 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" event={"ID":"f7b30888-5994-4968-9db6-9533ac60c92e","Type":"ContainerStarted","Data":"9304d668e7785195dde35507d3b853217dd541218a54b7914dda3723dea0b360"} Feb 16 21:20:02.091729 master-0 kubenswrapper[7926]: I0216 21:20:02.091727 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" event={"ID":"f7b30888-5994-4968-9db6-9533ac60c92e","Type":"ContainerStarted","Data":"98ea530a3e85a55d27f014bb670a7b7e4444aedc192a8b2618c4f1830394b65c"} Feb 16 21:20:03.106460 master-0 kubenswrapper[7926]: I0216 21:20:03.105621 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-ctvb2" 
event={"ID":"7d6eb694-9a3d-49d1-bbc1-74ba4450d673","Type":"ContainerStarted","Data":"35aeddbd3b02ea16608fbe6dfea1fa7dc35fe8b876f2fa1fba3cfd614e5815c0"} Feb 16 21:20:03.109599 master-0 kubenswrapper[7926]: I0216 21:20:03.109409 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" event={"ID":"e9bd1f48-6d45-4045-b18e-46ce3005d51d","Type":"ContainerStarted","Data":"ae4b728d26d2235e9c2481e97c712ffb552d7c0d29beb5a7141bb97993e8cb8c"} Feb 16 21:20:04.122417 master-0 kubenswrapper[7926]: I0216 21:20:04.122343 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" event={"ID":"f7b30888-5994-4968-9db6-9533ac60c92e","Type":"ContainerStarted","Data":"017b5416f64a5dc2aea1499757bc37cb7845a0c20f820608b04adf898a0fbb42"} Feb 16 21:20:04.125715 master-0 kubenswrapper[7926]: I0216 21:20:04.125610 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" event={"ID":"e9bd1f48-6d45-4045-b18e-46ce3005d51d","Type":"ContainerStarted","Data":"14a257c4d30feb322bf947d285b2761bc04202993600aeef5d6a83b601417e29"} Feb 16 21:20:04.125715 master-0 kubenswrapper[7926]: I0216 21:20:04.125682 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" event={"ID":"e9bd1f48-6d45-4045-b18e-46ce3005d51d","Type":"ContainerStarted","Data":"418ed93e2d97b302c27aa5bd16b20d2ee3b92954aa28e01a918f46e4ccd79241"} Feb 16 21:20:04.128181 master-0 kubenswrapper[7926]: I0216 21:20:04.128130 7926 generic.go:334] "Generic (PLEG): container finished" podID="7d6eb694-9a3d-49d1-bbc1-74ba4450d673" containerID="35aeddbd3b02ea16608fbe6dfea1fa7dc35fe8b876f2fa1fba3cfd614e5815c0" exitCode=0 Feb 16 21:20:04.128344 master-0 kubenswrapper[7926]: I0216 21:20:04.128187 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-ctvb2" 
event={"ID":"7d6eb694-9a3d-49d1-bbc1-74ba4450d673","Type":"ContainerDied","Data":"35aeddbd3b02ea16608fbe6dfea1fa7dc35fe8b876f2fa1fba3cfd614e5815c0"} Feb 16 21:20:04.128344 master-0 kubenswrapper[7926]: I0216 21:20:04.128220 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-ctvb2" event={"ID":"7d6eb694-9a3d-49d1-bbc1-74ba4450d673","Type":"ContainerStarted","Data":"3fa85c5bdf337a4669f23966505c1f564020ce2b287a6714bc11d7cbcb4be1af"} Feb 16 21:20:04.128344 master-0 kubenswrapper[7926]: I0216 21:20:04.128234 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-ctvb2" event={"ID":"7d6eb694-9a3d-49d1-bbc1-74ba4450d673","Type":"ContainerStarted","Data":"6f6509f6290e5127bfe082132c0bf6a45571e4de7a324345b01c47d3586455c4"} Feb 16 21:20:04.143255 master-0 kubenswrapper[7926]: I0216 21:20:04.143179 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" podStartSLOduration=3.351864272 podStartE2EDuration="5.143164761s" podCreationTimestamp="2026-02-16 21:19:59 +0000 UTC" firstStartedPulling="2026-02-16 21:20:01.515585095 +0000 UTC m=+1373.150485395" lastFinishedPulling="2026-02-16 21:20:03.306885584 +0000 UTC m=+1374.941785884" observedRunningTime="2026-02-16 21:20:04.143036617 +0000 UTC m=+1375.777936997" watchObservedRunningTime="2026-02-16 21:20:04.143164761 +0000 UTC m=+1375.778065061" Feb 16 21:20:04.166587 master-0 kubenswrapper[7926]: I0216 21:20:04.166407 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" podStartSLOduration=3.720817349 podStartE2EDuration="5.166389236s" podCreationTimestamp="2026-02-16 21:19:59 +0000 UTC" firstStartedPulling="2026-02-16 21:20:01.29086455 +0000 UTC m=+1372.925764850" lastFinishedPulling="2026-02-16 21:20:02.736436447 +0000 UTC m=+1374.371336737" observedRunningTime="2026-02-16 
21:20:04.163163832 +0000 UTC m=+1375.798064132" watchObservedRunningTime="2026-02-16 21:20:04.166389236 +0000 UTC m=+1375.801289546" Feb 16 21:20:04.208695 master-0 kubenswrapper[7926]: I0216 21:20:04.203744 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-ctvb2" podStartSLOduration=3.324588219 podStartE2EDuration="5.203720091s" podCreationTimestamp="2026-02-16 21:19:59 +0000 UTC" firstStartedPulling="2026-02-16 21:20:00.814549919 +0000 UTC m=+1372.449450219" lastFinishedPulling="2026-02-16 21:20:02.693681771 +0000 UTC m=+1374.328582091" observedRunningTime="2026-02-16 21:20:04.200178788 +0000 UTC m=+1375.835079088" watchObservedRunningTime="2026-02-16 21:20:04.203720091 +0000 UTC m=+1375.838620391" Feb 16 21:20:05.210880 master-0 kubenswrapper[7926]: I0216 21:20:05.210819 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-76c9c896c-pz2bk"] Feb 16 21:20:05.211643 master-0 kubenswrapper[7926]: E0216 21:20:05.211188 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebeb6876-0438-4961-a62a-68b41a676f17" containerName="collect-profiles" Feb 16 21:20:05.211643 master-0 kubenswrapper[7926]: I0216 21:20:05.211206 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebeb6876-0438-4961-a62a-68b41a676f17" containerName="collect-profiles" Feb 16 21:20:05.211643 master-0 kubenswrapper[7926]: I0216 21:20:05.211359 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebeb6876-0438-4961-a62a-68b41a676f17" containerName="collect-profiles" Feb 16 21:20:05.211983 master-0 kubenswrapper[7926]: I0216 21:20:05.211950 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:20:05.214508 master-0 kubenswrapper[7926]: I0216 21:20:05.214464 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-4brnj" Feb 16 21:20:05.214720 master-0 kubenswrapper[7926]: I0216 21:20:05.214578 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 16 21:20:05.214720 master-0 kubenswrapper[7926]: I0216 21:20:05.214605 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 16 21:20:05.214720 master-0 kubenswrapper[7926]: I0216 21:20:05.214619 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 16 21:20:05.214720 master-0 kubenswrapper[7926]: I0216 21:20:05.214643 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 16 21:20:05.215010 master-0 kubenswrapper[7926]: I0216 21:20:05.214908 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-6thqgv1l637aa" Feb 16 21:20:05.223986 master-0 kubenswrapper[7926]: I0216 21:20:05.223807 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:20:05.223986 master-0 kubenswrapper[7926]: I0216 21:20:05.223849 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: 
\"kubernetes.io/empty-dir/4a9f4f96-ca31-4959-93fe-c094caf8e077-audit-log\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:20:05.223986 master-0 kubenswrapper[7926]: I0216 21:20:05.223876 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-client-certs\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:20:05.223986 master-0 kubenswrapper[7926]: I0216 21:20:05.223951 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-metrics-server-audit-profiles\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:20:05.225142 master-0 kubenswrapper[7926]: I0216 21:20:05.224025 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-server-tls\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:20:05.225142 master-0 kubenswrapper[7926]: I0216 21:20:05.224045 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-client-ca-bundle\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " 
pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:20:05.225142 master-0 kubenswrapper[7926]: I0216 21:20:05.224063 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrc4z\" (UniqueName: \"kubernetes.io/projected/4a9f4f96-ca31-4959-93fe-c094caf8e077-kube-api-access-xrc4z\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:20:05.225142 master-0 kubenswrapper[7926]: I0216 21:20:05.224213 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-76c9c896c-pz2bk"] Feb 16 21:20:05.325266 master-0 kubenswrapper[7926]: I0216 21:20:05.325204 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-metrics-server-audit-profiles\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:20:05.325266 master-0 kubenswrapper[7926]: I0216 21:20:05.325268 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-server-tls\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:20:05.325527 master-0 kubenswrapper[7926]: I0216 21:20:05.325409 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-client-ca-bundle\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 
16 21:20:05.325527 master-0 kubenswrapper[7926]: I0216 21:20:05.325434 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrc4z\" (UniqueName: \"kubernetes.io/projected/4a9f4f96-ca31-4959-93fe-c094caf8e077-kube-api-access-xrc4z\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:20:05.325527 master-0 kubenswrapper[7926]: I0216 21:20:05.325492 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:20:05.325527 master-0 kubenswrapper[7926]: I0216 21:20:05.325513 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/4a9f4f96-ca31-4959-93fe-c094caf8e077-audit-log\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:20:05.325688 master-0 kubenswrapper[7926]: I0216 21:20:05.325531 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-client-certs\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:20:05.326676 master-0 kubenswrapper[7926]: I0216 21:20:05.326561 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/4a9f4f96-ca31-4959-93fe-c094caf8e077-audit-log\") pod 
\"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:20:05.326676 master-0 kubenswrapper[7926]: I0216 21:20:05.326636 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-metrics-server-audit-profiles\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:20:05.327392 master-0 kubenswrapper[7926]: I0216 21:20:05.327330 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:20:05.329006 master-0 kubenswrapper[7926]: I0216 21:20:05.328923 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-client-certs\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:20:05.329510 master-0 kubenswrapper[7926]: I0216 21:20:05.329470 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-client-ca-bundle\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:20:05.329626 master-0 kubenswrapper[7926]: I0216 21:20:05.329596 7926 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-server-tls\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:20:05.342604 master-0 kubenswrapper[7926]: I0216 21:20:05.342557 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrc4z\" (UniqueName: \"kubernetes.io/projected/4a9f4f96-ca31-4959-93fe-c094caf8e077-kube-api-access-xrc4z\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:20:05.537280 master-0 kubenswrapper[7926]: I0216 21:20:05.537194 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:20:05.978776 master-0 kubenswrapper[7926]: I0216 21:20:05.978600 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-76c9c896c-pz2bk"] Feb 16 21:20:05.981155 master-0 kubenswrapper[7926]: W0216 21:20:05.980979 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a9f4f96_ca31_4959_93fe_c094caf8e077.slice/crio-b4ab6f7d6521695677ac09385923bea0cfde2c320361c5f6cbe98ce64b7475b2 WatchSource:0}: Error finding container b4ab6f7d6521695677ac09385923bea0cfde2c320361c5f6cbe98ce64b7475b2: Status 404 returned error can't find the container with id b4ab6f7d6521695677ac09385923bea0cfde2c320361c5f6cbe98ce64b7475b2 Feb 16 21:20:06.140912 master-0 kubenswrapper[7926]: I0216 21:20:06.140866 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" 
event={"ID":"4a9f4f96-ca31-4959-93fe-c094caf8e077","Type":"ContainerStarted","Data":"b4ab6f7d6521695677ac09385923bea0cfde2c320361c5f6cbe98ce64b7475b2"} Feb 16 21:20:09.166025 master-0 kubenswrapper[7926]: I0216 21:20:09.165948 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" event={"ID":"4a9f4f96-ca31-4959-93fe-c094caf8e077","Type":"ContainerStarted","Data":"717811e555354f498448a1f9bf3201dfc3fcf0b7778c716a1769b62e1e6022c7"} Feb 16 21:20:09.195308 master-0 kubenswrapper[7926]: I0216 21:20:09.195242 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" podStartSLOduration=1.5289735740000001 podStartE2EDuration="4.19522262s" podCreationTimestamp="2026-02-16 21:20:05 +0000 UTC" firstStartedPulling="2026-02-16 21:20:05.98507789 +0000 UTC m=+1377.619978220" lastFinishedPulling="2026-02-16 21:20:08.651326946 +0000 UTC m=+1380.286227266" observedRunningTime="2026-02-16 21:20:09.192243993 +0000 UTC m=+1380.827144373" watchObservedRunningTime="2026-02-16 21:20:09.19522262 +0000 UTC m=+1380.830122920" Feb 16 21:20:11.185692 master-0 kubenswrapper[7926]: I0216 21:20:11.185541 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/6.log" Feb 16 21:20:11.186887 master-0 kubenswrapper[7926]: I0216 21:20:11.186272 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/5.log" Feb 16 21:20:11.186887 master-0 kubenswrapper[7926]: I0216 21:20:11.186801 7926 generic.go:334] "Generic (PLEG): container finished" podID="cef33294-81fb-41a2-811d-2565f94514d1" containerID="cddc9c1d447dc5a0250ef24bddae48c93c58b480b6bca11a2ff7438d4148bf8f" exitCode=1 Feb 16 21:20:11.186887 master-0 kubenswrapper[7926]: I0216 
21:20:11.186858 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" event={"ID":"cef33294-81fb-41a2-811d-2565f94514d1","Type":"ContainerDied","Data":"cddc9c1d447dc5a0250ef24bddae48c93c58b480b6bca11a2ff7438d4148bf8f"} Feb 16 21:20:11.187125 master-0 kubenswrapper[7926]: I0216 21:20:11.186928 7926 scope.go:117] "RemoveContainer" containerID="a536172006966fa7da41ae7ff0c679f29f5343cacc6f612c4fa109bc18f3bbce" Feb 16 21:20:11.187604 master-0 kubenswrapper[7926]: I0216 21:20:11.187549 7926 scope.go:117] "RemoveContainer" containerID="cddc9c1d447dc5a0250ef24bddae48c93c58b480b6bca11a2ff7438d4148bf8f" Feb 16 21:20:11.187953 master-0 kubenswrapper[7926]: E0216 21:20:11.187904 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:20:12.194978 master-0 kubenswrapper[7926]: I0216 21:20:12.194892 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/6.log" Feb 16 21:20:20.266744 master-0 kubenswrapper[7926]: I0216 21:20:20.266591 7926 generic.go:334] "Generic (PLEG): container finished" podID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerID="2d8a3bac5bc14187e5d2a390ac77e494ae47030d02fa35967ecd1bb1934d32e8" exitCode=0 Feb 16 21:20:20.266744 master-0 kubenswrapper[7926]: I0216 21:20:20.266726 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" 
event={"ID":"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee","Type":"ContainerDied","Data":"2d8a3bac5bc14187e5d2a390ac77e494ae47030d02fa35967ecd1bb1934d32e8"} Feb 16 21:20:20.267490 master-0 kubenswrapper[7926]: I0216 21:20:20.266808 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" event={"ID":"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee","Type":"ContainerStarted","Data":"7eb9d606c0ba4432a3c104c5bb2952f3efa3dee4e29f1c0d81a5b0db607ceac8"} Feb 16 21:20:20.267490 master-0 kubenswrapper[7926]: I0216 21:20:20.266847 7926 scope.go:117] "RemoveContainer" containerID="998a9ae2beb3b1a75e1664da2f38a4c4498101aa5035a2ceca565eb8eafef20a" Feb 16 21:20:21.183329 master-0 kubenswrapper[7926]: I0216 21:20:21.183230 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:20:21.188625 master-0 kubenswrapper[7926]: I0216 21:20:21.188565 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:20:21.188625 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:20:21.188625 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:20:21.188625 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:20:21.188867 master-0 kubenswrapper[7926]: I0216 21:20:21.188674 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:20:22.185360 master-0 kubenswrapper[7926]: I0216 21:20:22.185281 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:20:22.185360 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:20:22.185360 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:20:22.185360 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:20:22.186348 master-0 kubenswrapper[7926]: I0216 21:20:22.185380 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:20:23.184749 master-0 kubenswrapper[7926]: I0216 21:20:23.184678 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:20:23.184749 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:20:23.184749 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:20:23.184749 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:20:23.185140 master-0 kubenswrapper[7926]: I0216 21:20:23.184764 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:20:24.187188 master-0 kubenswrapper[7926]: I0216 21:20:24.187064 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:20:24.187188 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:20:24.187188 
master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:20:24.187188 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:20:24.188503 master-0 kubenswrapper[7926]: I0216 21:20:24.187214 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:20:24.740843 master-0 kubenswrapper[7926]: I0216 21:20:24.740785 7926 scope.go:117] "RemoveContainer" containerID="cddc9c1d447dc5a0250ef24bddae48c93c58b480b6bca11a2ff7438d4148bf8f" Feb 16 21:20:24.742211 master-0 kubenswrapper[7926]: E0216 21:20:24.742119 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:20:25.182804 master-0 kubenswrapper[7926]: I0216 21:20:25.182732 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:20:25.185729 master-0 kubenswrapper[7926]: I0216 21:20:25.185690 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:20:25.185729 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:20:25.185729 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:20:25.185729 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:20:25.186164 master-0 kubenswrapper[7926]: I0216 21:20:25.185741 7926 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:20:25.538816 master-0 kubenswrapper[7926]: I0216 21:20:25.538734 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:20:25.538816 master-0 kubenswrapper[7926]: I0216 21:20:25.538805 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:20:26.185597 master-0 kubenswrapper[7926]: I0216 21:20:26.185510 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:20:26.185597 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:20:26.185597 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:20:26.185597 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:20:26.186061 master-0 kubenswrapper[7926]: I0216 21:20:26.185614 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:20:27.186329 master-0 kubenswrapper[7926]: I0216 21:20:27.186245 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:20:27.186329 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:20:27.186329 master-0 kubenswrapper[7926]: 
[+]process-running ok Feb 16 21:20:27.186329 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:20:27.187042 master-0 kubenswrapper[7926]: I0216 21:20:27.186375 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:20:28.185236 master-0 kubenswrapper[7926]: I0216 21:20:28.185123 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:20:28.185236 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:20:28.185236 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:20:28.185236 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:20:28.185729 master-0 kubenswrapper[7926]: I0216 21:20:28.185273 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:20:29.184676 master-0 kubenswrapper[7926]: I0216 21:20:29.184601 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:20:29.184676 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:20:29.184676 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:20:29.184676 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:20:29.185265 master-0 kubenswrapper[7926]: I0216 21:20:29.185241 7926 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:20:30.185115 master-0 kubenswrapper[7926]: I0216 21:20:30.185054 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:20:30.185115 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:20:30.185115 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:20:30.185115 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:20:30.185115 master-0 kubenswrapper[7926]: I0216 21:20:30.185113 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:20:31.185346 master-0 kubenswrapper[7926]: I0216 21:20:31.185213 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:20:31.185346 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:20:31.185346 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:20:31.185346 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:20:31.185346 master-0 kubenswrapper[7926]: I0216 21:20:31.185280 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 
16 21:20:32.185905 master-0 kubenswrapper[7926]: I0216 21:20:32.185800 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:32.185905 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:32.185905 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:32.185905 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:32.185905 master-0 kubenswrapper[7926]: I0216 21:20:32.185879 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:33.185415 master-0 kubenswrapper[7926]: I0216 21:20:33.185297 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:33.185415 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:33.185415 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:33.185415 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:33.185415 master-0 kubenswrapper[7926]: I0216 21:20:33.185387 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:34.186448 master-0 kubenswrapper[7926]: I0216 21:20:34.186332 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:34.186448 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:34.186448 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:34.186448 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:34.187068 master-0 kubenswrapper[7926]: I0216 21:20:34.186490 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:35.184986 master-0 kubenswrapper[7926]: I0216 21:20:35.184886 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:35.184986 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:35.184986 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:35.184986 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:35.185410 master-0 kubenswrapper[7926]: I0216 21:20:35.184982 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:36.186438 master-0 kubenswrapper[7926]: I0216 21:20:36.186334 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:36.186438 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:36.186438 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:36.186438 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:36.187153 master-0 kubenswrapper[7926]: I0216 21:20:36.186462 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:37.185059 master-0 kubenswrapper[7926]: I0216 21:20:37.184953 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:37.185059 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:37.185059 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:37.185059 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:37.185399 master-0 kubenswrapper[7926]: I0216 21:20:37.185060 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:38.186260 master-0 kubenswrapper[7926]: I0216 21:20:38.186140 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:38.186260 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:38.186260 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:38.186260 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:38.186260 master-0 kubenswrapper[7926]: I0216 21:20:38.186234 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:38.741761 master-0 kubenswrapper[7926]: I0216 21:20:38.741703 7926 scope.go:117] "RemoveContainer" containerID="cddc9c1d447dc5a0250ef24bddae48c93c58b480b6bca11a2ff7438d4148bf8f"
Feb 16 21:20:38.742100 master-0 kubenswrapper[7926]: E0216 20:20:38.741953 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1"
Feb 16 21:20:39.184860 master-0 kubenswrapper[7926]: I0216 21:20:39.184798 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:39.184860 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:39.184860 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:39.184860 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:39.185158 master-0 kubenswrapper[7926]: I0216 21:20:39.184866 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:40.184637 master-0 kubenswrapper[7926]: I0216 21:20:40.184551 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:40.184637 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:40.184637 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:40.184637 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:40.185578 master-0 kubenswrapper[7926]: I0216 21:20:40.184633 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:41.184926 master-0 kubenswrapper[7926]: I0216 21:20:41.184847 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:41.184926 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:41.184926 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:41.184926 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:41.184926 master-0 kubenswrapper[7926]: I0216 21:20:41.184912 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:42.184668 master-0 kubenswrapper[7926]: I0216 21:20:42.184601 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:42.184668 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:42.184668 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:42.184668 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:42.185394 master-0 kubenswrapper[7926]: I0216 21:20:42.184673 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:43.185886 master-0 kubenswrapper[7926]: I0216 21:20:43.185815 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:43.185886 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:43.185886 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:43.185886 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:43.185886 master-0 kubenswrapper[7926]: I0216 21:20:43.185877 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:44.185151 master-0 kubenswrapper[7926]: I0216 21:20:44.185083 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:44.185151 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:44.185151 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:44.185151 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:44.185476 master-0 kubenswrapper[7926]: I0216 21:20:44.185147 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:45.184388 master-0 kubenswrapper[7926]: I0216 21:20:45.184303 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:45.184388 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:45.184388 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:45.184388 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:45.184388 master-0 kubenswrapper[7926]: I0216 21:20:45.184383 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:45.544397 master-0 kubenswrapper[7926]: I0216 21:20:45.544333 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk"
Feb 16 21:20:45.549040 master-0 kubenswrapper[7926]: I0216 21:20:45.548994 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk"
Feb 16 21:20:46.185385 master-0 kubenswrapper[7926]: I0216 21:20:46.185286 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:46.185385 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:46.185385 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:46.185385 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:46.186463 master-0 kubenswrapper[7926]: I0216 21:20:46.185397 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:47.185268 master-0 kubenswrapper[7926]: I0216 21:20:47.185182 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:47.185268 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:47.185268 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:47.185268 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:47.185268 master-0 kubenswrapper[7926]: I0216 21:20:47.185251 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:48.186319 master-0 kubenswrapper[7926]: I0216 21:20:48.186265 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:48.186319 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:48.186319 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:48.186319 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:48.187992 master-0 kubenswrapper[7926]: I0216 21:20:48.186994 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:49.185378 master-0 kubenswrapper[7926]: I0216 21:20:49.185288 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:49.185378 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:49.185378 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:49.185378 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:49.185924 master-0 kubenswrapper[7926]: I0216 21:20:49.185416 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:50.185406 master-0 kubenswrapper[7926]: I0216 21:20:50.185358 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:50.185406 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:50.185406 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:50.185406 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:50.186968 master-0 kubenswrapper[7926]: I0216 21:20:50.186495 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:51.185785 master-0 kubenswrapper[7926]: I0216 21:20:51.185722 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:51.185785 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:51.185785 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:51.185785 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:51.186798 master-0 kubenswrapper[7926]: I0216 21:20:51.185792 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:51.739206 master-0 kubenswrapper[7926]: I0216 21:20:51.739108 7926 scope.go:117] "RemoveContainer" containerID="cddc9c1d447dc5a0250ef24bddae48c93c58b480b6bca11a2ff7438d4148bf8f"
Feb 16 21:20:51.739574 master-0 kubenswrapper[7926]: E0216 21:20:51.739511 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1"
Feb 16 21:20:52.184969 master-0 kubenswrapper[7926]: I0216 21:20:52.184902 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:52.184969 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:52.184969 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:52.184969 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:52.184969 master-0 kubenswrapper[7926]: I0216 21:20:52.184970 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:53.185929 master-0 kubenswrapper[7926]: I0216 21:20:53.185806 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:53.185929 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:53.185929 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:53.185929 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:53.185929 master-0 kubenswrapper[7926]: I0216 21:20:53.185890 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:54.185090 master-0 kubenswrapper[7926]: I0216 21:20:54.185000 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:54.185090 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:54.185090 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:54.185090 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:54.185537 master-0 kubenswrapper[7926]: I0216 21:20:54.185100 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:55.185816 master-0 kubenswrapper[7926]: I0216 21:20:55.185725 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:55.185816 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:55.185816 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:55.185816 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:55.185816 master-0 kubenswrapper[7926]: I0216 21:20:55.185809 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:56.185009 master-0 kubenswrapper[7926]: I0216 21:20:56.184938 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:56.185009 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:56.185009 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:56.185009 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:56.185357 master-0 kubenswrapper[7926]: I0216 21:20:56.185009 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:57.184601 master-0 kubenswrapper[7926]: I0216 21:20:57.184524 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:57.184601 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:57.184601 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:57.184601 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:57.185580 master-0 kubenswrapper[7926]: I0216 21:20:57.184613 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:58.185399 master-0 kubenswrapper[7926]: I0216 21:20:58.185329 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:58.185399 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:58.185399 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:58.185399 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:58.186022 master-0 kubenswrapper[7926]: I0216 21:20:58.185414 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:20:59.186035 master-0 kubenswrapper[7926]: I0216 21:20:59.185969 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:20:59.186035 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:20:59.186035 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:20:59.186035 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:20:59.186752 master-0 kubenswrapper[7926]: I0216 21:20:59.186037 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:00.184879 master-0 kubenswrapper[7926]: I0216 21:21:00.184815 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:00.184879 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:00.184879 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:00.184879 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:00.184879 master-0 kubenswrapper[7926]: I0216 21:21:00.184879 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:01.185805 master-0 kubenswrapper[7926]: I0216 21:21:01.185737 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:01.185805 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:01.185805 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:01.185805 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:01.186404 master-0 kubenswrapper[7926]: I0216 21:21:01.185832 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:02.185140 master-0 kubenswrapper[7926]: I0216 21:21:02.184942 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:02.185140 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:02.185140 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:02.185140 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:02.185140 master-0 kubenswrapper[7926]: I0216 21:21:02.185023 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:03.186212 master-0 kubenswrapper[7926]: I0216 21:21:03.186126 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:03.186212 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:03.186212 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:03.186212 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:03.187078 master-0 kubenswrapper[7926]: I0216 21:21:03.186243 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:03.739063 master-0 kubenswrapper[7926]: I0216 21:21:03.739005 7926 scope.go:117] "RemoveContainer" containerID="cddc9c1d447dc5a0250ef24bddae48c93c58b480b6bca11a2ff7438d4148bf8f"
Feb 16 21:21:03.739719 master-0 kubenswrapper[7926]: E0216 21:21:03.739695 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1"
Feb 16 21:21:04.185797 master-0 kubenswrapper[7926]: I0216 21:21:04.185739 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:04.185797 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:04.185797 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:04.185797 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:04.186131 master-0 kubenswrapper[7926]: I0216 21:21:04.185806 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:05.184096 master-0 kubenswrapper[7926]: I0216 21:21:05.184037 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:05.184096 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:05.184096 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:05.184096 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:05.184842 master-0 kubenswrapper[7926]: I0216 21:21:05.184096 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:06.184791 master-0 kubenswrapper[7926]: I0216 21:21:06.184714 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:06.184791 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:06.184791 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:06.184791 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:06.184791 master-0 kubenswrapper[7926]: I0216 21:21:06.184808 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:07.184704 master-0 kubenswrapper[7926]: I0216 21:21:07.184507 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:07.184704 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:07.184704 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:07.184704 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:07.185757 master-0 kubenswrapper[7926]: I0216 21:21:07.184756 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:08.185199 master-0 kubenswrapper[7926]: I0216 21:21:08.185125 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:08.185199 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:08.185199 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:08.185199 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:08.185867 master-0 kubenswrapper[7926]: I0216 21:21:08.185229 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:09.185585 master-0 kubenswrapper[7926]: I0216 21:21:09.185405 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:09.185585 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:09.185585 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:09.185585 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:09.186770 master-0 kubenswrapper[7926]: I0216 21:21:09.185600 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:10.185931 master-0 kubenswrapper[7926]: I0216 21:21:10.185737 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:10.185931 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:10.185931 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:10.185931 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:10.185931 master-0 kubenswrapper[7926]: I0216 21:21:10.185837 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:11.185495 master-0 kubenswrapper[7926]: I0216 21:21:11.185366 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:11.185495 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:11.185495 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:11.185495 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:11.186718 master-0 kubenswrapper[7926]: I0216 21:21:11.185493 7926 prober.go:107] "Probe
failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:12.186116 master-0 kubenswrapper[7926]: I0216 21:21:12.186017 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:12.186116 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:12.186116 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:12.186116 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:12.187025 master-0 kubenswrapper[7926]: I0216 21:21:12.186126 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:13.186951 master-0 kubenswrapper[7926]: I0216 21:21:13.186860 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:13.186951 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:13.186951 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:13.186951 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:13.188416 master-0 kubenswrapper[7926]: I0216 21:21:13.188356 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 16 21:21:14.184772 master-0 kubenswrapper[7926]: I0216 21:21:14.184682 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:14.184772 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:14.184772 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:14.184772 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:14.185066 master-0 kubenswrapper[7926]: I0216 21:21:14.184806 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:15.185986 master-0 kubenswrapper[7926]: I0216 21:21:15.185903 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:15.185986 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:15.185986 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:15.185986 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:15.185986 master-0 kubenswrapper[7926]: I0216 21:21:15.185970 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:15.739999 master-0 kubenswrapper[7926]: I0216 21:21:15.739901 7926 scope.go:117] "RemoveContainer" containerID="cddc9c1d447dc5a0250ef24bddae48c93c58b480b6bca11a2ff7438d4148bf8f" Feb 16 21:21:15.740455 
master-0 kubenswrapper[7926]: E0216 21:21:15.740391 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:21:16.191482 master-0 kubenswrapper[7926]: I0216 21:21:16.191390 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:16.191482 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:16.191482 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:16.191482 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:16.192596 master-0 kubenswrapper[7926]: I0216 21:21:16.191506 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:17.185860 master-0 kubenswrapper[7926]: I0216 21:21:17.185743 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:17.185860 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:17.185860 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:17.185860 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:17.186511 master-0 kubenswrapper[7926]: I0216 21:21:17.185867 
7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:18.186554 master-0 kubenswrapper[7926]: I0216 21:21:18.186435 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:18.186554 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:18.186554 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:18.186554 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:18.186554 master-0 kubenswrapper[7926]: I0216 21:21:18.186546 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:19.185841 master-0 kubenswrapper[7926]: I0216 21:21:19.185745 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:19.185841 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:19.185841 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:19.185841 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:19.186187 master-0 kubenswrapper[7926]: I0216 21:21:19.185840 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 16 21:21:20.184983 master-0 kubenswrapper[7926]: I0216 21:21:20.184871 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:20.184983 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:20.184983 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:20.184983 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:20.184983 master-0 kubenswrapper[7926]: I0216 21:21:20.184968 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:21.185276 master-0 kubenswrapper[7926]: I0216 21:21:21.185197 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:21.185276 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:21.185276 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:21.185276 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:21.186074 master-0 kubenswrapper[7926]: I0216 21:21:21.185279 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:22.186833 master-0 kubenswrapper[7926]: I0216 21:21:22.186701 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:22.186833 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:22.186833 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:22.186833 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:22.187818 master-0 kubenswrapper[7926]: I0216 21:21:22.186835 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:23.184898 master-0 kubenswrapper[7926]: I0216 21:21:23.184834 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:23.184898 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:23.184898 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:23.184898 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:23.185190 master-0 kubenswrapper[7926]: I0216 21:21:23.184905 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:24.186322 master-0 kubenswrapper[7926]: I0216 21:21:24.186183 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:24.186322 master-0 kubenswrapper[7926]: 
[-]has-synced failed: reason withheld Feb 16 21:21:24.186322 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:24.186322 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:24.187934 master-0 kubenswrapper[7926]: I0216 21:21:24.186339 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:25.185237 master-0 kubenswrapper[7926]: I0216 21:21:25.185076 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:25.185237 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:25.185237 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:25.185237 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:25.185966 master-0 kubenswrapper[7926]: I0216 21:21:25.185247 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:26.185466 master-0 kubenswrapper[7926]: I0216 21:21:26.185400 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:26.185466 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:26.185466 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:26.185466 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:26.186699 master-0 
kubenswrapper[7926]: I0216 21:21:26.186142 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:27.184419 master-0 kubenswrapper[7926]: I0216 21:21:27.184344 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:27.184419 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:27.184419 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:27.184419 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:27.184419 master-0 kubenswrapper[7926]: I0216 21:21:27.184415 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:28.278765 master-0 kubenswrapper[7926]: I0216 21:21:28.278708 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:28.278765 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:28.278765 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:28.278765 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:28.279339 master-0 kubenswrapper[7926]: I0216 21:21:28.278786 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:28.745902 master-0 kubenswrapper[7926]: I0216 21:21:28.745713 7926 scope.go:117] "RemoveContainer" containerID="cddc9c1d447dc5a0250ef24bddae48c93c58b480b6bca11a2ff7438d4148bf8f" Feb 16 21:21:28.746311 master-0 kubenswrapper[7926]: E0216 21:21:28.746252 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:21:29.185620 master-0 kubenswrapper[7926]: I0216 21:21:29.185527 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:29.185620 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:29.185620 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:29.185620 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:29.185932 master-0 kubenswrapper[7926]: I0216 21:21:29.185622 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:30.186794 master-0 kubenswrapper[7926]: I0216 21:21:30.186631 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 
21:21:30.186794 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:30.186794 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:30.186794 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:30.188130 master-0 kubenswrapper[7926]: I0216 21:21:30.186800 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:31.185365 master-0 kubenswrapper[7926]: I0216 21:21:31.185318 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:31.185365 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:31.185365 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:31.185365 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:31.185365 master-0 kubenswrapper[7926]: I0216 21:21:31.185380 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:32.184260 master-0 kubenswrapper[7926]: I0216 21:21:32.184194 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:32.184260 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:32.184260 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:32.184260 master-0 kubenswrapper[7926]: healthz 
check failed Feb 16 21:21:32.184260 master-0 kubenswrapper[7926]: I0216 21:21:32.184254 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:33.185574 master-0 kubenswrapper[7926]: I0216 21:21:33.185504 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:33.185574 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:33.185574 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:33.185574 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:33.186175 master-0 kubenswrapper[7926]: I0216 21:21:33.185595 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:34.185798 master-0 kubenswrapper[7926]: I0216 21:21:34.185710 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:34.185798 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:34.185798 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:34.185798 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:34.186894 master-0 kubenswrapper[7926]: I0216 21:21:34.185828 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" 
podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:35.185102 master-0 kubenswrapper[7926]: I0216 21:21:35.185047 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:35.185102 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:35.185102 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:35.185102 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:35.185512 master-0 kubenswrapper[7926]: I0216 21:21:35.185111 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:36.184929 master-0 kubenswrapper[7926]: I0216 21:21:36.184866 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:36.184929 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:36.184929 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:36.184929 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:36.185502 master-0 kubenswrapper[7926]: I0216 21:21:36.184937 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:37.185855 master-0 kubenswrapper[7926]: I0216 21:21:37.185773 7926 
patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:37.185855 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:37.185855 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:37.185855 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:37.185855 master-0 kubenswrapper[7926]: I0216 21:21:37.185844 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:38.185768 master-0 kubenswrapper[7926]: I0216 21:21:38.185688 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:38.185768 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:38.185768 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:38.185768 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:38.185768 master-0 kubenswrapper[7926]: I0216 21:21:38.185750 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:39.185883 master-0 kubenswrapper[7926]: I0216 21:21:39.185778 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:39.185883 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:39.185883 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:39.185883 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:39.187139 master-0 kubenswrapper[7926]: I0216 21:21:39.185896 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:39.738597 master-0 kubenswrapper[7926]: I0216 21:21:39.738479 7926 scope.go:117] "RemoveContainer" containerID="cddc9c1d447dc5a0250ef24bddae48c93c58b480b6bca11a2ff7438d4148bf8f" Feb 16 21:21:39.738830 master-0 kubenswrapper[7926]: E0216 21:21:39.738799 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:21:40.184429 master-0 kubenswrapper[7926]: I0216 21:21:40.184374 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:40.184429 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:40.184429 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:40.184429 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:40.184761 master-0 kubenswrapper[7926]: I0216 21:21:40.184441 7926 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:41.185912 master-0 kubenswrapper[7926]: I0216 21:21:41.185819 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:41.185912 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:41.185912 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:41.185912 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:41.186652 master-0 kubenswrapper[7926]: I0216 21:21:41.185938 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:42.185917 master-0 kubenswrapper[7926]: I0216 21:21:42.185824 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:21:42.185917 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:21:42.185917 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:21:42.185917 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:21:42.186450 master-0 kubenswrapper[7926]: I0216 21:21:42.185971 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:21:43.185014 
master-0 kubenswrapper[7926]: I0216 21:21:43.184952 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:43.185014 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:43.185014 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:43.185014 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:43.185624 master-0 kubenswrapper[7926]: I0216 21:21:43.185580 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:44.185534 master-0 kubenswrapper[7926]: I0216 21:21:44.185484 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:44.185534 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:44.185534 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:44.185534 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:44.186579 master-0 kubenswrapper[7926]: I0216 21:21:44.185535 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:45.186341 master-0 kubenswrapper[7926]: I0216 21:21:45.186243 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:45.186341 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:45.186341 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:45.186341 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:45.187089 master-0 kubenswrapper[7926]: I0216 21:21:45.186347 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:46.184781 master-0 kubenswrapper[7926]: I0216 21:21:46.184682 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:46.184781 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:46.184781 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:46.184781 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:46.185187 master-0 kubenswrapper[7926]: I0216 21:21:46.184783 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:47.185271 master-0 kubenswrapper[7926]: I0216 21:21:47.185214 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:47.185271 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:47.185271 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:47.185271 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:47.185917 master-0 kubenswrapper[7926]: I0216 21:21:47.185274 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:48.185071 master-0 kubenswrapper[7926]: I0216 21:21:48.185008 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:48.185071 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:48.185071 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:48.185071 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:48.185806 master-0 kubenswrapper[7926]: I0216 21:21:48.185078 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:49.185492 master-0 kubenswrapper[7926]: I0216 21:21:49.185406 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:49.185492 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:49.185492 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:49.185492 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:49.186519 master-0 kubenswrapper[7926]: I0216 21:21:49.185552 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:50.186076 master-0 kubenswrapper[7926]: I0216 21:21:50.185983 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:50.186076 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:50.186076 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:50.186076 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:50.186076 master-0 kubenswrapper[7926]: I0216 21:21:50.186084 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:51.185322 master-0 kubenswrapper[7926]: I0216 21:21:51.185195 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:51.185322 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:51.185322 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:51.185322 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:51.185633 master-0 kubenswrapper[7926]: I0216 21:21:51.185337 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:52.185494 master-0 kubenswrapper[7926]: I0216 21:21:52.185343 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:52.185494 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:52.185494 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:52.185494 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:52.186865 master-0 kubenswrapper[7926]: I0216 21:21:52.185509 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:53.186164 master-0 kubenswrapper[7926]: I0216 21:21:53.186071 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:53.186164 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:53.186164 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:53.186164 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:53.187025 master-0 kubenswrapper[7926]: I0216 21:21:53.186205 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:54.186054 master-0 kubenswrapper[7926]: I0216 21:21:54.185935 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:54.186054 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:54.186054 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:54.186054 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:54.186986 master-0 kubenswrapper[7926]: I0216 21:21:54.186092 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:54.739237 master-0 kubenswrapper[7926]: I0216 21:21:54.739156 7926 scope.go:117] "RemoveContainer" containerID="cddc9c1d447dc5a0250ef24bddae48c93c58b480b6bca11a2ff7438d4148bf8f"
Feb 16 21:21:54.739899 master-0 kubenswrapper[7926]: E0216 21:21:54.739838 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1"
Feb 16 21:21:55.185079 master-0 kubenswrapper[7926]: I0216 21:21:55.184999 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:55.185079 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:55.185079 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:55.185079 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:55.185495 master-0 kubenswrapper[7926]: I0216 21:21:55.185100 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:55.561132 master-0 kubenswrapper[7926]: I0216 21:21:55.561089 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-8gnq5_27c20f63-9bfb-4703-94d5-0c65475e08d1/authentication-operator/6.log"
Feb 16 21:21:55.762353 master-0 kubenswrapper[7926]: I0216 21:21:55.762213 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-8gnq5_27c20f63-9bfb-4703-94d5-0c65475e08d1/authentication-operator/7.log"
Feb 16 21:21:55.956832 master-0 kubenswrapper[7926]: I0216 21:21:55.955962 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-864ddd5f56-z4bnk_c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee/router/4.log"
Feb 16 21:21:56.156192 master-0 kubenswrapper[7926]: I0216 21:21:56.156162 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-864ddd5f56-z4bnk_c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee/router/5.log"
Feb 16 21:21:56.184972 master-0 kubenswrapper[7926]: I0216 21:21:56.184929 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:56.184972 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:56.184972 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:56.184972 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:56.185194 master-0 kubenswrapper[7926]: I0216 21:21:56.184986 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:56.352873 master-0 kubenswrapper[7926]: I0216 21:21:56.352842 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-64f7f8746f-xj7z6_bd49e653-3b42-4950-8f5f-2b2ecb683678/fix-audit-permissions/0.log"
Feb 16 21:21:56.558302 master-0 kubenswrapper[7926]: I0216 21:21:56.558214 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-64f7f8746f-xj7z6_bd49e653-3b42-4950-8f5f-2b2ecb683678/oauth-apiserver/0.log"
Feb 16 21:21:56.753793 master-0 kubenswrapper[7926]: I0216 21:21:56.753755 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-cl5ld_0b02b740-5698-4e9a-90fe-2873bd0b0958/kube-apiserver-operator/4.log"
Feb 16 21:21:56.955940 master-0 kubenswrapper[7926]: I0216 21:21:56.955872 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-cl5ld_0b02b740-5698-4e9a-90fe-2873bd0b0958/kube-apiserver-operator/5.log"
Feb 16 21:21:57.152183 master-0 kubenswrapper[7926]: I0216 21:21:57.152047 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5d1e91e5a1fed5cf7076a92d2830d36f/setup/0.log"
Feb 16 21:21:57.184465 master-0 kubenswrapper[7926]: I0216 21:21:57.184414 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:57.184465 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:57.184465 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:57.184465 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:57.184678 master-0 kubenswrapper[7926]: I0216 21:21:57.184479 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:57.358642 master-0 kubenswrapper[7926]: I0216 21:21:57.358511 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5d1e91e5a1fed5cf7076a92d2830d36f/kube-apiserver/0.log"
Feb 16 21:21:57.553139 master-0 kubenswrapper[7926]: I0216 21:21:57.553060 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5d1e91e5a1fed5cf7076a92d2830d36f/kube-apiserver-insecure-readyz/0.log"
Feb 16 21:21:57.755289 master-0 kubenswrapper[7926]: I0216 21:21:57.754851 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965/installer/0.log"
Feb 16 21:21:57.955073 master-0 kubenswrapper[7926]: I0216 21:21:57.954928 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_b09d3c16-18e3-45b3-9d39-949d2464b300/installer/0.log"
Feb 16 21:21:58.155721 master-0 kubenswrapper[7926]: I0216 21:21:58.155610 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-7p9ft_7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/kube-controller-manager-operator/5.log"
Feb 16 21:21:58.185442 master-0 kubenswrapper[7926]: I0216 21:21:58.185365 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:58.185442 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:58.185442 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:58.185442 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:58.185789 master-0 kubenswrapper[7926]: I0216 21:21:58.185474 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:58.356765 master-0 kubenswrapper[7926]: I0216 21:21:58.356689 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-7p9ft_7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/kube-controller-manager-operator/6.log"
Feb 16 21:21:58.558109 master-0 kubenswrapper[7926]: I0216 21:21:58.558053 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_80420f2e7c3cdda71f7d0d6ccbe6f9f3/kube-controller-manager/7.log"
Feb 16 21:21:58.751744 master-0 kubenswrapper[7926]: I0216 21:21:58.751637 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_80420f2e7c3cdda71f7d0d6ccbe6f9f3/cluster-policy-controller/3.log"
Feb 16 21:21:58.958967 master-0 kubenswrapper[7926]: I0216 21:21:58.958902 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_80420f2e7c3cdda71f7d0d6ccbe6f9f3/kube-controller-manager/8.log"
Feb 16 21:21:59.027154 master-0 kubenswrapper[7926]: I0216 21:21:59.027029 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Feb 16 21:21:59.027773 master-0 kubenswrapper[7926]: I0216 21:21:59.027746 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 21:21:59.029534 master-0 kubenswrapper[7926]: I0216 21:21:59.029497 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 16 21:21:59.030244 master-0 kubenswrapper[7926]: I0216 21:21:59.030225 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-czn7h"
Feb 16 21:21:59.043054 master-0 kubenswrapper[7926]: I0216 21:21:59.043006 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Feb 16 21:21:59.114841 master-0 kubenswrapper[7926]: I0216 21:21:59.114796 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cecc93e-bb0e-47da-903f-d0b63cce2b0d-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"0cecc93e-bb0e-47da-903f-d0b63cce2b0d\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 21:21:59.115048 master-0 kubenswrapper[7926]: I0216 21:21:59.114869 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cecc93e-bb0e-47da-903f-d0b63cce2b0d-kube-api-access\") pod \"installer-2-master-0\" (UID: \"0cecc93e-bb0e-47da-903f-d0b63cce2b0d\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 21:21:59.115100 master-0 kubenswrapper[7926]: I0216 21:21:59.115034 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0cecc93e-bb0e-47da-903f-d0b63cce2b0d-var-lock\") pod \"installer-2-master-0\" (UID: \"0cecc93e-bb0e-47da-903f-d0b63cce2b0d\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 21:21:59.159503 master-0 kubenswrapper[7926]: I0216 21:21:59.159411 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_80420f2e7c3cdda71f7d0d6ccbe6f9f3/cluster-policy-controller/4.log"
Feb 16 21:21:59.185545 master-0 kubenswrapper[7926]: I0216 21:21:59.185486 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:21:59.185545 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:21:59.185545 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:21:59.185545 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:21:59.185785 master-0 kubenswrapper[7926]: I0216 21:21:59.185599 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:21:59.216754 master-0 kubenswrapper[7926]: I0216 21:21:59.216641 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cecc93e-bb0e-47da-903f-d0b63cce2b0d-kube-api-access\") pod \"installer-2-master-0\" (UID: \"0cecc93e-bb0e-47da-903f-d0b63cce2b0d\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 21:21:59.216873 master-0 kubenswrapper[7926]: I0216 21:21:59.216818 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0cecc93e-bb0e-47da-903f-d0b63cce2b0d-var-lock\") pod \"installer-2-master-0\" (UID: \"0cecc93e-bb0e-47da-903f-d0b63cce2b0d\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 21:21:59.216995 master-0 kubenswrapper[7926]: I0216 21:21:59.216954 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cecc93e-bb0e-47da-903f-d0b63cce2b0d-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"0cecc93e-bb0e-47da-903f-d0b63cce2b0d\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 21:21:59.217086 master-0 kubenswrapper[7926]: I0216 21:21:59.216958 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0cecc93e-bb0e-47da-903f-d0b63cce2b0d-var-lock\") pod \"installer-2-master-0\" (UID: \"0cecc93e-bb0e-47da-903f-d0b63cce2b0d\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 21:21:59.217134 master-0 kubenswrapper[7926]: I0216 21:21:59.217018 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cecc93e-bb0e-47da-903f-d0b63cce2b0d-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"0cecc93e-bb0e-47da-903f-d0b63cce2b0d\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 21:21:59.237076 master-0 kubenswrapper[7926]: I0216 21:21:59.237012 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cecc93e-bb0e-47da-903f-d0b63cce2b0d-kube-api-access\") pod \"installer-2-master-0\" (UID: \"0cecc93e-bb0e-47da-903f-d0b63cce2b0d\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 21:21:59.352969 master-0 kubenswrapper[7926]: I0216 21:21:59.352800 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-tvzdw_6b6be6de-6fcc-4f57-b163-fe8f970a01a4/openshift-apiserver-operator/3.log"
Feb 16 21:21:59.400251 master-0 kubenswrapper[7926]: I0216 21:21:59.400154 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 21:21:59.561755 master-0 kubenswrapper[7926]: I0216 21:21:59.561698 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-tvzdw_6b6be6de-6fcc-4f57-b163-fe8f970a01a4/openshift-apiserver-operator/4.log"
Feb 16 21:21:59.753115 master-0 kubenswrapper[7926]: I0216 21:21:59.753019 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-6bdb76b9b7-z46x6_d2501eec-47c8-47bc-b0c9-28d94c06075b/fix-audit-permissions/0.log"
Feb 16 21:21:59.885207 master-0 kubenswrapper[7926]: I0216 21:21:59.885113 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Feb 16 21:21:59.961359 master-0 kubenswrapper[7926]: I0216 21:21:59.961128 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-6bdb76b9b7-z46x6_d2501eec-47c8-47bc-b0c9-28d94c06075b/openshift-apiserver/0.log"
Feb 16 21:22:00.035679 master-0 kubenswrapper[7926]: I0216 21:22:00.035506 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"0cecc93e-bb0e-47da-903f-d0b63cce2b0d","Type":"ContainerStarted","Data":"5957534d0a5a6e1efe8a36af49bc53825aaeb991657eddb8f9392f7c762a0cd8"}
Feb 16 21:22:00.158175 master-0 kubenswrapper[7926]: I0216 21:22:00.158100 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-6bdb76b9b7-z46x6_d2501eec-47c8-47bc-b0c9-28d94c06075b/openshift-apiserver-check-endpoints/0.log"
Feb 16 21:22:00.185446 master-0 kubenswrapper[7926]: I0216 21:22:00.185359 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:22:00.185446 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:22:00.185446 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:22:00.185446 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:22:00.185909 master-0 kubenswrapper[7926]: I0216 21:22:00.185471 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:22:00.352988 master-0 kubenswrapper[7926]: I0216 21:22:00.352829 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-8cllz_70d217a9-86b7-47b9-a7da-9ac920b9c7c2/etcd-operator/3.log"
Feb 16 21:22:00.558261 master-0 kubenswrapper[7926]: I0216 21:22:00.558183 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-8cllz_70d217a9-86b7-47b9-a7da-9ac920b9c7c2/etcd-operator/4.log"
Feb 16 21:22:00.760729 master-0 kubenswrapper[7926]: I0216 21:22:00.760622 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-588944557d-h7xl6_2e618c5c-52be-4b52-b426-b92555dee9de/catalog-operator/0.log"
Feb 16 21:22:00.954056 master-0 kubenswrapper[7926]: I0216 21:22:00.953969 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29521260-fx98d_4cc1da27-6eaf-4177-b2d8-1546a9d94f90/collect-profiles/0.log"
Feb 16 21:22:01.051608 master-0 kubenswrapper[7926]: I0216 21:22:01.051401 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"0cecc93e-bb0e-47da-903f-d0b63cce2b0d","Type":"ContainerStarted","Data":"8df27f209e925f58d0b4923f79cdb9bec01f45d38cbc22684566e7e609148bab"}
Feb 16 21:22:01.083678 master-0 kubenswrapper[7926]: I0216 21:22:01.083566 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=2.08353862 podStartE2EDuration="2.08353862s" podCreationTimestamp="2026-02-16 21:21:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:22:01.079801098 +0000 UTC m=+1492.714701438" watchObservedRunningTime="2026-02-16 21:22:01.08353862 +0000 UTC m=+1492.718438940"
Feb 16 21:22:01.151369 master-0 kubenswrapper[7926]: I0216 21:22:01.151320 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29521275-fl78b_ebeb6876-0438-4961-a62a-68b41a676f17/collect-profiles/0.log"
Feb 16 21:22:01.185224 master-0 kubenswrapper[7926]: I0216 21:22:01.185149 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 21:22:01.185224 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld
Feb 16 21:22:01.185224 master-0 kubenswrapper[7926]: [+]process-running ok
Feb 16 21:22:01.185224 master-0 kubenswrapper[7926]: healthz check failed
Feb 16 21:22:01.185607 master-0 kubenswrapper[7926]: I0216 21:22:01.185237 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 21:22:01.356945 master-0 kubenswrapper[7926]: I0216 21:22:01.356756 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b56bd877c-vlhvq_a4c9b781-14c0-469c-bb9e-0c3982a04520/olm-operator/0.log"
Feb 16 21:22:01.756498 master-0 kubenswrapper[7926]: I0216 21:22:01.756430 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-9m94g_4b035e85-b2b0-4dee-bb86-3465fc4b98a8/package-server-manager/1.log"
Feb 16 21:22:01.952589 master-0 kubenswrapper[7926]: I0216 21:22:01.952517 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-9m94g_4b035e85-b2b0-4dee-bb86-3465fc4b98a8/kube-rbac-proxy/0.log"
Feb 16 21:22:02.022050 master-0 kubenswrapper[7926]: I0216 21:22:02.021911 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Feb 16 21:22:02.022947 master-0 kubenswrapper[7926]: I0216 21:22:02.022920 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 16 21:22:02.024911 master-0 kubenswrapper[7926]: I0216 21:22:02.024876 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 16 21:22:02.025073 master-0 kubenswrapper[7926]: I0216 21:22:02.025026 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-fzgzx"
Feb 16 21:22:02.034077 master-0 kubenswrapper[7926]: I0216 21:22:02.034022 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Feb 16 21:22:02.061955 master-0 kubenswrapper[7926]: I0216 21:22:02.061880 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f8a26db-5a90-4da9-9074-33256ef17100-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 16 21:22:02.062479 master-0 kubenswrapper[7926]: I0216 21:22:02.061958 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1f8a26db-5a90-4da9-9074-33256ef17100-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 16 21:22:02.062479 master-0 kubenswrapper[7926]: I0216 21:22:02.062013 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 16 21:22:02.154167 master-0 kubenswrapper[7926]: I0216 21:22:02.154117 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-9m94g_4b035e85-b2b0-4dee-bb86-3465fc4b98a8/package-server-manager/2.log"
Feb 16 21:22:02.163778 master-0 kubenswrapper[7926]: I0216 21:22:02.163701 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f8a26db-5a90-4da9-9074-33256ef17100-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 16 21:22:02.163952 master-0 kubenswrapper[7926]: I0216 21:22:02.163826 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f8a26db-5a90-4da9-9074-33256ef17100-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 16 21:22:02.163952 master-0 kubenswrapper[7926]: I0216 21:22:02.163846 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1f8a26db-5a90-4da9-9074-33256ef17100-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 16 21:22:02.163952 master-0 kubenswrapper[7926]: I0216 21:22:02.163881 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1f8a26db-5a90-4da9-9074-33256ef17100-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 16 21:22:02.163952 master-0 kubenswrapper[7926]: I0216 21:22:02.163940 7926 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 21:22:02.179722 master-0 kubenswrapper[7926]: I0216 21:22:02.179677 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 21:22:02.184842 master-0 kubenswrapper[7926]: I0216 21:22:02.184699 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:22:02.184842 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:22:02.184842 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:22:02.184842 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:22:02.184842 master-0 kubenswrapper[7926]: I0216 21:22:02.184745 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:22:02.351823 master-0 kubenswrapper[7926]: I0216 21:22:02.351699 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 21:22:02.352852 master-0 kubenswrapper[7926]: I0216 21:22:02.352810 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-78d4b6b677-npmx4_319dc882-e1f5-40f9-99f4-2bae028337e5/packageserver/1.log" Feb 16 21:22:02.558038 master-0 kubenswrapper[7926]: I0216 21:22:02.557981 7926 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-78d4b6b677-npmx4_319dc882-e1f5-40f9-99f4-2bae028337e5/packageserver/2.log" Feb 16 21:22:02.771704 master-0 kubenswrapper[7926]: I0216 21:22:02.770859 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Feb 16 21:22:02.778829 master-0 kubenswrapper[7926]: W0216 21:22:02.778784 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1f8a26db_5a90_4da9_9074_33256ef17100.slice/crio-84e6aa889c12b8f7b2d22b8b4cf46eee861623c6ee8d3fefb323875fd5efaa27 WatchSource:0}: Error finding container 84e6aa889c12b8f7b2d22b8b4cf46eee861623c6ee8d3fefb323875fd5efaa27: Status 404 returned error can't find the container with id 84e6aa889c12b8f7b2d22b8b4cf46eee861623c6ee8d3fefb323875fd5efaa27 Feb 16 21:22:03.065867 master-0 kubenswrapper[7926]: I0216 21:22:03.065741 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"1f8a26db-5a90-4da9-9074-33256ef17100","Type":"ContainerStarted","Data":"84e6aa889c12b8f7b2d22b8b4cf46eee861623c6ee8d3fefb323875fd5efaa27"} Feb 16 21:22:03.184783 master-0 kubenswrapper[7926]: I0216 21:22:03.184706 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 
21:22:03.184783 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:22:03.184783 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:22:03.184783 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:22:03.184783 master-0 kubenswrapper[7926]: I0216 21:22:03.184794 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:22:04.075737 master-0 kubenswrapper[7926]: I0216 21:22:04.075640 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"1f8a26db-5a90-4da9-9074-33256ef17100","Type":"ContainerStarted","Data":"f3ca6870e03df61b2f0b4d124dc1734d96c0b5c71852fc980d271a8f385f1958"} Feb 16 21:22:04.186099 master-0 kubenswrapper[7926]: I0216 21:22:04.186002 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:22:04.186099 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:22:04.186099 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:22:04.186099 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:22:04.186099 master-0 kubenswrapper[7926]: I0216 21:22:04.186090 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:22:05.184142 master-0 kubenswrapper[7926]: I0216 21:22:05.184071 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:22:05.184142 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:22:05.184142 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:22:05.184142 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:22:05.184142 master-0 kubenswrapper[7926]: I0216 21:22:05.184127 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:22:06.185455 master-0 kubenswrapper[7926]: I0216 21:22:06.185301 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:22:06.185455 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:22:06.185455 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:22:06.185455 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:22:06.186834 master-0 kubenswrapper[7926]: I0216 21:22:06.185469 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:22:07.184835 master-0 kubenswrapper[7926]: I0216 21:22:07.184785 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:22:07.184835 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 
21:22:07.184835 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:22:07.184835 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:22:07.185237 master-0 kubenswrapper[7926]: I0216 21:22:07.184860 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:22:07.739223 master-0 kubenswrapper[7926]: I0216 21:22:07.739172 7926 scope.go:117] "RemoveContainer" containerID="cddc9c1d447dc5a0250ef24bddae48c93c58b480b6bca11a2ff7438d4148bf8f" Feb 16 21:22:07.740295 master-0 kubenswrapper[7926]: E0216 21:22:07.740225 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:22:08.185110 master-0 kubenswrapper[7926]: I0216 21:22:08.185012 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:22:08.185110 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:22:08.185110 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:22:08.185110 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:22:08.185557 master-0 kubenswrapper[7926]: I0216 21:22:08.185128 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Feb 16 21:22:09.184783 master-0 kubenswrapper[7926]: I0216 21:22:09.184704 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:22:09.184783 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:22:09.184783 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:22:09.184783 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:22:09.185441 master-0 kubenswrapper[7926]: I0216 21:22:09.184802 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:22:10.185881 master-0 kubenswrapper[7926]: I0216 21:22:10.185793 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:22:10.185881 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:22:10.185881 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:22:10.185881 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:22:10.186412 master-0 kubenswrapper[7926]: I0216 21:22:10.185895 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:22:11.185162 master-0 kubenswrapper[7926]: I0216 21:22:11.185080 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:22:11.185162 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:22:11.185162 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:22:11.185162 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:22:11.185605 master-0 kubenswrapper[7926]: I0216 21:22:11.185174 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:22:12.185924 master-0 kubenswrapper[7926]: I0216 21:22:12.185827 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:22:12.185924 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:22:12.185924 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:22:12.185924 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:22:12.187286 master-0 kubenswrapper[7926]: I0216 21:22:12.185923 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:22:13.185677 master-0 kubenswrapper[7926]: I0216 21:22:13.185569 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:22:13.185677 master-0 kubenswrapper[7926]: 
[-]has-synced failed: reason withheld Feb 16 21:22:13.185677 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:22:13.185677 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:22:13.186439 master-0 kubenswrapper[7926]: I0216 21:22:13.185713 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:22:14.185175 master-0 kubenswrapper[7926]: I0216 21:22:14.185062 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:22:14.185175 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:22:14.185175 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:22:14.185175 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:22:14.185175 master-0 kubenswrapper[7926]: I0216 21:22:14.185154 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:22:15.185457 master-0 kubenswrapper[7926]: I0216 21:22:15.185297 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:22:15.185457 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:22:15.185457 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:22:15.185457 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:22:15.185457 master-0 
kubenswrapper[7926]: I0216 21:22:15.185437 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:22:16.184628 master-0 kubenswrapper[7926]: I0216 21:22:16.184528 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:22:16.184628 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:22:16.184628 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:22:16.184628 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:22:16.184628 master-0 kubenswrapper[7926]: I0216 21:22:16.184584 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:22:17.185231 master-0 kubenswrapper[7926]: I0216 21:22:17.185103 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:22:17.185231 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:22:17.185231 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:22:17.185231 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:22:17.185231 master-0 kubenswrapper[7926]: I0216 21:22:17.185222 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:22:18.186070 master-0 kubenswrapper[7926]: I0216 21:22:18.185959 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:22:18.186070 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:22:18.186070 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:22:18.186070 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:22:18.186070 master-0 kubenswrapper[7926]: I0216 21:22:18.186078 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:22:19.185301 master-0 kubenswrapper[7926]: I0216 21:22:19.185126 7926 patch_prober.go:28] interesting pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:22:19.185301 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:22:19.185301 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:22:19.185301 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:22:19.185584 master-0 kubenswrapper[7926]: I0216 21:22:19.185323 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:22:20.185308 master-0 kubenswrapper[7926]: I0216 21:22:20.185216 7926 patch_prober.go:28] interesting 
pod/router-default-864ddd5f56-z4bnk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 21:22:20.185308 master-0 kubenswrapper[7926]: [-]has-synced failed: reason withheld Feb 16 21:22:20.185308 master-0 kubenswrapper[7926]: [+]process-running ok Feb 16 21:22:20.185308 master-0 kubenswrapper[7926]: healthz check failed Feb 16 21:22:20.186770 master-0 kubenswrapper[7926]: I0216 21:22:20.185308 7926 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 21:22:20.186770 master-0 kubenswrapper[7926]: I0216 21:22:20.185385 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:22:20.187358 master-0 kubenswrapper[7926]: I0216 21:22:20.187165 7926 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"7eb9d606c0ba4432a3c104c5bb2952f3efa3dee4e29f1c0d81a5b0db607ceac8"} pod="openshift-ingress/router-default-864ddd5f56-z4bnk" containerMessage="Container router failed startup probe, will be restarted" Feb 16 21:22:20.187589 master-0 kubenswrapper[7926]: I0216 21:22:20.187533 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" podUID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerName="router" containerID="cri-o://7eb9d606c0ba4432a3c104c5bb2952f3efa3dee4e29f1c0d81a5b0db607ceac8" gracePeriod=3600 Feb 16 21:22:21.458835 master-0 kubenswrapper[7926]: I0216 21:22:21.458755 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" 
podStartSLOduration=19.458733823 podStartE2EDuration="19.458733823s" podCreationTimestamp="2026-02-16 21:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:22:04.098898672 +0000 UTC m=+1495.733798972" watchObservedRunningTime="2026-02-16 21:22:21.458733823 +0000 UTC m=+1513.093634133" Feb 16 21:22:21.474720 master-0 kubenswrapper[7926]: I0216 21:22:21.474623 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-k8h7h"] Feb 16 21:22:21.476773 master-0 kubenswrapper[7926]: I0216 21:22:21.476735 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" Feb 16 21:22:21.480350 master-0 kubenswrapper[7926]: I0216 21:22:21.480301 7926 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Feb 16 21:22:21.480847 master-0 kubenswrapper[7926]: I0216 21:22:21.480821 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-wmp7w" Feb 16 21:22:21.543036 master-0 kubenswrapper[7926]: I0216 21:22:21.542963 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-ready\") pod \"cni-sysctl-allowlist-ds-k8h7h\" (UID: \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" Feb 16 21:22:21.543274 master-0 kubenswrapper[7926]: I0216 21:22:21.543090 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-k8h7h\" (UID: \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" Feb 16 
21:22:21.543274 master-0 kubenswrapper[7926]: I0216 21:22:21.543134 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgjlj\" (UniqueName: \"kubernetes.io/projected/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-kube-api-access-dgjlj\") pod \"cni-sysctl-allowlist-ds-k8h7h\" (UID: \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" Feb 16 21:22:21.543345 master-0 kubenswrapper[7926]: I0216 21:22:21.543279 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-k8h7h\" (UID: \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" Feb 16 21:22:21.644476 master-0 kubenswrapper[7926]: I0216 21:22:21.644394 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-ready\") pod \"cni-sysctl-allowlist-ds-k8h7h\" (UID: \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" Feb 16 21:22:21.644708 master-0 kubenswrapper[7926]: I0216 21:22:21.644548 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-k8h7h\" (UID: \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" Feb 16 21:22:21.644708 master-0 kubenswrapper[7926]: I0216 21:22:21.644601 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgjlj\" (UniqueName: \"kubernetes.io/projected/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-kube-api-access-dgjlj\") pod 
\"cni-sysctl-allowlist-ds-k8h7h\" (UID: \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" Feb 16 21:22:21.644708 master-0 kubenswrapper[7926]: I0216 21:22:21.644635 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-k8h7h\" (UID: \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" Feb 16 21:22:21.644847 master-0 kubenswrapper[7926]: I0216 21:22:21.644769 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-k8h7h\" (UID: \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" Feb 16 21:22:21.644954 master-0 kubenswrapper[7926]: I0216 21:22:21.644914 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-ready\") pod \"cni-sysctl-allowlist-ds-k8h7h\" (UID: \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" Feb 16 21:22:21.645324 master-0 kubenswrapper[7926]: I0216 21:22:21.645294 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-k8h7h\" (UID: \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" Feb 16 21:22:21.660229 master-0 kubenswrapper[7926]: I0216 21:22:21.660180 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgjlj\" (UniqueName: 
\"kubernetes.io/projected/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-kube-api-access-dgjlj\") pod \"cni-sysctl-allowlist-ds-k8h7h\" (UID: \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" Feb 16 21:22:21.738965 master-0 kubenswrapper[7926]: I0216 21:22:21.738858 7926 scope.go:117] "RemoveContainer" containerID="cddc9c1d447dc5a0250ef24bddae48c93c58b480b6bca11a2ff7438d4148bf8f" Feb 16 21:22:21.739140 master-0 kubenswrapper[7926]: E0216 21:22:21.739093 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:22:21.799162 master-0 kubenswrapper[7926]: I0216 21:22:21.799090 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" Feb 16 21:22:22.231186 master-0 kubenswrapper[7926]: I0216 21:22:22.231105 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" event={"ID":"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5","Type":"ContainerStarted","Data":"3f86128dc7a80bf0962766ba7f7979e170ef26e4e83c8289ef27c44072e56335"} Feb 16 21:22:22.231186 master-0 kubenswrapper[7926]: I0216 21:22:22.231162 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" event={"ID":"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5","Type":"ContainerStarted","Data":"95bb21eb958017bb1c79698309b67c3682dcd7011e9d5aacdb4e7366e93203b8"} Feb 16 21:22:22.231543 master-0 kubenswrapper[7926]: I0216 21:22:22.231356 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" Feb 16 21:22:22.247216 master-0 kubenswrapper[7926]: I0216 21:22:22.247114 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" podStartSLOduration=1.247061545 podStartE2EDuration="1.247061545s" podCreationTimestamp="2026-02-16 21:22:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:22:22.243601121 +0000 UTC m=+1513.878501461" watchObservedRunningTime="2026-02-16 21:22:22.247061545 +0000 UTC m=+1513.881961845" Feb 16 21:22:23.260015 master-0 kubenswrapper[7926]: I0216 21:22:23.259958 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" Feb 16 21:22:23.471372 master-0 kubenswrapper[7926]: I0216 21:22:23.471294 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-k8h7h"] Feb 16 21:22:25.252466 master-0 kubenswrapper[7926]: I0216 21:22:25.252246 
7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" podUID="3e3ccb9a-4a5d-4a04-8334-b1e303b215a5" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://3f86128dc7a80bf0962766ba7f7979e170ef26e4e83c8289ef27c44072e56335" gracePeriod=30 Feb 16 21:22:30.940142 master-0 kubenswrapper[7926]: I0216 21:22:30.940070 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-6d678b8d67-shtrw"] Feb 16 21:22:30.941193 master-0 kubenswrapper[7926]: I0216 21:22:30.941141 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-shtrw" Feb 16 21:22:30.943589 master-0 kubenswrapper[7926]: I0216 21:22:30.943558 7926 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-rmw54" Feb 16 21:22:30.953765 master-0 kubenswrapper[7926]: I0216 21:22:30.953721 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-6d678b8d67-shtrw"] Feb 16 21:22:30.995369 master-0 kubenswrapper[7926]: I0216 21:22:30.995307 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22pl9\" (UniqueName: \"kubernetes.io/projected/8d56b871-a53a-4928-8967-a33ea9dcec2a-kube-api-access-22pl9\") pod \"multus-admission-controller-6d678b8d67-shtrw\" (UID: \"8d56b871-a53a-4928-8967-a33ea9dcec2a\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-shtrw" Feb 16 21:22:30.995608 master-0 kubenswrapper[7926]: I0216 21:22:30.995388 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8d56b871-a53a-4928-8967-a33ea9dcec2a-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-shtrw\" (UID: \"8d56b871-a53a-4928-8967-a33ea9dcec2a\") " 
pod="openshift-multus/multus-admission-controller-6d678b8d67-shtrw" Feb 16 21:22:31.096756 master-0 kubenswrapper[7926]: I0216 21:22:31.096695 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22pl9\" (UniqueName: \"kubernetes.io/projected/8d56b871-a53a-4928-8967-a33ea9dcec2a-kube-api-access-22pl9\") pod \"multus-admission-controller-6d678b8d67-shtrw\" (UID: \"8d56b871-a53a-4928-8967-a33ea9dcec2a\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-shtrw" Feb 16 21:22:31.096970 master-0 kubenswrapper[7926]: I0216 21:22:31.096791 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8d56b871-a53a-4928-8967-a33ea9dcec2a-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-shtrw\" (UID: \"8d56b871-a53a-4928-8967-a33ea9dcec2a\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-shtrw" Feb 16 21:22:31.100319 master-0 kubenswrapper[7926]: I0216 21:22:31.100281 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8d56b871-a53a-4928-8967-a33ea9dcec2a-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-shtrw\" (UID: \"8d56b871-a53a-4928-8967-a33ea9dcec2a\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-shtrw" Feb 16 21:22:31.110761 master-0 kubenswrapper[7926]: I0216 21:22:31.110716 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22pl9\" (UniqueName: \"kubernetes.io/projected/8d56b871-a53a-4928-8967-a33ea9dcec2a-kube-api-access-22pl9\") pod \"multus-admission-controller-6d678b8d67-shtrw\" (UID: \"8d56b871-a53a-4928-8967-a33ea9dcec2a\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-shtrw" Feb 16 21:22:31.298929 master-0 kubenswrapper[7926]: I0216 21:22:31.298880 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-shtrw" Feb 16 21:22:31.759187 master-0 kubenswrapper[7926]: I0216 21:22:31.759097 7926 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-6d678b8d67-shtrw"] Feb 16 21:22:31.763191 master-0 kubenswrapper[7926]: W0216 21:22:31.763119 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d56b871_a53a_4928_8967_a33ea9dcec2a.slice/crio-017b12ba663cae17ffc7b3e8cac380511c7277e4c495d7f5a091fa50febd2724 WatchSource:0}: Error finding container 017b12ba663cae17ffc7b3e8cac380511c7277e4c495d7f5a091fa50febd2724: Status 404 returned error can't find the container with id 017b12ba663cae17ffc7b3e8cac380511c7277e4c495d7f5a091fa50febd2724 Feb 16 21:22:31.802130 master-0 kubenswrapper[7926]: E0216 21:22:31.802065 7926 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3f86128dc7a80bf0962766ba7f7979e170ef26e4e83c8289ef27c44072e56335" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 21:22:31.803808 master-0 kubenswrapper[7926]: E0216 21:22:31.803732 7926 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3f86128dc7a80bf0962766ba7f7979e170ef26e4e83c8289ef27c44072e56335" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 21:22:31.808166 master-0 kubenswrapper[7926]: E0216 21:22:31.808085 7926 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3f86128dc7a80bf0962766ba7f7979e170ef26e4e83c8289ef27c44072e56335" 
cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 21:22:31.808166 master-0 kubenswrapper[7926]: E0216 21:22:31.808132 7926 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" podUID="3e3ccb9a-4a5d-4a04-8334-b1e303b215a5" containerName="kube-multus-additional-cni-plugins" Feb 16 21:22:32.304877 master-0 kubenswrapper[7926]: I0216 21:22:32.304823 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-shtrw" event={"ID":"8d56b871-a53a-4928-8967-a33ea9dcec2a","Type":"ContainerStarted","Data":"7d4587438925e95ef133aa70ffd5cc5c95285a91547249dafb4e5e010a318487"} Feb 16 21:22:32.306347 master-0 kubenswrapper[7926]: I0216 21:22:32.304889 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-shtrw" event={"ID":"8d56b871-a53a-4928-8967-a33ea9dcec2a","Type":"ContainerStarted","Data":"095da5d3f3a8d574558c5e1ced05aba1aaa62dc2ea675395d13a40ca2c30a60c"} Feb 16 21:22:32.306347 master-0 kubenswrapper[7926]: I0216 21:22:32.304927 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-shtrw" event={"ID":"8d56b871-a53a-4928-8967-a33ea9dcec2a","Type":"ContainerStarted","Data":"017b12ba663cae17ffc7b3e8cac380511c7277e4c495d7f5a091fa50febd2724"} Feb 16 21:22:32.338768 master-0 kubenswrapper[7926]: I0216 21:22:32.338669 7926 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-6d678b8d67-shtrw" podStartSLOduration=2.3386261680000002 podStartE2EDuration="2.338626168s" podCreationTimestamp="2026-02-16 21:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 
21:22:32.333472858 +0000 UTC m=+1523.968373198" watchObservedRunningTime="2026-02-16 21:22:32.338626168 +0000 UTC m=+1523.973526478" Feb 16 21:22:32.377785 master-0 kubenswrapper[7926]: I0216 21:22:32.373805 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-7c64d55f8-z46jt"] Feb 16 21:22:32.377785 master-0 kubenswrapper[7926]: I0216 21:22:32.374017 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" podUID="b27de289-c0f9-47ff-aac6-15b7bc1b178a" containerName="multus-admission-controller" containerID="cri-o://b6f9bd149e55332060a93dd1c773c869219679c9d52274540dd91f495e731934" gracePeriod=30 Feb 16 21:22:32.377785 master-0 kubenswrapper[7926]: I0216 21:22:32.374354 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" podUID="b27de289-c0f9-47ff-aac6-15b7bc1b178a" containerName="kube-rbac-proxy" containerID="cri-o://7e2db6d71a3ac7629c39a027759be84deb42e9801284908e0ecc941bc1381254" gracePeriod=30 Feb 16 21:22:32.738231 master-0 kubenswrapper[7926]: I0216 21:22:32.738159 7926 scope.go:117] "RemoveContainer" containerID="cddc9c1d447dc5a0250ef24bddae48c93c58b480b6bca11a2ff7438d4148bf8f" Feb 16 21:22:32.738459 master-0 kubenswrapper[7926]: E0216 21:22:32.738428 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:22:33.009751 master-0 kubenswrapper[7926]: I0216 21:22:33.009696 7926 kubelet.go:2431] "SyncLoop REMOVE" source="file" 
pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 16 21:22:33.009992 master-0 kubenswrapper[7926]: I0216 21:22:33.009948 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" containerID="cri-o://6dfa6b8d2b84acd49a7559619cbb2034fe2294937bd8d4e0f86679d02bd2078a" gracePeriod=30 Feb 16 21:22:33.010084 master-0 kubenswrapper[7926]: I0216 21:22:33.010027 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" containerID="cri-o://6ae1597534c852a1aae5585dadba4c16b6d817d6984c35ca98940b0dfe1fcd77" gracePeriod=30 Feb 16 21:22:33.011449 master-0 kubenswrapper[7926]: I0216 21:22:33.011343 7926 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 16 21:22:33.011626 master-0 kubenswrapper[7926]: E0216 21:22:33.011598 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.011626 master-0 kubenswrapper[7926]: I0216 21:22:33.011618 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.011626 master-0 kubenswrapper[7926]: E0216 21:22:33.011628 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.011797 master-0 kubenswrapper[7926]: I0216 21:22:33.011636 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.011797 master-0 kubenswrapper[7926]: E0216 21:22:33.011644 7926 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 21:22:33.011797 master-0 kubenswrapper[7926]: I0216 21:22:33.011665 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 21:22:33.011797 master-0 kubenswrapper[7926]: E0216 21:22:33.011672 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 21:22:33.011797 master-0 kubenswrapper[7926]: I0216 21:22:33.011677 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 21:22:33.011797 master-0 kubenswrapper[7926]: E0216 21:22:33.011688 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 21:22:33.011797 master-0 kubenswrapper[7926]: I0216 21:22:33.011695 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 21:22:33.011797 master-0 kubenswrapper[7926]: E0216 21:22:33.011719 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.011797 master-0 kubenswrapper[7926]: I0216 21:22:33.011725 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.011797 master-0 kubenswrapper[7926]: E0216 21:22:33.011733 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.011797 master-0 kubenswrapper[7926]: I0216 21:22:33.011739 7926 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.011797 master-0 kubenswrapper[7926]: E0216 21:22:33.011750 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.011797 master-0 kubenswrapper[7926]: I0216 21:22:33.011756 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.011797 master-0 kubenswrapper[7926]: E0216 21:22:33.011764 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.011797 master-0 kubenswrapper[7926]: I0216 21:22:33.011771 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.012401 master-0 kubenswrapper[7926]: I0216 21:22:33.011948 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 21:22:33.012401 master-0 kubenswrapper[7926]: I0216 21:22:33.011961 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 21:22:33.012401 master-0 kubenswrapper[7926]: I0216 21:22:33.011970 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.012401 master-0 kubenswrapper[7926]: I0216 21:22:33.011978 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.012401 master-0 kubenswrapper[7926]: I0216 21:22:33.011990 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" 
containerName="kube-controller-manager" Feb 16 21:22:33.012401 master-0 kubenswrapper[7926]: I0216 21:22:33.011999 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.012401 master-0 kubenswrapper[7926]: I0216 21:22:33.012005 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.012401 master-0 kubenswrapper[7926]: I0216 21:22:33.012018 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 21:22:33.012401 master-0 kubenswrapper[7926]: I0216 21:22:33.012028 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.012401 master-0 kubenswrapper[7926]: E0216 21:22:33.012124 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.012401 master-0 kubenswrapper[7926]: I0216 21:22:33.012131 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.012401 master-0 kubenswrapper[7926]: E0216 21:22:33.012142 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.012401 master-0 kubenswrapper[7926]: I0216 21:22:33.012148 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.012401 master-0 kubenswrapper[7926]: E0216 21:22:33.012213 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" 
Feb 16 21:22:33.012401 master-0 kubenswrapper[7926]: I0216 21:22:33.012221 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 21:22:33.012401 master-0 kubenswrapper[7926]: E0216 21:22:33.012231 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 21:22:33.012401 master-0 kubenswrapper[7926]: I0216 21:22:33.012237 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 21:22:33.012401 master-0 kubenswrapper[7926]: I0216 21:22:33.012348 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.012401 master-0 kubenswrapper[7926]: I0216 21:22:33.012364 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.012401 master-0 kubenswrapper[7926]: I0216 21:22:33.012373 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.012401 master-0 kubenswrapper[7926]: I0216 21:22:33.012392 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 21:22:33.012401 master-0 kubenswrapper[7926]: I0216 21:22:33.012398 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 21:22:33.013454 master-0 kubenswrapper[7926]: E0216 21:22:33.012522 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.013454 master-0 
kubenswrapper[7926]: I0216 21:22:33.012530 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:22:33.013454 master-0 kubenswrapper[7926]: I0216 21:22:33.013419 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:22:33.124766 master-0 kubenswrapper[7926]: I0216 21:22:33.124624 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/72ee9e35c766aea904898f2e9f2ffaca-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"72ee9e35c766aea904898f2e9f2ffaca\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:22:33.124766 master-0 kubenswrapper[7926]: I0216 21:22:33.124792 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/72ee9e35c766aea904898f2e9f2ffaca-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"72ee9e35c766aea904898f2e9f2ffaca\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:22:33.129986 master-0 kubenswrapper[7926]: I0216 21:22:33.129935 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 16 21:22:33.201621 master-0 kubenswrapper[7926]: I0216 21:22:33.201451 7926 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:22:33.226165 master-0 kubenswrapper[7926]: I0216 21:22:33.226108 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/72ee9e35c766aea904898f2e9f2ffaca-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"72ee9e35c766aea904898f2e9f2ffaca\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:22:33.226326 master-0 kubenswrapper[7926]: I0216 21:22:33.226199 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/72ee9e35c766aea904898f2e9f2ffaca-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"72ee9e35c766aea904898f2e9f2ffaca\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:22:33.226326 master-0 kubenswrapper[7926]: I0216 21:22:33.226288 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/72ee9e35c766aea904898f2e9f2ffaca-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"72ee9e35c766aea904898f2e9f2ffaca\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:22:33.226398 master-0 kubenswrapper[7926]: I0216 21:22:33.226245 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/72ee9e35c766aea904898f2e9f2ffaca-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"72ee9e35c766aea904898f2e9f2ffaca\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:22:33.227173 master-0 kubenswrapper[7926]: I0216 21:22:33.227142 7926 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="dfb5c6a6-2fb2-44d3-8744-2f73c83e1292" 
Feb 16 21:22:33.312585 master-0 kubenswrapper[7926]: I0216 21:22:33.312452 7926 generic.go:334] "Generic (PLEG): container finished" podID="b27de289-c0f9-47ff-aac6-15b7bc1b178a" containerID="7e2db6d71a3ac7629c39a027759be84deb42e9801284908e0ecc941bc1381254" exitCode=0 Feb 16 21:22:33.312585 master-0 kubenswrapper[7926]: I0216 21:22:33.312529 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" event={"ID":"b27de289-c0f9-47ff-aac6-15b7bc1b178a","Type":"ContainerDied","Data":"7e2db6d71a3ac7629c39a027759be84deb42e9801284908e0ecc941bc1381254"} Feb 16 21:22:33.316005 master-0 kubenswrapper[7926]: I0216 21:22:33.315967 7926 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="6ae1597534c852a1aae5585dadba4c16b6d817d6984c35ca98940b0dfe1fcd77" exitCode=0 Feb 16 21:22:33.316084 master-0 kubenswrapper[7926]: I0216 21:22:33.315996 7926 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="6dfa6b8d2b84acd49a7559619cbb2034fe2294937bd8d4e0f86679d02bd2078a" exitCode=0 Feb 16 21:22:33.316084 master-0 kubenswrapper[7926]: I0216 21:22:33.316036 7926 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 21:22:33.316084 master-0 kubenswrapper[7926]: I0216 21:22:33.316076 7926 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76dbaddee4470107b39590128f61476392182af8f7359d5ef8d2efc6c99ae59e" Feb 16 21:22:33.316177 master-0 kubenswrapper[7926]: I0216 21:22:33.316097 7926 scope.go:117] "RemoveContainer" containerID="a591b9fa8d74ad75ec2421d6c1738c199e947e0e55c24abea8bf7fc61016c406" Feb 16 21:22:33.318106 master-0 kubenswrapper[7926]: I0216 21:22:33.318068 7926 generic.go:334] "Generic (PLEG): container finished" podID="0cecc93e-bb0e-47da-903f-d0b63cce2b0d" containerID="8df27f209e925f58d0b4923f79cdb9bec01f45d38cbc22684566e7e609148bab" exitCode=0 Feb 16 21:22:33.318743 master-0 kubenswrapper[7926]: I0216 21:22:33.318537 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"0cecc93e-bb0e-47da-903f-d0b63cce2b0d","Type":"ContainerDied","Data":"8df27f209e925f58d0b4923f79cdb9bec01f45d38cbc22684566e7e609148bab"} Feb 16 21:22:33.327385 master-0 kubenswrapper[7926]: I0216 21:22:33.327338 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " Feb 16 21:22:33.327510 master-0 kubenswrapper[7926]: I0216 21:22:33.327473 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " Feb 16 21:22:33.327510 master-0 kubenswrapper[7926]: I0216 21:22:33.327497 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" 
(UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " Feb 16 21:22:33.327603 master-0 kubenswrapper[7926]: I0216 21:22:33.327578 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " Feb 16 21:22:33.327641 master-0 kubenswrapper[7926]: I0216 21:22:33.327591 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config" (OuterVolumeSpecName: "config") pod "80420f2e7c3cdda71f7d0d6ccbe6f9f3" (UID: "80420f2e7c3cdda71f7d0d6ccbe6f9f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:22:33.327641 master-0 kubenswrapper[7926]: I0216 21:22:33.327617 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " Feb 16 21:22:33.327720 master-0 kubenswrapper[7926]: I0216 21:22:33.327642 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs" (OuterVolumeSpecName: "logs") pod "80420f2e7c3cdda71f7d0d6ccbe6f9f3" (UID: "80420f2e7c3cdda71f7d0d6ccbe6f9f3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:22:33.327720 master-0 kubenswrapper[7926]: I0216 21:22:33.327638 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "80420f2e7c3cdda71f7d0d6ccbe6f9f3" (UID: "80420f2e7c3cdda71f7d0d6ccbe6f9f3"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:22:33.327779 master-0 kubenswrapper[7926]: I0216 21:22:33.327721 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets" (OuterVolumeSpecName: "secrets") pod "80420f2e7c3cdda71f7d0d6ccbe6f9f3" (UID: "80420f2e7c3cdda71f7d0d6ccbe6f9f3"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:22:33.327820 master-0 kubenswrapper[7926]: I0216 21:22:33.327781 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "80420f2e7c3cdda71f7d0d6ccbe6f9f3" (UID: "80420f2e7c3cdda71f7d0d6ccbe6f9f3"). InnerVolumeSpecName "ssl-certs-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:22:33.327984 master-0 kubenswrapper[7926]: I0216 21:22:33.327953 7926 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Feb 16 21:22:33.327984 master-0 kubenswrapper[7926]: I0216 21:22:33.327980 7926 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") on node \"master-0\" DevicePath \"\"" Feb 16 21:22:33.328055 master-0 kubenswrapper[7926]: I0216 21:22:33.327994 7926 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:22:33.328055 master-0 kubenswrapper[7926]: I0216 21:22:33.328006 7926 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Feb 16 21:22:33.328055 master-0 kubenswrapper[7926]: I0216 21:22:33.328018 7926 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:22:33.331953 master-0 kubenswrapper[7926]: I0216 21:22:33.331921 7926 scope.go:117] "RemoveContainer" containerID="004bfc046616ade5acce3345f914946a2b1075ac66e815294a04a1ccd9e0b9a2" Feb 16 21:22:33.429183 master-0 kubenswrapper[7926]: I0216 21:22:33.428035 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:22:34.342100 master-0 kubenswrapper[7926]: I0216 21:22:34.342069 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"72ee9e35c766aea904898f2e9f2ffaca","Type":"ContainerStarted","Data":"0a662b88d01e2a6c7840550eedccdbaad4f0955066a41fc813a25bc7970213e5"} Feb 16 21:22:34.342577 master-0 kubenswrapper[7926]: I0216 21:22:34.342557 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"72ee9e35c766aea904898f2e9f2ffaca","Type":"ContainerStarted","Data":"bd383c7f3493b77aa39a71f0c59c6ca2af1cb84a3dcd17da7deffd0c9f13279e"} Feb 16 21:22:34.342686 master-0 kubenswrapper[7926]: I0216 21:22:34.342670 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"72ee9e35c766aea904898f2e9f2ffaca","Type":"ContainerStarted","Data":"93e4248b433133e3c151d7b3b51df468e545cf503f72fd69fa418801f9123776"} Feb 16 21:22:34.342777 master-0 kubenswrapper[7926]: I0216 21:22:34.342762 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"72ee9e35c766aea904898f2e9f2ffaca","Type":"ContainerStarted","Data":"bdfde90f893f521a930ff809d7a19e8600359a70b3e19bbbef0735c23b65d26d"} Feb 16 21:22:34.342863 master-0 kubenswrapper[7926]: I0216 21:22:34.342847 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"72ee9e35c766aea904898f2e9f2ffaca","Type":"ContainerStarted","Data":"18445cef4b6797ad657a965be9f13f99564dcc29dc7e932a9b359ffe1a1aa1ce"} Feb 16 21:22:34.369911 master-0 kubenswrapper[7926]: I0216 21:22:34.369847 7926 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=1.369831193 podStartE2EDuration="1.369831193s" podCreationTimestamp="2026-02-16 21:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:22:34.362861474 +0000 UTC m=+1525.997761794" watchObservedRunningTime="2026-02-16 21:22:34.369831193 +0000 UTC m=+1526.004731493" Feb 16 21:22:34.699887 master-0 kubenswrapper[7926]: I0216 21:22:34.699839 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 21:22:34.745456 master-0 kubenswrapper[7926]: I0216 21:22:34.745416 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cecc93e-bb0e-47da-903f-d0b63cce2b0d-kubelet-dir\") pod \"0cecc93e-bb0e-47da-903f-d0b63cce2b0d\" (UID: \"0cecc93e-bb0e-47da-903f-d0b63cce2b0d\") " Feb 16 21:22:34.745721 master-0 kubenswrapper[7926]: I0216 21:22:34.745526 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cecc93e-bb0e-47da-903f-d0b63cce2b0d-kube-api-access\") pod \"0cecc93e-bb0e-47da-903f-d0b63cce2b0d\" (UID: \"0cecc93e-bb0e-47da-903f-d0b63cce2b0d\") " Feb 16 21:22:34.745721 master-0 kubenswrapper[7926]: I0216 21:22:34.745592 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0cecc93e-bb0e-47da-903f-d0b63cce2b0d-var-lock\") pod \"0cecc93e-bb0e-47da-903f-d0b63cce2b0d\" (UID: \"0cecc93e-bb0e-47da-903f-d0b63cce2b0d\") " Feb 16 21:22:34.745894 master-0 kubenswrapper[7926]: I0216 21:22:34.745857 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cecc93e-bb0e-47da-903f-d0b63cce2b0d-kubelet-dir" 
(OuterVolumeSpecName: "kubelet-dir") pod "0cecc93e-bb0e-47da-903f-d0b63cce2b0d" (UID: "0cecc93e-bb0e-47da-903f-d0b63cce2b0d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:22:34.745988 master-0 kubenswrapper[7926]: I0216 21:22:34.745912 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cecc93e-bb0e-47da-903f-d0b63cce2b0d-var-lock" (OuterVolumeSpecName: "var-lock") pod "0cecc93e-bb0e-47da-903f-d0b63cce2b0d" (UID: "0cecc93e-bb0e-47da-903f-d0b63cce2b0d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:22:34.748382 master-0 kubenswrapper[7926]: I0216 21:22:34.748329 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cecc93e-bb0e-47da-903f-d0b63cce2b0d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0cecc93e-bb0e-47da-903f-d0b63cce2b0d" (UID: "0cecc93e-bb0e-47da-903f-d0b63cce2b0d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:22:34.748786 master-0 kubenswrapper[7926]: I0216 21:22:34.748728 7926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" path="/var/lib/kubelet/pods/80420f2e7c3cdda71f7d0d6ccbe6f9f3/volumes" Feb 16 21:22:34.749246 master-0 kubenswrapper[7926]: I0216 21:22:34.749224 7926 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Feb 16 21:22:34.781568 master-0 kubenswrapper[7926]: I0216 21:22:34.780549 7926 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 16 21:22:34.781568 master-0 kubenswrapper[7926]: I0216 21:22:34.780603 7926 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="dfb5c6a6-2fb2-44d3-8744-2f73c83e1292" Feb 16 21:22:34.784550 master-0 kubenswrapper[7926]: I0216 21:22:34.784514 7926 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 16 21:22:34.784624 master-0 kubenswrapper[7926]: I0216 21:22:34.784547 7926 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="dfb5c6a6-2fb2-44d3-8744-2f73c83e1292" Feb 16 21:22:34.847348 master-0 kubenswrapper[7926]: I0216 21:22:34.847277 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0cecc93e-bb0e-47da-903f-d0b63cce2b0d-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 21:22:34.847348 master-0 kubenswrapper[7926]: I0216 21:22:34.847325 7926 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0cecc93e-bb0e-47da-903f-d0b63cce2b0d-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 21:22:34.847348 
master-0 kubenswrapper[7926]: I0216 21:22:34.847339 7926 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cecc93e-bb0e-47da-903f-d0b63cce2b0d-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:22:35.350509 master-0 kubenswrapper[7926]: I0216 21:22:35.350419 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"0cecc93e-bb0e-47da-903f-d0b63cce2b0d","Type":"ContainerDied","Data":"5957534d0a5a6e1efe8a36af49bc53825aaeb991657eddb8f9392f7c762a0cd8"} Feb 16 21:22:35.350509 master-0 kubenswrapper[7926]: I0216 21:22:35.350462 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 21:22:35.350509 master-0 kubenswrapper[7926]: I0216 21:22:35.350476 7926 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5957534d0a5a6e1efe8a36af49bc53825aaeb991657eddb8f9392f7c762a0cd8" Feb 16 21:22:40.993564 master-0 kubenswrapper[7926]: I0216 21:22:40.993443 7926 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 16 21:22:40.994553 master-0 kubenswrapper[7926]: E0216 21:22:40.993843 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cecc93e-bb0e-47da-903f-d0b63cce2b0d" containerName="installer" Feb 16 21:22:40.994553 master-0 kubenswrapper[7926]: I0216 21:22:40.993860 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cecc93e-bb0e-47da-903f-d0b63cce2b0d" containerName="installer" Feb 16 21:22:40.994553 master-0 kubenswrapper[7926]: I0216 21:22:40.994045 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cecc93e-bb0e-47da-903f-d0b63cce2b0d" containerName="installer" Feb 16 21:22:40.994806 master-0 kubenswrapper[7926]: I0216 21:22:40.994583 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:22:40.994806 master-0 kubenswrapper[7926]: I0216 21:22:40.994621 7926 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Feb 16 21:22:40.996232 master-0 kubenswrapper[7926]: I0216 21:22:40.996194 7926 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 16 21:22:40.996482 master-0 kubenswrapper[7926]: I0216 21:22:40.996373 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://5e7c38ffeebe9ecd58ceaa66f0e5d878c7328cfe4f821ef677aab62956457cf2" gracePeriod=15 Feb 16 21:22:40.996612 master-0 kubenswrapper[7926]: E0216 21:22:40.996594 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver" Feb 16 21:22:40.996667 master-0 kubenswrapper[7926]: I0216 21:22:40.996632 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver" Feb 16 21:22:40.996729 master-0 kubenswrapper[7926]: E0216 21:22:40.996706 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver-insecure-readyz" Feb 16 21:22:40.996729 master-0 kubenswrapper[7926]: I0216 21:22:40.996726 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver-insecure-readyz" Feb 16 21:22:40.996799 master-0 kubenswrapper[7926]: E0216 21:22:40.996738 7926 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="setup" Feb 16 21:22:40.996799 master-0 kubenswrapper[7926]: I0216 
21:22:40.996747 7926 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="setup" Feb 16 21:22:40.996960 master-0 kubenswrapper[7926]: I0216 21:22:40.996210 7926 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver" containerID="cri-o://917b8b89b52fc1ea526b8dd828bd51e4ae2f231263633fb2c2bfa2d5e4419132" gracePeriod=15 Feb 16 21:22:40.997099 master-0 kubenswrapper[7926]: I0216 21:22:40.996975 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver-insecure-readyz" Feb 16 21:22:40.997231 master-0 kubenswrapper[7926]: I0216 21:22:40.997219 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver" Feb 16 21:22:40.997296 master-0 kubenswrapper[7926]: I0216 21:22:40.997286 7926 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="setup" Feb 16 21:22:41.000397 master-0 kubenswrapper[7926]: I0216 21:22:41.000376 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:22:41.036012 master-0 kubenswrapper[7926]: I0216 21:22:41.035958 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:22:41.036215 master-0 kubenswrapper[7926]: I0216 21:22:41.036018 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:22:41.036215 master-0 kubenswrapper[7926]: I0216 21:22:41.036054 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:22:41.036215 master-0 kubenswrapper[7926]: I0216 21:22:41.036085 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:22:41.036215 master-0 kubenswrapper[7926]: I0216 21:22:41.036119 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:22:41.036215 master-0 kubenswrapper[7926]: I0216 21:22:41.036153 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:22:41.036215 master-0 kubenswrapper[7926]: I0216 21:22:41.036180 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:22:41.036215 master-0 kubenswrapper[7926]: I0216 21:22:41.036204 7926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:22:41.039125 master-0 kubenswrapper[7926]: I0216 21:22:41.039083 7926 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 16 21:22:41.044276 master-0 kubenswrapper[7926]: E0216 21:22:41.044158 7926 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: 
connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:22:41.137293 master-0 kubenswrapper[7926]: I0216 21:22:41.137213 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:22:41.137293 master-0 kubenswrapper[7926]: I0216 21:22:41.137289 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:22:41.137469 master-0 kubenswrapper[7926]: I0216 21:22:41.137321 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:22:41.137469 master-0 kubenswrapper[7926]: I0216 21:22:41.137336 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:22:41.137469 master-0 kubenswrapper[7926]: I0216 21:22:41.137380 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-resource-dir\") pod 
\"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:22:41.137469 master-0 kubenswrapper[7926]: I0216 21:22:41.137382 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:22:41.137469 master-0 kubenswrapper[7926]: I0216 21:22:41.137400 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:22:41.137469 master-0 kubenswrapper[7926]: I0216 21:22:41.137431 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:22:41.137469 master-0 kubenswrapper[7926]: I0216 21:22:41.137454 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:22:41.137800 master-0 kubenswrapper[7926]: I0216 21:22:41.137497 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-pod-resource-dir\") pod 
\"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:22:41.137800 master-0 kubenswrapper[7926]: I0216 21:22:41.137530 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:22:41.137800 master-0 kubenswrapper[7926]: I0216 21:22:41.137556 7926 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:22:41.137800 master-0 kubenswrapper[7926]: I0216 21:22:41.137673 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:22:41.137800 master-0 kubenswrapper[7926]: I0216 21:22:41.137705 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:22:41.138009 master-0 kubenswrapper[7926]: I0216 21:22:41.137796 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:22:41.138009 master-0 kubenswrapper[7926]: I0216 21:22:41.137855 7926 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:22:41.298296 master-0 kubenswrapper[7926]: I0216 21:22:41.298225 7926 patch_prober.go:28] interesting pod/bootstrap-kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.32.10:6443/readyz\": dial tcp 192.168.32.10:6443: connect: connection refused" start-of-body= Feb 16 21:22:41.298509 master-0 kubenswrapper[7926]: I0216 21:22:41.298302 7926 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.32.10:6443/readyz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:22:41.299182 master-0 kubenswrapper[7926]: E0216 21:22:41.299057 7926 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event=< Feb 16 21:22:41.299182 master-0 kubenswrapper[7926]: &Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1894d702fd04701b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.32.10:6443/readyz": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 21:22:41.299182 master-0 kubenswrapper[7926]: body: Feb 16 21:22:41.299182 master-0 kubenswrapper[7926]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 21:22:41.298280475 +0000 UTC m=+1532.933180785,LastTimestamp:2026-02-16 21:22:41.298280475 +0000 UTC m=+1532.933180785,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Feb 16 21:22:41.299182 master-0 kubenswrapper[7926]: > Feb 16 21:22:41.337932 master-0 kubenswrapper[7926]: I0216 21:22:41.337853 7926 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:22:41.345938 master-0 kubenswrapper[7926]: I0216 21:22:41.345876 7926 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:22:41.393826 master-0 kubenswrapper[7926]: W0216 21:22:41.393748 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b26dae9694224e04f0cdc3841408c63.slice/crio-6484af368276a809cf9fc113e39e94b58a7e749f404b7ad55bc0ffd6db6821c5 WatchSource:0}: Error finding container 6484af368276a809cf9fc113e39e94b58a7e749f404b7ad55bc0ffd6db6821c5: Status 404 returned error can't find the container with id 6484af368276a809cf9fc113e39e94b58a7e749f404b7ad55bc0ffd6db6821c5 Feb 16 21:22:41.395728 master-0 kubenswrapper[7926]: W0216 21:22:41.395451 7926 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode300ec3a145c1339a627607b3c84b99d.slice/crio-62b7693910cb02952d8855d0ec6b5ec30d5524abd40344dea37279d475bce731 WatchSource:0}: Error finding container 62b7693910cb02952d8855d0ec6b5ec30d5524abd40344dea37279d475bce731: Status 404 returned error can't find the container with id 62b7693910cb02952d8855d0ec6b5ec30d5524abd40344dea37279d475bce731 Feb 16 21:22:41.412281 master-0 kubenswrapper[7926]: I0216 21:22:41.412191 7926 generic.go:334] "Generic (PLEG): container finished" podID="1f8a26db-5a90-4da9-9074-33256ef17100" containerID="f3ca6870e03df61b2f0b4d124dc1734d96c0b5c71852fc980d271a8f385f1958" exitCode=0 Feb 16 21:22:41.412455 master-0 kubenswrapper[7926]: I0216 21:22:41.412306 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"1f8a26db-5a90-4da9-9074-33256ef17100","Type":"ContainerDied","Data":"f3ca6870e03df61b2f0b4d124dc1734d96c0b5c71852fc980d271a8f385f1958"} Feb 16 21:22:41.413786 master-0 kubenswrapper[7926]: I0216 21:22:41.413718 7926 status_manager.go:851] "Failed to get status for pod" podUID="1f8a26db-5a90-4da9-9074-33256ef17100" 
pod="openshift-kube-apiserver/installer-1-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:22:41.414493 master-0 kubenswrapper[7926]: I0216 21:22:41.414426 7926 status_manager.go:851] "Failed to get status for pod" podUID="5b26dae9694224e04f0cdc3841408c63" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:22:41.416345 master-0 kubenswrapper[7926]: I0216 21:22:41.416273 7926 generic.go:334] "Generic (PLEG): container finished" podID="5d1e91e5a1fed5cf7076a92d2830d36f" containerID="5e7c38ffeebe9ecd58ceaa66f0e5d878c7328cfe4f821ef677aab62956457cf2" exitCode=0 Feb 16 21:22:41.801373 master-0 kubenswrapper[7926]: E0216 21:22:41.801174 7926 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3f86128dc7a80bf0962766ba7f7979e170ef26e4e83c8289ef27c44072e56335" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 21:22:41.802986 master-0 kubenswrapper[7926]: E0216 21:22:41.802921 7926 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3f86128dc7a80bf0962766ba7f7979e170ef26e4e83c8289ef27c44072e56335" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 21:22:41.804534 master-0 kubenswrapper[7926]: E0216 21:22:41.804484 7926 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, 
stdout: , stderr: , exit code -1" containerID="3f86128dc7a80bf0962766ba7f7979e170ef26e4e83c8289ef27c44072e56335" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 21:22:41.804633 master-0 kubenswrapper[7926]: E0216 21:22:41.804539 7926 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" podUID="3e3ccb9a-4a5d-4a04-8334-b1e303b215a5" containerName="kube-multus-additional-cni-plugins" Feb 16 21:22:42.427522 master-0 kubenswrapper[7926]: I0216 21:22:42.427405 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"5b26dae9694224e04f0cdc3841408c63","Type":"ContainerStarted","Data":"1a635028f55042697d014855fe31fff8d153cd9f1c72d44b806de44a3d1bef89"} Feb 16 21:22:42.427522 master-0 kubenswrapper[7926]: I0216 21:22:42.427526 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"5b26dae9694224e04f0cdc3841408c63","Type":"ContainerStarted","Data":"6484af368276a809cf9fc113e39e94b58a7e749f404b7ad55bc0ffd6db6821c5"} Feb 16 21:22:42.429688 master-0 kubenswrapper[7926]: I0216 21:22:42.429347 7926 status_manager.go:851] "Failed to get status for pod" podUID="5b26dae9694224e04f0cdc3841408c63" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:22:42.430791 master-0 kubenswrapper[7926]: I0216 21:22:42.430698 7926 status_manager.go:851] "Failed to get status for pod" podUID="1f8a26db-5a90-4da9-9074-33256ef17100" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:22:42.431687 master-0 kubenswrapper[7926]: I0216 21:22:42.431573 7926 generic.go:334] "Generic (PLEG): container finished" podID="e300ec3a145c1339a627607b3c84b99d" containerID="8a83fac7d6d5ae1a1f48df3b9f649957515ab488499c5a4e72d3372e82e2e891" exitCode=0 Feb 16 21:22:42.431961 master-0 kubenswrapper[7926]: I0216 21:22:42.431703 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"e300ec3a145c1339a627607b3c84b99d","Type":"ContainerDied","Data":"8a83fac7d6d5ae1a1f48df3b9f649957515ab488499c5a4e72d3372e82e2e891"} Feb 16 21:22:42.431961 master-0 kubenswrapper[7926]: I0216 21:22:42.431780 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"e300ec3a145c1339a627607b3c84b99d","Type":"ContainerStarted","Data":"62b7693910cb02952d8855d0ec6b5ec30d5524abd40344dea37279d475bce731"} Feb 16 21:22:42.435746 master-0 kubenswrapper[7926]: I0216 21:22:42.435644 7926 status_manager.go:851] "Failed to get status for pod" podUID="1f8a26db-5a90-4da9-9074-33256ef17100" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:22:42.435947 master-0 kubenswrapper[7926]: E0216 21:22:42.435815 7926 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:22:42.436878 master-0 kubenswrapper[7926]: I0216 21:22:42.436642 7926 status_manager.go:851] "Failed to get 
status for pod" podUID="5b26dae9694224e04f0cdc3841408c63" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:22:42.828077 master-0 kubenswrapper[7926]: I0216 21:22:42.828041 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 21:22:42.829371 master-0 kubenswrapper[7926]: I0216 21:22:42.829339 7926 status_manager.go:851] "Failed to get status for pod" podUID="5b26dae9694224e04f0cdc3841408c63" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:22:42.830126 master-0 kubenswrapper[7926]: I0216 21:22:42.830078 7926 status_manager.go:851] "Failed to get status for pod" podUID="1f8a26db-5a90-4da9-9074-33256ef17100" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:22:42.864348 master-0 kubenswrapper[7926]: I0216 21:22:42.864272 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f8a26db-5a90-4da9-9074-33256ef17100-kubelet-dir\") pod \"1f8a26db-5a90-4da9-9074-33256ef17100\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " Feb 16 21:22:42.864532 master-0 kubenswrapper[7926]: I0216 21:22:42.864431 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/1f8a26db-5a90-4da9-9074-33256ef17100-var-lock\") pod \"1f8a26db-5a90-4da9-9074-33256ef17100\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " Feb 16 21:22:42.864532 master-0 kubenswrapper[7926]: I0216 21:22:42.864516 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access\") pod \"1f8a26db-5a90-4da9-9074-33256ef17100\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " Feb 16 21:22:42.864752 master-0 kubenswrapper[7926]: I0216 21:22:42.864716 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f8a26db-5a90-4da9-9074-33256ef17100-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1f8a26db-5a90-4da9-9074-33256ef17100" (UID: "1f8a26db-5a90-4da9-9074-33256ef17100"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:22:42.864929 master-0 kubenswrapper[7926]: I0216 21:22:42.864752 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f8a26db-5a90-4da9-9074-33256ef17100-var-lock" (OuterVolumeSpecName: "var-lock") pod "1f8a26db-5a90-4da9-9074-33256ef17100" (UID: "1f8a26db-5a90-4da9-9074-33256ef17100"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:22:42.865233 master-0 kubenswrapper[7926]: I0216 21:22:42.865215 7926 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1f8a26db-5a90-4da9-9074-33256ef17100-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 21:22:42.865297 master-0 kubenswrapper[7926]: I0216 21:22:42.865285 7926 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f8a26db-5a90-4da9-9074-33256ef17100-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:22:42.867823 master-0 kubenswrapper[7926]: I0216 21:22:42.867707 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1f8a26db-5a90-4da9-9074-33256ef17100" (UID: "1f8a26db-5a90-4da9-9074-33256ef17100"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:22:42.966241 master-0 kubenswrapper[7926]: I0216 21:22:42.966200 7926 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 21:22:43.431351 master-0 kubenswrapper[7926]: I0216 21:22:43.430886 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:22:43.431351 master-0 kubenswrapper[7926]: I0216 21:22:43.431052 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:22:43.431351 master-0 kubenswrapper[7926]: I0216 21:22:43.431245 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:22:43.431351 master-0 kubenswrapper[7926]: I0216 21:22:43.431261 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:22:43.436818 master-0 kubenswrapper[7926]: I0216 21:22:43.436789 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:22:43.440137 master-0 kubenswrapper[7926]: I0216 21:22:43.440112 7926 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:22:43.442146 master-0 kubenswrapper[7926]: I0216 21:22:43.442112 7926 generic.go:334] "Generic (PLEG): container finished" podID="5d1e91e5a1fed5cf7076a92d2830d36f" containerID="917b8b89b52fc1ea526b8dd828bd51e4ae2f231263633fb2c2bfa2d5e4419132" exitCode=0 Feb 16 21:22:43.446007 master-0 kubenswrapper[7926]: I0216 
21:22:43.445983 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"1f8a26db-5a90-4da9-9074-33256ef17100","Type":"ContainerDied","Data":"84e6aa889c12b8f7b2d22b8b4cf46eee861623c6ee8d3fefb323875fd5efaa27"} Feb 16 21:22:43.446083 master-0 kubenswrapper[7926]: I0216 21:22:43.446011 7926 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84e6aa889c12b8f7b2d22b8b4cf46eee861623c6ee8d3fefb323875fd5efaa27" Feb 16 21:22:43.446083 master-0 kubenswrapper[7926]: I0216 21:22:43.446054 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 21:22:43.456488 master-0 kubenswrapper[7926]: I0216 21:22:43.456337 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"e300ec3a145c1339a627607b3c84b99d","Type":"ContainerStarted","Data":"fa4ce6271b82f17286a47605f4c5e94255ab02a39e6bf3a19833f194eb3c8cf9"} Feb 16 21:22:43.456488 master-0 kubenswrapper[7926]: I0216 21:22:43.456383 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"e300ec3a145c1339a627607b3c84b99d","Type":"ContainerStarted","Data":"8b155d07f9276ca9dee1a2c069bd169ef79dcdd4f2443697c8d7415636c8e58c"} Feb 16 21:22:43.456488 master-0 kubenswrapper[7926]: I0216 21:22:43.456398 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"e300ec3a145c1339a627607b3c84b99d","Type":"ContainerStarted","Data":"e606b2dabd52c10f2beae5590e83886f4cb1a2570803dbd7c5fe0c5d33fc926e"} Feb 16 21:22:43.460175 master-0 kubenswrapper[7926]: I0216 21:22:43.460137 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:22:43.460688 master-0 
kubenswrapper[7926]: I0216 21:22:43.460607 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:22:43.795208 master-0 kubenswrapper[7926]: I0216 21:22:43.795165 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 21:22:43.893102 master-0 kubenswrapper[7926]: I0216 21:22:43.893032 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"5d1e91e5a1fed5cf7076a92d2830d36f\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " Feb 16 21:22:43.893332 master-0 kubenswrapper[7926]: I0216 21:22:43.893128 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"5d1e91e5a1fed5cf7076a92d2830d36f\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " Feb 16 21:22:43.893332 master-0 kubenswrapper[7926]: I0216 21:22:43.893171 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"5d1e91e5a1fed5cf7076a92d2830d36f\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " Feb 16 21:22:43.893332 master-0 kubenswrapper[7926]: I0216 21:22:43.893181 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets" (OuterVolumeSpecName: "secrets") pod "5d1e91e5a1fed5cf7076a92d2830d36f" (UID: "5d1e91e5a1fed5cf7076a92d2830d36f"). InnerVolumeSpecName "secrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:22:43.893332 master-0 kubenswrapper[7926]: I0216 21:22:43.893193 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"5d1e91e5a1fed5cf7076a92d2830d36f\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " Feb 16 21:22:43.893332 master-0 kubenswrapper[7926]: I0216 21:22:43.893246 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "5d1e91e5a1fed5cf7076a92d2830d36f" (UID: "5d1e91e5a1fed5cf7076a92d2830d36f"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:22:43.893332 master-0 kubenswrapper[7926]: I0216 21:22:43.893285 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs" (OuterVolumeSpecName: "logs") pod "5d1e91e5a1fed5cf7076a92d2830d36f" (UID: "5d1e91e5a1fed5cf7076a92d2830d36f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:22:43.893332 master-0 kubenswrapper[7926]: I0216 21:22:43.893306 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5d1e91e5a1fed5cf7076a92d2830d36f" (UID: "5d1e91e5a1fed5cf7076a92d2830d36f"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:22:43.893643 master-0 kubenswrapper[7926]: I0216 21:22:43.893368 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"5d1e91e5a1fed5cf7076a92d2830d36f\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " Feb 16 21:22:43.893643 master-0 kubenswrapper[7926]: I0216 21:22:43.893396 7926 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"5d1e91e5a1fed5cf7076a92d2830d36f\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " Feb 16 21:22:43.893918 master-0 kubenswrapper[7926]: I0216 21:22:43.893890 7926 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") on node \"master-0\" DevicePath \"\"" Feb 16 21:22:43.893918 master-0 kubenswrapper[7926]: I0216 21:22:43.893913 7926 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:22:43.894017 master-0 kubenswrapper[7926]: I0216 21:22:43.893924 7926 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:22:43.894017 master-0 kubenswrapper[7926]: I0216 21:22:43.893934 7926 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Feb 16 21:22:43.894017 master-0 kubenswrapper[7926]: I0216 21:22:43.893961 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config" (OuterVolumeSpecName: "config") pod "5d1e91e5a1fed5cf7076a92d2830d36f" (UID: "5d1e91e5a1fed5cf7076a92d2830d36f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:22:43.894017 master-0 kubenswrapper[7926]: I0216 21:22:43.893981 7926 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "5d1e91e5a1fed5cf7076a92d2830d36f" (UID: "5d1e91e5a1fed5cf7076a92d2830d36f"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:22:43.997749 master-0 kubenswrapper[7926]: I0216 21:22:43.995942 7926 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Feb 16 21:22:43.997749 master-0 kubenswrapper[7926]: I0216 21:22:43.995991 7926 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:22:44.463599 master-0 kubenswrapper[7926]: I0216 21:22:44.463460 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"e300ec3a145c1339a627607b3c84b99d","Type":"ContainerStarted","Data":"43047bae0f2dd351891e082f8932168325d435e7cb25fa3bae528c469bde358f"} Feb 16 21:22:44.463599 master-0 kubenswrapper[7926]: I0216 21:22:44.463506 7926 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"e300ec3a145c1339a627607b3c84b99d","Type":"ContainerStarted","Data":"bfe9ba5fbd345f504666307fee0f4efea9887cea358915d2cd30f77f36401ef0"} Feb 16 21:22:44.464334 master-0 
kubenswrapper[7926]: I0216 21:22:44.464048 7926 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:22:44.470792 master-0 kubenswrapper[7926]: I0216 21:22:44.466372 7926 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 21:22:44.470792 master-0 kubenswrapper[7926]: I0216 21:22:44.466477 7926 scope.go:117] "RemoveContainer" containerID="5e7c38ffeebe9ecd58ceaa66f0e5d878c7328cfe4f821ef677aab62956457cf2" Feb 16 21:22:44.505909 master-0 kubenswrapper[7926]: I0216 21:22:44.505855 7926 scope.go:117] "RemoveContainer" containerID="917b8b89b52fc1ea526b8dd828bd51e4ae2f231263633fb2c2bfa2d5e4419132" Feb 16 21:22:44.521735 master-0 kubenswrapper[7926]: I0216 21:22:44.521490 7926 scope.go:117] "RemoveContainer" containerID="2dca4633ccf4f45bb4ab9181df018e7f5607187bc3ce7c60613bb7c75dbb3049" Feb 16 21:22:44.739201 master-0 kubenswrapper[7926]: I0216 21:22:44.739075 7926 scope.go:117] "RemoveContainer" containerID="cddc9c1d447dc5a0250ef24bddae48c93c58b480b6bca11a2ff7438d4148bf8f" Feb 16 21:22:44.739367 master-0 kubenswrapper[7926]: E0216 21:22:44.739336 7926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ingress-operator pod=ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)\"" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" podUID="cef33294-81fb-41a2-811d-2565f94514d1" Feb 16 21:22:44.749920 master-0 kubenswrapper[7926]: I0216 21:22:44.749856 7926 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" path="/var/lib/kubelet/pods/5d1e91e5a1fed5cf7076a92d2830d36f/volumes" Feb 16 21:22:44.750550 master-0 kubenswrapper[7926]: I0216 21:22:44.750512 7926 mirror_client.go:130] 
"Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 21:22:49.522288 master-0 kubenswrapper[7926]: I0216 21:22:49.522087 7926 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 21:22:49.522355 master-0 systemd[1]: Stopping Kubernetes Kubelet... Feb 16 21:22:49.542906 master-0 systemd[1]: kubelet.service: Deactivated successfully. Feb 16 21:22:49.543150 master-0 systemd[1]: Stopped Kubernetes Kubelet. Feb 16 21:22:49.544193 master-0 systemd[1]: kubelet.service: Consumed 3min 36.664s CPU time. Feb 16 21:22:49.587606 master-0 systemd[1]: Starting Kubernetes Kubelet... Feb 16 21:22:49.693074 master-0 kubenswrapper[38936]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 21:22:49.693074 master-0 kubenswrapper[38936]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 16 21:22:49.693074 master-0 kubenswrapper[38936]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 21:22:49.693074 master-0 kubenswrapper[38936]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 21:22:49.693600 master-0 kubenswrapper[38936]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Feb 16 21:22:49.693600 master-0 kubenswrapper[38936]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 21:22:49.693600 master-0 kubenswrapper[38936]: I0216 21:22:49.693212 38936 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 16 21:22:49.696207 master-0 kubenswrapper[38936]: W0216 21:22:49.696181 38936 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 21:22:49.696207 master-0 kubenswrapper[38936]: W0216 21:22:49.696202 38936 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 21:22:49.696207 master-0 kubenswrapper[38936]: W0216 21:22:49.696209 38936 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 21:22:49.696321 master-0 kubenswrapper[38936]: W0216 21:22:49.696216 38936 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 21:22:49.696321 master-0 kubenswrapper[38936]: W0216 21:22:49.696221 38936 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 21:22:49.696321 master-0 kubenswrapper[38936]: W0216 21:22:49.696227 38936 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 21:22:49.696321 master-0 kubenswrapper[38936]: W0216 21:22:49.696232 38936 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 21:22:49.696321 master-0 kubenswrapper[38936]: W0216 21:22:49.696237 38936 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 21:22:49.696321 master-0 kubenswrapper[38936]: W0216 21:22:49.696242 38936 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 21:22:49.696321 master-0 
kubenswrapper[38936]: W0216 21:22:49.696247 38936 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 21:22:49.696321 master-0 kubenswrapper[38936]: W0216 21:22:49.696252 38936 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 21:22:49.696321 master-0 kubenswrapper[38936]: W0216 21:22:49.696257 38936 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 21:22:49.696321 master-0 kubenswrapper[38936]: W0216 21:22:49.696262 38936 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 21:22:49.696321 master-0 kubenswrapper[38936]: W0216 21:22:49.696266 38936 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 21:22:49.696321 master-0 kubenswrapper[38936]: W0216 21:22:49.696271 38936 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 21:22:49.696321 master-0 kubenswrapper[38936]: W0216 21:22:49.696275 38936 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 21:22:49.696321 master-0 kubenswrapper[38936]: W0216 21:22:49.696280 38936 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 21:22:49.696321 master-0 kubenswrapper[38936]: W0216 21:22:49.696291 38936 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 21:22:49.696321 master-0 kubenswrapper[38936]: W0216 21:22:49.696296 38936 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 21:22:49.696321 master-0 kubenswrapper[38936]: W0216 21:22:49.696302 38936 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 21:22:49.696321 master-0 kubenswrapper[38936]: W0216 21:22:49.696306 38936 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 21:22:49.696321 master-0 kubenswrapper[38936]: W0216 21:22:49.696311 38936 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 21:22:49.696321 master-0 kubenswrapper[38936]: W0216 21:22:49.696316 
38936 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 21:22:49.698121 master-0 kubenswrapper[38936]: W0216 21:22:49.696320 38936 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 21:22:49.698121 master-0 kubenswrapper[38936]: W0216 21:22:49.696325 38936 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 21:22:49.698121 master-0 kubenswrapper[38936]: W0216 21:22:49.696330 38936 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 21:22:49.698121 master-0 kubenswrapper[38936]: W0216 21:22:49.696335 38936 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 21:22:49.698121 master-0 kubenswrapper[38936]: W0216 21:22:49.696340 38936 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 21:22:49.698121 master-0 kubenswrapper[38936]: W0216 21:22:49.696344 38936 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 21:22:49.698121 master-0 kubenswrapper[38936]: W0216 21:22:49.696349 38936 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 21:22:49.698121 master-0 kubenswrapper[38936]: W0216 21:22:49.696353 38936 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 21:22:49.698121 master-0 kubenswrapper[38936]: W0216 21:22:49.696358 38936 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 21:22:49.698121 master-0 kubenswrapper[38936]: W0216 21:22:49.696365 38936 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 21:22:49.698121 master-0 kubenswrapper[38936]: W0216 21:22:49.696371 38936 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 21:22:49.698121 master-0 kubenswrapper[38936]: W0216 21:22:49.696375 38936 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 21:22:49.698121 master-0 kubenswrapper[38936]: W0216 21:22:49.696379 38936 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 21:22:49.698121 master-0 kubenswrapper[38936]: W0216 21:22:49.696384 38936 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 21:22:49.698121 master-0 kubenswrapper[38936]: W0216 21:22:49.696389 38936 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 21:22:49.698121 master-0 kubenswrapper[38936]: W0216 21:22:49.696393 38936 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 21:22:49.698121 master-0 kubenswrapper[38936]: W0216 21:22:49.696398 38936 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 21:22:49.698121 master-0 kubenswrapper[38936]: W0216 21:22:49.696403 38936 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 21:22:49.698121 master-0 kubenswrapper[38936]: W0216 21:22:49.696407 38936 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 21:22:49.698121 master-0 kubenswrapper[38936]: W0216 21:22:49.696412 38936 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 21:22:49.698919 master-0 kubenswrapper[38936]: W0216 21:22:49.696417 38936 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 21:22:49.698919 master-0 kubenswrapper[38936]: W0216 21:22:49.696421 38936 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 21:22:49.698919 master-0 kubenswrapper[38936]: W0216 21:22:49.696426 38936 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 21:22:49.698919 master-0 
kubenswrapper[38936]: W0216 21:22:49.696430 38936 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 21:22:49.698919 master-0 kubenswrapper[38936]: W0216 21:22:49.696435 38936 feature_gate.go:330] unrecognized feature gate: Example Feb 16 21:22:49.698919 master-0 kubenswrapper[38936]: W0216 21:22:49.696440 38936 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 21:22:49.698919 master-0 kubenswrapper[38936]: W0216 21:22:49.696445 38936 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 21:22:49.698919 master-0 kubenswrapper[38936]: W0216 21:22:49.696450 38936 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 21:22:49.698919 master-0 kubenswrapper[38936]: W0216 21:22:49.696455 38936 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 21:22:49.698919 master-0 kubenswrapper[38936]: W0216 21:22:49.696460 38936 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 21:22:49.698919 master-0 kubenswrapper[38936]: W0216 21:22:49.696464 38936 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 21:22:49.698919 master-0 kubenswrapper[38936]: W0216 21:22:49.696470 38936 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 21:22:49.698919 master-0 kubenswrapper[38936]: W0216 21:22:49.696475 38936 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 21:22:49.698919 master-0 kubenswrapper[38936]: W0216 21:22:49.696481 38936 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 21:22:49.698919 master-0 kubenswrapper[38936]: W0216 21:22:49.696486 38936 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 21:22:49.698919 master-0 kubenswrapper[38936]: W0216 21:22:49.696490 38936 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 21:22:49.698919 master-0 
kubenswrapper[38936]: W0216 21:22:49.696495 38936 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 21:22:49.698919 master-0 kubenswrapper[38936]: W0216 21:22:49.696499 38936 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 21:22:49.698919 master-0 kubenswrapper[38936]: W0216 21:22:49.696507 38936 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 21:22:49.700199 master-0 kubenswrapper[38936]: W0216 21:22:49.696513 38936 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 21:22:49.700199 master-0 kubenswrapper[38936]: W0216 21:22:49.696518 38936 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 21:22:49.700199 master-0 kubenswrapper[38936]: W0216 21:22:49.696523 38936 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 21:22:49.700199 master-0 kubenswrapper[38936]: W0216 21:22:49.696529 38936 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 21:22:49.700199 master-0 kubenswrapper[38936]: W0216 21:22:49.696535 38936 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 21:22:49.700199 master-0 kubenswrapper[38936]: W0216 21:22:49.696539 38936 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 21:22:49.700199 master-0 kubenswrapper[38936]: W0216 21:22:49.696545 38936 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 21:22:49.700199 master-0 kubenswrapper[38936]: W0216 21:22:49.696552 38936 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 16 21:22:49.700199 master-0 kubenswrapper[38936]: W0216 21:22:49.696558 38936 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 21:22:49.700199 master-0 kubenswrapper[38936]: W0216 21:22:49.696563 38936 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 21:22:49.700199 master-0 kubenswrapper[38936]: I0216 21:22:49.696686 38936 flags.go:64] FLAG: --address="0.0.0.0"
Feb 16 21:22:49.700199 master-0 kubenswrapper[38936]: I0216 21:22:49.696699 38936 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 16 21:22:49.700199 master-0 kubenswrapper[38936]: I0216 21:22:49.696707 38936 flags.go:64] FLAG: --anonymous-auth="true"
Feb 16 21:22:49.700199 master-0 kubenswrapper[38936]: I0216 21:22:49.696714 38936 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 16 21:22:49.700199 master-0 kubenswrapper[38936]: I0216 21:22:49.696720 38936 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 16 21:22:49.700199 master-0 kubenswrapper[38936]: I0216 21:22:49.696726 38936 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 16 21:22:49.700199 master-0 kubenswrapper[38936]: I0216 21:22:49.696733 38936 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 16 21:22:49.700199 master-0 kubenswrapper[38936]: I0216 21:22:49.696740 38936 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 16 21:22:49.700199 master-0 kubenswrapper[38936]: I0216 21:22:49.696745 38936 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 16 21:22:49.700199 master-0 kubenswrapper[38936]: I0216 21:22:49.696751 38936 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 16 21:22:49.700199 master-0 kubenswrapper[38936]: I0216 21:22:49.696757 38936 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 16 21:22:49.701068 master-0 kubenswrapper[38936]: I0216 21:22:49.696762 38936 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 16 21:22:49.701068 master-0 kubenswrapper[38936]: I0216 21:22:49.696768 38936 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 16 21:22:49.701068 master-0 kubenswrapper[38936]: I0216 21:22:49.696773 38936 flags.go:64] FLAG: --cgroup-root=""
Feb 16 21:22:49.701068 master-0 kubenswrapper[38936]: I0216 21:22:49.696778 38936 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 16 21:22:49.701068 master-0 kubenswrapper[38936]: I0216 21:22:49.696785 38936 flags.go:64] FLAG: --client-ca-file=""
Feb 16 21:22:49.701068 master-0 kubenswrapper[38936]: I0216 21:22:49.696791 38936 flags.go:64] FLAG: --cloud-config=""
Feb 16 21:22:49.701068 master-0 kubenswrapper[38936]: I0216 21:22:49.696796 38936 flags.go:64] FLAG: --cloud-provider=""
Feb 16 21:22:49.701068 master-0 kubenswrapper[38936]: I0216 21:22:49.696801 38936 flags.go:64] FLAG: --cluster-dns="[]"
Feb 16 21:22:49.701068 master-0 kubenswrapper[38936]: I0216 21:22:49.696808 38936 flags.go:64] FLAG: --cluster-domain=""
Feb 16 21:22:49.701068 master-0 kubenswrapper[38936]: I0216 21:22:49.696813 38936 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 16 21:22:49.701068 master-0 kubenswrapper[38936]: I0216 21:22:49.696819 38936 flags.go:64] FLAG: --config-dir=""
Feb 16 21:22:49.701068 master-0 kubenswrapper[38936]: I0216 21:22:49.696825 38936 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 16 21:22:49.701068 master-0 kubenswrapper[38936]: I0216 21:22:49.696831 38936 flags.go:64] FLAG: --container-log-max-files="5"
Feb 16 21:22:49.701068 master-0 kubenswrapper[38936]: I0216 21:22:49.696837 38936 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 16 21:22:49.701068 master-0 kubenswrapper[38936]: I0216 21:22:49.696843 38936 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 16 21:22:49.701068 master-0 kubenswrapper[38936]: I0216 21:22:49.696848 38936 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 16 21:22:49.701068 master-0 kubenswrapper[38936]: I0216 21:22:49.696854 38936 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 16 21:22:49.701068 master-0 kubenswrapper[38936]: I0216 21:22:49.696860 38936 flags.go:64] FLAG: --contention-profiling="false"
Feb 16 21:22:49.701068 master-0 kubenswrapper[38936]: I0216 21:22:49.696866 38936 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 16 21:22:49.701068 master-0 kubenswrapper[38936]: I0216 21:22:49.696871 38936 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 16 21:22:49.701068 master-0 kubenswrapper[38936]: I0216 21:22:49.696877 38936 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 16 21:22:49.701068 master-0 kubenswrapper[38936]: I0216 21:22:49.696882 38936 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 16 21:22:49.701068 master-0 kubenswrapper[38936]: I0216 21:22:49.696898 38936 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 16 21:22:49.701068 master-0 kubenswrapper[38936]: I0216 21:22:49.696904 38936 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 16 21:22:49.701068 master-0 kubenswrapper[38936]: I0216 21:22:49.696909 38936 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.696915 38936 flags.go:64] FLAG: --enable-load-reader="false"
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.696920 38936 flags.go:64] FLAG: --enable-server="true"
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.696925 38936 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.696932 38936 flags.go:64] FLAG: --event-burst="100"
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.696937 38936 flags.go:64] FLAG: --event-qps="50"
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.696943 38936 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.696948 38936 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.696954 38936 flags.go:64] FLAG: --eviction-hard=""
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.696960 38936 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.696966 38936 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.696971 38936 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.696977 38936 flags.go:64] FLAG: --eviction-soft=""
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.696982 38936 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.696987 38936 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.696993 38936 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.696998 38936 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.697003 38936 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.697009 38936 flags.go:64] FLAG: --fail-swap-on="true"
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.697014 38936 flags.go:64] FLAG: --feature-gates=""
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.697020 38936 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.697026 38936 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.697032 38936 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.697037 38936 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.697043 38936 flags.go:64] FLAG: --healthz-port="10248"
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.697048 38936 flags.go:64] FLAG: --help="false"
Feb 16 21:22:49.702261 master-0 kubenswrapper[38936]: I0216 21:22:49.697054 38936 flags.go:64] FLAG: --hostname-override=""
Feb 16 21:22:49.703055 master-0 kubenswrapper[38936]: I0216 21:22:49.697059 38936 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 16 21:22:49.703055 master-0 kubenswrapper[38936]: I0216 21:22:49.697064 38936 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 16 21:22:49.703055 master-0 kubenswrapper[38936]: I0216 21:22:49.697070 38936 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 16 21:22:49.703055 master-0 kubenswrapper[38936]: I0216 21:22:49.697077 38936 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 16 21:22:49.703055 master-0 kubenswrapper[38936]: I0216 21:22:49.697082 38936 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 16 21:22:49.703055 master-0 kubenswrapper[38936]: I0216 21:22:49.697088 38936 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 16 21:22:49.703055 master-0 kubenswrapper[38936]: I0216 21:22:49.697093 38936 flags.go:64] FLAG: --image-service-endpoint=""
Feb 16 21:22:49.703055 master-0 kubenswrapper[38936]: I0216 21:22:49.697098 38936 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 16 21:22:49.703055 master-0 kubenswrapper[38936]: I0216 21:22:49.697104 38936 flags.go:64] FLAG: --kube-api-burst="100"
Feb 16 21:22:49.703055 master-0 kubenswrapper[38936]: I0216 21:22:49.697110 38936 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 16 21:22:49.703055 master-0 kubenswrapper[38936]: I0216 21:22:49.697116 38936 flags.go:64] FLAG: --kube-api-qps="50"
Feb 16 21:22:49.703055 master-0 kubenswrapper[38936]: I0216 21:22:49.697121 38936 flags.go:64] FLAG: --kube-reserved=""
Feb 16 21:22:49.703055 master-0 kubenswrapper[38936]: I0216 21:22:49.697126 38936 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 16 21:22:49.703055 master-0 kubenswrapper[38936]: I0216 21:22:49.697131 38936 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 16 21:22:49.703055 master-0 kubenswrapper[38936]: I0216 21:22:49.697137 38936 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 16 21:22:49.703055 master-0 kubenswrapper[38936]: I0216 21:22:49.697142 38936 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 16 21:22:49.703055 master-0 kubenswrapper[38936]: I0216 21:22:49.697147 38936 flags.go:64] FLAG: --lock-file=""
Feb 16 21:22:49.703055 master-0 kubenswrapper[38936]: I0216 21:22:49.697152 38936 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 16 21:22:49.703055 master-0 kubenswrapper[38936]: I0216 21:22:49.697157 38936 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 16 21:22:49.703055 master-0 kubenswrapper[38936]: I0216 21:22:49.697163 38936 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 16 21:22:49.703055 master-0 kubenswrapper[38936]: I0216 21:22:49.697171 38936 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 16 21:22:49.703055 master-0 kubenswrapper[38936]: I0216 21:22:49.697177 38936 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 16 21:22:49.703055 master-0 kubenswrapper[38936]: I0216 21:22:49.697182 38936 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 16 21:22:49.703055 master-0 kubenswrapper[38936]: I0216 21:22:49.697188 38936 flags.go:64] FLAG: --logging-format="text"
Feb 16 21:22:49.703055 master-0 kubenswrapper[38936]: I0216 21:22:49.697193 38936 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 16 21:22:49.703792 master-0 kubenswrapper[38936]: I0216 21:22:49.697199 38936 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 16 21:22:49.703792 master-0 kubenswrapper[38936]: I0216 21:22:49.697204 38936 flags.go:64] FLAG: --manifest-url=""
Feb 16 21:22:49.703792 master-0 kubenswrapper[38936]: I0216 21:22:49.697209 38936 flags.go:64] FLAG: --manifest-url-header=""
Feb 16 21:22:49.703792 master-0 kubenswrapper[38936]: I0216 21:22:49.697215 38936 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 16 21:22:49.703792 master-0 kubenswrapper[38936]: I0216 21:22:49.697221 38936 flags.go:64] FLAG: --max-open-files="1000000"
Feb 16 21:22:49.703792 master-0 kubenswrapper[38936]: I0216 21:22:49.697227 38936 flags.go:64] FLAG: --max-pods="110"
Feb 16 21:22:49.703792 master-0 kubenswrapper[38936]: I0216 21:22:49.697233 38936 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 16 21:22:49.703792 master-0 kubenswrapper[38936]: I0216 21:22:49.697238 38936 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 16 21:22:49.703792 master-0 kubenswrapper[38936]: I0216 21:22:49.697244 38936 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 16 21:22:49.703792 master-0 kubenswrapper[38936]: I0216 21:22:49.697249 38936 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 16 21:22:49.703792 master-0 kubenswrapper[38936]: I0216 21:22:49.697254 38936 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 16 21:22:49.703792 master-0 kubenswrapper[38936]: I0216 21:22:49.697260 38936 flags.go:64] FLAG: --node-ip="192.168.32.10"
Feb 16 21:22:49.703792 master-0 kubenswrapper[38936]: I0216 21:22:49.697265 38936 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 16 21:22:49.703792 master-0 kubenswrapper[38936]: I0216 21:22:49.697278 38936 flags.go:64] FLAG: --node-status-max-images="50"
Feb 16 21:22:49.703792 master-0 kubenswrapper[38936]: I0216 21:22:49.697285 38936 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 16 21:22:49.703792 master-0 kubenswrapper[38936]: I0216 21:22:49.697291 38936 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 16 21:22:49.703792 master-0 kubenswrapper[38936]: I0216 21:22:49.697296 38936 flags.go:64] FLAG: --pod-cidr=""
Feb 16 21:22:49.703792 master-0 kubenswrapper[38936]: I0216 21:22:49.697301 38936 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1593b6aac7bb18c1bbb5d41693e8b8c7f0c0410fcc09e15de52d8bd53e356541"
Feb 16 21:22:49.703792 master-0 kubenswrapper[38936]: I0216 21:22:49.697309 38936 flags.go:64] FLAG: --pod-manifest-path=""
Feb 16 21:22:49.703792 master-0 kubenswrapper[38936]: I0216 21:22:49.697314 38936 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 16 21:22:49.703792 master-0 kubenswrapper[38936]: I0216 21:22:49.697320 38936 flags.go:64] FLAG: --pods-per-core="0"
Feb 16 21:22:49.703792 master-0 kubenswrapper[38936]: I0216 21:22:49.697325 38936 flags.go:64] FLAG: --port="10250"
Feb 16 21:22:49.703792 master-0 kubenswrapper[38936]: I0216 21:22:49.697331 38936 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 16 21:22:49.703792 master-0 kubenswrapper[38936]: I0216 21:22:49.697336 38936 flags.go:64] FLAG: --provider-id=""
Feb 16 21:22:49.705970 master-0 kubenswrapper[38936]: I0216 21:22:49.697341 38936 flags.go:64] FLAG: --qos-reserved=""
Feb 16 21:22:49.705970 master-0 kubenswrapper[38936]: I0216 21:22:49.697346 38936 flags.go:64] FLAG: --read-only-port="10255"
Feb 16 21:22:49.705970 master-0 kubenswrapper[38936]: I0216 21:22:49.697352 38936 flags.go:64] FLAG: --register-node="true"
Feb 16 21:22:49.705970 master-0 kubenswrapper[38936]: I0216 21:22:49.697357 38936 flags.go:64] FLAG: --register-schedulable="true"
Feb 16 21:22:49.705970 master-0 kubenswrapper[38936]: I0216 21:22:49.697363 38936 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 16 21:22:49.705970 master-0 kubenswrapper[38936]: I0216 21:22:49.697371 38936 flags.go:64] FLAG: --registry-burst="10"
Feb 16 21:22:49.705970 master-0 kubenswrapper[38936]: I0216 21:22:49.697376 38936 flags.go:64] FLAG: --registry-qps="5"
Feb 16 21:22:49.705970 master-0 kubenswrapper[38936]: I0216 21:22:49.697382 38936 flags.go:64] FLAG: --reserved-cpus=""
Feb 16 21:22:49.705970 master-0 kubenswrapper[38936]: I0216 21:22:49.697387 38936 flags.go:64] FLAG: --reserved-memory=""
Feb 16 21:22:49.705970 master-0 kubenswrapper[38936]: I0216 21:22:49.697393 38936 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 16 21:22:49.705970 master-0 kubenswrapper[38936]: I0216 21:22:49.697399 38936 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 16 21:22:49.705970 master-0 kubenswrapper[38936]: I0216 21:22:49.697404 38936 flags.go:64] FLAG: --rotate-certificates="false"
Feb 16 21:22:49.705970 master-0 kubenswrapper[38936]: I0216 21:22:49.697409 38936 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 16 21:22:49.705970 master-0 kubenswrapper[38936]: I0216 21:22:49.697414 38936 flags.go:64] FLAG: --runonce="false"
Feb 16 21:22:49.705970 master-0 kubenswrapper[38936]: I0216 21:22:49.697419 38936 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 16 21:22:49.705970 master-0 kubenswrapper[38936]: I0216 21:22:49.697425 38936 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 16 21:22:49.705970 master-0 kubenswrapper[38936]: I0216 21:22:49.697430 38936 flags.go:64] FLAG: --seccomp-default="false"
Feb 16 21:22:49.705970 master-0 kubenswrapper[38936]: I0216 21:22:49.697435 38936 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 16 21:22:49.705970 master-0 kubenswrapper[38936]: I0216 21:22:49.697440 38936 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 16 21:22:49.705970 master-0 kubenswrapper[38936]: I0216 21:22:49.697446 38936 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 16 21:22:49.705970 master-0 kubenswrapper[38936]: I0216 21:22:49.697452 38936 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 16 21:22:49.705970 master-0 kubenswrapper[38936]: I0216 21:22:49.697457 38936 flags.go:64] FLAG: --storage-driver-password="root"
Feb 16 21:22:49.705970 master-0 kubenswrapper[38936]: I0216 21:22:49.697463 38936 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 16 21:22:49.705970 master-0 kubenswrapper[38936]: I0216 21:22:49.697468 38936 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 16 21:22:49.705970 master-0 kubenswrapper[38936]: I0216 21:22:49.697473 38936 flags.go:64] FLAG: --storage-driver-user="root"
Feb 16 21:22:49.706908 master-0 kubenswrapper[38936]: I0216 21:22:49.697479 38936 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 16 21:22:49.706908 master-0 kubenswrapper[38936]: I0216 21:22:49.697485 38936 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 16 21:22:49.706908 master-0 kubenswrapper[38936]: I0216 21:22:49.697490 38936 flags.go:64] FLAG: --system-cgroups=""
Feb 16 21:22:49.706908 master-0 kubenswrapper[38936]: I0216 21:22:49.697495 38936 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Feb 16 21:22:49.706908 master-0 kubenswrapper[38936]: I0216 21:22:49.697503 38936 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 16 21:22:49.706908 master-0 kubenswrapper[38936]: I0216 21:22:49.697509 38936 flags.go:64] FLAG: --tls-cert-file=""
Feb 16 21:22:49.706908 master-0 kubenswrapper[38936]: I0216 21:22:49.697514 38936 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 16 21:22:49.706908 master-0 kubenswrapper[38936]: I0216 21:22:49.697521 38936 flags.go:64] FLAG: --tls-min-version=""
Feb 16 21:22:49.706908 master-0 kubenswrapper[38936]: I0216 21:22:49.697526 38936 flags.go:64] FLAG: --tls-private-key-file=""
Feb 16 21:22:49.706908 master-0 kubenswrapper[38936]: I0216 21:22:49.697532 38936 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 16 21:22:49.706908 master-0 kubenswrapper[38936]: I0216 21:22:49.697537 38936 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 16 21:22:49.706908 master-0 kubenswrapper[38936]: I0216 21:22:49.697542 38936 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 16 21:22:49.706908 master-0 kubenswrapper[38936]: I0216 21:22:49.697548 38936 flags.go:64] FLAG: --v="2"
Feb 16 21:22:49.706908 master-0 kubenswrapper[38936]: I0216 21:22:49.697555 38936 flags.go:64] FLAG: --version="false"
Feb 16 21:22:49.706908 master-0 kubenswrapper[38936]: I0216 21:22:49.697561 38936 flags.go:64] FLAG: --vmodule=""
Feb 16 21:22:49.706908 master-0 kubenswrapper[38936]: I0216 21:22:49.697568 38936 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 16 21:22:49.706908 master-0 kubenswrapper[38936]: I0216 21:22:49.697573 38936 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 16 21:22:49.706908 master-0 kubenswrapper[38936]: W0216 21:22:49.697730 38936 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 21:22:49.706908 master-0 kubenswrapper[38936]: W0216 21:22:49.697738 38936 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 21:22:49.706908 master-0 kubenswrapper[38936]: W0216 21:22:49.697743 38936 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 21:22:49.706908 master-0 kubenswrapper[38936]: W0216 21:22:49.697748 38936 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 21:22:49.706908 master-0 kubenswrapper[38936]: W0216 21:22:49.697753 38936 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 21:22:49.706908 master-0 kubenswrapper[38936]: W0216 21:22:49.697765 38936 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 21:22:49.707948 master-0 kubenswrapper[38936]: W0216 21:22:49.697769 38936 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 21:22:49.707948 master-0 kubenswrapper[38936]: W0216 21:22:49.697774 38936 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 21:22:49.707948 master-0 kubenswrapper[38936]: W0216 21:22:49.697779 38936 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 21:22:49.707948 master-0 kubenswrapper[38936]: W0216 21:22:49.697784 38936 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 21:22:49.707948 master-0 kubenswrapper[38936]: W0216 21:22:49.697789 38936 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 21:22:49.707948 master-0 kubenswrapper[38936]: W0216 21:22:49.697794 38936 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 21:22:49.707948 master-0 kubenswrapper[38936]: W0216 21:22:49.697800 38936 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 21:22:49.707948 master-0 kubenswrapper[38936]: W0216 21:22:49.697806 38936 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 21:22:49.707948 master-0 kubenswrapper[38936]: W0216 21:22:49.697811 38936 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 21:22:49.707948 master-0 kubenswrapper[38936]: W0216 21:22:49.697817 38936 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 21:22:49.707948 master-0 kubenswrapper[38936]: W0216 21:22:49.697837 38936 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 21:22:49.707948 master-0 kubenswrapper[38936]: W0216 21:22:49.697842 38936 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 21:22:49.707948 master-0 kubenswrapper[38936]: W0216 21:22:49.697847 38936 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 21:22:49.707948 master-0 kubenswrapper[38936]: W0216 21:22:49.697852 38936 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 21:22:49.707948 master-0 kubenswrapper[38936]: W0216 21:22:49.697857 38936 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 21:22:49.707948 master-0 kubenswrapper[38936]: W0216 21:22:49.697862 38936 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 21:22:49.707948 master-0 kubenswrapper[38936]: W0216 21:22:49.697868 38936 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 21:22:49.707948 master-0 kubenswrapper[38936]: W0216 21:22:49.697873 38936 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 21:22:49.707948 master-0 kubenswrapper[38936]: W0216 21:22:49.697878 38936 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 21:22:49.707948 master-0 kubenswrapper[38936]: W0216 21:22:49.697883 38936 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 21:22:49.708803 master-0 kubenswrapper[38936]: W0216 21:22:49.697889 38936 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 21:22:49.708803 master-0 kubenswrapper[38936]: W0216 21:22:49.697895 38936 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 21:22:49.708803 master-0 kubenswrapper[38936]: W0216 21:22:49.697900 38936 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 21:22:49.708803 master-0 kubenswrapper[38936]: W0216 21:22:49.697904 38936 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 21:22:49.708803 master-0 kubenswrapper[38936]: W0216 21:22:49.697909 38936 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 21:22:49.708803 master-0 kubenswrapper[38936]: W0216 21:22:49.697913 38936 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 21:22:49.708803 master-0 kubenswrapper[38936]: W0216 21:22:49.697918 38936 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 21:22:49.708803 master-0 kubenswrapper[38936]: W0216 21:22:49.697923 38936 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 21:22:49.708803 master-0 kubenswrapper[38936]: W0216 21:22:49.697928 38936 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 21:22:49.708803 master-0 kubenswrapper[38936]: W0216 21:22:49.697932 38936 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 21:22:49.708803 master-0 kubenswrapper[38936]: W0216 21:22:49.697937 38936 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 21:22:49.708803 master-0 kubenswrapper[38936]: W0216 21:22:49.697944 38936 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 21:22:49.708803 master-0 kubenswrapper[38936]: W0216 21:22:49.697948 38936 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 21:22:49.708803 master-0 kubenswrapper[38936]: W0216 21:22:49.697953 38936 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 21:22:49.708803 master-0 kubenswrapper[38936]: W0216 21:22:49.697958 38936 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 21:22:49.708803 master-0 kubenswrapper[38936]: W0216 21:22:49.697963 38936 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 21:22:49.708803 master-0 kubenswrapper[38936]: W0216 21:22:49.697967 38936 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 21:22:49.708803 master-0 kubenswrapper[38936]: W0216 21:22:49.697972 38936 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 21:22:49.708803 master-0 kubenswrapper[38936]: W0216 21:22:49.697976 38936 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 21:22:49.708803 master-0 kubenswrapper[38936]: W0216 21:22:49.697981 38936 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 21:22:49.709296 master-0 kubenswrapper[38936]: W0216 21:22:49.697986 38936 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 21:22:49.709296 master-0 kubenswrapper[38936]: W0216 21:22:49.697990 38936 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 21:22:49.709296 master-0 kubenswrapper[38936]: W0216 21:22:49.697995 38936 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 21:22:49.709296 master-0 kubenswrapper[38936]: W0216 21:22:49.697999 38936 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 21:22:49.709296 master-0 kubenswrapper[38936]: W0216 21:22:49.698004 38936 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 21:22:49.709296 master-0 kubenswrapper[38936]: W0216 21:22:49.698009 38936 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 21:22:49.709296 master-0 kubenswrapper[38936]: W0216 21:22:49.698013 38936 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 21:22:49.709296 master-0 kubenswrapper[38936]: W0216 21:22:49.698018 38936 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 21:22:49.709296 master-0 kubenswrapper[38936]: W0216 21:22:49.698022 38936 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 21:22:49.709296 master-0 kubenswrapper[38936]: W0216 21:22:49.698027 38936 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 21:22:49.709296 master-0 kubenswrapper[38936]: W0216 21:22:49.698032 38936 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 21:22:49.709296 master-0 kubenswrapper[38936]: W0216 21:22:49.698039 38936 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 21:22:49.709296 master-0 kubenswrapper[38936]: W0216 21:22:49.698045 38936 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 21:22:49.709296 master-0 kubenswrapper[38936]: W0216 21:22:49.698050 38936 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 21:22:49.709296 master-0 kubenswrapper[38936]: W0216 21:22:49.698055 38936 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 21:22:49.709296 master-0 kubenswrapper[38936]: W0216 21:22:49.698060 38936 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 21:22:49.709296 master-0 kubenswrapper[38936]: W0216 21:22:49.698068 38936 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 21:22:49.709296 master-0 kubenswrapper[38936]: W0216 21:22:49.698073 38936 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 21:22:49.709296 master-0 kubenswrapper[38936]: W0216 21:22:49.698078 38936 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 21:22:49.709296 master-0 kubenswrapper[38936]: W0216 21:22:49.698083 38936 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 21:22:49.709810 master-0 kubenswrapper[38936]: W0216 21:22:49.698088 38936 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 21:22:49.709810 master-0 kubenswrapper[38936]: W0216 21:22:49.698093 38936 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 21:22:49.709810 master-0 kubenswrapper[38936]: W0216 21:22:49.698097 38936 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 21:22:49.709810 master-0 kubenswrapper[38936]: W0216 21:22:49.698104 38936 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 21:22:49.709810 master-0 kubenswrapper[38936]: W0216 21:22:49.698109 38936 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 21:22:49.709810 master-0 kubenswrapper[38936]: W0216 21:22:49.698115 38936 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 21:22:49.709810 master-0 kubenswrapper[38936]: I0216 21:22:49.698133 38936 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 21:22:49.709810 master-0 kubenswrapper[38936]: I0216 21:22:49.702363 38936 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Feb 16 21:22:49.709810 master-0 kubenswrapper[38936]: I0216 21:22:49.702380 38936 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 16 21:22:49.709810 master-0 kubenswrapper[38936]: W0216 21:22:49.702441 38936 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 21:22:49.709810 master-0 kubenswrapper[38936]: W0216 21:22:49.702447 38936 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 21:22:49.709810 master-0 kubenswrapper[38936]: W0216 21:22:49.702451 38936 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 21:22:49.709810 master-0 kubenswrapper[38936]: W0216 21:22:49.702455 38936 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 21:22:49.709810 master-0 kubenswrapper[38936]: W0216 21:22:49.702459 38936 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 21:22:49.710264 master-0 kubenswrapper[38936]: W0216 21:22:49.702463 38936 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 21:22:49.710264 master-0 kubenswrapper[38936]: W0216 21:22:49.702467 38936 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 21:22:49.710264 master-0 kubenswrapper[38936]: W0216 21:22:49.702471 38936 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 21:22:49.710264 master-0 kubenswrapper[38936]: W0216 21:22:49.702474 38936 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 21:22:49.710264 master-0 kubenswrapper[38936]: W0216 21:22:49.702478 38936 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 21:22:49.710264 master-0 kubenswrapper[38936]: W0216 21:22:49.702481 38936 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 21:22:49.710264 master-0 kubenswrapper[38936]: W0216 21:22:49.702485 38936 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 21:22:49.710264 master-0 kubenswrapper[38936]: W0216 21:22:49.702489 38936 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 21:22:49.710264 master-0 kubenswrapper[38936]: W0216 21:22:49.702493 38936 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 21:22:49.710264 master-0 kubenswrapper[38936]: W0216 21:22:49.702496 38936 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 21:22:49.710264 master-0 kubenswrapper[38936]: W0216 21:22:49.702500 38936 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 21:22:49.710264 master-0 kubenswrapper[38936]: W0216 21:22:49.702504 38936 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 21:22:49.710264 master-0 kubenswrapper[38936]: W0216 21:22:49.702507 38936 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 21:22:49.710264 master-0 kubenswrapper[38936]: W0216 21:22:49.702512 38936 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 21:22:49.710264 master-0 kubenswrapper[38936]: W0216 21:22:49.702518 38936 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 21:22:49.710264 master-0 kubenswrapper[38936]: W0216 21:22:49.702523 38936 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 21:22:49.710264 master-0 kubenswrapper[38936]: W0216 21:22:49.702527 38936 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 21:22:49.710264 master-0 kubenswrapper[38936]: W0216 21:22:49.702531 38936 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 21:22:49.710264 master-0 kubenswrapper[38936]: W0216 21:22:49.702535 38936 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 21:22:49.710264 master-0 kubenswrapper[38936]: W0216 21:22:49.702538 38936 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 21:22:49.711000 master-0 kubenswrapper[38936]: W0216 21:22:49.702542 38936 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 21:22:49.711000 master-0 kubenswrapper[38936]: W0216 21:22:49.702545 38936 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 21:22:49.711000 master-0 kubenswrapper[38936]: W0216 21:22:49.702549 38936 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 21:22:49.711000 master-0 kubenswrapper[38936]: W0216 21:22:49.702553 38936 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 21:22:49.711000 master-0 kubenswrapper[38936]: W0216 21:22:49.702556 38936 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 21:22:49.711000 master-0 kubenswrapper[38936]: W0216 21:22:49.702560 38936 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 21:22:49.711000 master-0 kubenswrapper[38936]: W0216 21:22:49.702564 38936 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 21:22:49.711000 master-0 kubenswrapper[38936]: W0216 21:22:49.702568 38936 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 21:22:49.711000 master-0 kubenswrapper[38936]: W0216 21:22:49.702572 38936 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 21:22:49.711000 master-0 kubenswrapper[38936]: W0216 21:22:49.702576 38936 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 21:22:49.711000 master-0 kubenswrapper[38936]: W0216 21:22:49.702580 38936 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 21:22:49.711000 master-0 kubenswrapper[38936]: W0216 21:22:49.702585 38936 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 21:22:49.711000 master-0 kubenswrapper[38936]: W0216 21:22:49.702590 38936 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 21:22:49.711000 master-0 kubenswrapper[38936]: W0216 21:22:49.702594 38936 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 21:22:49.711000 master-0 kubenswrapper[38936]: W0216 21:22:49.702598 38936 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 21:22:49.711000 master-0 kubenswrapper[38936]: W0216 21:22:49.702601 38936 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 21:22:49.711000 master-0 kubenswrapper[38936]: W0216 21:22:49.702605 38936 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 21:22:49.711000 master-0 kubenswrapper[38936]: W0216 21:22:49.702609 38936 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 21:22:49.711000 master-0 kubenswrapper[38936]: W0216 21:22:49.702613 38936 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 21:22:49.711716 master-0 kubenswrapper[38936]: W0216 21:22:49.702617 38936 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 21:22:49.711716 master-0 kubenswrapper[38936]: W0216 21:22:49.702621 38936 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 21:22:49.711716 master-0 kubenswrapper[38936]: W0216 21:22:49.702625 38936 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 21:22:49.711716 master-0 kubenswrapper[38936]: W0216 21:22:49.702628 38936 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 21:22:49.711716 master-0 kubenswrapper[38936]: W0216 21:22:49.702632 38936 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 21:22:49.711716 master-0 kubenswrapper[38936]: W0216 21:22:49.702636 38936 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 21:22:49.711716 master-0 kubenswrapper[38936]: W0216 21:22:49.702639 38936 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 21:22:49.711716 master-0 kubenswrapper[38936]: W0216 21:22:49.702643 38936 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 21:22:49.711716 master-0 kubenswrapper[38936]: W0216 21:22:49.702651 38936 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 21:22:49.711716 master-0 kubenswrapper[38936]: W0216 21:22:49.702670 38936 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 21:22:49.711716 master-0 kubenswrapper[38936]: W0216 21:22:49.702675 38936 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 21:22:49.711716 master-0 kubenswrapper[38936]: W0216 21:22:49.702680 38936 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 21:22:49.711716 master-0 kubenswrapper[38936]: W0216 21:22:49.702684 38936 feature_gate.go:330] unrecognized feature
gate: AdminNetworkPolicy Feb 16 21:22:49.711716 master-0 kubenswrapper[38936]: W0216 21:22:49.702688 38936 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 21:22:49.711716 master-0 kubenswrapper[38936]: W0216 21:22:49.702692 38936 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 21:22:49.711716 master-0 kubenswrapper[38936]: W0216 21:22:49.702696 38936 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 21:22:49.711716 master-0 kubenswrapper[38936]: W0216 21:22:49.702700 38936 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 21:22:49.711716 master-0 kubenswrapper[38936]: W0216 21:22:49.702706 38936 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 21:22:49.711716 master-0 kubenswrapper[38936]: W0216 21:22:49.702712 38936 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 21:22:49.711716 master-0 kubenswrapper[38936]: W0216 21:22:49.702717 38936 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 21:22:49.712576 master-0 kubenswrapper[38936]: W0216 21:22:49.702723 38936 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 21:22:49.712576 master-0 kubenswrapper[38936]: W0216 21:22:49.702728 38936 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 21:22:49.712576 master-0 kubenswrapper[38936]: W0216 21:22:49.702732 38936 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 21:22:49.712576 master-0 kubenswrapper[38936]: W0216 21:22:49.702736 38936 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 21:22:49.712576 master-0 kubenswrapper[38936]: W0216 21:22:49.702740 38936 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 21:22:49.712576 master-0 kubenswrapper[38936]: W0216 21:22:49.702744 38936 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 21:22:49.712576 master-0 kubenswrapper[38936]: W0216 21:22:49.702912 38936 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 21:22:49.712576 master-0 kubenswrapper[38936]: W0216 21:22:49.702918 38936 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 21:22:49.712576 master-0 kubenswrapper[38936]: I0216 21:22:49.702923 38936 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 21:22:49.712576 master-0 kubenswrapper[38936]: W0216 21:22:49.703023 38936 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 21:22:49.712576 master-0 kubenswrapper[38936]: W0216 21:22:49.703030 38936 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 21:22:49.712576 master-0 kubenswrapper[38936]: W0216 21:22:49.703034 38936 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 21:22:49.712576 master-0 kubenswrapper[38936]: W0216 21:22:49.703039 38936 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 21:22:49.712576 master-0 kubenswrapper[38936]: W0216 21:22:49.703042 38936 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 21:22:49.712576 master-0 kubenswrapper[38936]: W0216 21:22:49.703046 38936 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 21:22:49.713251 master-0 kubenswrapper[38936]: W0216 21:22:49.703050 38936 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 21:22:49.713251 master-0 kubenswrapper[38936]: W0216 21:22:49.703054 38936 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 21:22:49.713251 master-0 kubenswrapper[38936]: W0216 21:22:49.703058 38936 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 21:22:49.713251 master-0 kubenswrapper[38936]: W0216 21:22:49.703061 38936 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 21:22:49.713251 master-0 kubenswrapper[38936]: W0216 21:22:49.703065 38936 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 21:22:49.713251 master-0 kubenswrapper[38936]: W0216 21:22:49.703069 38936 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 21:22:49.713251 master-0 kubenswrapper[38936]: W0216 21:22:49.703073 38936 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 21:22:49.713251 master-0 kubenswrapper[38936]: W0216 21:22:49.703077 38936 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 21:22:49.713251 master-0 kubenswrapper[38936]: W0216 21:22:49.703081 38936 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 21:22:49.713251 master-0 kubenswrapper[38936]: W0216 21:22:49.703084 38936 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 21:22:49.713251 master-0 kubenswrapper[38936]: W0216 21:22:49.703088 38936 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 21:22:49.713251 master-0 kubenswrapper[38936]: W0216 21:22:49.703091 38936 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 21:22:49.713251 master-0 kubenswrapper[38936]: W0216 21:22:49.703095 38936 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 21:22:49.713251 master-0 kubenswrapper[38936]: W0216 21:22:49.703098 38936 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 21:22:49.713251 master-0 kubenswrapper[38936]: W0216 21:22:49.703102 38936 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 21:22:49.713251 master-0 kubenswrapper[38936]: W0216 21:22:49.703105 38936 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 21:22:49.713251 master-0 kubenswrapper[38936]: W0216 21:22:49.703109 38936 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 21:22:49.713251 master-0 kubenswrapper[38936]: W0216 21:22:49.703113 38936 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 21:22:49.713251 master-0 kubenswrapper[38936]: W0216 21:22:49.703116 38936 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 21:22:49.713251 master-0 kubenswrapper[38936]: W0216 21:22:49.703120 38936 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 21:22:49.714499 master-0 kubenswrapper[38936]: W0216 21:22:49.703123 38936 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 21:22:49.714499 master-0 kubenswrapper[38936]: W0216 21:22:49.703127 38936 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 21:22:49.714499 master-0 kubenswrapper[38936]: W0216 21:22:49.703131 38936 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 21:22:49.714499 master-0 kubenswrapper[38936]: W0216 21:22:49.703134 38936 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 21:22:49.714499 master-0 kubenswrapper[38936]: W0216 21:22:49.703138 38936 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 21:22:49.714499 master-0 kubenswrapper[38936]: W0216 21:22:49.703141 38936 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 21:22:49.714499 master-0 kubenswrapper[38936]: W0216 21:22:49.703145 38936 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 21:22:49.714499 master-0 kubenswrapper[38936]: W0216 21:22:49.703153 38936 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 21:22:49.714499 master-0 kubenswrapper[38936]: W0216 21:22:49.703157 38936 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 21:22:49.714499 master-0 kubenswrapper[38936]: W0216 21:22:49.703160 38936 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 21:22:49.714499 master-0 kubenswrapper[38936]: W0216 21:22:49.703164 38936 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 21:22:49.714499 master-0 kubenswrapper[38936]: W0216 21:22:49.703168 38936 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 21:22:49.714499 master-0 kubenswrapper[38936]: W0216 21:22:49.703171 38936 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 21:22:49.714499 master-0 kubenswrapper[38936]: W0216 21:22:49.703175 38936 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 21:22:49.714499 master-0 kubenswrapper[38936]: W0216 21:22:49.703179 38936 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 21:22:49.714499 master-0 kubenswrapper[38936]: W0216 21:22:49.703182 38936 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 21:22:49.714499 master-0 kubenswrapper[38936]: W0216 21:22:49.703186 38936 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 21:22:49.714499 master-0 kubenswrapper[38936]: W0216 21:22:49.703190 38936 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 21:22:49.714499 master-0 kubenswrapper[38936]: W0216 21:22:49.703193 38936 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 21:22:49.714499 master-0 kubenswrapper[38936]: W0216 21:22:49.703197 38936 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 21:22:49.714499 master-0 kubenswrapper[38936]: W0216 21:22:49.703201 38936 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 21:22:49.715403 master-0 kubenswrapper[38936]: W0216 21:22:49.703206 38936 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 21:22:49.715403 master-0 kubenswrapper[38936]: W0216 21:22:49.703211 38936 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 21:22:49.715403 master-0 kubenswrapper[38936]: W0216 21:22:49.703215 38936 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 21:22:49.715403 master-0 kubenswrapper[38936]: W0216 21:22:49.703219 38936 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 21:22:49.715403 master-0 kubenswrapper[38936]: W0216 21:22:49.703224 38936 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 21:22:49.715403 master-0 kubenswrapper[38936]: W0216 21:22:49.703228 38936 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 21:22:49.715403 master-0 kubenswrapper[38936]: W0216 21:22:49.703231 38936 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 21:22:49.715403 master-0 kubenswrapper[38936]: W0216 21:22:49.703235 38936 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 21:22:49.715403 master-0 kubenswrapper[38936]: W0216 21:22:49.703239 38936 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 21:22:49.715403 master-0 kubenswrapper[38936]: W0216 21:22:49.703243 38936 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 21:22:49.715403 master-0 kubenswrapper[38936]: W0216 21:22:49.703268 38936 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 21:22:49.715403 master-0 kubenswrapper[38936]: W0216 21:22:49.703273 38936 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 21:22:49.715403 master-0 kubenswrapper[38936]: W0216 21:22:49.703276 38936 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 21:22:49.715403 master-0 kubenswrapper[38936]: W0216 21:22:49.703281 38936 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 21:22:49.715403 master-0 kubenswrapper[38936]: W0216 21:22:49.703286 38936 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 21:22:49.715403 master-0 kubenswrapper[38936]: W0216 21:22:49.703290 38936 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 21:22:49.715403 master-0 kubenswrapper[38936]: W0216 21:22:49.703294 38936 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 21:22:49.715403 master-0 kubenswrapper[38936]: W0216 21:22:49.703297 38936 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 21:22:49.715403 master-0 kubenswrapper[38936]: W0216 21:22:49.703301 38936 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 21:22:49.716076 master-0 kubenswrapper[38936]: W0216 21:22:49.703304 38936 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 21:22:49.716076 master-0 kubenswrapper[38936]: W0216 21:22:49.703308 38936 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 21:22:49.716076 master-0 kubenswrapper[38936]: W0216 21:22:49.703313 38936 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 21:22:49.716076 master-0 kubenswrapper[38936]: W0216 21:22:49.703318 38936 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 21:22:49.716076 master-0 kubenswrapper[38936]: W0216 21:22:49.703323 38936 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 21:22:49.716076 master-0 kubenswrapper[38936]: W0216 21:22:49.703327 38936 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 21:22:49.716076 master-0 kubenswrapper[38936]: I0216 21:22:49.703333 38936 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 21:22:49.716076 master-0 kubenswrapper[38936]: I0216 21:22:49.703455 38936 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 16 21:22:49.716076 master-0 kubenswrapper[38936]: I0216 21:22:49.707845 38936 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 16 21:22:49.716076 master-0 kubenswrapper[38936]: I0216 21:22:49.707942 38936 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 16 21:22:49.716076 master-0 kubenswrapper[38936]: I0216 21:22:49.708218 38936 server.go:997] "Starting client certificate rotation"
Feb 16 21:22:49.716076 master-0 kubenswrapper[38936]: I0216 21:22:49.708230 38936 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 16 21:22:49.716076 master-0 kubenswrapper[38936]: I0216 21:22:49.708425 38936 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-17 20:47:22 +0000 UTC, rotation deadline is 2026-02-17 18:05:59.911713148 +0000 UTC
Feb 16 21:22:49.716436 master-0 kubenswrapper[38936]: I0216 21:22:49.708505 38936 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 20h43m10.203210909s for next certificate rotation
Feb 16 21:22:49.716436 master-0 kubenswrapper[38936]: I0216 21:22:49.708993 38936 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 21:22:49.716436 master-0 kubenswrapper[38936]: I0216 21:22:49.710263 38936 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 21:22:49.716436 master-0 kubenswrapper[38936]: I0216 21:22:49.712285 38936 log.go:25] "Validated CRI v1 runtime API"
Feb 16 21:22:49.716851 master-0 kubenswrapper[38936]: I0216 21:22:49.716817 38936 log.go:25] "Validated CRI v1 image API"
Feb 16 21:22:49.717994 master-0 kubenswrapper[38936]: I0216 21:22:49.717929 38936 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 16 21:22:49.730206 master-0 kubenswrapper[38936]: I0216 21:22:49.730148 38936 fs.go:135] Filesystem UUIDs: map[3d9a04b0-92fb-4350-a5ea-d38e1e45e06e:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Feb 16 21:22:49.731633 master-0 kubenswrapper[38936]: I0216 21:22:49.730252 38936 fs.go:136] Filesystem partitions: 
map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0048dbcae18fdbd149a49da2679d70bbb9de5e907689064aaea0ab32348a1024/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0048dbcae18fdbd149a49da2679d70bbb9de5e907689064aaea0ab32348a1024/userdata/shm major:0 minor:745 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/017b12ba663cae17ffc7b3e8cac380511c7277e4c495d7f5a091fa50febd2724/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/017b12ba663cae17ffc7b3e8cac380511c7277e4c495d7f5a091fa50febd2724/userdata/shm major:0 minor:1331 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/02b45fb8e619cea5ccaf6f782fba75e7a7903a3e4348fde89b8d1bc48406b6c9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/02b45fb8e619cea5ccaf6f782fba75e7a7903a3e4348fde89b8d1bc48406b6c9/userdata/shm major:0 minor:724 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0334ad8c418e31c648e8c938f60c3ae9cf4f68761e776bef5ada2bade3f88833/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0334ad8c418e31c648e8c938f60c3ae9cf4f68761e776bef5ada2bade3f88833/userdata/shm major:0 minor:642 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/03ed4454e9c6237b864a1dab6c209256c79b0a72cb535e51a70e7b99d3f0689e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/03ed4454e9c6237b864a1dab6c209256c79b0a72cb535e51a70e7b99d3f0689e/userdata/shm major:0 minor:92 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/07e2ee4df3da5cd46dd10fb4afd51a212c46737743b9be4c1d162a76d568a6fd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/07e2ee4df3da5cd46dd10fb4afd51a212c46737743b9be4c1d162a76d568a6fd/userdata/shm major:0 minor:738 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0855efbb779255fb187bac22b944f8f2035fd58838e6517844db44571c397aae/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0855efbb779255fb187bac22b944f8f2035fd58838e6517844db44571c397aae/userdata/shm major:0 minor:578 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0c4934055dbc002aad718ae831c2d636c9e3bd49545da85cae7eace9dea452ac/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0c4934055dbc002aad718ae831c2d636c9e3bd49545da85cae7eace9dea452ac/userdata/shm major:0 minor:532 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0dfbee9f7528fe042540e180164336ecf2ece621fbebd18d9dde03c5a49a8d3a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0dfbee9f7528fe042540e180164336ecf2ece621fbebd18d9dde03c5a49a8d3a/userdata/shm major:0 minor:126 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/105b1eab12eec1f672058dc0900e8488b8bcca272b3ac3b2441b242d73128d7a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/105b1eab12eec1f672058dc0900e8488b8bcca272b3ac3b2441b242d73128d7a/userdata/shm major:0 minor:282 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/18445cef4b6797ad657a965be9f13f99564dcc29dc7e932a9b359ffe1a1aa1ce/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/18445cef4b6797ad657a965be9f13f99564dcc29dc7e932a9b359ffe1a1aa1ce/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/1befa239880012918c5014596ebf2ea1e19a17105f1c62212a86bd3326b1986f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1befa239880012918c5014596ebf2ea1e19a17105f1c62212a86bd3326b1986f/userdata/shm major:0 minor:1106 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1d4599582332a100db8555ba006867716892ce1ecdd5b2f904cbee81575c2c2d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1d4599582332a100db8555ba006867716892ce1ecdd5b2f904cbee81575c2c2d/userdata/shm major:0 minor:1108 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1e734464d78209c21a7a9eb2f6d22c8584997def010318f287f0cb7c28b7390b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1e734464d78209c21a7a9eb2f6d22c8584997def010318f287f0cb7c28b7390b/userdata/shm major:0 minor:303 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1ff8802ad134d499fee700156b80ec71b617c31ecfda4162eeae2f5521b198f8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1ff8802ad134d499fee700156b80ec71b617c31ecfda4162eeae2f5521b198f8/userdata/shm major:0 minor:957 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/27e39bf106b6e002c0125d685214889286fc25d34ba09141b24632bec0751f4d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/27e39bf106b6e002c0125d685214889286fc25d34ba09141b24632bec0751f4d/userdata/shm major:0 minor:741 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2dfa08dcecf95c49e6db650a7dbdf117c27ed644f23ff4e264133dd36a509d3c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2dfa08dcecf95c49e6db650a7dbdf117c27ed644f23ff4e264133dd36a509d3c/userdata/shm major:0 minor:305 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/33442d22098554ef2512c5bbab1d4a284aed4856345ee1eb8654ba065012ab94/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/33442d22098554ef2512c5bbab1d4a284aed4856345ee1eb8654ba065012ab94/userdata/shm major:0 minor:675 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/385456702c716ef5052af7ff4f8c1f6423867ff9037ec0352d3bef2843cc7641/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/385456702c716ef5052af7ff4f8c1f6423867ff9037ec0352d3bef2843cc7641/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3edd59cb6b6314e671425a245027b79b2d561376466e447c62b29ac14f08bcff/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3edd59cb6b6314e671425a245027b79b2d561376466e447c62b29ac14f08bcff/userdata/shm major:0 minor:967 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/401dbdafe44d87ba9ccf2adf090a2c537b4f84058eb049f0f6795c6752a1a8d0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/401dbdafe44d87ba9ccf2adf090a2c537b4f84058eb049f0f6795c6752a1a8d0/userdata/shm major:0 minor:44 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/404fdd69be202f40aeca377d1ba146b346077a53f8e7897ed4e324403366c1bf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/404fdd69be202f40aeca377d1ba146b346077a53f8e7897ed4e324403366c1bf/userdata/shm major:0 minor:1117 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4f2c49b4aa155e075775a0da6ce790eafb2a3d3e88c9dbca188493bbec98d810/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4f2c49b4aa155e075775a0da6ce790eafb2a3d3e88c9dbca188493bbec98d810/userdata/shm major:0 minor:300 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/4ff1d9141076f81759691d94a098009541c5d2c236ef8864f1522766d2980580/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4ff1d9141076f81759691d94a098009541c5d2c236ef8864f1522766d2980580/userdata/shm major:0 minor:265 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/611833cac10a2c7b92f524745bb3d40c37badfe83dfcc13e97aefe053823dfb9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/611833cac10a2c7b92f524745bb3d40c37badfe83dfcc13e97aefe053823dfb9/userdata/shm major:0 minor:443 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/62b7693910cb02952d8855d0ec6b5ec30d5524abd40344dea37279d475bce731/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/62b7693910cb02952d8855d0ec6b5ec30d5524abd40344dea37279d475bce731/userdata/shm major:0 minor:101 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6484af368276a809cf9fc113e39e94b58a7e749f404b7ad55bc0ffd6db6821c5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6484af368276a809cf9fc113e39e94b58a7e749f404b7ad55bc0ffd6db6821c5/userdata/shm major:0 minor:97 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6caed68f3fc79ebb1ed9e5bfd3e9f6a4bad90b8a5cdeab5884b6fd52a2305c16/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6caed68f3fc79ebb1ed9e5bfd3e9f6a4bad90b8a5cdeab5884b6fd52a2305c16/userdata/shm major:0 minor:1080 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6d07de2e0be321a3aec4da12f4f04e483d7ebf0407264e8a59f6674bcacef82d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6d07de2e0be321a3aec4da12f4f04e483d7ebf0407264e8a59f6674bcacef82d/userdata/shm major:0 minor:284 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/74e6be5033443384ea4bd5754c8e506826ab77e1e025ae4e7b5a3735350d70f2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/74e6be5033443384ea4bd5754c8e506826ab77e1e025ae4e7b5a3735350d70f2/userdata/shm major:0 minor:932 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/75ca3e4fc5da353a0ea31c674632f3429b17eb41f067d771200d9b0aea75af5d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/75ca3e4fc5da353a0ea31c674632f3429b17eb41f067d771200d9b0aea75af5d/userdata/shm major:0 minor:295 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/75d47673076de0f457cf43f09abae17f313fa42a6b18d0c5e8749dffb9564806/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/75d47673076de0f457cf43f09abae17f313fa42a6b18d0c5e8749dffb9564806/userdata/shm major:0 minor:1118 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/76e543cc5345eb5c53417c9f0b565400b03593c03aa3a1637483c029bb868ef3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/76e543cc5345eb5c53417c9f0b565400b03593c03aa3a1637483c029bb868ef3/userdata/shm major:0 minor:166 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7836160a631ad4fabd13fade7e117d0a195ed40a8c1f33bde283fef44ab0f21f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7836160a631ad4fabd13fade7e117d0a195ed40a8c1f33bde283fef44ab0f21f/userdata/shm major:0 minor:743 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/846c42631e11b31d77d6f927ca22e80b7cd7d920231f1d2b9f1cfa12101d157e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/846c42631e11b31d77d6f927ca22e80b7cd7d920231f1d2b9f1cfa12101d157e/userdata/shm major:0 minor:915 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/89fb595810896fd574764c1b2babfd4babc84a77caf787d5018047df10f3ac86/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/89fb595810896fd574764c1b2babfd4babc84a77caf787d5018047df10f3ac86/userdata/shm major:0 minor:72 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8e70ffdd495dcdb270b1f5bf74d98194840c0bb5429461a2cbed334f4538aeec/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8e70ffdd495dcdb270b1f5bf74d98194840c0bb5429461a2cbed334f4538aeec/userdata/shm major:0 minor:95 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/957c111d10e2d292281a50f8cc278f441c1f3165b491de07cd91b63ab4d96530/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/957c111d10e2d292281a50f8cc278f441c1f3165b491de07cd91b63ab4d96530/userdata/shm major:0 minor:112 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/95bb21eb958017bb1c79698309b67c3682dcd7011e9d5aacdb4e7366e93203b8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/95bb21eb958017bb1c79698309b67c3682dcd7011e9d5aacdb4e7366e93203b8/userdata/shm major:0 minor:1320 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/98ea530a3e85a55d27f014bb670a7b7e4444aedc192a8b2618c4f1830394b65c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/98ea530a3e85a55d27f014bb670a7b7e4444aedc192a8b2618c4f1830394b65c/userdata/shm major:0 minor:1224 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/99134c6775f2c1522a1480fdf36e455e0ea6704e4324711468efadafd1a4b744/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/99134c6775f2c1522a1480fdf36e455e0ea6704e4324711468efadafd1a4b744/userdata/shm major:0 minor:577 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/9b7b734a04c19ca82d24b6113d7260320b0a9c95bbc6375cd7e4100f7054eb3f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9b7b734a04c19ca82d24b6113d7260320b0a9c95bbc6375cd7e4100f7054eb3f/userdata/shm major:0 minor:1000 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9e9fb9a8fc61dba0936cd38d7b843d3efbdecc6ba9ec73f7423569f6305a4740/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9e9fb9a8fc61dba0936cd38d7b843d3efbdecc6ba9ec73f7423569f6305a4740/userdata/shm major:0 minor:142 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a5c8e6b51575e43d26e0817313f1ec460f29cff6ceb6629a7a5e2f186f585513/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a5c8e6b51575e43d26e0817313f1ec460f29cff6ceb6629a7a5e2f186f585513/userdata/shm major:0 minor:91 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a99765f7253d989ecd2ebab9422f8bd50f36c587e8b7eca1057d0e88a540b814/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a99765f7253d989ecd2ebab9422f8bd50f36c587e8b7eca1057d0e88a540b814/userdata/shm major:0 minor:527 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/abcd1a63f33b879c154e1f80fc5ea3f4b46d9d1e7d2159b6ce5ac662b670e5ff/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/abcd1a63f33b879c154e1f80fc5ea3f4b46d9d1e7d2159b6ce5ac662b670e5ff/userdata/shm major:0 minor:277 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/acec58956615bf5fc5d4c728869e591e541d368aa9b045c7975cb5d8c938ff55/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/acec58956615bf5fc5d4c728869e591e541d368aa9b045c7975cb5d8c938ff55/userdata/shm major:0 minor:1004 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/ad196ac4d2e3966bfb26599fb699f9a38a58beb4f2a551485dd0f16fe14d30d3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ad196ac4d2e3966bfb26599fb699f9a38a58beb4f2a551485dd0f16fe14d30d3/userdata/shm major:0 minor:90 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aed3d22aa5c102de3c056d7b1148ad38dc8f06e42bff2232e153f1a44338819c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aed3d22aa5c102de3c056d7b1148ad38dc8f06e42bff2232e153f1a44338819c/userdata/shm major:0 minor:1226 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b1c5e0970049830739dbde889218d9f83f1d9720ddba4de32c1b5bd6626ed51d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b1c5e0970049830739dbde889218d9f83f1d9720ddba4de32c1b5bd6626ed51d/userdata/shm major:0 minor:696 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b2fa0e56a1525a9dc4cb1eed44cc6376b6ac0d1c2fab2be1bd2cb007a4f90f8a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b2fa0e56a1525a9dc4cb1eed44cc6376b6ac0d1c2fab2be1bd2cb007a4f90f8a/userdata/shm major:0 minor:735 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b3fc27d6f88f12abb0f4db12508672dcd9584ab10707e7cd6f06dcebac1bbaa8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b3fc27d6f88f12abb0f4db12508672dcd9584ab10707e7cd6f06dcebac1bbaa8/userdata/shm major:0 minor:293 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b4ab6f7d6521695677ac09385923bea0cfde2c320361c5f6cbe98ce64b7475b2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b4ab6f7d6521695677ac09385923bea0cfde2c320361c5f6cbe98ce64b7475b2/userdata/shm major:0 minor:1292 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/b9312957dc15df5de566304a0d01d6c55a3f6333b95b61734ba1c6f29131877b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b9312957dc15df5de566304a0d01d6c55a3f6333b95b61734ba1c6f29131877b/userdata/shm major:0 minor:707 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c073f224d2a8cc60c80044d595d19260d941f19b426f78dc52e84033ff1afedc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c073f224d2a8cc60c80044d595d19260d941f19b426f78dc52e84033ff1afedc/userdata/shm major:0 minor:299 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c4765e33cdc956d84e8349da9b28a001d07fad6c39b6a113416bb9d1d1ae88dd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c4765e33cdc956d84e8349da9b28a001d07fad6c39b6a113416bb9d1d1ae88dd/userdata/shm major:0 minor:482 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c6c5fc997a3d90f0f136390ca95bcbc1e110994ac3cdfcc2e3e8e90f78ca1dd9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c6c5fc997a3d90f0f136390ca95bcbc1e110994ac3cdfcc2e3e8e90f78ca1dd9/userdata/shm major:0 minor:537 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c8c3670530b0c671383aade45325850e12f9fcf9f76178c2929f043d5a9b72a3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c8c3670530b0c671383aade45325850e12f9fcf9f76178c2929f043d5a9b72a3/userdata/shm major:0 minor:108 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cb7c3bcdaae372d84aa4e8a539ce094d23c02279631a56da69b150d86b62b5a5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cb7c3bcdaae372d84aa4e8a539ce094d23c02279631a56da69b150d86b62b5a5/userdata/shm major:0 minor:635 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/cb99eaa7ceffb734068bb188738c361f8400867f02f0acef09f3dcc317540b0e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cb99eaa7ceffb734068bb188738c361f8400867f02f0acef09f3dcc317540b0e/userdata/shm major:0 minor:1238 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cc46ef0ea78121e3debb45555162f099169024a83053e72fed30ccbe4c22554d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cc46ef0ea78121e3debb45555162f099169024a83053e72fed30ccbe4c22554d/userdata/shm major:0 minor:917 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cedd6b186b2f683612167b71883ce9d5bac09eb1edd2f0cb1e7e8286188d3035/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cedd6b186b2f683612167b71883ce9d5bac09eb1edd2f0cb1e7e8286188d3035/userdata/shm major:0 minor:580 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d1ce8d9ee7cab12610683fbe9731b9ea4f3d71878c552326acd5722dd5f1b61a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d1ce8d9ee7cab12610683fbe9731b9ea4f3d71878c552326acd5722dd5f1b61a/userdata/shm major:0 minor:289 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d2b7935cea946c9f051bb808d0bcec166c533127cc006510308f2ece80cabd7f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d2b7935cea946c9f051bb808d0bcec166c533127cc006510308f2ece80cabd7f/userdata/shm major:0 minor:839 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d306354fd5d2178f348beb7a119f77d313ccc80e6928076b9869dfc8a33d0edf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d306354fd5d2178f348beb7a119f77d313ccc80e6928076b9869dfc8a33d0edf/userdata/shm major:0 minor:739 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/d3122711a170f449cbae155070984deb894c3febeb5926b33f03b31158614e34/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d3122711a170f449cbae155070984deb894c3febeb5926b33f03b31158614e34/userdata/shm major:0 minor:784 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d3647391d6c6aea748cff19ab3829b4c4308cc4ee2ef9a5eb37149acfef03e2f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d3647391d6c6aea748cff19ab3829b4c4308cc4ee2ef9a5eb37149acfef03e2f/userdata/shm major:0 minor:492 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d731a0126023b327423b0d92ac9091c1188b42fa4686eb6ad7cba3b766448624/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d731a0126023b327423b0d92ac9091c1188b42fa4686eb6ad7cba3b766448624/userdata/shm major:0 minor:736 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d84a6211eba3f66c2ce7e68ab1344f23f51a23b55442aa18fdabbc1b25bc9adb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d84a6211eba3f66c2ce7e68ab1344f23f51a23b55442aa18fdabbc1b25bc9adb/userdata/shm major:0 minor:287 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/db0925be9adc52361772ef921815ff9b0ca5417617347a7d9e8f0049e699014a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/db0925be9adc52361772ef921815ff9b0ca5417617347a7d9e8f0049e699014a/userdata/shm major:0 minor:629 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/db18d33d279edf734f31d955c318fccdcbf15241593b0786bf92a199ab2a428f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/db18d33d279edf734f31d955c318fccdcbf15241593b0786bf92a199ab2a428f/userdata/shm major:0 minor:291 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/db8564acd67a0d7a69c00ddf2a89b541dc8e61594341a8f533db80c14da1c414/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/db8564acd67a0d7a69c00ddf2a89b541dc8e61594341a8f533db80c14da1c414/userdata/shm major:0 minor:628 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dbf32b84ea4131f980c7517f9adf09ab0debbea21b7d7312f8107de5103e23bd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dbf32b84ea4131f980c7517f9adf09ab0debbea21b7d7312f8107de5103e23bd/userdata/shm major:0 minor:437 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e1d55dfca25559f503e3ffffa2f5f036874c5ff002f21e1743ae94ece4a5c2a9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e1d55dfca25559f503e3ffffa2f5f036874c5ff002f21e1743ae94ece4a5c2a9/userdata/shm major:0 minor:966 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ebc8d1a24100c636c9029b0eba8d5b6521b906cdbb84675057a80b42a0273bbc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ebc8d1a24100c636c9029b0eba8d5b6521b906cdbb84675057a80b42a0273bbc/userdata/shm major:0 minor:143 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/edc9559c5a629f79661ac5fd3b656fc66e5b478f6eb97f32c266188a17c0e747/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/edc9559c5a629f79661ac5fd3b656fc66e5b478f6eb97f32c266188a17c0e747/userdata/shm major:0 minor:99 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f04bc2a9a7b0a2ad7783338e4d002aabfd3d03dc3ab93d584acf59a1f159b65a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f04bc2a9a7b0a2ad7783338e4d002aabfd3d03dc3ab93d584acf59a1f159b65a/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/f6ba9fbde2ec0f2099ab53176d9410c4bf53a78507ca46eeb7e91c2f36c118ed/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f6ba9fbde2ec0f2099ab53176d9410c4bf53a78507ca46eeb7e91c2f36c118ed/userdata/shm major:0 minor:718 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f94d68e1b5a31fd6ac38d04b76b6e3ee908e79aa67afc23e7d2bf54001deb6f0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f94d68e1b5a31fd6ac38d04b76b6e3ee908e79aa67afc23e7d2bf54001deb6f0/userdata/shm major:0 minor:487 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/03a5021d-8a5c-4011-a9f9-c5eb38d5f236/volumes/kubernetes.io~projected/kube-api-access-ldzxc:{mountpoint:/var/lib/kubelet/pods/03a5021d-8a5c-4011-a9f9-c5eb38d5f236/volumes/kubernetes.io~projected/kube-api-access-ldzxc major:0 minor:909 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/03a5021d-8a5c-4011-a9f9-c5eb38d5f236/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/03a5021d-8a5c-4011-a9f9-c5eb38d5f236/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:908 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/065fcd43-1572-4152-b77b-a6b7ab52a081/volumes/kubernetes.io~projected/kube-api-access-trcfg:{mountpoint:/var/lib/kubelet/pods/065fcd43-1572-4152-b77b-a6b7ab52a081/volumes/kubernetes.io~projected/kube-api-access-trcfg major:0 minor:384 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/065fcd43-1572-4152-b77b-a6b7ab52a081/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/065fcd43-1572-4152-b77b-a6b7ab52a081/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:383 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/0b02b740-5698-4e9a-90fe-2873bd0b0958/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/0b02b740-5698-4e9a-90fe-2873bd0b0958/volumes/kubernetes.io~projected/kube-api-access major:0 minor:269 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0b02b740-5698-4e9a-90fe-2873bd0b0958/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0b02b740-5698-4e9a-90fe-2873bd0b0958/volumes/kubernetes.io~secret/serving-cert major:0 minor:263 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d903d23-8e0b-424b-bcd0-e0a00f306e49/volumes/kubernetes.io~projected/kube-api-access-kcp5t:{mountpoint:/var/lib/kubelet/pods/0d903d23-8e0b-424b-bcd0-e0a00f306e49/volumes/kubernetes.io~projected/kube-api-access-kcp5t major:0 minor:433 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b/volumes/kubernetes.io~projected/kube-api-access-vddxb:{mountpoint:/var/lib/kubelet/pods/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b/volumes/kubernetes.io~projected/kube-api-access-vddxb major:0 minor:455 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b/volumes/kubernetes.io~secret/cert major:0 minor:985 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1489d1b6-d8a1-453a-bff3-8adfd4335903/volumes/kubernetes.io~projected/kube-api-access-xc47v:{mountpoint:/var/lib/kubelet/pods/1489d1b6-d8a1-453a-bff3-8adfd4335903/volumes/kubernetes.io~projected/kube-api-access-xc47v major:0 minor:655 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1489d1b6-d8a1-453a-bff3-8adfd4335903/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1489d1b6-d8a1-453a-bff3-8adfd4335903/volumes/kubernetes.io~secret/serving-cert major:0 minor:335 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:563 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64/volumes/kubernetes.io~empty-dir/tmp major:0 minor:562 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64/volumes/kubernetes.io~projected/kube-api-access-cdx88:{mountpoint:/var/lib/kubelet/pods/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64/volumes/kubernetes.io~projected/kube-api-access-cdx88 major:0 minor:564 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b/volumes/kubernetes.io~projected/ca-certs major:0 minor:516 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b/volumes/kubernetes.io~projected/kube-api-access-vxtft:{mountpoint:/var/lib/kubelet/pods/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b/volumes/kubernetes.io~projected/kube-api-access-vxtft major:0 minor:536 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1b61063e-775e-421d-bf73-a6ef134293a0/volumes/kubernetes.io~projected/kube-api-access-x7pk6:{mountpoint:/var/lib/kubelet/pods/1b61063e-775e-421d-bf73-a6ef134293a0/volumes/kubernetes.io~projected/kube-api-access-x7pk6 major:0 minor:107 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1b61063e-775e-421d-bf73-a6ef134293a0/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/1b61063e-775e-421d-bf73-a6ef134293a0/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/1d453639-52ed-4a14-a2ee-02cf9acc2f7c/volumes/kubernetes.io~projected/kube-api-access-59kpw:{mountpoint:/var/lib/kubelet/pods/1d453639-52ed-4a14-a2ee-02cf9acc2f7c/volumes/kubernetes.io~projected/kube-api-access-59kpw major:0 minor:135 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d453639-52ed-4a14-a2ee-02cf9acc2f7c/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/1d453639-52ed-4a14-a2ee-02cf9acc2f7c/volumes/kubernetes.io~secret/metrics-certs major:0 minor:733 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d7d0416-5f50-42bd-826b-92eecf9adcec/volumes/kubernetes.io~projected/kube-api-access-25mkq:{mountpoint:/var/lib/kubelet/pods/1d7d0416-5f50-42bd-826b-92eecf9adcec/volumes/kubernetes.io~projected/kube-api-access-25mkq major:0 minor:958 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d7d0416-5f50-42bd-826b-92eecf9adcec/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/1d7d0416-5f50-42bd-826b-92eecf9adcec/volumes/kubernetes.io~secret/cert major:0 minor:947 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/230d9624-2d9d-4036-967b-b530347f05d5/volumes/kubernetes.io~projected/kube-api-access-vqkvs:{mountpoint:/var/lib/kubelet/pods/230d9624-2d9d-4036-967b-b530347f05d5/volumes/kubernetes.io~projected/kube-api-access-vqkvs major:0 minor:98 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/230d9624-2d9d-4036-967b-b530347f05d5/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/230d9624-2d9d-4036-967b-b530347f05d5/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:93 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2506c282-0b37-4ece-8a0c-885d0b7f7901/volumes/kubernetes.io~projected/kube-api-access-6qd6r:{mountpoint:/var/lib/kubelet/pods/2506c282-0b37-4ece-8a0c-885d0b7f7901/volumes/kubernetes.io~projected/kube-api-access-6qd6r major:0 minor:251 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/2506c282-0b37-4ece-8a0c-885d0b7f7901/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/2506c282-0b37-4ece-8a0c-885d0b7f7901/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:519 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2506c282-0b37-4ece-8a0c-885d0b7f7901/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/2506c282-0b37-4ece-8a0c-885d0b7f7901/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:520 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/27c20f63-9bfb-4703-94d5-0c65475e08d1/volumes/kubernetes.io~projected/kube-api-access-hjsnz:{mountpoint:/var/lib/kubelet/pods/27c20f63-9bfb-4703-94d5-0c65475e08d1/volumes/kubernetes.io~projected/kube-api-access-hjsnz major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/27c20f63-9bfb-4703-94d5-0c65475e08d1/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/27c20f63-9bfb-4703-94d5-0c65475e08d1/volumes/kubernetes.io~secret/serving-cert major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2ab0a907-7abe-4808-ba21-bdda1506eae2/volumes/kubernetes.io~projected/kube-api-access-9pw88:{mountpoint:/var/lib/kubelet/pods/2ab0a907-7abe-4808-ba21-bdda1506eae2/volumes/kubernetes.io~projected/kube-api-access-9pw88 major:0 minor:274 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2ab0a907-7abe-4808-ba21-bdda1506eae2/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/2ab0a907-7abe-4808-ba21-bdda1506eae2/volumes/kubernetes.io~secret/serving-cert major:0 minor:262 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf/volumes/kubernetes.io~projected/kube-api-access-64qvl:{mountpoint:/var/lib/kubelet/pods/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf/volumes/kubernetes.io~projected/kube-api-access-64qvl major:0 minor:633 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf/volumes/kubernetes.io~secret/metrics-tls major:0 minor:641 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2e618c5c-52be-4b52-b426-b92555dee9de/volumes/kubernetes.io~projected/kube-api-access-nrc7l:{mountpoint:/var/lib/kubelet/pods/2e618c5c-52be-4b52-b426-b92555dee9de/volumes/kubernetes.io~projected/kube-api-access-nrc7l major:0 minor:257 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2e618c5c-52be-4b52-b426-b92555dee9de/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/2e618c5c-52be-4b52-b426-b92555dee9de/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2e618c5c-52be-4b52-b426-b92555dee9de/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/2e618c5c-52be-4b52-b426-b92555dee9de/volumes/kubernetes.io~secret/srv-cert major:0 minor:728 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/302156cc-9dca-4a66-9e6a-ba2c7e738c92/volumes/kubernetes.io~projected/kube-api-access-zxcg6:{mountpoint:/var/lib/kubelet/pods/302156cc-9dca-4a66-9e6a-ba2c7e738c92/volumes/kubernetes.io~projected/kube-api-access-zxcg6 major:0 minor:828 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/302156cc-9dca-4a66-9e6a-ba2c7e738c92/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/302156cc-9dca-4a66-9e6a-ba2c7e738c92/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:827 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/319dc882-e1f5-40f9-99f4-2bae028337e5/volumes/kubernetes.io~projected/kube-api-access-mtrzq:{mountpoint:/var/lib/kubelet/pods/319dc882-e1f5-40f9-99f4-2bae028337e5/volumes/kubernetes.io~projected/kube-api-access-mtrzq major:0 minor:906 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/319dc882-e1f5-40f9-99f4-2bae028337e5/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/319dc882-e1f5-40f9-99f4-2bae028337e5/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:905 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/319dc882-e1f5-40f9-99f4-2bae028337e5/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/319dc882-e1f5-40f9-99f4-2bae028337e5/volumes/kubernetes.io~secret/webhook-cert major:0 minor:904 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3403d2bf-b093-4f2e-80aa-73a3d6bcaffb/volumes/kubernetes.io~projected/kube-api-access-gxhfs:{mountpoint:/var/lib/kubelet/pods/3403d2bf-b093-4f2e-80aa-73a3d6bcaffb/volumes/kubernetes.io~projected/kube-api-access-gxhfs major:0 minor:1103 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/34743ce3-5eda-4c60-99cb-640dd067ebdf/volumes/kubernetes.io~projected/kube-api-access-vzm2t:{mountpoint:/var/lib/kubelet/pods/34743ce3-5eda-4c60-99cb-640dd067ebdf/volumes/kubernetes.io~projected/kube-api-access-vzm2t major:0 minor:634 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5/volumes/kubernetes.io~projected/kube-api-access-dgjlj:{mountpoint:/var/lib/kubelet/pods/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5/volumes/kubernetes.io~projected/kube-api-access-dgjlj major:0 minor:1315 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4085413c-9af1-4d2a-ba0f-33b42025cb7f/volumes/kubernetes.io~projected/kube-api-access-dw9lp:{mountpoint:/var/lib/kubelet/pods/4085413c-9af1-4d2a-ba0f-33b42025cb7f/volumes/kubernetes.io~projected/kube-api-access-dw9lp major:0 minor:273 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/408a9364-3730-4017-b1e4-c85d6a504168/volumes/kubernetes.io~projected/kube-api-access-lvw2m:{mountpoint:/var/lib/kubelet/pods/408a9364-3730-4017-b1e4-c85d6a504168/volumes/kubernetes.io~projected/kube-api-access-lvw2m major:0 minor:454 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/408a9364-3730-4017-b1e4-c85d6a504168/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/408a9364-3730-4017-b1e4-c85d6a504168/volumes/kubernetes.io~secret/serving-cert major:0 minor:447 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd/volumes/kubernetes.io~projected/kube-api-access-p7wrr:{mountpoint:/var/lib/kubelet/pods/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd/volumes/kubernetes.io~projected/kube-api-access-p7wrr major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd/volumes/kubernetes.io~secret/metrics-tls major:0 minor:575 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/484154d0-66c8-4d0e-bf1b-f48d0abfe628/volumes/kubernetes.io~projected/kube-api-access-b6wng:{mountpoint:/var/lib/kubelet/pods/484154d0-66c8-4d0e-bf1b-f48d0abfe628/volumes/kubernetes.io~projected/kube-api-access-b6wng major:0 minor:139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/484154d0-66c8-4d0e-bf1b-f48d0abfe628/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/484154d0-66c8-4d0e-bf1b-f48d0abfe628/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4a9f4f96-ca31-4959-93fe-c094caf8e077/volumes/kubernetes.io~projected/kube-api-access-xrc4z:{mountpoint:/var/lib/kubelet/pods/4a9f4f96-ca31-4959-93fe-c094caf8e077/volumes/kubernetes.io~projected/kube-api-access-xrc4z major:0 minor:1291 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4a9f4f96-ca31-4959-93fe-c094caf8e077/volumes/kubernetes.io~secret/client-ca-bundle:{mountpoint:/var/lib/kubelet/pods/4a9f4f96-ca31-4959-93fe-c094caf8e077/volumes/kubernetes.io~secret/client-ca-bundle major:0 minor:1290 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/4a9f4f96-ca31-4959-93fe-c094caf8e077/volumes/kubernetes.io~secret/secret-metrics-client-certs:{mountpoint:/var/lib/kubelet/pods/4a9f4f96-ca31-4959-93fe-c094caf8e077/volumes/kubernetes.io~secret/secret-metrics-client-certs major:0 minor:1282 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4a9f4f96-ca31-4959-93fe-c094caf8e077/volumes/kubernetes.io~secret/secret-metrics-server-tls:{mountpoint:/var/lib/kubelet/pods/4a9f4f96-ca31-4959-93fe-c094caf8e077/volumes/kubernetes.io~secret/secret-metrics-server-tls major:0 minor:1286 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4b035e85-b2b0-4dee-bb86-3465fc4b98a8/volumes/kubernetes.io~projected/kube-api-access-g7nmb:{mountpoint:/var/lib/kubelet/pods/4b035e85-b2b0-4dee-bb86-3465fc4b98a8/volumes/kubernetes.io~projected/kube-api-access-g7nmb major:0 minor:272 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4b035e85-b2b0-4dee-bb86-3465fc4b98a8/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/4b035e85-b2b0-4dee-bb86-3465fc4b98a8/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:729 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/55095f4f-cac0-456c-9ccc-45869392408c/volumes/kubernetes.io~projected/kube-api-access-7hnc6:{mountpoint:/var/lib/kubelet/pods/55095f4f-cac0-456c-9ccc-45869392408c/volumes/kubernetes.io~projected/kube-api-access-7hnc6 major:0 minor:913 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/55095f4f-cac0-456c-9ccc-45869392408c/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/55095f4f-cac0-456c-9ccc-45869392408c/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:912 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/59237aa6-6250-4619-8ee5-abae59f04b57/volumes/kubernetes.io~projected/kube-api-access-vklwz:{mountpoint:/var/lib/kubelet/pods/59237aa6-6250-4619-8ee5-abae59f04b57/volumes/kubernetes.io~projected/kube-api-access-vklwz major:0 minor:276 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/59237aa6-6250-4619-8ee5-abae59f04b57/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/59237aa6-6250-4619-8ee5-abae59f04b57/volumes/kubernetes.io~secret/serving-cert major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5e062e07-8076-444c-b476-4eb2848e9613/volumes/kubernetes.io~projected/kube-api-access-dfmv6:{mountpoint:/var/lib/kubelet/pods/5e062e07-8076-444c-b476-4eb2848e9613/volumes/kubernetes.io~projected/kube-api-access-dfmv6 major:0 minor:270 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5e062e07-8076-444c-b476-4eb2848e9613/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/5e062e07-8076-444c-b476-4eb2848e9613/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/62935559-041f-4694-9d36-adc809d079b4/volumes/kubernetes.io~projected/kube-api-access-6sq4t:{mountpoint:/var/lib/kubelet/pods/62935559-041f-4694-9d36-adc809d079b4/volumes/kubernetes.io~projected/kube-api-access-6sq4t major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/684a8167-6c5b-430f-979e-307e58487611/volumes/kubernetes.io~projected/kube-api-access-s9w8k:{mountpoint:/var/lib/kubelet/pods/684a8167-6c5b-430f-979e-307e58487611/volumes/kubernetes.io~projected/kube-api-access-s9w8k major:0 minor:483 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/695549c8-d1fc-429d-9c9f-0a5915dc6074/volumes/kubernetes.io~projected/kube-api-access-7bcmr:{mountpoint:/var/lib/kubelet/pods/695549c8-d1fc-429d-9c9f-0a5915dc6074/volumes/kubernetes.io~projected/kube-api-access-7bcmr major:0 minor:268 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/695549c8-d1fc-429d-9c9f-0a5915dc6074/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/695549c8-d1fc-429d-9c9f-0a5915dc6074/volumes/kubernetes.io~secret/serving-cert major:0 minor:259 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/69785167-b4ae-415b-bdcb-029f62effe78/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/69785167-b4ae-415b-bdcb-029f62effe78/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/69785167-b4ae-415b-bdcb-029f62effe78/volumes/kubernetes.io~projected/kube-api-access-dqm46:{mountpoint:/var/lib/kubelet/pods/69785167-b4ae-415b-bdcb-029f62effe78/volumes/kubernetes.io~projected/kube-api-access-dqm46 major:0 minor:141 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/69785167-b4ae-415b-bdcb-029f62effe78/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/69785167-b4ae-415b-bdcb-029f62effe78/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:140 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6b6be6de-6fcc-4f57-b163-fe8f970a01a4/volumes/kubernetes.io~projected/kube-api-access-mkz65:{mountpoint:/var/lib/kubelet/pods/6b6be6de-6fcc-4f57-b163-fe8f970a01a4/volumes/kubernetes.io~projected/kube-api-access-mkz65 major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6b6be6de-6fcc-4f57-b163-fe8f970a01a4/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6b6be6de-6fcc-4f57-b163-fe8f970a01a4/volumes/kubernetes.io~secret/serving-cert major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/70d217a9-86b7-47b9-a7da-9ac920b9c7c2/volumes/kubernetes.io~projected/kube-api-access-ll4rg:{mountpoint:/var/lib/kubelet/pods/70d217a9-86b7-47b9-a7da-9ac920b9c7c2/volumes/kubernetes.io~projected/kube-api-access-ll4rg major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/70d217a9-86b7-47b9-a7da-9ac920b9c7c2/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/70d217a9-86b7-47b9-a7da-9ac920b9c7c2/volumes/kubernetes.io~secret/etcd-client major:0 minor:245 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/70d217a9-86b7-47b9-a7da-9ac920b9c7c2/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/70d217a9-86b7-47b9-a7da-9ac920b9c7c2/volumes/kubernetes.io~secret/serving-cert major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7d6eb694-9a3d-49d1-bbc1-74ba4450d673/volumes/kubernetes.io~projected/kube-api-access-6jh6l:{mountpoint:/var/lib/kubelet/pods/7d6eb694-9a3d-49d1-bbc1-74ba4450d673/volumes/kubernetes.io~projected/kube-api-access-6jh6l major:0 minor:1232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7d6eb694-9a3d-49d1-bbc1-74ba4450d673/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/7d6eb694-9a3d-49d1-bbc1-74ba4450d673/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:1230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7d6eb694-9a3d-49d1-bbc1-74ba4450d673/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/7d6eb694-9a3d-49d1-bbc1-74ba4450d673/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:1218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/volumes/kubernetes.io~projected/kube-api-access major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/volumes/kubernetes.io~secret/serving-cert major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/853452fb-1035-4f22-8aeb-9043d150e8ca/volumes/kubernetes.io~projected/kube-api-access-zqkgp:{mountpoint:/var/lib/kubelet/pods/853452fb-1035-4f22-8aeb-9043d150e8ca/volumes/kubernetes.io~projected/kube-api-access-zqkgp major:0 minor:47 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/88c9d2fb-763f-4405-8d1a-c39039b41d3b/volumes/kubernetes.io~projected/kube-api-access-8qcq9:{mountpoint:/var/lib/kubelet/pods/88c9d2fb-763f-4405-8d1a-c39039b41d3b/volumes/kubernetes.io~projected/kube-api-access-8qcq9 major:0 minor:997 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/88c9d2fb-763f-4405-8d1a-c39039b41d3b/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/88c9d2fb-763f-4405-8d1a-c39039b41d3b/volumes/kubernetes.io~secret/proxy-tls major:0 minor:982 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/88f19cea-60ed-4977-a906-75deec51fc3d/volumes/kubernetes.io~projected/kube-api-access-x85fb:{mountpoint:/var/lib/kubelet/pods/88f19cea-60ed-4977-a906-75deec51fc3d/volumes/kubernetes.io~projected/kube-api-access-x85fb major:0 minor:161 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/88f19cea-60ed-4977-a906-75deec51fc3d/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/88f19cea-60ed-4977-a906-75deec51fc3d/volumes/kubernetes.io~secret/webhook-cert major:0 minor:165 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8b648d9e-a892-4951-b0e2-fed6b16273d4/volumes/kubernetes.io~projected/kube-api-access-sgj2q:{mountpoint:/var/lib/kubelet/pods/8b648d9e-a892-4951-b0e2-fed6b16273d4/volumes/kubernetes.io~projected/kube-api-access-sgj2q major:0 minor:926 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8b648d9e-a892-4951-b0e2-fed6b16273d4/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/8b648d9e-a892-4951-b0e2-fed6b16273d4/volumes/kubernetes.io~secret/cert major:0 minor:921 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8b648d9e-a892-4951-b0e2-fed6b16273d4/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/8b648d9e-a892-4951-b0e2-fed6b16273d4/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:920 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8d56b871-a53a-4928-8967-a33ea9dcec2a/volumes/kubernetes.io~projected/kube-api-access-22pl9:{mountpoint:/var/lib/kubelet/pods/8d56b871-a53a-4928-8967-a33ea9dcec2a/volumes/kubernetes.io~projected/kube-api-access-22pl9 major:0 minor:1330 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8d56b871-a53a-4928-8967-a33ea9dcec2a/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/8d56b871-a53a-4928-8967-a33ea9dcec2a/volumes/kubernetes.io~secret/webhook-certs major:0 minor:1326 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/913951bb-1702-4b71-862c-a166bc7a62e0/volumes/kubernetes.io~projected/kube-api-access-pgvx2:{mountpoint:/var/lib/kubelet/pods/913951bb-1702-4b71-862c-a166bc7a62e0/volumes/kubernetes.io~projected/kube-api-access-pgvx2 major:0 minor:1116 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/913951bb-1702-4b71-862c-a166bc7a62e0/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/913951bb-1702-4b71-862c-a166bc7a62e0/volumes/kubernetes.io~secret/certs major:0 minor:1092 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/913951bb-1702-4b71-862c-a166bc7a62e0/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/913951bb-1702-4b71-862c-a166bc7a62e0/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:1104 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/99ab949e-bd0d-45a7-95d1-8381d9f1f5f3/volumes/kubernetes.io~projected/kube-api-access-hv45g:{mountpoint:/var/lib/kubelet/pods/99ab949e-bd0d-45a7-95d1-8381d9f1f5f3/volumes/kubernetes.io~projected/kube-api-access-hv45g major:0 minor:502 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/99ab949e-bd0d-45a7-95d1-8381d9f1f5f3/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/99ab949e-bd0d-45a7-95d1-8381d9f1f5f3/volumes/kubernetes.io~secret/signing-key major:0 minor:501 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/9e0227bc-63f5-48be-95dc-1323a2b2e327/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/9e0227bc-63f5-48be-95dc-1323a2b2e327/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9e0227bc-63f5-48be-95dc-1323a2b2e327/volumes/kubernetes.io~projected/kube-api-access-z9vmp:{mountpoint:/var/lib/kubelet/pods/9e0227bc-63f5-48be-95dc-1323a2b2e327/volumes/kubernetes.io~projected/kube-api-access-z9vmp major:0 minor:253 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9e0227bc-63f5-48be-95dc-1323a2b2e327/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/9e0227bc-63f5-48be-95dc-1323a2b2e327/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:576 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a0b7a368-1408-4fc3-ae25-4613b74e7fca/volumes/kubernetes.io~projected/kube-api-access-98n4h:{mountpoint:/var/lib/kubelet/pods/a0b7a368-1408-4fc3-ae25-4613b74e7fca/volumes/kubernetes.io~projected/kube-api-access-98n4h major:0 minor:1156 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a0b7a368-1408-4fc3-ae25-4613b74e7fca/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/a0b7a368-1408-4fc3-ae25-4613b74e7fca/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:1154 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a0b7a368-1408-4fc3-ae25-4613b74e7fca/volumes/kubernetes.io~secret/prometheus-operator-tls:{mountpoint:/var/lib/kubelet/pods/a0b7a368-1408-4fc3-ae25-4613b74e7fca/volumes/kubernetes.io~secret/prometheus-operator-tls major:0 minor:518 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a4c9b781-14c0-469c-bb9e-0c3982a04520/volumes/kubernetes.io~projected/kube-api-access-8sd27:{mountpoint:/var/lib/kubelet/pods/a4c9b781-14c0-469c-bb9e-0c3982a04520/volumes/kubernetes.io~projected/kube-api-access-8sd27 major:0 minor:247 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/a4c9b781-14c0-469c-bb9e-0c3982a04520/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/a4c9b781-14c0-469c-bb9e-0c3982a04520/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a4c9b781-14c0-469c-bb9e-0c3982a04520/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/a4c9b781-14c0-469c-bb9e-0c3982a04520/volumes/kubernetes.io~secret/srv-cert major:0 minor:732 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a5d4ac48-aed3-46b9-9b2a-d741121e05b4/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/a5d4ac48-aed3-46b9-9b2a-d741121e05b4/volumes/kubernetes.io~projected/kube-api-access major:0 minor:692 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a5d4ac48-aed3-46b9-9b2a-d741121e05b4/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a5d4ac48-aed3-46b9-9b2a-d741121e05b4/volumes/kubernetes.io~secret/serving-cert major:0 minor:678 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/aa2e9bbc-3962-45f5-a7cc-2dc059409e70/volumes/kubernetes.io~projected/kube-api-access-wx8bf:{mountpoint:/var/lib/kubelet/pods/aa2e9bbc-3962-45f5-a7cc-2dc059409e70/volumes/kubernetes.io~projected/kube-api-access-wx8bf major:0 minor:627 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/aa2e9bbc-3962-45f5-a7cc-2dc059409e70/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/aa2e9bbc-3962-45f5-a7cc-2dc059409e70/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:964 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b1ac9776-54c4-46ce-b898-01c8cf35e593/volumes/kubernetes.io~projected/kube-api-access-vzx4s:{mountpoint:/var/lib/kubelet/pods/b1ac9776-54c4-46ce-b898-01c8cf35e593/volumes/kubernetes.io~projected/kube-api-access-vzx4s major:0 minor:491 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b27de289-c0f9-47ff-aac6-15b7bc1b178a/volumes/kubernetes.io~projected/kube-api-access-fx4tz:{mountpoint:/var/lib/kubelet/pods/b27de289-c0f9-47ff-aac6-15b7bc1b178a/volumes/kubernetes.io~projected/kube-api-access-fx4tz major:0 minor:254 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b27de289-c0f9-47ff-aac6-15b7bc1b178a/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/b27de289-c0f9-47ff-aac6-15b7bc1b178a/volumes/kubernetes.io~secret/webhook-certs major:0 minor:730 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b27e0202-8bdb-4a36-8c3e-0c203f7665b8/volumes/kubernetes.io~projected/kube-api-access-zmvtk:{mountpoint:/var/lib/kubelet/pods/b27e0202-8bdb-4a36-8c3e-0c203f7665b8/volumes/kubernetes.io~projected/kube-api-access-zmvtk major:0 minor:73 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b28234d1-1d9a-4d9f-9ad1-e3c682bed492/volumes/kubernetes.io~projected/kube-api-access-67qzh:{mountpoint:/var/lib/kubelet/pods/b28234d1-1d9a-4d9f-9ad1-e3c682bed492/volumes/kubernetes.io~projected/kube-api-access-67qzh major:0 minor:285 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b28234d1-1d9a-4d9f-9ad1-e3c682bed492/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/b28234d1-1d9a-4d9f-9ad1-e3c682bed492/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:731 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ba294358-051a-4f09-b182-710d3d6778c5/volumes/kubernetes.io~projected/kube-api-access-qf2w4:{mountpoint:/var/lib/kubelet/pods/ba294358-051a-4f09-b182-710d3d6778c5/volumes/kubernetes.io~projected/kube-api-access-qf2w4 major:0 minor:976 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ba294358-051a-4f09-b182-710d3d6778c5/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/ba294358-051a-4f09-b182-710d3d6778c5/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:975 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/bd49e653-3b42-4950-8f5f-2b2ecb683678/volumes/kubernetes.io~projected/kube-api-access-kf4qg:{mountpoint:/var/lib/kubelet/pods/bd49e653-3b42-4950-8f5f-2b2ecb683678/volumes/kubernetes.io~projected/kube-api-access-kf4qg major:0 minor:723 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bd49e653-3b42-4950-8f5f-2b2ecb683678/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/bd49e653-3b42-4950-8f5f-2b2ecb683678/volumes/kubernetes.io~secret/encryption-config major:0 minor:722 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bd49e653-3b42-4950-8f5f-2b2ecb683678/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/bd49e653-3b42-4950-8f5f-2b2ecb683678/volumes/kubernetes.io~secret/etcd-client major:0 minor:704 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bd49e653-3b42-4950-8f5f-2b2ecb683678/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/bd49e653-3b42-4950-8f5f-2b2ecb683678/volumes/kubernetes.io~secret/serving-cert major:0 minor:721 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c7333319-3fe6-4b3f-b600-6b6df49fcaff/volumes/kubernetes.io~projected/kube-api-access-qx2kd:{mountpoint:/var/lib/kubelet/pods/c7333319-3fe6-4b3f-b600-6b6df49fcaff/volumes/kubernetes.io~projected/kube-api-access-qx2kd major:0 minor:258 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c7333319-3fe6-4b3f-b600-6b6df49fcaff/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c7333319-3fe6-4b3f-b600-6b6df49fcaff/volumes/kubernetes.io~secret/serving-cert major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee/volumes/kubernetes.io~projected/kube-api-access-7xgcn:{mountpoint:/var/lib/kubelet/pods/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee/volumes/kubernetes.io~projected/kube-api-access-7xgcn major:0 minor:1105 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee/volumes/kubernetes.io~secret/default-certificate major:0 minor:1097 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee/volumes/kubernetes.io~secret/metrics-certs major:0 minor:1099 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee/volumes/kubernetes.io~secret/stats-auth major:0 minor:1096 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ce229d27-837d-4a98-80fc-d56877ae39b8/volumes/kubernetes.io~projected/kube-api-access-dcwzq:{mountpoint:/var/lib/kubelet/pods/ce229d27-837d-4a98-80fc-d56877ae39b8/volumes/kubernetes.io~projected/kube-api-access-dcwzq major:0 minor:589 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cef33294-81fb-41a2-811d-2565f94514d1/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/cef33294-81fb-41a2-811d-2565f94514d1/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:275 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cef33294-81fb-41a2-811d-2565f94514d1/volumes/kubernetes.io~projected/kube-api-access-5tklr:{mountpoint:/var/lib/kubelet/pods/cef33294-81fb-41a2-811d-2565f94514d1/volumes/kubernetes.io~projected/kube-api-access-5tklr major:0 minor:281 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cef33294-81fb-41a2-811d-2565f94514d1/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/cef33294-81fb-41a2-811d-2565f94514d1/volumes/kubernetes.io~secret/metrics-tls major:0 minor:547 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d2501eec-47c8-47bc-b0c9-28d94c06075b/volumes/kubernetes.io~projected/kube-api-access-x4djt:{mountpoint:/var/lib/kubelet/pods/d2501eec-47c8-47bc-b0c9-28d94c06075b/volumes/kubernetes.io~projected/kube-api-access-x4djt major:0 minor:561 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d2501eec-47c8-47bc-b0c9-28d94c06075b/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/d2501eec-47c8-47bc-b0c9-28d94c06075b/volumes/kubernetes.io~secret/encryption-config major:0 minor:555 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d2501eec-47c8-47bc-b0c9-28d94c06075b/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/d2501eec-47c8-47bc-b0c9-28d94c06075b/volumes/kubernetes.io~secret/etcd-client major:0 minor:560 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d2501eec-47c8-47bc-b0c9-28d94c06075b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d2501eec-47c8-47bc-b0c9-28d94c06075b/volumes/kubernetes.io~secret/serving-cert major:0 minor:621 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d8d648c7-b84b-4f43-84c9-903aead0891a/volumes/kubernetes.io~projected/kube-api-access-nq9c5:{mountpoint:/var/lib/kubelet/pods/d8d648c7-b84b-4f43-84c9-903aead0891a/volumes/kubernetes.io~projected/kube-api-access-nq9c5 major:0 minor:45 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9d71a7a-a751-4de4-9c76-9bac85fe0177/volumes/kubernetes.io~projected/kube-api-access-jkdzb:{mountpoint:/var/lib/kubelet/pods/d9d71a7a-a751-4de4-9c76-9bac85fe0177/volumes/kubernetes.io~projected/kube-api-access-jkdzb major:0 minor:267 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da07cd48-b1e8-4ccc-b980-84702cedb042/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/da07cd48-b1e8-4ccc-b980-84702cedb042/volumes/kubernetes.io~secret/tls-certificates major:0 minor:1098 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e7adbe32-b8b9-438e-a2e3-f93146a97424/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/e7adbe32-b8b9-438e-a2e3-f93146a97424/volumes/kubernetes.io~projected/kube-api-access major:0 minor:271 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e7adbe32-b8b9-438e-a2e3-f93146a97424/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e7adbe32-b8b9-438e-a2e3-f93146a97424/volumes/kubernetes.io~secret/serving-cert major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e8194cdc-3133-49e2-9579-a747c0bf2b16/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/e8194cdc-3133-49e2-9579-a747c0bf2b16/volumes/kubernetes.io~projected/ca-certs major:0 minor:74 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e8194cdc-3133-49e2-9579-a747c0bf2b16/volumes/kubernetes.io~projected/kube-api-access-hxvhm:{mountpoint:/var/lib/kubelet/pods/e8194cdc-3133-49e2-9579-a747c0bf2b16/volumes/kubernetes.io~projected/kube-api-access-hxvhm major:0 minor:535 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e8194cdc-3133-49e2-9579-a747c0bf2b16/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/e8194cdc-3133-49e2-9579-a747c0bf2b16/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:533 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e9615af2-cad5-4705-9c2f-6f3c97026100/volumes/kubernetes.io~projected/kube-api-access-npfk7:{mountpoint:/var/lib/kubelet/pods/e9615af2-cad5-4705-9c2f-6f3c97026100/volumes/kubernetes.io~projected/kube-api-access-npfk7 major:0 minor:625 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e9615af2-cad5-4705-9c2f-6f3c97026100/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e9615af2-cad5-4705-9c2f-6f3c97026100/volumes/kubernetes.io~secret/serving-cert major:0 minor:626 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e9bd1f48-6d45-4045-b18e-46ce3005d51d/volumes/kubernetes.io~projected/kube-api-access-wckst:{mountpoint:/var/lib/kubelet/pods/e9bd1f48-6d45-4045-b18e-46ce3005d51d/volumes/kubernetes.io~projected/kube-api-access-wckst major:0 minor:1234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e9bd1f48-6d45-4045-b18e-46ce3005d51d/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/e9bd1f48-6d45-4045-b18e-46ce3005d51d/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config major:0 minor:1231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e9bd1f48-6d45-4045-b18e-46ce3005d51d/volumes/kubernetes.io~secret/kube-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/e9bd1f48-6d45-4045-b18e-46ce3005d51d/volumes/kubernetes.io~secret/kube-state-metrics-tls major:0 minor:1223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec7dd4ea-a139-45d4-96a4-506da1567292/volumes/kubernetes.io~projected/kube-api-access-9jt7h:{mountpoint:/var/lib/kubelet/pods/ec7dd4ea-a139-45d4-96a4-506da1567292/volumes/kubernetes.io~projected/kube-api-access-9jt7h major:0 minor:256 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec7dd4ea-a139-45d4-96a4-506da1567292/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/ec7dd4ea-a139-45d4-96a4-506da1567292/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:734 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f275e79f-923c-4d3a-8ed4-084a122ddcf4/volumes/kubernetes.io~projected/kube-api-access-cmn29:{mountpoint:/var/lib/kubelet/pods/f275e79f-923c-4d3a-8ed4-084a122ddcf4/volumes/kubernetes.io~projected/kube-api-access-cmn29 major:0 minor:977 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f7b30888-5994-4968-9db6-9533ac60c92e/volumes/kubernetes.io~projected/kube-api-access-fbfdg:{mountpoint:/var/lib/kubelet/pods/f7b30888-5994-4968-9db6-9533ac60c92e/volumes/kubernetes.io~projected/kube-api-access-fbfdg major:0 minor:1233 
fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f7b30888-5994-4968-9db6-9533ac60c92e/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/f7b30888-5994-4968-9db6-9533ac60c92e/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config major:0 minor:1229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f7b30888-5994-4968-9db6-9533ac60c92e/volumes/kubernetes.io~secret/openshift-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/f7b30888-5994-4968-9db6-9533ac60c92e/volumes/kubernetes.io~secret/openshift-state-metrics-tls major:0 minor:1222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fb1eac23-18a5-4706-adcd-81d83e04cd12/volumes/kubernetes.io~projected/kube-api-access-8vcsp:{mountpoint:/var/lib/kubelet/pods/fb1eac23-18a5-4706-adcd-81d83e04cd12/volumes/kubernetes.io~projected/kube-api-access-8vcsp major:0 minor:1072 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fb1eac23-18a5-4706-adcd-81d83e04cd12/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/fb1eac23-18a5-4706-adcd-81d83e04cd12/volumes/kubernetes.io~secret/proxy-tls major:0 minor:1060 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ff193060-a272-4e4e-990a-83ac410f523d/volumes/kubernetes.io~projected/kube-api-access-wmhq9:{mountpoint:/var/lib/kubelet/pods/ff193060-a272-4e4e-990a-83ac410f523d/volumes/kubernetes.io~projected/kube-api-access-wmhq9 major:0 minor:620 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ff193060-a272-4e4e-990a-83ac410f523d/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/ff193060-a272-4e4e-990a-83ac410f523d/volumes/kubernetes.io~secret/proxy-tls major:0 minor:619 fsType:tmpfs blockSize:0} overlay_0-100:{mountpoint:/var/lib/containers/storage/overlay/f25191b2045cf9cfd3b3ef0a303d5e95112053103fcb82df1e777eb98da08c41/merged major:0 minor:100 fsType:overlay blockSize:0} 
overlay_0-1006:{mountpoint:/var/lib/containers/storage/overlay/5f12ba551157f40cfec247dc9a6bb156b6db29e44b8d7cb0ca3edae8e165f135/merged major:0 minor:1006 fsType:overlay blockSize:0} overlay_0-1008:{mountpoint:/var/lib/containers/storage/overlay/b47760c60991b7a61ac579bbf2352cb96877f7280c5590436256400013634885/merged major:0 minor:1008 fsType:overlay blockSize:0} overlay_0-1010:{mountpoint:/var/lib/containers/storage/overlay/c7754db36b5ea490e4cb3d9a4e2f3e26fe2cc9f70d7a214770a1f093a1c73153/merged major:0 minor:1010 fsType:overlay blockSize:0} overlay_0-1015:{mountpoint:/var/lib/containers/storage/overlay/fdd0dc5dbd907a0c781cc0380223f5dff0b83819867491d8a6d0e179d9f05af0/merged major:0 minor:1015 fsType:overlay blockSize:0} overlay_0-1017:{mountpoint:/var/lib/containers/storage/overlay/c7d66b4442e77530b491c6cc718d8daeb421d12e58eacab68ebba8aa81bfde9b/merged major:0 minor:1017 fsType:overlay blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/ef932d76d15b9fcafa4ecdada5146eb7a33114e691ac2b27f8675abf7e3a3bef/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-1025:{mountpoint:/var/lib/containers/storage/overlay/d846ca1b5e56aee3fbd7161baf10ca0d4eac6c92983f7a651ce4e0c2e73f189d/merged major:0 minor:1025 fsType:overlay blockSize:0} overlay_0-1029:{mountpoint:/var/lib/containers/storage/overlay/aca645082d1e99536669abcfcbbd3a05e009862e501fdc06e9b2b41867ab61eb/merged major:0 minor:1029 fsType:overlay blockSize:0} overlay_0-1034:{mountpoint:/var/lib/containers/storage/overlay/7f3173623644076341918d15af5f99de16ab9df436ca453ccf2316417c80586e/merged major:0 minor:1034 fsType:overlay blockSize:0} overlay_0-1036:{mountpoint:/var/lib/containers/storage/overlay/ead224107ad82ba2fe7f74df074e722609bf7c442d0af2d6707877af2fd77a1a/merged major:0 minor:1036 fsType:overlay blockSize:0} overlay_0-1039:{mountpoint:/var/lib/containers/storage/overlay/be6f6a68639c76ca8fc34b0a59a12c34c5d677d67a050baa70a8c53e0c1c41c2/merged major:0 minor:1039 fsType:overlay blockSize:0} 
overlay_0-1041:{mountpoint:/var/lib/containers/storage/overlay/0b49251b2bd3913456706d3b302a199e79a0b54c881d7d428b57e3924f2b80fc/merged major:0 minor:1041 fsType:overlay blockSize:0} overlay_0-105:{mountpoint:/var/lib/containers/storage/overlay/4394a5b73a649bc920d0850a3520f2fcfb57f6af97c3cd5be39981acc05400de/merged major:0 minor:105 fsType:overlay blockSize:0} overlay_0-1050:{mountpoint:/var/lib/containers/storage/overlay/bdecbf99e052095dc580bb1b38b68ade9fb224a3c8b766d66158ad5035793f20/merged major:0 minor:1050 fsType:overlay blockSize:0} overlay_0-1054:{mountpoint:/var/lib/containers/storage/overlay/7b1dbbc73e96b946632d1896ff36149306d33cbca9022e59a3ffb52636d506e9/merged major:0 minor:1054 fsType:overlay blockSize:0} overlay_0-1067:{mountpoint:/var/lib/containers/storage/overlay/c3c1d6c0962dc8b1cb7fcbe43897d85792063cf4617ed1b3352999bff8146017/merged major:0 minor:1067 fsType:overlay blockSize:0} overlay_0-1073:{mountpoint:/var/lib/containers/storage/overlay/794e21ef6a7c64c966f63f23581cda9629ce688da1caf0dcc9bd3bb5ad2c6165/merged major:0 minor:1073 fsType:overlay blockSize:0} overlay_0-1082:{mountpoint:/var/lib/containers/storage/overlay/2eb50790bfbeeb8207871c49eec66cd2a2249657e1039ab828574f12565082e7/merged major:0 minor:1082 fsType:overlay blockSize:0} overlay_0-1084:{mountpoint:/var/lib/containers/storage/overlay/63254bed7ac85fe52f285784b702d3f67c38f4c51112d263b450d86f56040e63/merged major:0 minor:1084 fsType:overlay blockSize:0} overlay_0-1086:{mountpoint:/var/lib/containers/storage/overlay/da74aede35b25392233f74a1ed5175b0c8d081f017b624090640c2f2b95fa3c9/merged major:0 minor:1086 fsType:overlay blockSize:0} overlay_0-110:{mountpoint:/var/lib/containers/storage/overlay/a16af00a96418556609dc90ef526c14696c94f227e4e7265923a2b5373725194/merged major:0 minor:110 fsType:overlay blockSize:0} overlay_0-1114:{mountpoint:/var/lib/containers/storage/overlay/e24969b3f1d67799fb93b645cf95bf9750c3019bcf26e33a43ce1c4b17477039/merged major:0 minor:1114 fsType:overlay blockSize:0} 
overlay_0-1120:{mountpoint:/var/lib/containers/storage/overlay/606ffb36cf4e30860046640d9a6675eac3533f15cdb36e045a0e55de5d9b5e16/merged major:0 minor:1120 fsType:overlay blockSize:0} overlay_0-1123:{mountpoint:/var/lib/containers/storage/overlay/7bf4115d1ee5eb8dcb35231c0df0e1f11803cca785f7e3c25d9cb855580fc106/merged major:0 minor:1123 fsType:overlay blockSize:0} overlay_0-1124:{mountpoint:/var/lib/containers/storage/overlay/001c9eed460e778d6685ae31429302433d6b394968beeafa15f6d56ec1c1be83/merged major:0 minor:1124 fsType:overlay blockSize:0} overlay_0-1126:{mountpoint:/var/lib/containers/storage/overlay/7aae5ab2d07645be727329ae344950449b9374ef965826ac9d982d8d966fd031/merged major:0 minor:1126 fsType:overlay blockSize:0} overlay_0-1128:{mountpoint:/var/lib/containers/storage/overlay/4903b971fb7a4aac514433026adba17790386016bd26eb0bbd677b7f81f71314/merged major:0 minor:1128 fsType:overlay blockSize:0} overlay_0-1129:{mountpoint:/var/lib/containers/storage/overlay/1da46be66ce53e79757e73327ff26623d906c6528e03979d7783c6205cebaaf5/merged major:0 minor:1129 fsType:overlay blockSize:0} overlay_0-1132:{mountpoint:/var/lib/containers/storage/overlay/d5ea9713e025ababba60443af4ae2a29f64ac7691cc9921a787d8afc198adc72/merged major:0 minor:1132 fsType:overlay blockSize:0} overlay_0-1135:{mountpoint:/var/lib/containers/storage/overlay/c09564e9a21fddc90af1034d2e19886f80c83338b40b33c532eb94e796f5ab1e/merged major:0 minor:1135 fsType:overlay blockSize:0} overlay_0-1137:{mountpoint:/var/lib/containers/storage/overlay/418915c11f6dd668a4fced69c46e224170743988978ef5e0eef9d3430898d27c/merged major:0 minor:1137 fsType:overlay blockSize:0} overlay_0-114:{mountpoint:/var/lib/containers/storage/overlay/fe2ed4ed1da7a817736ed722aa82e07f3c95247d9e29a3d25f74e7517682643e/merged major:0 minor:114 fsType:overlay blockSize:0} overlay_0-1140:{mountpoint:/var/lib/containers/storage/overlay/106693de137dacf81df0c04b92070ab4572522a42cc69aade901d14332fd5ac3/merged major:0 minor:1140 fsType:overlay blockSize:0} 
overlay_0-1150:{mountpoint:/var/lib/containers/storage/overlay/1cad029ba5a09f32303a982e4378a5774ad18cac76d86f845b2a909bf5ff09b8/merged major:0 minor:1150 fsType:overlay blockSize:0} overlay_0-1155:{mountpoint:/var/lib/containers/storage/overlay/29b1e3e4e2dc7fbb073391ea391f1547527bf54e9d84e78edea88c2bf12e642d/merged major:0 minor:1155 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/a644459dd1693f7e0b6a814a9f004dc51b2d3bba070c485d31ab6f9db87e1cb7/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-1168:{mountpoint:/var/lib/containers/storage/overlay/263ac1d10fb90ff1f3d7d08415e492e8e085f79ceb8c7fd76133901137aac61f/merged major:0 minor:1168 fsType:overlay blockSize:0} overlay_0-117:{mountpoint:/var/lib/containers/storage/overlay/d35c0f2ee390b5d0b85e9556f2e657143d0e9a7c30a7b09c1ec678713960d539/merged major:0 minor:117 fsType:overlay blockSize:0} overlay_0-1177:{mountpoint:/var/lib/containers/storage/overlay/ccf332ecf4e14ac02aa708067d9fe56ea35eae441e9f57eb4bbf1a4d879ae798/merged major:0 minor:1177 fsType:overlay blockSize:0} overlay_0-118:{mountpoint:/var/lib/containers/storage/overlay/1bc0f8f47d390997afa9117011219bba5f291974eb499bcaab6a8e4e5fafa031/merged major:0 minor:118 fsType:overlay blockSize:0} overlay_0-1183:{mountpoint:/var/lib/containers/storage/overlay/f6b914efddfd3360158a5f9277ffd24a165c59b2174edd0c0412d8d1deb5cb94/merged major:0 minor:1183 fsType:overlay blockSize:0} overlay_0-1188:{mountpoint:/var/lib/containers/storage/overlay/00881c581d20a1aac8fd9e689441d429b7ecacf786f36b5415b3fd3afbac5a97/merged major:0 minor:1188 fsType:overlay blockSize:0} overlay_0-119:{mountpoint:/var/lib/containers/storage/overlay/70e984c3e0b06d0c35524d9e6c743ab06fe814fa378a2bad97b94804e5ef5edf/merged major:0 minor:119 fsType:overlay blockSize:0} overlay_0-1190:{mountpoint:/var/lib/containers/storage/overlay/e636bc8c89bbf4eae8e2a1755f4871456871eec1fe53f6f139d69d4934c1cf8d/merged major:0 minor:1190 fsType:overlay blockSize:0} 
overlay_0-1196:{mountpoint:/var/lib/containers/storage/overlay/73b42d91ab368a4a76becc8d5c0bc0feea3830bf13d857ff69eb84c2fb0c482b/merged major:0 minor:1196 fsType:overlay blockSize:0} overlay_0-120:{mountpoint:/var/lib/containers/storage/overlay/c0a2da6618bb224e14b26a0a4337facbc50a29b08f2639790c05cb59d0573c38/merged major:0 minor:120 fsType:overlay blockSize:0} overlay_0-1201:{mountpoint:/var/lib/containers/storage/overlay/4b99f09ef79201a1d270bdbbe44b390653d449c0c51faac4af411404b1787950/merged major:0 minor:1201 fsType:overlay blockSize:0} overlay_0-1207:{mountpoint:/var/lib/containers/storage/overlay/50c181ab67380fb0b799bbbf59fe4b2f987a0e537eba4df6ee9eadb809c662c4/merged major:0 minor:1207 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/da89f3dcdbed81c8b26e14c08c3d1404ada931ed9183b13ed32c6e0b0821d2b0/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-1215:{mountpoint:/var/lib/containers/storage/overlay/bca1141e9d8314de796e00bfcdf022527b0486f5c90d35c9c9c562c862eacb13/merged major:0 minor:1215 fsType:overlay blockSize:0} overlay_0-1219:{mountpoint:/var/lib/containers/storage/overlay/2bbe827ced0163a6b373e105fb7fd57a1860e1a0cf19499ffd6f9bc5f24f5b47/merged major:0 minor:1219 fsType:overlay blockSize:0} overlay_0-1227:{mountpoint:/var/lib/containers/storage/overlay/7bccd74ef1ada764df9685571cbbbebcddc57f72ae8f984402b8de708c52be40/merged major:0 minor:1227 fsType:overlay blockSize:0} overlay_0-123:{mountpoint:/var/lib/containers/storage/overlay/f5fd635d3f99615210d60854cf454ab9c030088d70c206be2f73898a4ac87eb1/merged major:0 minor:123 fsType:overlay blockSize:0} overlay_0-1236:{mountpoint:/var/lib/containers/storage/overlay/62c04afec8516c4a3765246f847acc144e79e47190240625e64d15b1d5576593/merged major:0 minor:1236 fsType:overlay blockSize:0} overlay_0-1240:{mountpoint:/var/lib/containers/storage/overlay/5389e217c0e304bb813c3f446d91dc746fc468cbc0c69a4b10c75b6a9ffd7879/merged major:0 minor:1240 fsType:overlay blockSize:0} 
overlay_0-1242:{mountpoint:/var/lib/containers/storage/overlay/c3d6ede98acccdc97988d808879c94a4f2baf9c6e6a127b2f201f38b6ec9267f/merged major:0 minor:1242 fsType:overlay blockSize:0} overlay_0-1244:{mountpoint:/var/lib/containers/storage/overlay/0e474818d8e8122a59a036368dd1c71df7180b09cdfbcc2f42f83d8a76abe38e/merged major:0 minor:1244 fsType:overlay blockSize:0} overlay_0-1246:{mountpoint:/var/lib/containers/storage/overlay/8c49033de77da8618e16fdee4c853369ed8486233902ecfda08408007fbc7c90/merged major:0 minor:1246 fsType:overlay blockSize:0} overlay_0-1252:{mountpoint:/var/lib/containers/storage/overlay/2bc95975f77b92de94625f825fc55f7e051b0e9b61c26274df5119fa84e159ff/merged major:0 minor:1252 fsType:overlay blockSize:0} overlay_0-1257:{mountpoint:/var/lib/containers/storage/overlay/7d1f8950ba89d87f3a13e92c0abfc7d072d02bd9d55b45a395ff08c06f36da2a/merged major:0 minor:1257 fsType:overlay blockSize:0} overlay_0-1259:{mountpoint:/var/lib/containers/storage/overlay/2f35bacbda356558dafc56f36d37f365e197f05195e2e688bf80f1e6219f4373/merged major:0 minor:1259 fsType:overlay blockSize:0} overlay_0-1261:{mountpoint:/var/lib/containers/storage/overlay/a7d8810ad23703064932fb5544b726e7fb3dab72783305e07443f77b478f4d8e/merged major:0 minor:1261 fsType:overlay blockSize:0} overlay_0-1263:{mountpoint:/var/lib/containers/storage/overlay/f13a6392a895608fe5c03871e23d5468776f61265ec2b185412deab899a094cd/merged major:0 minor:1263 fsType:overlay blockSize:0} overlay_0-1269:{mountpoint:/var/lib/containers/storage/overlay/ef0ddc61a8a5191189b356f0166143e7e703fba65908fc5c3f8612d6f4787998/merged major:0 minor:1269 fsType:overlay blockSize:0} overlay_0-1271:{mountpoint:/var/lib/containers/storage/overlay/1f74caf88125d154a7ade1765e22d48bedde171a73fd94eab7fa48b57849e4dd/merged major:0 minor:1271 fsType:overlay blockSize:0} overlay_0-1294:{mountpoint:/var/lib/containers/storage/overlay/cf0978431e141498e7742673a84576a44e40514d1281403ec05825f4cf5fac01/merged major:0 minor:1294 fsType:overlay 
blockSize:0} overlay_0-1296:{mountpoint:/var/lib/containers/storage/overlay/fb641e94e71301351f854bfb1a378a998f362829323710bf7de4bebb5417c9e2/merged major:0 minor:1296 fsType:overlay blockSize:0} overlay_0-1298:{mountpoint:/var/lib/containers/storage/overlay/d828ab4a0b888e30d68c22b807c947374826d8dd7303b45c449d0d4801442cff/merged major:0 minor:1298 fsType:overlay blockSize:0} overlay_0-1304:{mountpoint:/var/lib/containers/storage/overlay/ba508366176596ad3ca42ec9a8d54031f7eb055ea0f70b9089d39372e44a4288/merged major:0 minor:1304 fsType:overlay blockSize:0} overlay_0-1307:{mountpoint:/var/lib/containers/storage/overlay/3d2c2a47b02a6bfd1e09e7e3b682f11bfaace41deae5d2ac8d98a9fdb4f95260/merged major:0 minor:1307 fsType:overlay blockSize:0} overlay_0-131:{mountpoint:/var/lib/containers/storage/overlay/0a33891fe6496ce32e0be8f2ee6718c990f683198d1541ce7e2ae08a4318f273/merged major:0 minor:131 fsType:overlay blockSize:0} overlay_0-1311:{mountpoint:/var/lib/containers/storage/overlay/c39c73a860d5c94df40977e9517048fc874607300f08e0e4f124993b2c49ed32/merged major:0 minor:1311 fsType:overlay blockSize:0} overlay_0-1313:{mountpoint:/var/lib/containers/storage/overlay/e68b0ac3869afd1a0506d5dd5dc2bd22f2249909bb0b80763daaeec7c0b91d69/merged major:0 minor:1313 fsType:overlay blockSize:0} overlay_0-1322:{mountpoint:/var/lib/containers/storage/overlay/5d41e49b042ef17cff932f2343b73cb1763e87a7f303548c04e536971f429e7e/merged major:0 minor:1322 fsType:overlay blockSize:0} overlay_0-1333:{mountpoint:/var/lib/containers/storage/overlay/975a7700230e10a2b61c2febdbf8770ce372c929a7f6d2e5392a4baddd3401c9/merged major:0 minor:1333 fsType:overlay blockSize:0} overlay_0-1335:{mountpoint:/var/lib/containers/storage/overlay/49afe9427d3975a4393188eba94f8c1323d0c8a89ca1db3ae72f28cf0181168a/merged major:0 minor:1335 fsType:overlay blockSize:0} overlay_0-1337:{mountpoint:/var/lib/containers/storage/overlay/accf9c01e90339f69b1609163f91110d9d017017b0218f356239a6bccb0afc35/merged major:0 minor:1337 fsType:overlay 
blockSize:0} overlay_0-1348:{mountpoint:/var/lib/containers/storage/overlay/5b01ad79fec3076f60a2dcd830668e1e93dbce813b41e8a54073171833e7579b/merged major:0 minor:1348 fsType:overlay blockSize:0} overlay_0-1355:{mountpoint:/var/lib/containers/storage/overlay/e1239e614d3f65ad1326ce495aba7bbbc99dcd07ce5446f86f3627ef5c362a69/merged major:0 minor:1355 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/d3a64aa3e4789d041a748f6ef87f3e10bf9410bf07bc4b6703e7a24b552fe3b8/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-146:{mountpoint:/var/lib/containers/storage/overlay/1bc982c4179cdf65f88f07988c2735edad647e73373d6ce18f1c11186c312100/merged major:0 minor:146 fsType:overlay blockSize:0} overlay_0-148:{mountpoint:/var/lib/containers/storage/overlay/fba8ea7ba0a42f0a33ea2d49b3a5a725461541e4dd702e56691a1bf84e04e6e8/merged major:0 minor:148 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/04271dbe630a805da29384920d242ac67c7878c9daa9bd665ccb8fc68d8469b0/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/29b486b7a5f95df9f258990ca4a3fa47ee1d268d0d844950e4819a9751e9179c/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/6d1669f22f3208cde8f13a95834afc4d46e0afd5bf0cad3b37bd6211802052db/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/23113a94f03189db81c7b72c50bed722fc4b4759d055c8b9e6d2546dda17b33b/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-168:{mountpoint:/var/lib/containers/storage/overlay/13f5260209d18658f4775950eda3971d54191e3ff69449691cb12740a0ac951f/merged major:0 minor:168 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/ddabac877d0e80ab1e512757f8b4e698445b5ee65ec9300d3dcd9ce2bcb3a303/merged major:0 minor:170 fsType:overlay blockSize:0} 
overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/0159301ed4c5ccdccbf68bac5fd4d3104842477be093566eab08c5bcd149a491/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/3221f16cef7fb37f9fbdaa4325eff06d7d7b089218777412c5b5dbf7052a0547/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-176:{mountpoint:/var/lib/containers/storage/overlay/c70588ed54a18b38458b3efe694efcc9ae485c78ffaee231714853e183cc64c7/merged major:0 minor:176 fsType:overlay blockSize:0} overlay_0-178:{mountpoint:/var/lib/containers/storage/overlay/c5f95c93d935521d9fc7d8d6bea57fa822e56d5b6f345ae11bf920a4e1aafaef/merged major:0 minor:178 fsType:overlay blockSize:0} overlay_0-180:{mountpoint:/var/lib/containers/storage/overlay/fb2630a2443a9477de740f5d29121f1e5568fafa8c56bb7ddf12458ae25a0d6d/merged major:0 minor:180 fsType:overlay blockSize:0} overlay_0-183:{mountpoint:/var/lib/containers/storage/overlay/3b10712ffaa52e173ef0855679fe4bac23dd8d6961d46aea212eaadd2f4e7177/merged major:0 minor:183 fsType:overlay blockSize:0} overlay_0-185:{mountpoint:/var/lib/containers/storage/overlay/4a6b1d2e31be6b6e79068575dbbc43716c183e90511eb7320894712df1803469/merged major:0 minor:185 fsType:overlay blockSize:0} overlay_0-190:{mountpoint:/var/lib/containers/storage/overlay/08882a5d6969bb5f7b9bf4aed996f1f7d8d631db157a96d97f96e9501a522fc2/merged major:0 minor:190 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/2a12129fa3609d5841590e73fddef2f172dddb76f4a0fabc2705c7a4c9cbf6bc/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-200:{mountpoint:/var/lib/containers/storage/overlay/38a4638b500915016346e40400d2b26f267d53031e6c38c953353c5c5e2150b4/merged major:0 minor:200 fsType:overlay blockSize:0} overlay_0-205:{mountpoint:/var/lib/containers/storage/overlay/e8ef375f41c9fe2e9d4effd58f4820a921facf775706f0bd37bcfafe4b38ba00/merged major:0 minor:205 fsType:overlay blockSize:0} 
overlay_0-210:{mountpoint:/var/lib/containers/storage/overlay/502508f6d9b3022ffc202085e6527e330fd7fe33db2a8ba53908535715fc38ad/merged major:0 minor:210 fsType:overlay blockSize:0} overlay_0-211:{mountpoint:/var/lib/containers/storage/overlay/85f2a2cce62ac9c693c3b1e57d54c4f479c1855baf54e3275e7df7ba0317455d/merged major:0 minor:211 fsType:overlay blockSize:0} overlay_0-219:{mountpoint:/var/lib/containers/storage/overlay/a0599cfd5987fbf706159f28f7c1244d43610f337088350bc7ada1382f950864/merged major:0 minor:219 fsType:overlay blockSize:0} overlay_0-223:{mountpoint:/var/lib/containers/storage/overlay/2c440e5e38fff8d6e036d05d7e1e2133d0988d426a171cf21707fdf801f26559/merged major:0 minor:223 fsType:overlay blockSize:0} overlay_0-225:{mountpoint:/var/lib/containers/storage/overlay/79dde471c072739136a1c9f9eec0e8c698f8c239d4a601bde83dee49bb4c92db/merged major:0 minor:225 fsType:overlay blockSize:0} overlay_0-230:{mountpoint:/var/lib/containers/storage/overlay/20499207665fb6ee6a74ab5d2ce3dcf3af33f855229eb43c869f78f1289f6c56/merged major:0 minor:230 fsType:overlay blockSize:0} overlay_0-279:{mountpoint:/var/lib/containers/storage/overlay/7ee438d4967843b0666b19743d3d377cd7b15b7a74c1676200df78c776d966a6/merged major:0 minor:279 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/a5c79a16dbba97e555d8ddb00efddb0b73301170f3091687ee2e6c66eed17b27/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-307:{mountpoint:/var/lib/containers/storage/overlay/088bd340f4af26da5560d78b1429512665392b67a379d694c0886e90b7cb9f59/merged major:0 minor:307 fsType:overlay blockSize:0} overlay_0-309:{mountpoint:/var/lib/containers/storage/overlay/e19b90efdb440cb68e60d5adcfb150ec625513134f15ec1743c281aa45244c95/merged major:0 minor:309 fsType:overlay blockSize:0} overlay_0-311:{mountpoint:/var/lib/containers/storage/overlay/875693b97ff970cb9637e06c564318400127efba8ea2a7d76fa8b6eff8c88b94/merged major:0 minor:311 fsType:overlay blockSize:0} 
overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/589cb25e5e8adf1b16fb76498e5fd1692dfa9965e2426d8dfd21c8ba49336ba2/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-315:{mountpoint:/var/lib/containers/storage/overlay/69ceae353ea73dcfd45b733cd8533a98cbd656d2495985ce5d58f811cbaf0eeb/merged major:0 minor:315 fsType:overlay blockSize:0} overlay_0-318:{mountpoint:/var/lib/containers/storage/overlay/b30dd7e1caf13503f380177009bd24d6802756a59d5ac1c0429183192a3c2d9c/merged major:0 minor:318 fsType:overlay blockSize:0} overlay_0-319:{mountpoint:/var/lib/containers/storage/overlay/b7ba16ad4d6f3a22b1258f4543166039debd0aa6295e854732b1c8f05bc5cb13/merged major:0 minor:319 fsType:overlay blockSize:0} overlay_0-321:{mountpoint:/var/lib/containers/storage/overlay/498fa5967005e1f93e16c9e65e773b5937b3d1c738c74c3600643c6a21764f8d/merged major:0 minor:321 fsType:overlay blockSize:0} overlay_0-323:{mountpoint:/var/lib/containers/storage/overlay/88aeb26dfb57b3a33f7eec20be4c378e2c53279b7dc56c4d545ee42b1fe0a18a/merged major:0 minor:323 fsType:overlay blockSize:0} overlay_0-325:{mountpoint:/var/lib/containers/storage/overlay/25dfb27567ea86ed71e9e952b356f76b4132664598eebd0718ea42fd94156dc2/merged major:0 minor:325 fsType:overlay blockSize:0} overlay_0-327:{mountpoint:/var/lib/containers/storage/overlay/aad7a84d065d38c67b9518cbabe3aa341433477b7a4980eaf75436c7bd649e49/merged major:0 minor:327 fsType:overlay blockSize:0} overlay_0-329:{mountpoint:/var/lib/containers/storage/overlay/a76f6dc6c95ae3b93f05ba178facd50795e620ad8e9096705f734c271e54e939/merged major:0 minor:329 fsType:overlay blockSize:0} overlay_0-331:{mountpoint:/var/lib/containers/storage/overlay/7fe7d495779934cd716f67b6ff87425b7d2cfb2df5ec4c882c4fd3f624c964c0/merged major:0 minor:331 fsType:overlay blockSize:0} overlay_0-334:{mountpoint:/var/lib/containers/storage/overlay/5f4563399927125a4b3d6ef73956e4c8eb8cd06c12eefc6c1241d40ef6778a11/merged major:0 minor:334 fsType:overlay blockSize:0} 
overlay_0-337:{mountpoint:/var/lib/containers/storage/overlay/2670eddcb449e75ed89070d898331615470ac6dc12cf5c3e520cb61a503d2d2b/merged major:0 minor:337 fsType:overlay blockSize:0} overlay_0-339:{mountpoint:/var/lib/containers/storage/overlay/172ca81bea72c2a3b2d286a87d340b20366a432eac8852eb9487b7df262566dc/merged major:0 minor:339 fsType:overlay blockSize:0} overlay_0-347:{mountpoint:/var/lib/containers/storage/overlay/1ca0e51db786fe11c69a11cec47c30b3d0b48ba53b29625b7587e64cd0ec47e0/merged major:0 minor:347 fsType:overlay blockSize:0} overlay_0-351:{mountpoint:/var/lib/containers/storage/overlay/3047f1fa73920ee35001b52c787a055bff82d8f293864a2025a28a688df3a834/merged major:0 minor:351 fsType:overlay blockSize:0} overlay_0-353:{mountpoint:/var/lib/containers/storage/overlay/c47652548c27a7da9bac28514220cb44863a132d41f1b543ab20eba4261ff5d8/merged major:0 minor:353 fsType:overlay blockSize:0} overlay_0-354:{mountpoint:/var/lib/containers/storage/overlay/20ead9d4cdb9d37ab3ee692f945e93f74cd39a9ff740f42568483ad913c99d47/merged major:0 minor:354 fsType:overlay blockSize:0} overlay_0-356:{mountpoint:/var/lib/containers/storage/overlay/0b7f4b3604b8113f18169ecfc2afb5d6496e4b5fc520b1cd6a8cadb830390d3c/merged major:0 minor:356 fsType:overlay blockSize:0} overlay_0-358:{mountpoint:/var/lib/containers/storage/overlay/4f72a6e6b037d6953fc83a17042af9725376e5004247aa994cb99d4e1c62a4f0/merged major:0 minor:358 fsType:overlay blockSize:0} overlay_0-366:{mountpoint:/var/lib/containers/storage/overlay/f756ca47e014a2297fffc2ea42e86b299c3744cbd2dc8a08636b23f574bc3591/merged major:0 minor:366 fsType:overlay blockSize:0} overlay_0-372:{mountpoint:/var/lib/containers/storage/overlay/ef71da991ad675b5d264d2ec1b85d6a99afed7e3b53dbaa4839547f385990cfa/merged major:0 minor:372 fsType:overlay blockSize:0} overlay_0-374:{mountpoint:/var/lib/containers/storage/overlay/3a74d727af9ddad50df2fef6cf31cd891e7ccbd96d2ad2d035d9f8cfa669f05f/merged major:0 minor:374 fsType:overlay blockSize:0} 
overlay_0-376:{mountpoint:/var/lib/containers/storage/overlay/4e1fd94d48e9d8c1a0ab1c2210a97903babe8c311efe3a6ffcf45ac5db4f0242/merged major:0 minor:376 fsType:overlay blockSize:0} overlay_0-378:{mountpoint:/var/lib/containers/storage/overlay/11969042a211ce90428b183ffe91aeb1ff8467faba346aaf52571aaeeeda01c6/merged major:0 minor:378 fsType:overlay blockSize:0} overlay_0-380:{mountpoint:/var/lib/containers/storage/overlay/ecd73182ae9c45539415558364bb37d0b055674a1004c95b8eb47b3e56c1ba04/merged major:0 minor:380 fsType:overlay blockSize:0} overlay_0-385:{mountpoint:/var/lib/containers/storage/overlay/057bafa10a412b7f8e0dae13273bc4c8edca3cacc7bd7e77c6df16d2e3340bed/merged major:0 minor:385 fsType:overlay blockSize:0} overlay_0-388:{mountpoint:/var/lib/containers/storage/overlay/a2b8afc1df1a1512ebe046ba1996a2b27102b91cb6abe3ec5a5e3377eac66c82/merged major:0 minor:388 fsType:overlay blockSize:0} overlay_0-389:{mountpoint:/var/lib/containers/storage/overlay/84f568d01fa2498ec5a5f11c4fad685d3c252b0a636ec6701a128eedbd83f565/merged major:0 minor:389 fsType:overlay blockSize:0} overlay_0-391:{mountpoint:/var/lib/containers/storage/overlay/704565117b0e1ed980e6d974fa1397bd8840cb2dac79cd8f2271dc79952bef22/merged major:0 minor:391 fsType:overlay blockSize:0} overlay_0-393:{mountpoint:/var/lib/containers/storage/overlay/8f26ed1cf7d7a81c8aaa25429decd21e0555c4268237e947ff19161ac37f78e1/merged major:0 minor:393 fsType:overlay blockSize:0} overlay_0-394:{mountpoint:/var/lib/containers/storage/overlay/1416860fd93a403bb8f19aca0d508492652113e82b77168abe2cd56b11079cfe/merged major:0 minor:394 fsType:overlay blockSize:0} overlay_0-397:{mountpoint:/var/lib/containers/storage/overlay/91510ae6ec8b4a860275793f8593ffc7102edc733beeef4cd43a07d61d5f715e/merged major:0 minor:397 fsType:overlay blockSize:0} overlay_0-399:{mountpoint:/var/lib/containers/storage/overlay/c6c183f1c3661ee2c4a01e2907a594d5a3595e9bdace1853614f39423c51080e/merged major:0 minor:399 fsType:overlay blockSize:0} 
overlay_0-402:{mountpoint:/var/lib/containers/storage/overlay/95d2141ebea0e1b2d6cd95879ab9146f7775710cb7108f9570492ca727241a2b/merged major:0 minor:402 fsType:overlay blockSize:0} overlay_0-410:{mountpoint:/var/lib/containers/storage/overlay/ceefaa0a5a5e7739587a4bb3b900735c85bb684f3bfc6d3a10338141706ef487/merged major:0 minor:410 fsType:overlay blockSize:0} overlay_0-413:{mountpoint:/var/lib/containers/storage/overlay/ff094f0d7abc7acbf6b764f8e8b41311002b4851bce4a1136c46e4bc2b9cc36f/merged major:0 minor:413 fsType:overlay blockSize:0} overlay_0-422:{mountpoint:/var/lib/containers/storage/overlay/3cda618e6c6f9053103d572544b39a0525424e3383521e3c6494fbeb28da78a5/merged major:0 minor:422 fsType:overlay blockSize:0} overlay_0-428:{mountpoint:/var/lib/containers/storage/overlay/92bea4de92fc5d77364658aa130dedeb9af3d913eee2df63ec85a7215306b9fb/merged major:0 minor:428 fsType:overlay blockSize:0} overlay_0-432:{mountpoint:/var/lib/containers/storage/overlay/c0ae5324dfcedf995839bf9ffe2c35cb818eae9f1838d6084295b4962164bed9/merged major:0 minor:432 fsType:overlay blockSize:0} overlay_0-434:{mountpoint:/var/lib/containers/storage/overlay/64ccd9e7caf70abffd82f3aef2ebb8f2ad604c9039b0c00ed20d49e390c7a3d1/merged major:0 minor:434 fsType:overlay blockSize:0} overlay_0-436:{mountpoint:/var/lib/containers/storage/overlay/1dbfe38ee607e70c34b1bc05b6f19b8b1807bc0f1d6a77fb3ea316bd52b21e91/merged major:0 minor:436 fsType:overlay blockSize:0} overlay_0-438:{mountpoint:/var/lib/containers/storage/overlay/11dfd5b8f61ad387602daffaadbf2ff612451108a97bb7c22d9bfc0fdf23e481/merged major:0 minor:438 fsType:overlay blockSize:0} overlay_0-444:{mountpoint:/var/lib/containers/storage/overlay/0ca473755328818b00c2d1edf890edfa29239a38cbd454dc6c912b2824981cb7/merged major:0 minor:444 fsType:overlay blockSize:0} overlay_0-458:{mountpoint:/var/lib/containers/storage/overlay/29edce955cc6354b3982297097835781c3fa81c9d3f2a72baaaed09c7a83e3c3/merged major:0 minor:458 fsType:overlay blockSize:0} 
overlay_0-464:{mountpoint:/var/lib/containers/storage/overlay/f51024e1e1a341b114f7b044309be43bf14a8c8de14435fe9572e16145547590/merged major:0 minor:464 fsType:overlay blockSize:0} overlay_0-467:{mountpoint:/var/lib/containers/storage/overlay/b65cf0f33b70d5a9372ed118fb0e2e3ccff788c74171fa84146b0c4fe4b2a985/merged major:0 minor:467 fsType:overlay blockSize:0} overlay_0-469:{mountpoint:/var/lib/containers/storage/overlay/a1274bc0a89abde02c5c25e333f0d92f92672bbf5ff5e3f02f35016cc7c69725/merged major:0 minor:469 fsType:overlay blockSize:0} overlay_0-472:{mountpoint:/var/lib/containers/storage/overlay/c12998f8851c3117bc5d84c530a340e53b839d0ea6120041ed4d5d44048e9904/merged major:0 minor:472 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/095acba55455e38ef6efc06837032e254d145da8858d0f8096d357b9350eb5dd/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-480:{mountpoint:/var/lib/containers/storage/overlay/292477f57a08b12071b6c8d27ed11b10f4bd846797000b6dce8e6bf00a623625/merged major:0 minor:480 fsType:overlay blockSize:0} overlay_0-484:{mountpoint:/var/lib/containers/storage/overlay/52657916e265b12c004f39763185515d6a094b50c5277a7722d7c4d3cdce0c31/merged major:0 minor:484 fsType:overlay blockSize:0} overlay_0-486:{mountpoint:/var/lib/containers/storage/overlay/d8b341f2936dddd7a04013c9d16e850a1713fee632506cc13213ce117de9b717/merged major:0 minor:486 fsType:overlay blockSize:0} overlay_0-489:{mountpoint:/var/lib/containers/storage/overlay/f3fa50ab8edc3da567b66c90010e60b4d913967e45e7adc8ab218649dd4bc280/merged major:0 minor:489 fsType:overlay blockSize:0} overlay_0-494:{mountpoint:/var/lib/containers/storage/overlay/4f28cb4d71a678606a1f825d4808e6d97231e6f846759ae8d99c3ebbd384d8bd/merged major:0 minor:494 fsType:overlay blockSize:0} overlay_0-500:{mountpoint:/var/lib/containers/storage/overlay/936f9eb5d00fcaa90dcb3bd9d2580e8af13ebfa584e380c0a758d1f156dc7b04/merged major:0 minor:500 fsType:overlay blockSize:0} 
overlay_0-504:{mountpoint:/var/lib/containers/storage/overlay/57743f13209645f686c0b5aa23c8c6e6c12f8f7f6740bdf037ca20434715dc19/merged major:0 minor:504 fsType:overlay blockSize:0} overlay_0-506:{mountpoint:/var/lib/containers/storage/overlay/a88b399d1d6b24692942d738a2392b583c01a1b4410bfc5cd6f7fea3ae52d593/merged major:0 minor:506 fsType:overlay blockSize:0} overlay_0-512:{mountpoint:/var/lib/containers/storage/overlay/bf9ebf8bc333ea3d7ee41d98911bc65dd0c8fc0ab5a5741658bf1e41bd0045c2/merged major:0 minor:512 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/256dabb1a2b98012081ebe937b2f86b4a1ac17764be05b2a7394bfd2a5654446/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-529:{mountpoint:/var/lib/containers/storage/overlay/46cf362bd37f0d58e6d9e1f80cc663df0968ef4ea5cff46e188e89e6ecf34372/merged major:0 minor:529 fsType:overlay blockSize:0} overlay_0-538:{mountpoint:/var/lib/containers/storage/overlay/19c9322af32a3bea23229f8e43d581dabb805b9913f6ac11d73765e2ac77a4da/merged major:0 minor:538 fsType:overlay blockSize:0} overlay_0-541:{mountpoint:/var/lib/containers/storage/overlay/c37b25bab2df767c904e8f7fe7c364fc2524124a146af68810afde1898bcc051/merged major:0 minor:541 fsType:overlay blockSize:0} overlay_0-543:{mountpoint:/var/lib/containers/storage/overlay/433bbbd22f613f02989a09ce3e478bd37a6bec2690a6af5f106f6fa2cf2831c9/merged major:0 minor:543 fsType:overlay blockSize:0} overlay_0-546:{mountpoint:/var/lib/containers/storage/overlay/8d136d8d2a75ebbdfc08a5f312b9cc22e7b109005a6645e67b72e5db55ab2c3e/merged major:0 minor:546 fsType:overlay blockSize:0} overlay_0-548:{mountpoint:/var/lib/containers/storage/overlay/bf91088abc98c43df599444ac497ceb5a51d5105f411f44011a36ae8e791c354/merged major:0 minor:548 fsType:overlay blockSize:0} overlay_0-554:{mountpoint:/var/lib/containers/storage/overlay/fbef140fa93378ef8aa486cfca35f04bc0383a7085ef5e4ac722567c2708a1e9/merged major:0 
minor:554 fsType:overlay blockSize:0} overlay_0-565:{mountpoint:/var/lib/containers/storage/overlay/473b9ea71a1f174c9d6a1b7773159398b31798e042ac48b52bae6f80cc3c0402/merged major:0 minor:565 fsType:overlay blockSize:0} overlay_0-567:{mountpoint:/var/lib/containers/storage/overlay/19e5ed1fcefd1af78fbf749fbe793177c4926f4c8a4a5eeb35765a33e0581a43/merged major:0 minor:567 fsType:overlay blockSize:0} overlay_0-574:{mountpoint:/var/lib/containers/storage/overlay/78ec867665180fc2f37e5b403915fcfd53c32509406ee3b66a3196a98058da91/merged major:0 minor:574 fsType:overlay blockSize:0} overlay_0-583:{mountpoint:/var/lib/containers/storage/overlay/167436e387fbadf2ddb33245ed52a962890959003b4d30bac9bcbb08c6312d5f/merged major:0 minor:583 fsType:overlay blockSize:0} overlay_0-585:{mountpoint:/var/lib/containers/storage/overlay/e5cd3be6af83b234a334468346ef06f8deeb1405c5d77241655a584ec1666c1f/merged major:0 minor:585 fsType:overlay blockSize:0} overlay_0-587:{mountpoint:/var/lib/containers/storage/overlay/2a0b386f64e5b3a9b5f2dafecaaece7de7ab53a43a7e50032e06747253276ca6/merged major:0 minor:587 fsType:overlay blockSize:0} overlay_0-592:{mountpoint:/var/lib/containers/storage/overlay/26ab7177754768e2b4a06ebadf908f0c28968fd80993cb5a09cc4ea8d056e625/merged major:0 minor:592 fsType:overlay blockSize:0} overlay_0-595:{mountpoint:/var/lib/containers/storage/overlay/f67b57301cd4a1a1b459955f27b925208f5a8b6275bce4d75ea63a61e3b3e32e/merged major:0 minor:595 fsType:overlay blockSize:0} overlay_0-596:{mountpoint:/var/lib/containers/storage/overlay/98d7de29c5dabb328bca4c93ad46bc92ea5542fb83bddbabd726612e11d8fbfb/merged major:0 minor:596 fsType:overlay blockSize:0} overlay_0-598:{mountpoint:/var/lib/containers/storage/overlay/284c49e7a1b5f9c63c196cf5763ff7049fb67b6805af3445d6769db5c7f84646/merged major:0 minor:598 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/07e5ae6eb30bc72587015163cf6be035af8bc6397b5f57a9222c6d4c2d29c781/merged major:0 minor:60 
fsType:overlay blockSize:0} overlay_0-601:{mountpoint:/var/lib/containers/storage/overlay/85fc19ffcb9a19b4ed7149fb7525c920237c60521302796b3eb839b43c0948b7/merged major:0 minor:601 fsType:overlay blockSize:0} overlay_0-605:{mountpoint:/var/lib/containers/storage/overlay/8479b8b3565518897b53935aa0c93fe377cc9a40972f4df638f0742756682039/merged major:0 minor:605 fsType:overlay blockSize:0} overlay_0-608:{mountpoint:/var/lib/containers/storage/overlay/96b902795aeac8523c2091071f8de5d948bed2813da07a0926dc0cded5405b9b/merged major:0 minor:608 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/1fd0ce36fb5784312efa9ea367be24df641b985358e4a3a05fbb874c80e19002/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-631:{mountpoint:/var/lib/containers/storage/overlay/ee82c0adca8cd7d43a9ebe83bb3ca0af00c392bf458c3c3f1b18dcc46817f36a/merged major:0 minor:631 fsType:overlay blockSize:0} overlay_0-637:{mountpoint:/var/lib/containers/storage/overlay/55207fce0e94def57e7960a927a49967ab24340d4937e8d7be70d8c1eee10d8b/merged major:0 minor:637 fsType:overlay blockSize:0} overlay_0-639:{mountpoint:/var/lib/containers/storage/overlay/a83c80ff90f76303666438b6e73e7797927403a33fdbc102d9273d6dd5edf9ad/merged major:0 minor:639 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/2c5a403792e24f7d1a2c884c70c0c71d258ee1a9387bd25d48d0c9f7b483ee1a/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-647:{mountpoint:/var/lib/containers/storage/overlay/7782e0cf0864b5cb6eca77119acf57868275cfd8c7cd1c0aba7712c83ad22a59/merged major:0 minor:647 fsType:overlay blockSize:0} overlay_0-651:{mountpoint:/var/lib/containers/storage/overlay/cfafcb012f7d72d501690e89fc3d0ae5234ff9c58144cbdac3039a64081deb7b/merged major:0 minor:651 fsType:overlay blockSize:0} overlay_0-656:{mountpoint:/var/lib/containers/storage/overlay/2a1690802fb2f974323abb0298647942aee8cd7c853750e8d1540df2feb0eff5/merged major:0 minor:656 fsType:overlay 
blockSize:0} overlay_0-657:{mountpoint:/var/lib/containers/storage/overlay/7fb1566b8026f81779c3e77216922e98c6b4dfe416b55bad4776805271d8ee1a/merged major:0 minor:657 fsType:overlay blockSize:0} overlay_0-659:{mountpoint:/var/lib/containers/storage/overlay/0139164fd89f7262310b90818e2e91c6930da845f840cb92ff4e840f188dac33/merged major:0 minor:659 fsType:overlay blockSize:0} overlay_0-663:{mountpoint:/var/lib/containers/storage/overlay/cac5efa85844c0d6f03594fe8b42325c0d3c84cfbcee92610cecd57c7bd9cf8b/merged major:0 minor:663 fsType:overlay blockSize:0} overlay_0-664:{mountpoint:/var/lib/containers/storage/overlay/4eaf92a602c5ed23a9c4a3df7bdf9c29ac25e07a06fb50b8945047cf08bf5cec/merged major:0 minor:664 fsType:overlay blockSize:0} overlay_0-670:{mountpoint:/var/lib/containers/storage/overlay/7f100a5e32e9bac277d14b02d77cf894ce78cb0f6c66ac6f3e394c2a20c14bb9/merged major:0 minor:670 fsType:overlay blockSize:0} overlay_0-672:{mountpoint:/var/lib/containers/storage/overlay/85cf10753ca1d52205bcf30b8e12278ec399c7f55ea9205672b9ef712274e354/merged major:0 minor:672 fsType:overlay blockSize:0} overlay_0-674:{mountpoint:/var/lib/containers/storage/overlay/25f15ad41a8799412a1b15c8eb9342a11bc173d4b28cf640dca1db70a8122fd0/merged major:0 minor:674 fsType:overlay blockSize:0} overlay_0-679:{mountpoint:/var/lib/containers/storage/overlay/7d06eee8c0a98f3821b502d6122f54fbd8e11be6eb6a08988b88759f894ea62d/merged major:0 minor:679 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/f141d83423abbfe01ca3c93104d1022ac9d1d249291467ec73b09b9023372d70/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-693:{mountpoint:/var/lib/containers/storage/overlay/c6b39496b80a71c3c7c65454259506e3ca0f5906dac4e19d2ff48aa30e19d952/merged major:0 minor:693 fsType:overlay blockSize:0} overlay_0-698:{mountpoint:/var/lib/containers/storage/overlay/6b2e82f490ca359883c4d03878bdcb648f67a4843ffd44aaa5f0e757f3df75fc/merged major:0 minor:698 fsType:overlay blockSize:0} 
overlay_0-70:{mountpoint:/var/lib/containers/storage/overlay/d4113b059cc39751d54e6a39fcf8e58b0c884c9d15e77603fcc33d81d39b7eec/merged major:0 minor:70 fsType:overlay blockSize:0} overlay_0-700:{mountpoint:/var/lib/containers/storage/overlay/dc18f8eaa0fde4a0ca057e7094c79aac1df927b89663056c215b039a0c013c29/merged major:0 minor:700 fsType:overlay blockSize:0} overlay_0-702:{mountpoint:/var/lib/containers/storage/overlay/c5aea53e9e2ffad6bac2f453cfaf5641ea7252d67cf575f7836ac6b0f01d016d/merged major:0 minor:702 fsType:overlay blockSize:0} overlay_0-710:{mountpoint:/var/lib/containers/storage/overlay/cb324f6eed23a6a0683e48f754ff4c3bd12b08bcc8a0fa2438ccde353a8b7e4c/merged major:0 minor:710 fsType:overlay blockSize:0} overlay_0-711:{mountpoint:/var/lib/containers/storage/overlay/6d0590d7cf6239c1b63a2925d594747bfb09a83664335acd995d8f252dfd7777/merged major:0 minor:711 fsType:overlay blockSize:0} overlay_0-716:{mountpoint:/var/lib/containers/storage/overlay/272960bf8944ef516fd51b60f02d2ffbd4baafe1a2f2bdd217a617c8e8be9e64/merged major:0 minor:716 fsType:overlay blockSize:0} overlay_0-726:{mountpoint:/var/lib/containers/storage/overlay/657273f224203de540574fb6633f55340101104167c9c709677f7db16bfa0426/merged major:0 minor:726 fsType:overlay blockSize:0} overlay_0-749:{mountpoint:/var/lib/containers/storage/overlay/495fc2bc343a501950fdbd7eca07923a3ce538d3ebaab077fe61b7e800bc4b86/merged major:0 minor:749 fsType:overlay blockSize:0} overlay_0-75:{mountpoint:/var/lib/containers/storage/overlay/9da94fe156c2fa8ff37554c69ab04b7544caf762175e72b4edae3a43a5997974/merged major:0 minor:75 fsType:overlay blockSize:0} overlay_0-751:{mountpoint:/var/lib/containers/storage/overlay/4172d27c322241bb390d0f2e8ecae804ec6586c07c9e0df03483f18a144c67a7/merged major:0 minor:751 fsType:overlay blockSize:0} overlay_0-753:{mountpoint:/var/lib/containers/storage/overlay/055a4bd4088f2988f9baff738ccc7b62658aab0aac3fbdc8c55a70640e96d5fd/merged major:0 minor:753 fsType:overlay blockSize:0} 
overlay_0-755:{mountpoint:/var/lib/containers/storage/overlay/7c4d07087562b5e4f1a9f8d71cf8581347a5f60ab06ec5a6bf1ea7c608212b96/merged major:0 minor:755 fsType:overlay blockSize:0} overlay_0-757:{mountpoint:/var/lib/containers/storage/overlay/75abad0fb232ee97e3276b25c7d5d0924a1daf97fdb66a7f95b2fd22763450ff/merged major:0 minor:757 fsType:overlay blockSize:0} overlay_0-759:{mountpoint:/var/lib/containers/storage/overlay/4d9d3b50edfde6ec3541292ac6252fb58a06ea3ef2f5714dfdffdd6eb20dad2e/merged major:0 minor:759 fsType:overlay blockSize:0} overlay_0-765:{mountpoint:/var/lib/containers/storage/overlay/7f1c075449ab13bd5946e2e0f272bd8dc065da8ed1365989f005cae64b7e3ff7/merged major:0 minor:765 fsType:overlay blockSize:0} overlay_0-767:{mountpoint:/var/lib/containers/storage/overlay/3a1c50564d5515e7c466511bd609f1753af7de87cae787482dfb65163ab21fa5/merged major:0 minor:767 fsType:overlay blockSize:0} overlay_0-769:{mountpoint:/var/lib/containers/storage/overlay/533298e290adf78a62cac023d07eea47680b8ea736375f415aabc4e37fbd6a8f/merged major:0 minor:769 fsType:overlay blockSize:0} overlay_0-772:{mountpoint:/var/lib/containers/storage/overlay/39e91ff7933bdf90f257ee6174fd6666108d27b1b5428681643b2fb064e97233/merged major:0 minor:772 fsType:overlay blockSize:0} overlay_0-774:{mountpoint:/var/lib/containers/storage/overlay/fdd711bf7a985934b60d1c0c045c63bc31c369f5631e58be4da2e61ab4d7b959/merged major:0 minor:774 fsType:overlay blockSize:0} overlay_0-778:{mountpoint:/var/lib/containers/storage/overlay/3072e6f9aa472b4e3235b878a920cc6b30f5fa8e8bd6bbd743e2ad0e2ae005da/merged major:0 minor:778 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/2085b2c5010f743b820d4dca387649880e23b50e486289df170e736cd724415c/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-798:{mountpoint:/var/lib/containers/storage/overlay/f56feaa69e2b206ec0b832286f0d9de103dbac40586a3b291a8e6c622101865e/merged major:0 minor:798 fsType:overlay blockSize:0} 
overlay_0-80:{mountpoint:/var/lib/containers/storage/overlay/8f5f29c6c89d4769707e437fbe678bc3e3dd30083ade63f725d2eab36c256ffd/merged major:0 minor:80 fsType:overlay blockSize:0} overlay_0-808:{mountpoint:/var/lib/containers/storage/overlay/bf02e7117b1d073a06c0b2c31c52633b11e9cd9a858871836e6e395e083b7ad5/merged major:0 minor:808 fsType:overlay blockSize:0} overlay_0-811:{mountpoint:/var/lib/containers/storage/overlay/64184c9145344d45552377b5994f3914dbf8ed656d02707b9a233c3aa36a75cb/merged major:0 minor:811 fsType:overlay blockSize:0} overlay_0-814:{mountpoint:/var/lib/containers/storage/overlay/a732d4033446e3d135bca0f3a9a230ae6ba0acb941cc3b1cce0f977f39f5bf7c/merged major:0 minor:814 fsType:overlay blockSize:0} overlay_0-82:{mountpoint:/var/lib/containers/storage/overlay/80f8e6059f4db1024f4fb3cba9d2004628cac67256d1327879ea056b8a5a641d/merged major:0 minor:82 fsType:overlay blockSize:0} overlay_0-825:{mountpoint:/var/lib/containers/storage/overlay/e4272c053bd0363c15848c4a6c8092c7a65073a7787730c6c903e8939306c184/merged major:0 minor:825 fsType:overlay blockSize:0} overlay_0-832:{mountpoint:/var/lib/containers/storage/overlay/86556c238c89d5f2b8a6630d3be8fdb2b10927b5be3a61228a3140c8f9a4667c/merged major:0 minor:832 fsType:overlay blockSize:0} overlay_0-84:{mountpoint:/var/lib/containers/storage/overlay/60462659bbc8725afcb7cf446509b06de606074d5cb8a8f155501287175be6c8/merged major:0 minor:84 fsType:overlay blockSize:0} overlay_0-841:{mountpoint:/var/lib/containers/storage/overlay/d355a6028e12a5d695acee7d76386dd2707c3d91a5e2e30b09a3d03ffe7abd5f/merged major:0 minor:841 fsType:overlay blockSize:0} overlay_0-843:{mountpoint:/var/lib/containers/storage/overlay/4b0e731154dfe1bf9aa419f551d352f11fd892a82e0385a353f38740be6fd20f/merged major:0 minor:843 fsType:overlay blockSize:0} overlay_0-847:{mountpoint:/var/lib/containers/storage/overlay/1c8b8cc9cd5c862f71fe536525f7e361e33363ed9b084890ed1fb781d8514383/merged major:0 minor:847 fsType:overlay blockSize:0} 
overlay_0-848:{mountpoint:/var/lib/containers/storage/overlay/a21a869a0866b562ca760e184db94f58d63d55ce5ac869f1194f5e6026241ebc/merged major:0 minor:848 fsType:overlay blockSize:0} overlay_0-850:{mountpoint:/var/lib/containers/storage/overlay/9d892ff5d26fffab228a4e9d0cd81be2503b423d51cdbfb66a4c8eb5d58b2ad2/merged major:0 minor:850 fsType:overlay blockSize:0} overlay_0-851:{mountpoint:/var/lib/containers/storage/overlay/4efd5707c7e0c368d1b3c0a5d2af10db1df330fa60fa535395c155878abe2f19/merged major:0 minor:851 fsType:overlay blockSize:0} overlay_0-860:{mountpoint:/var/lib/containers/storage/overlay/c054ccef051730fe081a44c8ad5c358a8a6e627a1fc80c3cf8c53deafd5f771f/merged major:0 minor:860 fsType:overlay blockSize:0} overlay_0-866:{mountpoint:/var/lib/containers/storage/overlay/12b32c1a23b749b0a241f29ed5b2a93cb9e3291784e7f7c093422784f048457e/merged major:0 minor:866 fsType:overlay blockSize:0} overlay_0-871:{mountpoint:/var/lib/containers/storage/overlay/a00f7b5afe790aabb5389e1efc992f63416e3feb9d3a11558747def14ac98033/merged major:0 minor:871 fsType:overlay blockSize:0} overlay_0-873:{mountpoint:/var/lib/containers/storage/overlay/19b5c4bbbf9c0699cc84d61ff3b81b4cdf6991f4bba021f6170526bbc11ca3bb/merged major:0 minor:873 fsType:overlay blockSize:0} overlay_0-877:{mountpoint:/var/lib/containers/storage/overlay/e90da7befb256b4059f7154999acd6fe3b89417be8d457b6004babf76486a319/merged major:0 minor:877 fsType:overlay blockSize:0} overlay_0-880:{mountpoint:/var/lib/containers/storage/overlay/fa5666d7a739985c70188c56cb82baa4f421145527d417e27721d891f2b0505e/merged major:0 minor:880 fsType:overlay blockSize:0} overlay_0-883:{mountpoint:/var/lib/containers/storage/overlay/99e42aceb311f77a1b0b320c1115724e1705c6effa296c26c336601ff75d9329/merged major:0 minor:883 fsType:overlay blockSize:0} overlay_0-89:{mountpoint:/var/lib/containers/storage/overlay/86e869319b03d63b84bffd0cb93315c0b38c93385ec9cabfac982137cecabae6/merged major:0 minor:89 fsType:overlay blockSize:0} 
overlay_0-897:{mountpoint:/var/lib/containers/storage/overlay/67a7306b556dc4cefc49bab5f578cc80a852b04c2e5090be1239e0f3a0746a85/merged major:0 minor:897 fsType:overlay blockSize:0} overlay_0-900:{mountpoint:/var/lib/containers/storage/overlay/4cc251238feb1e3980e8191d99527e05f3b6bc2b83ebdd598761be4628ed3517/merged major:0 minor:900 fsType:overlay blockSize:0} overlay_0-901:{mountpoint:/var/lib/containers/storage/overlay/9e6059a97ff77fb3d868e0f14620b314afb99443395d5bc84444bb22eedc1156/merged major:0 minor:901 fsType:overlay blockSize:0} overlay_0-907:{mountpoint:/var/lib/containers/storage/overlay/6352f5f38128960010a70fd92ea890bbd5cfeadf361267d04014944c088aea3f/merged major:0 minor:907 fsType:overlay blockSize:0} overlay_0-928:{mountpoint:/var/lib/containers/storage/overlay/ec64792a4defd568a8a07834a63e63f3280475ce1959fd2f179f4e675386ae07/merged major:0 minor:928 fsType:overlay blockSize:0} overlay_0-934:{mountpoint:/var/lib/containers/storage/overlay/2895c5bed8357fe9e6d5f5b95650c12a50426dcb30e5da09ed7240cb2f8bb913/merged major:0 minor:934 fsType:overlay blockSize:0} overlay_0-937:{mountpoint:/var/lib/containers/storage/overlay/4f562e844921f82a0df9b7d537374cbd02c4479fa82d84fb19db71cdaa2f67cc/merged major:0 minor:937 fsType:overlay blockSize:0} overlay_0-938:{mountpoint:/var/lib/containers/storage/overlay/13f4f43fc03abe38a7ad3710eedc43d65445d6556f6c9521852773854809bc1b/merged major:0 minor:938 fsType:overlay blockSize:0} overlay_0-940:{mountpoint:/var/lib/containers/storage/overlay/58a4ff006268055dd938138bced109c366d81a68f8975dc597a88440fffbadb7/merged major:0 minor:940 fsType:overlay blockSize:0} overlay_0-942:{mountpoint:/var/lib/containers/storage/overlay/39d9ae8ec0abb501ab1ced3468354b7996429c1afc3da61eac64db8cfb8b6608/merged major:0 minor:942 fsType:overlay blockSize:0} overlay_0-944:{mountpoint:/var/lib/containers/storage/overlay/94bc31ec5d3683c99bbe19da1d9970693309e318e2a8baa88f7b8708f4cd6efe/merged major:0 minor:944 fsType:overlay blockSize:0} 
overlay_0-946:{mountpoint:/var/lib/containers/storage/overlay/fe3e02b6bd7bf225ae9c08aa86489a31f65edbf04381d554d00aabf62983062d/merged major:0 minor:946 fsType:overlay blockSize:0} overlay_0-970:{mountpoint:/var/lib/containers/storage/overlay/da3bf768c72842a6937e17a86e732b43d6dc3d249bb86126536f8fb4266cead5/merged major:0 minor:970 fsType:overlay blockSize:0} overlay_0-972:{mountpoint:/var/lib/containers/storage/overlay/098aa4bf447622662c3f572d0b478e78d82e28212ac10a7e95ebaea42d5292b1/merged major:0 minor:972 fsType:overlay blockSize:0} overlay_0-978:{mountpoint:/var/lib/containers/storage/overlay/98ff5d946a95071bdf10c96dc2baeb234c9e42c2e004d9c6884de0a2bc8ef412/merged major:0 minor:978 fsType:overlay blockSize:0} overlay_0-980:{mountpoint:/var/lib/containers/storage/overlay/60e22fdf90267914d04803dad40f88fd0142531ef4189f9a66362bac86100174/merged major:0 minor:980 fsType:overlay blockSize:0} overlay_0-983:{mountpoint:/var/lib/containers/storage/overlay/f8ff9a0fe7e85e237d6a54ddb53889e6f02c85e4b72767507756a09c1d8bfd29/merged major:0 minor:983 fsType:overlay blockSize:0} overlay_0-990:{mountpoint:/var/lib/containers/storage/overlay/b8ad3d837ca3a555a3774301c15fe01d12ac077a84a9bec170b6de889b142c4b/merged major:0 minor:990 fsType:overlay blockSize:0} overlay_0-992:{mountpoint:/var/lib/containers/storage/overlay/ed399cea33dfed4c2cf1c98a12ad62a681c5cf576eef3c030cc11e55255af1e4/merged major:0 minor:992 fsType:overlay blockSize:0} overlay_0-993:{mountpoint:/var/lib/containers/storage/overlay/750d876e4098609ee82be870ea2708771a07ec389fbb07bdbe51fda0b8aee048/merged major:0 minor:993 fsType:overlay blockSize:0} overlay_0-995:{mountpoint:/var/lib/containers/storage/overlay/88de1ae5ea9196f4827b7a9765d0479ed421d2dc0e774bbbdedf7cfeb776ac3f/merged major:0 minor:995 fsType:overlay blockSize:0} overlay_0-998:{mountpoint:/var/lib/containers/storage/overlay/54a977fe7496870e2cd4b4b8e925f2f35559afc67b8a4d29ade448a6babc498e/merged major:0 minor:998 fsType:overlay blockSize:0}] Feb 16 
21:22:49.774424 master-0 kubenswrapper[38936]: I0216 21:22:49.772850 38936 manager.go:217] Machine: {Timestamp:2026-02-16 21:22:49.771889935 +0000 UTC m=+0.123893307 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2799998 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:5734463887a64126b7ce9bf415a88e99 SystemUUID:57344638-87a6-4126-b7ce-9bf415a88e99 BootID:547c3926-fc12-480e-89e3-8f59492f672a Filesystems:[{Device:/var/lib/kubelet/pods/ff193060-a272-4e4e-990a-83ac410f523d/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:619 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1008 DeviceMajor:0 DeviceMinor:1008 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1294 DeviceMajor:0 DeviceMinor:1294 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-438 DeviceMajor:0 DeviceMinor:438 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/74e6be5033443384ea4bd5754c8e506826ab77e1e025ae4e7b5a3735350d70f2/userdata/shm DeviceMajor:0 DeviceMinor:932 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/db18d33d279edf734f31d955c318fccdcbf15241593b0786bf92a199ab2a428f/userdata/shm DeviceMajor:0 DeviceMinor:291 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/0d903d23-8e0b-424b-bcd0-e0a00f306e49/volumes/kubernetes.io~projected/kube-api-access-kcp5t DeviceMajor:0 DeviceMinor:433 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d2b7935cea946c9f051bb808d0bcec166c533127cc006510308f2ece80cabd7f/userdata/shm DeviceMajor:0 DeviceMinor:839 Capacity:67108864 Type:vfs 
Inodes:6166278 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-389 DeviceMajor:0 DeviceMinor:389 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-432 DeviceMajor:0 DeviceMinor:432 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/abcd1a63f33b879c154e1f80fc5ea3f4b46d9d1e7d2159b6ce5ac662b670e5ff/userdata/shm DeviceMajor:0 DeviceMinor:277 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1236 DeviceMajor:0 DeviceMinor:1236 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1050 DeviceMajor:0 DeviceMinor:1050 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-811 DeviceMajor:0 DeviceMinor:811 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0b02b740-5698-4e9a-90fe-2873bd0b0958/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:263 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ad196ac4d2e3966bfb26599fb699f9a38a58beb4f2a551485dd0f16fe14d30d3/userdata/shm DeviceMajor:0 DeviceMinor:90 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ba294358-051a-4f09-b182-710d3d6778c5/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:975 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d8d648c7-b84b-4f43-84c9-903aead0891a/volumes/kubernetes.io~projected/kube-api-access-nq9c5 DeviceMajor:0 DeviceMinor:45 Capacity:49335554048 Type:vfs 
Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d3647391d6c6aea748cff19ab3829b4c4308cc4ee2ef9a5eb37149acfef03e2f/userdata/shm DeviceMajor:0 DeviceMinor:492 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-512 DeviceMajor:0 DeviceMinor:512 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0855efbb779255fb187bac22b944f8f2035fd58838e6517844db44571c397aae/userdata/shm DeviceMajor:0 DeviceMinor:578 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-587 DeviceMajor:0 DeviceMinor:587 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-402 DeviceMajor:0 DeviceMinor:402 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1124 DeviceMajor:0 DeviceMinor:1124 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-410 DeviceMajor:0 DeviceMinor:410 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/957c111d10e2d292281a50f8cc278f441c1f3165b491de07cd91b63ab4d96530/userdata/shm DeviceMajor:0 DeviceMinor:112 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-325 DeviceMajor:0 DeviceMinor:325 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/edc9559c5a629f79661ac5fd3b656fc66e5b478f6eb97f32c266188a17c0e747/userdata/shm DeviceMajor:0 DeviceMinor:99 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-467 DeviceMajor:0 DeviceMinor:467 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-814 DeviceMajor:0 DeviceMinor:814 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/07e2ee4df3da5cd46dd10fb4afd51a212c46737743b9be4c1d162a76d568a6fd/userdata/shm DeviceMajor:0 DeviceMinor:738 
Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-458 DeviceMajor:0 DeviceMinor:458 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1150 DeviceMajor:0 DeviceMinor:1150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-605 DeviceMajor:0 DeviceMinor:605 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1269 DeviceMajor:0 DeviceMinor:1269 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1337 DeviceMajor:0 DeviceMinor:1337 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/913951bb-1702-4b71-862c-a166bc7a62e0/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:1104 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-529 DeviceMajor:0 DeviceMinor:529 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-759 DeviceMajor:0 DeviceMinor:759 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-808 DeviceMajor:0 DeviceMinor:808 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1015 DeviceMajor:0 DeviceMinor:1015 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-105 DeviceMajor:0 DeviceMinor:105 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/484154d0-66c8-4d0e-bf1b-f48d0abfe628/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:138 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-331 DeviceMajor:0 DeviceMinor:331 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-219 DeviceMajor:0 DeviceMinor:219 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-211 DeviceMajor:0 DeviceMinor:211 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-755 DeviceMajor:0 DeviceMinor:755 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-769 DeviceMajor:0 DeviceMinor:769 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-436 DeviceMajor:0 DeviceMinor:436 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1155 DeviceMajor:0 DeviceMinor:1155 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1b61063e-775e-421d-bf73-a6ef134293a0/volumes/kubernetes.io~projected/kube-api-access-x7pk6 DeviceMajor:0 DeviceMinor:107 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-146 DeviceMajor:0 DeviceMinor:146 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1168 DeviceMajor:0 DeviceMinor:1168 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1227 DeviceMajor:0 DeviceMinor:1227 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fb1eac23-18a5-4706-adcd-81d83e04cd12/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:1060 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-860 DeviceMajor:0 DeviceMinor:860 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a99765f7253d989ecd2ebab9422f8bd50f36c587e8b7eca1057d0e88a540b814/userdata/shm DeviceMajor:0 DeviceMinor:527 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1242 DeviceMajor:0 DeviceMinor:1242 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/95bb21eb958017bb1c79698309b67c3682dcd7011e9d5aacdb4e7366e93203b8/userdata/shm DeviceMajor:0 DeviceMinor:1320 Capacity:67108864 Type:vfs 
Inodes:6166278 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/99ab949e-bd0d-45a7-95d1-8381d9f1f5f3/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:501 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-120 DeviceMajor:0 DeviceMinor:120 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-998 DeviceMajor:0 DeviceMinor:998 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f7b30888-5994-4968-9db6-9533ac60c92e/volumes/kubernetes.io~projected/kube-api-access-fbfdg DeviceMajor:0 DeviceMinor:1233 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/bd49e653-3b42-4950-8f5f-2b2ecb683678/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:721 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1067 DeviceMajor:0 DeviceMinor:1067 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-394 DeviceMajor:0 DeviceMinor:394 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/484154d0-66c8-4d0e-bf1b-f48d0abfe628/volumes/kubernetes.io~projected/kube-api-access-b6wng DeviceMajor:0 DeviceMinor:139 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f275e79f-923c-4d3a-8ed4-084a122ddcf4/volumes/kubernetes.io~projected/kube-api-access-cmn29 DeviceMajor:0 DeviceMinor:977 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-711 DeviceMajor:0 DeviceMinor:711 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-928 DeviceMajor:0 DeviceMinor:928 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1348 DeviceMajor:0 DeviceMinor:1348 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/6d07de2e0be321a3aec4da12f4f04e483d7ebf0407264e8a59f6674bcacef82d/userdata/shm DeviceMajor:0 DeviceMinor:284 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/aa2e9bbc-3962-45f5-a7cc-2dc059409e70/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:964 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/408a9364-3730-4017-b1e4-c85d6a504168/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:447 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/401dbdafe44d87ba9ccf2adf090a2c537b4f84058eb049f0f6795c6752a1a8d0/userdata/shm DeviceMajor:0 DeviceMinor:44 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1313 DeviceMajor:0 DeviceMinor:1313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-825 DeviceMajor:0 DeviceMinor:825 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-358 DeviceMajor:0 DeviceMinor:358 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-351 DeviceMajor:0 DeviceMinor:351 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e9bd1f48-6d45-4045-b18e-46ce3005d51d/volumes/kubernetes.io~secret/kube-state-metrics-tls DeviceMajor:0 DeviceMinor:1223 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ba294358-051a-4f09-b182-710d3d6778c5/volumes/kubernetes.io~projected/kube-api-access-qf2w4 DeviceMajor:0 DeviceMinor:976 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1244 DeviceMajor:0 DeviceMinor:1244 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/2e618c5c-52be-4b52-b426-b92555dee9de/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:241 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4f2c49b4aa155e075775a0da6ce790eafb2a3d3e88c9dbca188493bbec98d810/userdata/shm DeviceMajor:0 DeviceMinor:300 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-897 DeviceMajor:0 DeviceMinor:897 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1041 DeviceMajor:0 DeviceMinor:1041 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b/volumes/kubernetes.io~projected/kube-api-access-vxtft DeviceMajor:0 DeviceMinor:536 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-702 DeviceMajor:0 DeviceMinor:702 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-583 DeviceMajor:0 DeviceMinor:583 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-679 DeviceMajor:0 DeviceMinor:679 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-938 DeviceMajor:0 DeviceMinor:938 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a0b7a368-1408-4fc3-ae25-4613b74e7fca/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1154 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-546 DeviceMajor:0 DeviceMinor:546 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1263 DeviceMajor:0 DeviceMinor:1263 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1b61063e-775e-421d-bf73-a6ef134293a0/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/99134c6775f2c1522a1480fdf36e455e0ea6704e4324711468efadafd1a4b744/userdata/shm DeviceMajor:0 DeviceMinor:577 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64/volumes/kubernetes.io~projected/kube-api-access-cdx88 DeviceMajor:0 DeviceMinor:564 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-399 DeviceMajor:0 DeviceMinor:399 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-318 DeviceMajor:0 DeviceMinor:318 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/88c9d2fb-763f-4405-8d1a-c39039b41d3b/volumes/kubernetes.io~projected/kube-api-access-8qcq9 DeviceMajor:0 DeviceMinor:997 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1201 DeviceMajor:0 DeviceMinor:1201 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:244 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d9d71a7a-a751-4de4-9c76-9bac85fe0177/volumes/kubernetes.io~projected/kube-api-access-jkdzb DeviceMajor:0 DeviceMinor:267 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1029 DeviceMajor:0 DeviceMinor:1029 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-354 DeviceMajor:0 DeviceMinor:354 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-180 DeviceMajor:0 DeviceMinor:180 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1135 DeviceMajor:0 DeviceMinor:1135 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b2fa0e56a1525a9dc4cb1eed44cc6376b6ac0d1c2fab2be1bd2cb007a4f90f8a/userdata/shm 
DeviceMajor:0 DeviceMinor:735 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/846c42631e11b31d77d6f927ca22e80b7cd7d920231f1d2b9f1cfa12101d157e/userdata/shm DeviceMajor:0 DeviceMinor:915 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-942 DeviceMajor:0 DeviceMinor:942 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-900 DeviceMajor:0 DeviceMinor:900 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e9bd1f48-6d45-4045-b18e-46ce3005d51d/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1231 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b4ab6f7d6521695677ac09385923bea0cfde2c320361c5f6cbe98ce64b7475b2/userdata/shm DeviceMajor:0 DeviceMinor:1292 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e7adbe32-b8b9-438e-a2e3-f93146a97424/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:264 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-489 DeviceMajor:0 DeviceMinor:489 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d2501eec-47c8-47bc-b0c9-28d94c06075b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:621 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1010 DeviceMajor:0 DeviceMinor:1010 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2ab0a907-7abe-4808-ba21-bdda1506eae2/volumes/kubernetes.io~projected/kube-api-access-9pw88 DeviceMajor:0 DeviceMinor:274 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-307 DeviceMajor:0 DeviceMinor:307 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-670 DeviceMajor:0 DeviceMinor:670 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/105b1eab12eec1f672058dc0900e8488b8bcca272b3ac3b2441b242d73128d7a/userdata/shm DeviceMajor:0 DeviceMinor:282 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-585 DeviceMajor:0 DeviceMinor:585 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:985 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/bd49e653-3b42-4950-8f5f-2b2ecb683678/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:722 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a0b7a368-1408-4fc3-ae25-4613b74e7fca/volumes/kubernetes.io~projected/kube-api-access-98n4h DeviceMajor:0 DeviceMinor:1156 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-970 DeviceMajor:0 DeviceMinor:970 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-428 DeviceMajor:0 DeviceMinor:428 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-279 DeviceMajor:0 DeviceMinor:279 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/02b45fb8e619cea5ccaf6f782fba75e7a7903a3e4348fde89b8d1bc48406b6c9/userdata/shm DeviceMajor:0 DeviceMinor:724 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/89fb595810896fd574764c1b2babfd4babc84a77caf787d5018047df10f3ac86/userdata/shm DeviceMajor:0 DeviceMinor:72 Capacity:67108864 Type:vfs 
Inodes:6166278 HasInodes:true} {Device:overlay_0-716 DeviceMajor:0 DeviceMinor:716 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-983 DeviceMajor:0 DeviceMinor:983 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1215 DeviceMajor:0 DeviceMinor:1215 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-504 DeviceMajor:0 DeviceMinor:504 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-993 DeviceMajor:0 DeviceMinor:993 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-494 DeviceMajor:0 DeviceMinor:494 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:516 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-778 DeviceMajor:0 DeviceMinor:778 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1489d1b6-d8a1-453a-bff3-8adfd4335903/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:335 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b27e0202-8bdb-4a36-8c3e-0c203f7665b8/volumes/kubernetes.io~projected/kube-api-access-zmvtk DeviceMajor:0 DeviceMinor:73 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6b6be6de-6fcc-4f57-b163-fe8f970a01a4/volumes/kubernetes.io~projected/kube-api-access-mkz65 DeviceMajor:0 DeviceMinor:249 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-647 DeviceMajor:0 DeviceMinor:647 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e9615af2-cad5-4705-9c2f-6f3c97026100/volumes/kubernetes.io~projected/kube-api-access-npfk7 DeviceMajor:0 DeviceMinor:625 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:overlay_0-70 DeviceMajor:0 DeviceMinor:70 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bd49e653-3b42-4950-8f5f-2b2ecb683678/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:704 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-185 DeviceMajor:0 DeviceMinor:185 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-674 DeviceMajor:0 DeviceMinor:674 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1252 DeviceMajor:0 DeviceMinor:1252 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/27c20f63-9bfb-4703-94d5-0c65475e08d1/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:235 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/853452fb-1035-4f22-8aeb-9043d150e8ca/volumes/kubernetes.io~projected/kube-api-access-zqkgp DeviceMajor:0 DeviceMinor:47 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-506 DeviceMajor:0 DeviceMinor:506 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e8194cdc-3133-49e2-9579-a747c0bf2b16/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:533 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/33442d22098554ef2512c5bbab1d4a284aed4856345ee1eb8654ba065012ab94/userdata/shm DeviceMajor:0 DeviceMinor:675 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-992 DeviceMajor:0 DeviceMinor:992 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/76e543cc5345eb5c53417c9f0b565400b03593c03aa3a1637483c029bb868ef3/userdata/shm DeviceMajor:0 
DeviceMinor:166 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5e062e07-8076-444c-b476-4eb2848e9613/volumes/kubernetes.io~projected/kube-api-access-dfmv6 DeviceMajor:0 DeviceMinor:270 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-940 DeviceMajor:0 DeviceMinor:940 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/408a9364-3730-4017-b1e4-c85d6a504168/volumes/kubernetes.io~projected/kube-api-access-lvw2m DeviceMajor:0 DeviceMinor:454 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c8c3670530b0c671383aade45325850e12f9fcf9f76178c2929f043d5a9b72a3/userdata/shm DeviceMajor:0 DeviceMinor:108 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cc46ef0ea78121e3debb45555162f099169024a83053e72fed30ccbe4c22554d/userdata/shm DeviceMajor:0 DeviceMinor:917 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1ff8802ad134d499fee700156b80ec71b617c31ecfda4162eeae2f5521b198f8/userdata/shm DeviceMajor:0 DeviceMinor:957 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-554 DeviceMajor:0 DeviceMinor:554 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1261 DeviceMajor:0 DeviceMinor:1261 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-372 DeviceMajor:0 DeviceMinor:372 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-901 DeviceMajor:0 DeviceMinor:901 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-486 DeviceMajor:0 DeviceMinor:486 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/a0b7a368-1408-4fc3-ae25-4613b74e7fca/volumes/kubernetes.io~secret/prometheus-operator-tls DeviceMajor:0 DeviceMinor:518 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4a9f4f96-ca31-4959-93fe-c094caf8e077/volumes/kubernetes.io~secret/secret-metrics-client-certs DeviceMajor:0 DeviceMinor:1282 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-315 DeviceMajor:0 DeviceMinor:315 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/03a5021d-8a5c-4011-a9f9-c5eb38d5f236/volumes/kubernetes.io~projected/kube-api-access-ldzxc DeviceMajor:0 DeviceMinor:909 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-637 DeviceMajor:0 DeviceMinor:637 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/03a5021d-8a5c-4011-a9f9-c5eb38d5f236/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:908 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/3403d2bf-b093-4f2e-80aa-73a3d6bcaffb/volumes/kubernetes.io~projected/kube-api-access-gxhfs DeviceMajor:0 DeviceMinor:1103 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1307 DeviceMajor:0 DeviceMinor:1307 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-205 DeviceMajor:0 DeviceMinor:205 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-472 DeviceMajor:0 DeviceMinor:472 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-480 DeviceMajor:0 DeviceMinor:480 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f6ba9fbde2ec0f2099ab53176d9410c4bf53a78507ca46eeb7e91c2f36c118ed/userdata/shm DeviceMajor:0 DeviceMinor:718 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:overlay_0-1132 DeviceMajor:0 DeviceMinor:1132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1304 DeviceMajor:0 DeviceMinor:1304 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ec7dd4ea-a139-45d4-96a4-506da1567292/volumes/kubernetes.io~projected/kube-api-access-9jt7h DeviceMajor:0 DeviceMinor:256 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1e734464d78209c21a7a9eb2f6d22c8584997def010318f287f0cb7c28b7390b/userdata/shm DeviceMajor:0 DeviceMinor:303 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-391 DeviceMajor:0 DeviceMinor:391 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1054 DeviceMajor:0 DeviceMinor:1054 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-327 DeviceMajor:0 DeviceMinor:327 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-664 DeviceMajor:0 DeviceMinor:664 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee/volumes/kubernetes.io~projected/kube-api-access-7xgcn DeviceMajor:0 DeviceMinor:1105 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/611833cac10a2c7b92f524745bb3d40c37badfe83dfcc13e97aefe053823dfb9/userdata/shm DeviceMajor:0 DeviceMinor:443 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-767 DeviceMajor:0 DeviceMinor:767 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-374 DeviceMajor:0 DeviceMinor:374 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1d7d0416-5f50-42bd-826b-92eecf9adcec/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:947 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7d6eb694-9a3d-49d1-bbc1-74ba4450d673/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1230 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e9bd1f48-6d45-4045-b18e-46ce3005d51d/volumes/kubernetes.io~projected/kube-api-access-wckst DeviceMajor:0 DeviceMinor:1234 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1322 DeviceMajor:0 DeviceMinor:1322 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1355 DeviceMajor:0 DeviceMinor:1355 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:641 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-851 DeviceMajor:0 DeviceMinor:851 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-543 DeviceMajor:0 DeviceMinor:543 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c073f224d2a8cc60c80044d595d19260d941f19b426f78dc52e84033ff1afedc/userdata/shm DeviceMajor:0 DeviceMinor:299 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-847 DeviceMajor:0 DeviceMinor:847 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2ab0a907-7abe-4808-ba21-bdda1506eae2/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:262 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-538 DeviceMajor:0 DeviceMinor:538 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 
DeviceMinor:562 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-765 DeviceMajor:0 DeviceMinor:765 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f7b30888-5994-4968-9db6-9533ac60c92e/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1229 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4a9f4f96-ca31-4959-93fe-c094caf8e077/volumes/kubernetes.io~projected/kube-api-access-xrc4z DeviceMajor:0 DeviceMinor:1291 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-118 DeviceMajor:0 DeviceMinor:118 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-148 DeviceMajor:0 DeviceMinor:148 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1d453639-52ed-4a14-a2ee-02cf9acc2f7c/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:733 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0334ad8c418e31c648e8c938f60c3ae9cf4f68761e776bef5ada2bade3f88833/userdata/shm DeviceMajor:0 DeviceMinor:642 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d731a0126023b327423b0d92ac9091c1188b42fa4686eb6ad7cba3b766448624/userdata/shm DeviceMajor:0 DeviceMinor:736 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-848 DeviceMajor:0 DeviceMinor:848 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a5c8e6b51575e43d26e0817313f1ec460f29cff6ceb6629a7a5e2f186f585513/userdata/shm DeviceMajor:0 DeviceMinor:91 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-339 DeviceMajor:0 DeviceMinor:339 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 
DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-200 DeviceMajor:0 DeviceMinor:200 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1034 DeviceMajor:0 DeviceMinor:1034 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-110 DeviceMajor:0 DeviceMinor:110 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9e0227bc-63f5-48be-95dc-1323a2b2e327/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:252 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4085413c-9af1-4d2a-ba0f-33b42025cb7f/volumes/kubernetes.io~projected/kube-api-access-dw9lp DeviceMajor:0 DeviceMinor:273 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-700 DeviceMajor:0 DeviceMinor:700 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-75 DeviceMajor:0 DeviceMinor:75 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4a9f4f96-ca31-4959-93fe-c094caf8e077/volumes/kubernetes.io~secret/client-ca-bundle DeviceMajor:0 DeviceMinor:1290 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1335 DeviceMajor:0 DeviceMinor:1335 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-183 DeviceMajor:0 DeviceMinor:183 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6b6be6de-6fcc-4f57-b163-fe8f970a01a4/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:242 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cb7c3bcdaae372d84aa4e8a539ce094d23c02279631a56da69b150d86b62b5a5/userdata/shm DeviceMajor:0 DeviceMinor:635 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-663 DeviceMajor:0 DeviceMinor:663 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-596 DeviceMajor:0 DeviceMinor:596 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1086 DeviceMajor:0 DeviceMinor:1086 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-757 DeviceMajor:0 DeviceMinor:757 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-353 DeviceMajor:0 DeviceMinor:353 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-89 DeviceMajor:0 DeviceMinor:89 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1129 DeviceMajor:0 DeviceMinor:1129 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-710 DeviceMajor:0 DeviceMinor:710 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b1ac9776-54c4-46ce-b898-01c8cf35e593/volumes/kubernetes.io~projected/kube-api-access-vzx4s DeviceMajor:0 DeviceMinor:491 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/99ab949e-bd0d-45a7-95d1-8381d9f1f5f3/volumes/kubernetes.io~projected/kube-api-access-hv45g DeviceMajor:0 DeviceMinor:502 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-567 DeviceMajor:0 DeviceMinor:567 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-178 DeviceMajor:0 DeviceMinor:178 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ec7dd4ea-a139-45d4-96a4-506da1567292/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:734 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-972 DeviceMajor:0 DeviceMinor:972 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:1096 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1296 DeviceMajor:0 DeviceMinor:1296 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:563 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e8194cdc-3133-49e2-9579-a747c0bf2b16/volumes/kubernetes.io~projected/kube-api-access-hxvhm DeviceMajor:0 DeviceMinor:535 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cedd6b186b2f683612167b71883ce9d5bac09eb1edd2f0cb1e7e8286188d3035/userdata/shm DeviceMajor:0 DeviceMinor:580 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-944 DeviceMajor:0 DeviceMinor:944 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-397 DeviceMajor:0 DeviceMinor:397 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/55095f4f-cac0-456c-9ccc-45869392408c/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:912 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/319dc882-e1f5-40f9-99f4-2bae028337e5/volumes/kubernetes.io~projected/kube-api-access-mtrzq DeviceMajor:0 DeviceMinor:906 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-413 DeviceMajor:0 DeviceMinor:413 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:575 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1298 DeviceMajor:0 DeviceMinor:1298 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c7333319-3fe6-4b3f-b600-6b6df49fcaff/volumes/kubernetes.io~projected/kube-api-access-qx2kd DeviceMajor:0 DeviceMinor:258 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e7adbe32-b8b9-438e-a2e3-f93146a97424/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:271 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-84 DeviceMajor:0 DeviceMinor:84 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-500 DeviceMajor:0 DeviceMinor:500 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9e0227bc-63f5-48be-95dc-1323a2b2e327/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:576 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-907 DeviceMajor:0 DeviceMinor:907 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2dfa08dcecf95c49e6db650a7dbdf117c27ed644f23ff4e264133dd36a509d3c/userdata/shm DeviceMajor:0 DeviceMinor:305 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-309 DeviceMajor:0 DeviceMinor:309 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/55095f4f-cac0-456c-9ccc-45869392408c/volumes/kubernetes.io~projected/kube-api-access-7hnc6 DeviceMajor:0 DeviceMinor:913 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-168 DeviceMajor:0 
DeviceMinor:168 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-639 DeviceMajor:0 DeviceMinor:639 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d84a6211eba3f66c2ce7e68ab1344f23f51a23b55442aa18fdabbc1b25bc9adb/userdata/shm DeviceMajor:0 DeviceMinor:287 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/27e39bf106b6e002c0125d685214889286fc25d34ba09141b24632bec0751f4d/userdata/shm DeviceMajor:0 DeviceMinor:741 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1120 DeviceMajor:0 DeviceMinor:1120 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1128 DeviceMajor:0 DeviceMinor:1128 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:1099 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-871 DeviceMajor:0 DeviceMinor:871 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0c4934055dbc002aad718ae831c2d636c9e3bd49545da85cae7eace9dea452ac/userdata/shm DeviceMajor:0 DeviceMinor:532 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/385456702c716ef5052af7ff4f8c1f6423867ff9037ec0352d3bef2843cc7641/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/230d9624-2d9d-4036-967b-b530347f05d5/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:93 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-464 DeviceMajor:0 
DeviceMinor:464 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1073 DeviceMajor:0 DeviceMinor:1073 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b3fc27d6f88f12abb0f4db12508672dcd9584ab10707e7cd6f06dcebac1bbaa8/userdata/shm DeviceMajor:0 DeviceMinor:293 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-631 DeviceMajor:0 DeviceMinor:631 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/62b7693910cb02952d8855d0ec6b5ec30d5524abd40344dea37279d475bce731/userdata/shm DeviceMajor:0 DeviceMinor:101 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cef33294-81fb-41a2-811d-2565f94514d1/volumes/kubernetes.io~projected/kube-api-access-5tklr DeviceMajor:0 DeviceMinor:281 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f94d68e1b5a31fd6ac38d04b76b6e3ee908e79aa67afc23e7d2bf54001deb6f0/userdata/shm DeviceMajor:0 DeviceMinor:487 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1183 DeviceMajor:0 DeviceMinor:1183 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8d56b871-a53a-4928-8967-a33ea9dcec2a/volumes/kubernetes.io~projected/kube-api-access-22pl9 DeviceMajor:0 DeviceMinor:1330 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-323 DeviceMajor:0 DeviceMinor:323 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9b7b734a04c19ca82d24b6113d7260320b0a9c95bbc6375cd7e4100f7054eb3f/userdata/shm DeviceMajor:0 
DeviceMinor:1000 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1126 DeviceMajor:0 DeviceMinor:1126 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/913951bb-1702-4b71-862c-a166bc7a62e0/volumes/kubernetes.io~projected/kube-api-access-pgvx2 DeviceMajor:0 DeviceMinor:1116 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-877 DeviceMajor:0 DeviceMinor:877 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-883 DeviceMajor:0 DeviceMinor:883 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4a9f4f96-ca31-4959-93fe-c094caf8e077/volumes/kubernetes.io~secret/secret-metrics-server-tls DeviceMajor:0 DeviceMinor:1286 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:248 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-378 DeviceMajor:0 DeviceMinor:378 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/319dc882-e1f5-40f9-99f4-2bae028337e5/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:904 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/319dc882-e1f5-40f9-99f4-2bae028337e5/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:905 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1137 DeviceMajor:0 DeviceMinor:1137 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-601 DeviceMajor:0 DeviceMinor:601 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-598 DeviceMajor:0 DeviceMinor:598 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/4b035e85-b2b0-4dee-bb86-3465fc4b98a8/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:729 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/aa2e9bbc-3962-45f5-a7cc-2dc059409e70/volumes/kubernetes.io~projected/kube-api-access-wx8bf DeviceMajor:0 DeviceMinor:627 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-484 DeviceMajor:0 DeviceMinor:484 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0b02b740-5698-4e9a-90fe-2873bd0b0958/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:269 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-772 DeviceMajor:0 DeviceMinor:772 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/913951bb-1702-4b71-862c-a166bc7a62e0/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:1092 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-565 DeviceMajor:0 DeviceMinor:565 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1084 DeviceMajor:0 DeviceMinor:1084 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1006 DeviceMajor:0 DeviceMinor:1006 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-595 DeviceMajor:0 DeviceMinor:595 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-230 DeviceMajor:0 DeviceMinor:230 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dbf32b84ea4131f980c7517f9adf09ab0debbea21b7d7312f8107de5103e23bd/userdata/shm DeviceMajor:0 DeviceMinor:437 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c6c5fc997a3d90f0f136390ca95bcbc1e110994ac3cdfcc2e3e8e90f78ca1dd9/userdata/shm 
DeviceMajor:0 DeviceMinor:537 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/70d217a9-86b7-47b9-a7da-9ac920b9c7c2/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:245 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/88c9d2fb-763f-4405-8d1a-c39039b41d3b/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:982 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1489d1b6-d8a1-453a-bff3-8adfd4335903/volumes/kubernetes.io~projected/kube-api-access-xc47v DeviceMajor:0 DeviceMinor:655 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-123 DeviceMajor:0 DeviceMinor:123 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-422 DeviceMajor:0 DeviceMinor:422 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-978 DeviceMajor:0 DeviceMinor:978 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7d6eb694-9a3d-49d1-bbc1-74ba4450d673/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:1218 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1246 DeviceMajor:0 DeviceMinor:1246 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e8194cdc-3133-49e2-9579-a747c0bf2b16/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:74 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/302156cc-9dca-4a66-9e6a-ba2c7e738c92/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:827 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/18445cef4b6797ad657a965be9f13f99564dcc29dc7e932a9b359ffe1a1aa1ce/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:6166278 
HasInodes:true} {Device:/var/lib/kubelet/pods/8b648d9e-a892-4951-b0e2-fed6b16273d4/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:920 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ce229d27-837d-4a98-80fc-d56877ae39b8/volumes/kubernetes.io~projected/kube-api-access-dcwzq DeviceMajor:0 DeviceMinor:589 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-541 DeviceMajor:0 DeviceMinor:541 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-574 DeviceMajor:0 DeviceMinor:574 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f7b30888-5994-4968-9db6-9533ac60c92e/volumes/kubernetes.io~secret/openshift-state-metrics-tls DeviceMajor:0 DeviceMinor:1222 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a4c9b781-14c0-469c-bb9e-0c3982a04520/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:732 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0048dbcae18fdbd149a49da2679d70bbb9de5e907689064aaea0ab32348a1024/userdata/shm DeviceMajor:0 DeviceMinor:745 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-225 DeviceMajor:0 DeviceMinor:225 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-880 DeviceMajor:0 DeviceMinor:880 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-774 DeviceMajor:0 DeviceMinor:774 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e1d55dfca25559f503e3ffffa2f5f036874c5ff002f21e1743ae94ece4a5c2a9/userdata/shm DeviceMajor:0 DeviceMinor:966 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/e9615af2-cad5-4705-9c2f-6f3c97026100/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:626 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1188 DeviceMajor:0 DeviceMinor:1188 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a5d4ac48-aed3-46b9-9b2a-d741121e05b4/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:692 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/cef33294-81fb-41a2-811d-2565f94514d1/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:275 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b28234d1-1d9a-4d9f-9ad1-e3c682bed492/volumes/kubernetes.io~projected/kube-api-access-67qzh DeviceMajor:0 DeviceMinor:285 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/98ea530a3e85a55d27f014bb670a7b7e4444aedc192a8b2618c4f1830394b65c/userdata/shm DeviceMajor:0 DeviceMinor:1224 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ebc8d1a24100c636c9029b0eba8d5b6521b906cdbb84675057a80b42a0273bbc/userdata/shm DeviceMajor:0 DeviceMinor:143 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-592 DeviceMajor:0 DeviceMinor:592 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9e0227bc-63f5-48be-95dc-1323a2b2e327/volumes/kubernetes.io~projected/kube-api-access-z9vmp DeviceMajor:0 DeviceMinor:253 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4b035e85-b2b0-4dee-bb86-3465fc4b98a8/volumes/kubernetes.io~projected/kube-api-access-g7nmb DeviceMajor:0 DeviceMinor:272 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/8b648d9e-a892-4951-b0e2-fed6b16273d4/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:921 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1d7d0416-5f50-42bd-826b-92eecf9adcec/volumes/kubernetes.io~projected/kube-api-access-25mkq DeviceMajor:0 DeviceMinor:958 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-190 DeviceMajor:0 DeviceMinor:190 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fb1eac23-18a5-4706-adcd-81d83e04cd12/volumes/kubernetes.io~projected/kube-api-access-8vcsp DeviceMajor:0 DeviceMinor:1072 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/75d47673076de0f457cf43f09abae17f313fa42a6b18d0c5e8749dffb9564806/userdata/shm DeviceMajor:0 DeviceMinor:1118 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aed3d22aa5c102de3c056d7b1148ad38dc8f06e42bff2232e153f1a44338819c/userdata/shm DeviceMajor:0 DeviceMinor:1226 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-321 DeviceMajor:0 DeviceMinor:321 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d306354fd5d2178f348beb7a119f77d313ccc80e6928076b9869dfc8a33d0edf/userdata/shm DeviceMajor:0 DeviceMinor:739 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-319 DeviceMajor:0 DeviceMinor:319 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b28234d1-1d9a-4d9f-9ad1-e3c682bed492/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:731 
Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-329 DeviceMajor:0 DeviceMinor:329 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf/volumes/kubernetes.io~projected/kube-api-access-64qvl DeviceMajor:0 DeviceMinor:633 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-366 DeviceMajor:0 DeviceMinor:366 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1114 DeviceMajor:0 DeviceMinor:1114 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b9312957dc15df5de566304a0d01d6c55a3f6333b95b61734ba1c6f29131877b/userdata/shm DeviceMajor:0 DeviceMinor:707 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/62935559-041f-4694-9d36-adc809d079b4/volumes/kubernetes.io~projected/kube-api-access-6sq4t DeviceMajor:0 DeviceMinor:125 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-176 DeviceMajor:0 DeviceMinor:176 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1196 DeviceMajor:0 DeviceMinor:1196 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1271 DeviceMajor:0 DeviceMinor:1271 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-337 DeviceMajor:0 DeviceMinor:337 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/065fcd43-1572-4152-b77b-a6b7ab52a081/volumes/kubernetes.io~projected/kube-api-access-trcfg DeviceMajor:0 DeviceMinor:384 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a5d4ac48-aed3-46b9-9b2a-d741121e05b4/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:678 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/8b648d9e-a892-4951-b0e2-fed6b16273d4/volumes/kubernetes.io~projected/kube-api-access-sgj2q DeviceMajor:0 DeviceMinor:926 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/88f19cea-60ed-4977-a906-75deec51fc3d/volumes/kubernetes.io~projected/kube-api-access-x85fb DeviceMajor:0 DeviceMinor:161 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-866 DeviceMajor:0 DeviceMinor:866 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b1c5e0970049830739dbde889218d9f83f1d9720ddba4de32c1b5bd6626ed51d/userdata/shm DeviceMajor:0 DeviceMinor:696 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b27de289-c0f9-47ff-aac6-15b7bc1b178a/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:730 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-223 DeviceMajor:0 DeviceMinor:223 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1123 DeviceMajor:0 DeviceMinor:1123 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1177 DeviceMajor:0 DeviceMinor:1177 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-131 DeviceMajor:0 DeviceMinor:131 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-608 DeviceMajor:0 DeviceMinor:608 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1333 DeviceMajor:0 DeviceMinor:1333 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ff193060-a272-4e4e-990a-83ac410f523d/volumes/kubernetes.io~projected/kube-api-access-wmhq9 DeviceMajor:0 DeviceMinor:620 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-850 DeviceMajor:0 DeviceMinor:850 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-1039 DeviceMajor:0 DeviceMinor:1039 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-356 DeviceMajor:0 DeviceMinor:356 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-80 DeviceMajor:0 DeviceMinor:80 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1219 DeviceMajor:0 DeviceMinor:1219 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-698 DeviceMajor:0 DeviceMinor:698 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-117 DeviceMajor:0 DeviceMinor:117 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/302156cc-9dca-4a66-9e6a-ba2c7e738c92/volumes/kubernetes.io~projected/kube-api-access-zxcg6 DeviceMajor:0 DeviceMinor:828 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-995 DeviceMajor:0 DeviceMinor:995 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d2501eec-47c8-47bc-b0c9-28d94c06075b/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:555 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/bd49e653-3b42-4950-8f5f-2b2ecb683678/volumes/kubernetes.io~projected/kube-api-access-kf4qg DeviceMajor:0 DeviceMinor:723 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-753 DeviceMajor:0 DeviceMinor:753 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1017 DeviceMajor:0 DeviceMinor:1017 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-990 DeviceMajor:0 DeviceMinor:990 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/70d217a9-86b7-47b9-a7da-9ac920b9c7c2/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:239 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/a4c9b781-14c0-469c-bb9e-0c3982a04520/volumes/kubernetes.io~projected/kube-api-access-8sd27 DeviceMajor:0 DeviceMinor:247 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3edd59cb6b6314e671425a245027b79b2d561376466e447c62b29ac14f08bcff/userdata/shm DeviceMajor:0 DeviceMinor:967 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-657 DeviceMajor:0 DeviceMinor:657 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-393 DeviceMajor:0 DeviceMinor:393 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-693 DeviceMajor:0 DeviceMinor:693 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-946 DeviceMajor:0 DeviceMinor:946 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6caed68f3fc79ebb1ed9e5bfd3e9f6a4bad90b8a5cdeab5884b6fd52a2305c16/userdata/shm DeviceMajor:0 DeviceMinor:1080 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/da07cd48-b1e8-4ccc-b980-84702cedb042/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:1098 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1207 DeviceMajor:0 DeviceMinor:1207 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1257 DeviceMajor:0 DeviceMinor:1257 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/017b12ba663cae17ffc7b3e8cac380511c7277e4c495d7f5a091fa50febd2724/userdata/shm DeviceMajor:0 DeviceMinor:1331 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-82 DeviceMajor:0 DeviceMinor:82 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cef33294-81fb-41a2-811d-2565f94514d1/volumes/kubernetes.io~secret/metrics-tls 
DeviceMajor:0 DeviceMinor:547 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-937 DeviceMajor:0 DeviceMinor:937 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:1097 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-376 DeviceMajor:0 DeviceMinor:376 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-841 DeviceMajor:0 DeviceMinor:841 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7d6eb694-9a3d-49d1-bbc1-74ba4450d673/volumes/kubernetes.io~projected/kube-api-access-6jh6l DeviceMajor:0 DeviceMinor:1232 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-726 DeviceMajor:0 DeviceMinor:726 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-444 DeviceMajor:0 DeviceMinor:444 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/065fcd43-1572-4152-b77b-a6b7ab52a081/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:383 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1025 DeviceMajor:0 DeviceMinor:1025 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-380 DeviceMajor:0 DeviceMinor:380 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/695549c8-d1fc-429d-9c9f-0a5915dc6074/volumes/kubernetes.io~projected/kube-api-access-7bcmr DeviceMajor:0 DeviceMinor:268 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/03ed4454e9c6237b864a1dab6c209256c79b0a72cb535e51a70e7b99d3f0689e/userdata/shm DeviceMajor:0 DeviceMinor:92 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-119 DeviceMajor:0 
DeviceMinor:119 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/404fdd69be202f40aeca377d1ba146b346077a53f8e7897ed4e324403366c1bf/userdata/shm DeviceMajor:0 DeviceMinor:1117 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/69785167-b4ae-415b-bdcb-029f62effe78/volumes/kubernetes.io~projected/kube-api-access-dqm46 DeviceMajor:0 DeviceMinor:141 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9e9fb9a8fc61dba0936cd38d7b843d3efbdecc6ba9ec73f7423569f6305a4740/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/2506c282-0b37-4ece-8a0c-885d0b7f7901/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:520 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-548 DeviceMajor:0 DeviceMinor:548 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-672 DeviceMajor:0 DeviceMinor:672 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-749 DeviceMajor:0 DeviceMinor:749 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-347 DeviceMajor:0 DeviceMinor:347 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-934 DeviceMajor:0 DeviceMinor:934 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/88f19cea-60ed-4977-a906-75deec51fc3d/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:165 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/70d217a9-86b7-47b9-a7da-9ac920b9c7c2/volumes/kubernetes.io~projected/kube-api-access-ll4rg DeviceMajor:0 DeviceMinor:250 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/2e618c5c-52be-4b52-b426-b92555dee9de/volumes/kubernetes.io~projected/kube-api-access-nrc7l DeviceMajor:0 DeviceMinor:257 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7836160a631ad4fabd13fade7e117d0a195ed40a8c1f33bde283fef44ab0f21f/userdata/shm DeviceMajor:0 DeviceMinor:743 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/27c20f63-9bfb-4703-94d5-0c65475e08d1/volumes/kubernetes.io~projected/kube-api-access-hjsnz DeviceMajor:0 DeviceMinor:255 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c4765e33cdc956d84e8349da9b28a001d07fad6c39b6a113416bb9d1d1ae88dd/userdata/shm DeviceMajor:0 DeviceMinor:482 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d2501eec-47c8-47bc-b0c9-28d94c06075b/volumes/kubernetes.io~projected/kube-api-access-x4djt DeviceMajor:0 DeviceMinor:561 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/db0925be9adc52361772ef921815ff9b0ca5417617347a7d9e8f0049e699014a/userdata/shm DeviceMajor:0 DeviceMinor:629 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-751 DeviceMajor:0 DeviceMinor:751 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5/volumes/kubernetes.io~projected/kube-api-access-dgjlj DeviceMajor:0 DeviceMinor:1315 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd/volumes/kubernetes.io~projected/kube-api-access-p7wrr DeviceMajor:0 DeviceMinor:246 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1311 DeviceMajor:0 DeviceMinor:1311 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-434 DeviceMajor:0 DeviceMinor:434 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6484af368276a809cf9fc113e39e94b58a7e749f404b7ad55bc0ffd6db6821c5/userdata/shm DeviceMajor:0 DeviceMinor:97 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-100 DeviceMajor:0 DeviceMinor:100 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/684a8167-6c5b-430f-979e-307e58487611/volumes/kubernetes.io~projected/kube-api-access-s9w8k DeviceMajor:0 DeviceMinor:483 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-659 DeviceMajor:0 DeviceMinor:659 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/59237aa6-6250-4619-8ee5-abae59f04b57/volumes/kubernetes.io~projected/kube-api-access-vklwz DeviceMajor:0 DeviceMinor:276 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/2506c282-0b37-4ece-8a0c-885d0b7f7901/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:519 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1036 DeviceMajor:0 DeviceMinor:1036 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1d453639-52ed-4a14-a2ee-02cf9acc2f7c/volumes/kubernetes.io~projected/kube-api-access-59kpw DeviceMajor:0 DeviceMinor:135 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-311 DeviceMajor:0 DeviceMinor:311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/db8564acd67a0d7a69c00ddf2a89b541dc8e61594341a8f533db80c14da1c414/userdata/shm DeviceMajor:0 DeviceMinor:628 Capacity:67108864 
Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1d4599582332a100db8555ba006867716892ce1ecdd5b2f904cbee81575c2c2d/userdata/shm DeviceMajor:0 DeviceMinor:1108 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1240 DeviceMajor:0 DeviceMinor:1240 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8d56b871-a53a-4928-8967-a33ea9dcec2a/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:1326 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/69785167-b4ae-415b-bdcb-029f62effe78/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:140 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c7333319-3fe6-4b3f-b600-6b6df49fcaff/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:240 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/69785167-b4ae-415b-bdcb-029f62effe78/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-1259 DeviceMajor:0 DeviceMinor:1259 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8e70ffdd495dcdb270b1f5bf74d98194840c0bb5429461a2cbed334f4538aeec/userdata/shm DeviceMajor:0 DeviceMinor:95 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-385 DeviceMajor:0 DeviceMinor:385 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1082 DeviceMajor:0 DeviceMinor:1082 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/d3122711a170f449cbae155070984deb894c3febeb5926b33f03b31158614e34/userdata/shm DeviceMajor:0 DeviceMinor:784 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-798 DeviceMajor:0 DeviceMinor:798 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-873 DeviceMajor:0 DeviceMinor:873 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-210 DeviceMajor:0 DeviceMinor:210 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a4c9b781-14c0-469c-bb9e-0c3982a04520/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:243 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-980 DeviceMajor:0 DeviceMinor:980 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-334 DeviceMajor:0 DeviceMinor:334 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/230d9624-2d9d-4036-967b-b530347f05d5/volumes/kubernetes.io~projected/kube-api-access-vqkvs DeviceMajor:0 DeviceMinor:98 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-656 DeviceMajor:0 DeviceMinor:656 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-843 DeviceMajor:0 DeviceMinor:843 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/695549c8-d1fc-429d-9c9f-0a5915dc6074/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:259 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4ff1d9141076f81759691d94a098009541c5d2c236ef8864f1522766d2980580/userdata/shm DeviceMajor:0 DeviceMinor:265 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d2501eec-47c8-47bc-b0c9-28d94c06075b/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 
DeviceMinor:560 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/2e618c5c-52be-4b52-b426-b92555dee9de/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:728 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0dfbee9f7528fe042540e180164336ecf2ece621fbebd18d9dde03c5a49a8d3a/userdata/shm DeviceMajor:0 DeviceMinor:126 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d1ce8d9ee7cab12610683fbe9731b9ea4f3d71878c552326acd5722dd5f1b61a/userdata/shm DeviceMajor:0 DeviceMinor:289 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b/volumes/kubernetes.io~projected/kube-api-access-vddxb DeviceMajor:0 DeviceMinor:455 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/59237aa6-6250-4619-8ee5-abae59f04b57/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:260 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/75ca3e4fc5da353a0ea31c674632f3429b17eb41f067d771200d9b0aea75af5d/userdata/shm DeviceMajor:0 DeviceMinor:295 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5e062e07-8076-444c-b476-4eb2848e9613/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:261 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1140 DeviceMajor:0 DeviceMinor:1140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-469 DeviceMajor:0 DeviceMinor:469 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1befa239880012918c5014596ebf2ea1e19a17105f1c62212a86bd3326b1986f/userdata/shm DeviceMajor:0 DeviceMinor:1106 Capacity:67108864 Type:vfs Inodes:6166278 
HasInodes:true} {Device:/var/lib/kubelet/pods/2506c282-0b37-4ece-8a0c-885d0b7f7901/volumes/kubernetes.io~projected/kube-api-access-6qd6r DeviceMajor:0 DeviceMinor:251 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b27de289-c0f9-47ff-aac6-15b7bc1b178a/volumes/kubernetes.io~projected/kube-api-access-fx4tz DeviceMajor:0 DeviceMinor:254 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-388 DeviceMajor:0 DeviceMinor:388 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f04bc2a9a7b0a2ad7783338e4d002aabfd3d03dc3ab93d584acf59a1f159b65a/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-651 DeviceMajor:0 DeviceMinor:651 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1190 DeviceMajor:0 DeviceMinor:1190 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cb99eaa7ceffb734068bb188738c361f8400867f02f0acef09f3dcc317540b0e/userdata/shm DeviceMajor:0 DeviceMinor:1238 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-832 DeviceMajor:0 DeviceMinor:832 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/acec58956615bf5fc5d4c728869e591e541d368aa9b045c7975cb5d8c938ff55/userdata/shm DeviceMajor:0 DeviceMinor:1004 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-114 DeviceMajor:0 DeviceMinor:114 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/34743ce3-5eda-4c60-99cb-640dd067ebdf/volumes/kubernetes.io~projected/kube-api-access-vzm2t DeviceMajor:0 DeviceMinor:634 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 
Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:0048dbcae18fdbd MacAddress:66:85:10:88:d0:ac Speed:10000 Mtu:8900} {Name:017b12ba663cae1 MacAddress:0a:b7:dc:48:93:d4 Speed:10000 Mtu:8900} {Name:02b45fb8e619cea MacAddress:1e:5c:e3:38:9d:9f Speed:10000 Mtu:8900} {Name:0334ad8c418e31c MacAddress:9e:ed:25:15:7e:54 Speed:10000 Mtu:8900} {Name:03ed4454e9c6237 MacAddress:d2:3e:9e:36:a9:e6 Speed:10000 Mtu:8900} {Name:07e2ee4df3da5cd MacAddress:72:af:54:75:1d:bd Speed:10000 Mtu:8900} {Name:0855efbb779255f MacAddress:aa:97:0c:24:32:6b Speed:10000 Mtu:8900} {Name:0c4934055dbc002 MacAddress:c6:3d:c0:86:53:79 Speed:10000 Mtu:8900} {Name:105b1eab12eec1f MacAddress:c6:fc:bc:f8:09:22 Speed:10000 Mtu:8900} {Name:1befa2398800129 MacAddress:b2:92:f8:8a:5f:e5 Speed:10000 Mtu:8900} {Name:1e734464d78209c MacAddress:c2:2e:77:bc:ce:42 Speed:10000 Mtu:8900} {Name:1ff8802ad134d49 MacAddress:c2:fa:42:43:00:1d Speed:10000 Mtu:8900} {Name:27e39bf106b6e00 MacAddress:1e:ad:74:25:fe:12 Speed:10000 Mtu:8900} {Name:2dfa08dcecf95c4 MacAddress:ce:f5:60:e3:ab:ac Speed:10000 Mtu:8900} {Name:33442d22098554e MacAddress:1a:6c:56:98:08:02 Speed:10000 Mtu:8900} {Name:385456702c716ef MacAddress:5a:ec:15:1f:49:6c Speed:10000 Mtu:8900} {Name:3edd59cb6b6314e MacAddress:ca:2d:54:d6:c7:7a Speed:10000 Mtu:8900} {Name:4f2c49b4aa155e0 MacAddress:8e:a0:32:da:3d:ac Speed:10000 Mtu:8900} {Name:4ff1d9141076f81 MacAddress:6a:53:76:05:32:44 Speed:10000 Mtu:8900} {Name:6caed68f3fc79eb MacAddress:22:58:31:b9:4c:78 Speed:10000 Mtu:8900} {Name:6d07de2e0be321a MacAddress:fa:31:87:79:9f:6d Speed:10000 Mtu:8900} {Name:74e6be503344338 MacAddress:22:d0:dc:ac:b6:0a Speed:10000 Mtu:8900} {Name:75ca3e4fc5da353 MacAddress:0a:a0:ee:c1:05:ba Speed:10000 Mtu:8900} {Name:75d47673076de0f MacAddress:52:aa:b8:3a:21:90 
Speed:10000 Mtu:8900} {Name:7836160a631ad4f MacAddress:66:1e:c1:ef:fd:80 Speed:10000 Mtu:8900} {Name:846c42631e11b31 MacAddress:0a:bd:f9:6a:9f:76 Speed:10000 Mtu:8900} {Name:89fb595810896fd MacAddress:ca:04:dc:c9:82:6f Speed:10000 Mtu:8900} {Name:8e70ffdd495dcdb MacAddress:be:30:2d:9a:a3:fe Speed:10000 Mtu:8900} {Name:98ea530a3e85a55 MacAddress:2a:cb:e0:89:cd:43 Speed:10000 Mtu:8900} {Name:99134c6775f2c15 MacAddress:9e:c4:cb:b8:14:33 Speed:10000 Mtu:8900} {Name:9b7b734a04c19ca MacAddress:02:39:8f:c6:4c:50 Speed:10000 Mtu:8900} {Name:a5c8e6b51575e43 MacAddress:0a:27:a1:b3:35:71 Speed:10000 Mtu:8900} {Name:a99765f7253d989 MacAddress:7e:9e:73:34:4f:72 Speed:10000 Mtu:8900} {Name:ad196ac4d2e3966 MacAddress:32:ef:f2:ae:70:5f Speed:10000 Mtu:8900} {Name:b2fa0e56a1525a9 MacAddress:6a:bf:21:9e:bf:34 Speed:10000 Mtu:8900} {Name:b3fc27d6f88f12a MacAddress:6a:6b:5d:93:b8:91 Speed:10000 Mtu:8900} {Name:b4ab6f7d6521695 MacAddress:4a:28:cf:a9:5e:53 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:3e:37:3f:36:71:15 Speed:0 Mtu:8900} {Name:c073f224d2a8cc6 MacAddress:22:92:49:d7:5c:05 Speed:10000 Mtu:8900} {Name:c4765e33cdc956d MacAddress:ce:0b:91:68:4e:27 Speed:10000 Mtu:8900} {Name:c6c5fc997a3d90f MacAddress:f2:33:3f:2b:e5:a9 Speed:10000 Mtu:8900} {Name:cb99eaa7ceffb73 MacAddress:6a:45:32:ac:6b:ae Speed:10000 Mtu:8900} {Name:cc46ef0ea78121e MacAddress:02:49:1c:85:18:c2 Speed:10000 Mtu:8900} {Name:cedd6b186b2f683 MacAddress:e2:4a:89:46:48:9f Speed:10000 Mtu:8900} {Name:d1ce8d9ee7cab12 MacAddress:9a:af:a7:de:17:51 Speed:10000 Mtu:8900} {Name:d2b7935cea946c9 MacAddress:9e:e1:61:6d:8e:c1 Speed:10000 Mtu:8900} {Name:d306354fd5d2178 MacAddress:3a:57:34:9e:c3:2a Speed:10000 Mtu:8900} {Name:d3122711a170f44 MacAddress:72:85:79:f9:5d:60 Speed:10000 Mtu:8900} {Name:d3647391d6c6aea MacAddress:1e:63:db:76:45:4a Speed:10000 Mtu:8900} {Name:d731a0126023b32 MacAddress:be:8c:66:1e:be:67 Speed:10000 Mtu:8900} {Name:d84a6211eba3f66 
MacAddress:82:4e:cf:cb:b4:80 Speed:10000 Mtu:8900} {Name:db0925be9adc523 MacAddress:6e:8a:de:a6:5a:d8 Speed:10000 Mtu:8900} {Name:db18d33d279edf7 MacAddress:4e:a8:6c:14:7c:42 Speed:10000 Mtu:8900} {Name:db8564acd67a0d7 MacAddress:ae:35:b3:62:9c:b7 Speed:10000 Mtu:8900} {Name:dbf32b84ea4131f MacAddress:e2:1d:d8:33:48:87 Speed:10000 Mtu:8900} {Name:e1d55dfca25559f MacAddress:0a:48:4c:e9:31:84 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:ff:a7:37 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:31:16:05 Speed:-1 Mtu:9000} {Name:f6ba9fbde2ec0f2 MacAddress:52:4f:21:bd:f4:76 Speed:10000 Mtu:8900} {Name:f94d68e1b5a31fd MacAddress:56:f6:1b:de:bf:f7 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:d2:11:f7:5e:7f:07 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 
Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 
Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 16 21:22:49.775044 master-0 kubenswrapper[38936]: I0216 21:22:49.774319 38936 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 16 21:22:49.775044 master-0 kubenswrapper[38936]: I0216 21:22:49.774415 38936 manager.go:233] Version: {KernelVersion:5.14.0-427.107.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202601202224-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 16 21:22:49.775044 master-0 kubenswrapper[38936]: I0216 21:22:49.774638 38936 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 16 21:22:49.775044 master-0 kubenswrapper[38936]: I0216 21:22:49.774879 38936 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 16 21:22:49.775531 master-0 kubenswrapper[38936]: I0216 21:22:49.774916 38936 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"P
ercentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 16 21:22:49.775531 master-0 kubenswrapper[38936]: I0216 21:22:49.775131 38936 topology_manager.go:138] "Creating topology manager with none policy"
Feb 16 21:22:49.775531 master-0 kubenswrapper[38936]: I0216 21:22:49.775143 38936 container_manager_linux.go:303] "Creating device plugin manager"
Feb 16 21:22:49.775531 master-0 kubenswrapper[38936]: I0216 21:22:49.775153 38936 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 21:22:49.775531 master-0 kubenswrapper[38936]: I0216 21:22:49.775177 38936 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 21:22:49.775531 master-0 kubenswrapper[38936]: I0216 21:22:49.775216 38936 state_mem.go:36] "Initialized new in-memory state store"
Feb 16 21:22:49.775531 master-0 kubenswrapper[38936]: I0216 21:22:49.775316 38936 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 16 21:22:49.775531 master-0 kubenswrapper[38936]: I0216 21:22:49.775381 38936 kubelet.go:418] "Attempting to sync node with API server"
Feb 16 21:22:49.775531 master-0 kubenswrapper[38936]: I0216 21:22:49.775397 38936 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 16 21:22:49.775531 master-0 kubenswrapper[38936]: I0216 21:22:49.775415 38936 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 16 21:22:49.775531 master-0 kubenswrapper[38936]: I0216 21:22:49.775428 38936 kubelet.go:324] "Adding apiserver pod source"
Feb 16 21:22:49.775531 master-0 kubenswrapper[38936]: I0216 21:22:49.775440 38936 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 16 21:22:49.778189 master-0 kubenswrapper[38936]: I0216 21:22:49.777986 38936 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-3.rhaos4.18.gite0b87e5.el9" apiVersion="v1"
Feb 16 21:22:49.778272 master-0 kubenswrapper[38936]: I0216 21:22:49.778253 38936 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 16 21:22:49.779028 master-0 kubenswrapper[38936]: I0216 21:22:49.778983 38936 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 16 21:22:49.779243 master-0 kubenswrapper[38936]: I0216 21:22:49.779224 38936 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 16 21:22:49.779291 master-0 kubenswrapper[38936]: I0216 21:22:49.779252 38936 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 16 21:22:49.779291 master-0 kubenswrapper[38936]: I0216 21:22:49.779263 38936 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 16 21:22:49.779291 master-0 kubenswrapper[38936]: I0216 21:22:49.779272 38936 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 16 21:22:49.779291 master-0 kubenswrapper[38936]: I0216 21:22:49.779282 38936 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 16 21:22:49.779291 master-0 kubenswrapper[38936]: I0216 21:22:49.779291 38936 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 16 21:22:49.779459 master-0 kubenswrapper[38936]: I0216 21:22:49.779302 38936 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 16 21:22:49.779459 master-0 kubenswrapper[38936]: I0216 21:22:49.779311 38936 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 16 21:22:49.779459 master-0 kubenswrapper[38936]: I0216 21:22:49.779329 38936 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 16 21:22:49.779459 master-0 kubenswrapper[38936]: I0216 21:22:49.779338 38936 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 16 21:22:49.779459 master-0 kubenswrapper[38936]: I0216 21:22:49.779397 38936 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 16 21:22:49.779459 master-0 kubenswrapper[38936]: I0216 21:22:49.779414 38936 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 16 21:22:49.779459 master-0 kubenswrapper[38936]: I0216 21:22:49.779456 38936 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 16 21:22:49.784673 master-0 kubenswrapper[38936]: I0216 21:22:49.781791 38936 server.go:1280] "Started kubelet"
Feb 16 21:22:49.784673 master-0 kubenswrapper[38936]: I0216 21:22:49.782120 38936 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 16 21:22:49.784673 master-0 kubenswrapper[38936]: I0216 21:22:49.782131 38936 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 16 21:22:49.784673 master-0 kubenswrapper[38936]: I0216 21:22:49.782217 38936 server_v1.go:47] "podresources" method="list" useActivePods=true
Feb 16 21:22:49.784673 master-0 kubenswrapper[38936]: I0216 21:22:49.782675 38936 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 16 21:22:49.784673 master-0 kubenswrapper[38936]: I0216 21:22:49.783786 38936 server.go:449] "Adding debug handlers to kubelet server"
Feb 16 21:22:49.782465 master-0 systemd[1]: Started Kubernetes Kubelet.
Feb 16 21:22:49.795395 master-0 kubenswrapper[38936]: I0216 21:22:49.795353 38936 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 16 21:22:49.808924 master-0 kubenswrapper[38936]: E0216 21:22:49.808831 38936 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Feb 16 21:22:49.811140 master-0 kubenswrapper[38936]: I0216 21:22:49.811027 38936 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 16 21:22:49.817117 master-0 kubenswrapper[38936]: I0216 21:22:49.816905 38936 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 16 21:22:49.817117 master-0 kubenswrapper[38936]: I0216 21:22:49.816980 38936 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 16 21:22:49.817290 master-0 kubenswrapper[38936]: I0216 21:22:49.817173 38936 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-17 20:47:22 +0000 UTC, rotation deadline is 2026-02-17 17:40:15.96883191 +0000 UTC
Feb 16 21:22:49.817290 master-0 kubenswrapper[38936]: I0216 21:22:49.817226 38936 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h17m26.151609267s for next certificate rotation
Feb 16 21:22:49.817718 master-0 kubenswrapper[38936]: I0216 21:22:49.817639 38936 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 16 21:22:49.817718 master-0 kubenswrapper[38936]: I0216 21:22:49.817716 38936 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 16 21:22:49.817885 master-0 kubenswrapper[38936]: I0216 21:22:49.817865 38936 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Feb 16 21:22:49.819819 master-0 kubenswrapper[38936]: I0216 21:22:49.819756 38936 factory.go:55] Registering systemd factory
Feb 16 21:22:49.819819 master-0 kubenswrapper[38936]: I0216 21:22:49.819783 38936 factory.go:221] Registration of the systemd container factory successfully
Feb 16 21:22:49.820229 master-0 kubenswrapper[38936]: I0216 21:22:49.820196 38936 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 16 21:22:49.820569 master-0 kubenswrapper[38936]: I0216 21:22:49.820547 38936 factory.go:153] Registering CRI-O factory
Feb 16 21:22:49.820569 master-0 kubenswrapper[38936]: I0216 21:22:49.820565 38936 factory.go:221] Registration of the crio container factory successfully
Feb 16 21:22:49.820687 master-0 kubenswrapper[38936]: I0216 21:22:49.820640 38936 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 16 21:22:49.820687 master-0 kubenswrapper[38936]: I0216 21:22:49.820683 38936 factory.go:103] Registering Raw factory
Feb 16 21:22:49.820766 master-0 kubenswrapper[38936]: I0216 21:22:49.820699 38936 manager.go:1196] Started watching for new ooms in manager
Feb 16 21:22:49.821194 master-0 kubenswrapper[38936]: I0216 21:22:49.821172 38936 manager.go:319] Starting recovery of all containers
Feb 16 21:22:49.831157 master-0 kubenswrapper[38936]: I0216 21:22:49.830757 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3403d2bf-b093-4f2e-80aa-73a3d6bcaffb" volumeName="kubernetes.io/projected/3403d2bf-b093-4f2e-80aa-73a3d6bcaffb-kube-api-access-gxhfs" seLinuxMountContext=""
Feb 16 21:22:49.831977 master-0 kubenswrapper[38936]: I0216 21:22:49.831878 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17aaf0e1-e9c7-486c-83fc-47d71f5e1f64" volumeName="kubernetes.io/empty-dir/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-tmp" seLinuxMountContext=""
Feb 16 21:22:49.832068 master-0 kubenswrapper[38936]: I0216 21:22:49.832008 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4085413c-9af1-4d2a-ba0f-33b42025cb7f" volumeName="kubernetes.io/projected/4085413c-9af1-4d2a-ba0f-33b42025cb7f-kube-api-access-dw9lp" seLinuxMountContext=""
Feb 16 21:22:49.832068 master-0 kubenswrapper[38936]: I0216 21:22:49.832037 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4b035e85-b2b0-4dee-bb86-3465fc4b98a8" volumeName="kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 16 21:22:49.832068 master-0 kubenswrapper[38936]: I0216 21:22:49.832059 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9bd1f48-6d45-4045-b18e-46ce3005d51d" volumeName="kubernetes.io/configmap/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext=""
Feb 16 21:22:49.832068 master-0 kubenswrapper[38936]: I0216 21:22:49.832072 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf" volumeName="kubernetes.io/projected/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf-kube-api-access-64qvl" seLinuxMountContext=""
Feb 16 21:22:49.832277 master-0 kubenswrapper[38936]: I0216 21:22:49.832086 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff193060-a272-4e4e-990a-83ac410f523d" volumeName="kubernetes.io/configmap/ff193060-a272-4e4e-990a-83ac410f523d-images" seLinuxMountContext=""
Feb 16 21:22:49.832277 master-0 kubenswrapper[38936]: I0216 21:22:49.832099 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="065fcd43-1572-4152-b77b-a6b7ab52a081" volumeName="kubernetes.io/projected/065fcd43-1572-4152-b77b-a6b7ab52a081-kube-api-access-trcfg" seLinuxMountContext=""
Feb 16 21:22:49.832277 master-0 kubenswrapper[38936]: I0216 21:22:49.832120 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d6eb694-9a3d-49d1-bbc1-74ba4450d673" volumeName="kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-tls" seLinuxMountContext=""
Feb 16 21:22:49.832277 master-0 kubenswrapper[38936]: I0216 21:22:49.832131 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e" volumeName="kubernetes.io/configmap/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-config" seLinuxMountContext=""
Feb 16 21:22:49.832277 master-0 kubenswrapper[38936]: I0216 21:22:49.832142 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99ab949e-bd0d-45a7-95d1-8381d9f1f5f3" volumeName="kubernetes.io/projected/99ab949e-bd0d-45a7-95d1-8381d9f1f5f3-kube-api-access-hv45g" seLinuxMountContext=""
Feb 16 21:22:49.832277 master-0 kubenswrapper[38936]: I0216 21:22:49.832191 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" volumeName="kubernetes.io/secret/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-stats-auth" seLinuxMountContext=""
Feb 16 21:22:49.832277 master-0 kubenswrapper[38936]: I0216 21:22:49.832206 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d8d648c7-b84b-4f43-84c9-903aead0891a" volumeName="kubernetes.io/projected/d8d648c7-b84b-4f43-84c9-903aead0891a-kube-api-access-nq9c5" seLinuxMountContext=""
Feb 16 21:22:49.832277 master-0 kubenswrapper[38936]: I0216 21:22:49.832223 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a9f4f96-ca31-4959-93fe-c094caf8e077" volumeName="kubernetes.io/empty-dir/4a9f4f96-ca31-4959-93fe-c094caf8e077-audit-log" seLinuxMountContext=""
Feb 16 21:22:49.832277 master-0 kubenswrapper[38936]: I0216 21:22:49.832235 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69785167-b4ae-415b-bdcb-029f62effe78" volumeName="kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-ovnkube-script-lib" seLinuxMountContext=""
Feb 16 21:22:49.832277 master-0 kubenswrapper[38936]: I0216 21:22:49.832245 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="853452fb-1035-4f22-8aeb-9043d150e8ca" volumeName="kubernetes.io/projected/853452fb-1035-4f22-8aeb-9043d150e8ca-kube-api-access-zqkgp" seLinuxMountContext=""
Feb 16 21:22:49.832277 master-0 kubenswrapper[38936]: I0216 21:22:49.832259 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" volumeName="kubernetes.io/secret/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-metrics-certs" seLinuxMountContext=""
Feb 16 21:22:49.832277 master-0 kubenswrapper[38936]: I0216 21:22:49.832269 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce229d27-837d-4a98-80fc-d56877ae39b8" volumeName="kubernetes.io/empty-dir/ce229d27-837d-4a98-80fc-d56877ae39b8-catalog-content" seLinuxMountContext=""
Feb 16 21:22:49.836602 master-0 kubenswrapper[38936]: I0216 21:22:49.832280 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d453639-52ed-4a14-a2ee-02cf9acc2f7c" volumeName="kubernetes.io/projected/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-kube-api-access-59kpw" seLinuxMountContext=""
Feb 16 21:22:49.836879 master-0 kubenswrapper[38936]: I0216 21:22:49.836815 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="88c9d2fb-763f-4405-8d1a-c39039b41d3b" volumeName="kubernetes.io/secret/88c9d2fb-763f-4405-8d1a-c39039b41d3b-proxy-tls" seLinuxMountContext=""
Feb 16 21:22:49.837001 master-0 kubenswrapper[38936]: I0216 21:22:49.836979 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0b7a368-1408-4fc3-ae25-4613b74e7fca" volumeName="kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext=""
Feb 16 21:22:49.837248 master-0 kubenswrapper[38936]: I0216 21:22:49.837137 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2501eec-47c8-47bc-b0c9-28d94c06075b" volumeName="kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 21:22:49.837420 master-0 kubenswrapper[38936]: I0216 21:22:49.837352 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec7dd4ea-a139-45d4-96a4-506da1567292" volumeName="kubernetes.io/configmap/ec7dd4ea-a139-45d4-96a4-506da1567292-telemetry-config" seLinuxMountContext=""
Feb 16 21:22:49.837420 master-0 kubenswrapper[38936]: I0216 21:22:49.837384 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59237aa6-6250-4619-8ee5-abae59f04b57" volumeName="kubernetes.io/secret/59237aa6-6250-4619-8ee5-abae59f04b57-serving-cert" seLinuxMountContext=""
Feb 16 21:22:49.837548 master-0 kubenswrapper[38936]: I0216 21:22:49.837455 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd" volumeName="kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls" seLinuxMountContext=""
Feb 16 21:22:49.837548 master-0 kubenswrapper[38936]: I0216 21:22:49.837483 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1b61063e-775e-421d-bf73-a6ef134293a0" volumeName="kubernetes.io/projected/1b61063e-775e-421d-bf73-a6ef134293a0-kube-api-access-x7pk6" seLinuxMountContext=""
Feb 16 21:22:49.837668 master-0 kubenswrapper[38936]: I0216 21:22:49.837561 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cef33294-81fb-41a2-811d-2565f94514d1" volumeName="kubernetes.io/configmap/cef33294-81fb-41a2-811d-2565f94514d1-trusted-ca" seLinuxMountContext=""
Feb 16 21:22:49.837729 master-0 kubenswrapper[38936]: I0216 21:22:49.837675 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="065fcd43-1572-4152-b77b-a6b7ab52a081" volumeName="kubernetes.io/configmap/065fcd43-1572-4152-b77b-a6b7ab52a081-config" seLinuxMountContext=""
Feb 16 21:22:49.837729 master-0 kubenswrapper[38936]: I0216 21:22:49.837700 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2501eec-47c8-47bc-b0c9-28d94c06075b" volumeName="kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-audit" seLinuxMountContext=""
Feb 16 21:22:49.837729 master-0 kubenswrapper[38936]: I0216 21:22:49.837719 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="853452fb-1035-4f22-8aeb-9043d150e8ca" volumeName="kubernetes.io/empty-dir/853452fb-1035-4f22-8aeb-9043d150e8ca-utilities" seLinuxMountContext=""
Feb 16 21:22:49.837874 master-0 kubenswrapper[38936]: I0216 21:22:49.837789 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa2e9bbc-3962-45f5-a7cc-2dc059409e70" volumeName="kubernetes.io/projected/aa2e9bbc-3962-45f5-a7cc-2dc059409e70-kube-api-access-wx8bf" seLinuxMountContext=""
Feb 16 21:22:49.837874 master-0 kubenswrapper[38936]: I0216 21:22:49.837808 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b27e0202-8bdb-4a36-8c3e-0c203f7665b8" volumeName="kubernetes.io/configmap/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-cni-binary-copy" seLinuxMountContext=""
Feb 16 21:22:49.837874 master-0 kubenswrapper[38936]: I0216 21:22:49.837852 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="27c20f63-9bfb-4703-94d5-0c65475e08d1" volumeName="kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-config" seLinuxMountContext=""
Feb 16 21:22:49.838580 master-0 kubenswrapper[38936]: I0216 21:22:49.837883 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69785167-b4ae-415b-bdcb-029f62effe78" volumeName="kubernetes.io/secret/69785167-b4ae-415b-bdcb-029f62effe78-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 16 21:22:49.838580 master-0 kubenswrapper[38936]: I0216 21:22:49.837945 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="484154d0-66c8-4d0e-bf1b-f48d0abfe628" volumeName="kubernetes.io/secret/484154d0-66c8-4d0e-bf1b-f48d0abfe628-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 16 21:22:49.838580 master-0 kubenswrapper[38936]: I0216 21:22:49.837973 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a9f4f96-ca31-4959-93fe-c094caf8e077" volumeName="kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-client-certs" seLinuxMountContext=""
Feb 16 21:22:49.838580 master-0 kubenswrapper[38936]: I0216 21:22:49.838028 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0b7a368-1408-4fc3-ae25-4613b74e7fca" volumeName="kubernetes.io/configmap/a0b7a368-1408-4fc3-ae25-4613b74e7fca-metrics-client-ca" seLinuxMountContext=""
Feb 16 21:22:49.838580 master-0 kubenswrapper[38936]: I0216 21:22:49.838054 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e8194cdc-3133-49e2-9579-a747c0bf2b16" volumeName="kubernetes.io/empty-dir/e8194cdc-3133-49e2-9579-a747c0bf2b16-cache" seLinuxMountContext=""
Feb 16 21:22:49.838580 master-0 kubenswrapper[38936]: I0216 21:22:49.838070 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d903d23-8e0b-424b-bcd0-e0a00f306e49" volumeName="kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t" seLinuxMountContext=""
Feb 16 21:22:49.838580 master-0 kubenswrapper[38936]: I0216 21:22:49.838139 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a986ba3-2aea-4133-a05b-f69d4e0d8d3b" volumeName="kubernetes.io/projected/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-kube-api-access-vxtft" seLinuxMountContext=""
Feb 16 21:22:49.838580 master-0 kubenswrapper[38936]: I0216 21:22:49.838163 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2506c282-0b37-4ece-8a0c-885d0b7f7901" volumeName="kubernetes.io/projected/2506c282-0b37-4ece-8a0c-885d0b7f7901-kube-api-access-6qd6r" seLinuxMountContext=""
Feb 16 21:22:49.838580 master-0 kubenswrapper[38936]: I0216 21:22:49.838218 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e618c5c-52be-4b52-b426-b92555dee9de" volumeName="kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert" seLinuxMountContext=""
Feb 16 21:22:49.838580 master-0 kubenswrapper[38936]: I0216 21:22:49.838255 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e3ccb9a-4a5d-4a04-8334-b1e303b215a5" volumeName="kubernetes.io/configmap/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 16 21:22:49.838580 master-0 kubenswrapper[38936]: I0216 21:22:49.838271 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69785167-b4ae-415b-bdcb-029f62effe78" volumeName="kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-ovnkube-config" seLinuxMountContext=""
Feb 16 21:22:49.838580 master-0 kubenswrapper[38936]: I0216 21:22:49.838519 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8d56b871-a53a-4928-8967-a33ea9dcec2a" volumeName="kubernetes.io/projected/8d56b871-a53a-4928-8967-a33ea9dcec2a-kube-api-access-22pl9" seLinuxMountContext=""
Feb 16 21:22:49.838580 master-0 kubenswrapper[38936]: I0216 21:22:49.838544 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd49e653-3b42-4950-8f5f-2b2ecb683678" volumeName="kubernetes.io/secret/bd49e653-3b42-4950-8f5f-2b2ecb683678-etcd-client" seLinuxMountContext=""
Feb 16 21:22:49.839375 master-0 kubenswrapper[38936]: I0216 21:22:49.838943 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="065fcd43-1572-4152-b77b-a6b7ab52a081" volumeName="kubernetes.io/secret/065fcd43-1572-4152-b77b-a6b7ab52a081-machine-approver-tls" seLinuxMountContext=""
Feb 16 21:22:49.839375 master-0 kubenswrapper[38936]: I0216 21:22:49.838970 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" volumeName="kubernetes.io/projected/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-kube-api-access-7xgcn" seLinuxMountContext=""
Feb 16 21:22:49.839375 master-0 kubenswrapper[38936]: I0216 21:22:49.839019 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d6eb694-9a3d-49d1-bbc1-74ba4450d673" volumeName="kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-kube-rbac-proxy-config" seLinuxMountContext=""
Feb 16 21:22:49.839375 master-0 kubenswrapper[38936]: I0216 21:22:49.839032 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5d4ac48-aed3-46b9-9b2a-d741121e05b4" volumeName="kubernetes.io/configmap/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-service-ca" seLinuxMountContext=""
Feb 16 21:22:49.839375 master-0 kubenswrapper[38936]: I0216 21:22:49.839050 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9bd1f48-6d45-4045-b18e-46ce3005d51d" volumeName="kubernetes.io/empty-dir/e9bd1f48-6d45-4045-b18e-46ce3005d51d-volume-directive-shadow" seLinuxMountContext=""
Feb 16 21:22:49.839375 master-0 kubenswrapper[38936]: I0216 21:22:49.839067 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="27c20f63-9bfb-4703-94d5-0c65475e08d1" volumeName="kubernetes.io/secret/27c20f63-9bfb-4703-94d5-0c65475e08d1-serving-cert" seLinuxMountContext=""
Feb 16 21:22:49.839375 master-0 kubenswrapper[38936]: I0216 21:22:49.839140 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd49e653-3b42-4950-8f5f-2b2ecb683678" volumeName="kubernetes.io/configmap/bd49e653-3b42-4950-8f5f-2b2ecb683678-audit-policies" seLinuxMountContext=""
Feb 16 21:22:49.839375 master-0 kubenswrapper[38936]: I0216 21:22:49.839191 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="065fcd43-1572-4152-b77b-a6b7ab52a081" volumeName="kubernetes.io/configmap/065fcd43-1572-4152-b77b-a6b7ab52a081-auth-proxy-config" seLinuxMountContext=""
Feb 16 21:22:49.839375 master-0 kubenswrapper[38936]: I0216 21:22:49.839212 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d453639-52ed-4a14-a2ee-02cf9acc2f7c" volumeName="kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs" seLinuxMountContext=""
Feb 16 21:22:49.839375 master-0 kubenswrapper[38936]: I0216 21:22:49.839238 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf" volumeName="kubernetes.io/secret/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf-metrics-tls" seLinuxMountContext=""
Feb 16 21:22:49.839375 master-0 kubenswrapper[38936]: I0216 21:22:49.839311 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b648d9e-a892-4951-b0e2-fed6b16273d4" volumeName="kubernetes.io/secret/8b648d9e-a892-4951-b0e2-fed6b16273d4-cert" seLinuxMountContext=""
Feb 16 21:22:49.839375 master-0 kubenswrapper[38936]: I0216 21:22:49.839351 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7b30888-5994-4968-9db6-9533ac60c92e" volumeName="kubernetes.io/secret/f7b30888-5994-4968-9db6-9533ac60c92e-openshift-state-metrics-tls" seLinuxMountContext=""
Feb 16 21:22:49.839375 master-0 kubenswrapper[38936]: I0216 21:22:49.839368 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb1eac23-18a5-4706-adcd-81d83e04cd12" volumeName="kubernetes.io/secret/fb1eac23-18a5-4706-adcd-81d83e04cd12-proxy-tls" seLinuxMountContext=""
Feb 16 21:22:49.839934 master-0 kubenswrapper[38936]: I0216 21:22:49.839502 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03a5021d-8a5c-4011-a9f9-c5eb38d5f236" volumeName="kubernetes.io/projected/03a5021d-8a5c-4011-a9f9-c5eb38d5f236-kube-api-access-ldzxc" seLinuxMountContext=""
Feb 16 21:22:49.839934 master-0 kubenswrapper[38936]: I0216 21:22:49.839520 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62935559-041f-4694-9d36-adc809d079b4" volumeName="kubernetes.io/projected/62935559-041f-4694-9d36-adc809d079b4-kube-api-access-6sq4t" seLinuxMountContext=""
Feb 16 21:22:49.839934 master-0 kubenswrapper[38936]: I0216 21:22:49.839569 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b1ac9776-54c4-46ce-b898-01c8cf35e593" volumeName="kubernetes.io/projected/b1ac9776-54c4-46ce-b898-01c8cf35e593-kube-api-access-vzx4s" seLinuxMountContext=""
Feb 16 21:22:49.839934 master-0 kubenswrapper[38936]: I0216 21:22:49.839593 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b27e0202-8bdb-4a36-8c3e-0c203f7665b8" volumeName="kubernetes.io/projected/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-kube-api-access-zmvtk" seLinuxMountContext=""
Feb 16 21:22:49.839934 master-0 kubenswrapper[38936]: I0216 21:22:49.839612 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2501eec-47c8-47bc-b0c9-28d94c06075b" volumeName="kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-config" seLinuxMountContext=""
Feb 16 21:22:49.839934 master-0 kubenswrapper[38936]: I0216 21:22:49.839673 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9bd1f48-6d45-4045-b18e-46ce3005d51d" volumeName="kubernetes.io/projected/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-api-access-wckst" seLinuxMountContext=""
Feb 16 21:22:49.839934 master-0 kubenswrapper[38936]: I0216 21:22:49.839690 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a9f4f96-ca31-4959-93fe-c094caf8e077" volumeName="kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-configmap-kubelet-serving-ca-bundle" seLinuxMountContext=""
Feb 16 21:22:49.839934 master-0 kubenswrapper[38936]: I0216 21:22:49.839729 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9615af2-cad5-4705-9c2f-6f3c97026100" volumeName="kubernetes.io/configmap/e9615af2-cad5-4705-9c2f-6f3c97026100-service-ca-bundle" seLinuxMountContext=""
Feb 16 21:22:49.839934 master-0 kubenswrapper[38936]: I0216 21:22:49.839758 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d6eb694-9a3d-49d1-bbc1-74ba4450d673" volumeName="kubernetes.io/empty-dir/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-textfile" seLinuxMountContext=""
Feb 16 21:22:49.839934 master-0 kubenswrapper[38936]: I0216 21:22:49.839775 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="88c9d2fb-763f-4405-8d1a-c39039b41d3b" volumeName="kubernetes.io/projected/88c9d2fb-763f-4405-8d1a-c39039b41d3b-kube-api-access-8qcq9" seLinuxMountContext=""
Feb 16 21:22:49.839934 master-0 kubenswrapper[38936]: I0216 21:22:49.839846 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9615af2-cad5-4705-9c2f-6f3c97026100" volumeName="kubernetes.io/projected/e9615af2-cad5-4705-9c2f-6f3c97026100-kube-api-access-npfk7" seLinuxMountContext=""
Feb 16 21:22:49.839934 master-0 kubenswrapper[38936]: I0216 21:22:49.839866 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55095f4f-cac0-456c-9ccc-45869392408c" volumeName="kubernetes.io/secret/55095f4f-cac0-456c-9ccc-45869392408c-samples-operator-tls" seLinuxMountContext=""
Feb 16 21:22:49.839934 master-0 kubenswrapper[38936]: I0216 21:22:49.839906 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="484154d0-66c8-4d0e-bf1b-f48d0abfe628" volumeName="kubernetes.io/configmap/484154d0-66c8-4d0e-bf1b-f48d0abfe628-ovnkube-config" seLinuxMountContext=""
Feb 16 21:22:49.839934 master-0 kubenswrapper[38936]: I0216 21:22:49.839931 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70d217a9-86b7-47b9-a7da-9ac920b9c7c2" volumeName="kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-config" seLinuxMountContext=""
Feb 16 21:22:49.839934 master-0 kubenswrapper[38936]: I0216 21:22:49.839947 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70d217a9-86b7-47b9-a7da-9ac920b9c7c2" volumeName="kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-ca" seLinuxMountContext=""
Feb 16 21:22:49.840480 master-0 kubenswrapper[38936]: I0216 21:22:49.839964 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d6eb694-9a3d-49d1-bbc1-74ba4450d673" volumeName="kubernetes.io/configmap/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-metrics-client-ca" seLinuxMountContext=""
Feb 16 21:22:49.840480 master-0 kubenswrapper[38936]: I0216 21:22:49.840014 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa2e9bbc-3962-45f5-a7cc-2dc059409e70" volumeName="kubernetes.io/secret/aa2e9bbc-3962-45f5-a7cc-2dc059409e70-cluster-storage-operator-serving-cert" seLinuxMountContext=""
Feb 16 21:22:49.840480 master-0 kubenswrapper[38936]: I0216 21:22:49.840033 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03a5021d-8a5c-4011-a9f9-c5eb38d5f236" volumeName="kubernetes.io/configmap/03a5021d-8a5c-4011-a9f9-c5eb38d5f236-cco-trusted-ca" seLinuxMountContext=""
Feb 16 21:22:49.840480 master-0 kubenswrapper[38936]: I0216 21:22:49.840088 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3e3ccb9a-4a5d-4a04-8334-b1e303b215a5" volumeName="kubernetes.io/projected/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-kube-api-access-dgjlj" seLinuxMountContext=""
Feb 16 21:22:49.840480 master-0 kubenswrapper[38936]: I0216 21:22:49.840102 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="408a9364-3730-4017-b1e4-c85d6a504168" volumeName="kubernetes.io/secret/408a9364-3730-4017-b1e4-c85d6a504168-serving-cert" seLinuxMountContext=""
Feb 16 21:22:49.840480 master-0 kubenswrapper[38936]: I0216 21:22:49.840118 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62935559-041f-4694-9d36-adc809d079b4" volumeName="kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-whereabouts-configmap" seLinuxMountContext=""
Feb 16 21:22:49.840480 master-0 kubenswrapper[38936]: I0216 21:22:49.840129 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b648d9e-a892-4951-b0e2-fed6b16273d4" volumeName="kubernetes.io/configmap/8b648d9e-a892-4951-b0e2-fed6b16273d4-config" seLinuxMountContext=""
Feb 16 21:22:49.840480 master-0 kubenswrapper[38936]: I0216 21:22:49.840207 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e8194cdc-3133-49e2-9579-a747c0bf2b16" volumeName="kubernetes.io/secret/e8194cdc-3133-49e2-9579-a747c0bf2b16-catalogserver-certs" seLinuxMountContext=""
Feb 16 21:22:49.840480 master-0 kubenswrapper[38936]: I0216 21:22:49.840300 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2ab0a907-7abe-4808-ba21-bdda1506eae2" volumeName="kubernetes.io/configmap/2ab0a907-7abe-4808-ba21-bdda1506eae2-config" seLinuxMountContext=""
Feb 16 21:22:49.840480 master-0 kubenswrapper[38936]: I0216 21:22:49.840323 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf" volumeName="kubernetes.io/configmap/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf-config-volume" seLinuxMountContext=""
Feb 16 21:22:49.840480 master-0 kubenswrapper[38936]: I0216 21:22:49.840352 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="88c9d2fb-763f-4405-8d1a-c39039b41d3b" volumeName="kubernetes.io/configmap/88c9d2fb-763f-4405-8d1a-c39039b41d3b-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 16 21:22:49.840480 master-0 kubenswrapper[38936]: I0216 21:22:49.840369 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd49e653-3b42-4950-8f5f-2b2ecb683678" volumeName="kubernetes.io/projected/bd49e653-3b42-4950-8f5f-2b2ecb683678-kube-api-access-kf4qg" seLinuxMountContext=""
Feb 16 21:22:49.840480 master-0 kubenswrapper[38936]: I0216 21:22:49.840388 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9d71a7a-a751-4de4-9c76-9bac85fe0177" volumeName="kubernetes.io/configmap/d9d71a7a-a751-4de4-9c76-9bac85fe0177-iptables-alerter-script" seLinuxMountContext=""
Feb 16 21:22:49.840480 master-0 kubenswrapper[38936]: I0216 21:22:49.840408 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7b30888-5994-4968-9db6-9533ac60c92e" volumeName="kubernetes.io/secret/f7b30888-5994-4968-9db6-9533ac60c92e-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext=""
Feb 16 21:22:49.840480 master-0 kubenswrapper[38936]: I0216 21:22:49.840423 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1489d1b6-d8a1-453a-bff3-8adfd4335903" volumeName="kubernetes.io/secret/1489d1b6-d8a1-453a-bff3-8adfd4335903-serving-cert" seLinuxMountContext=""
Feb 16 21:22:49.840480 master-0 kubenswrapper[38936]: I0216 21:22:49.840500 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="913951bb-1702-4b71-862c-a166bc7a62e0" volumeName="kubernetes.io/secret/913951bb-1702-4b71-862c-a166bc7a62e0-certs" seLinuxMountContext=""
Feb 16 21:22:49.841103 master-0 kubenswrapper[38936]: I0216 21:22:49.840551 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2501eec-47c8-47bc-b0c9-28d94c06075b" volumeName="kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-image-import-ca" seLinuxMountContext=""
Feb 16 21:22:49.841103 master-0
kubenswrapper[38936]: I0216 21:22:49.840624 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d7d0416-5f50-42bd-826b-92eecf9adcec" volumeName="kubernetes.io/projected/1d7d0416-5f50-42bd-826b-92eecf9adcec-kube-api-access-25mkq" seLinuxMountContext="" Feb 16 21:22:49.841103 master-0 kubenswrapper[38936]: I0216 21:22:49.840713 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="408a9364-3730-4017-b1e4-c85d6a504168" volumeName="kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-config" seLinuxMountContext="" Feb 16 21:22:49.841103 master-0 kubenswrapper[38936]: I0216 21:22:49.840774 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70d217a9-86b7-47b9-a7da-9ac920b9c7c2" volumeName="kubernetes.io/secret/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-serving-cert" seLinuxMountContext="" Feb 16 21:22:49.841103 master-0 kubenswrapper[38936]: I0216 21:22:49.840791 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b648d9e-a892-4951-b0e2-fed6b16273d4" volumeName="kubernetes.io/secret/8b648d9e-a892-4951-b0e2-fed6b16273d4-cluster-baremetal-operator-tls" seLinuxMountContext="" Feb 16 21:22:49.841103 master-0 kubenswrapper[38936]: I0216 21:22:49.840804 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1489d1b6-d8a1-453a-bff3-8adfd4335903" volumeName="kubernetes.io/configmap/1489d1b6-d8a1-453a-bff3-8adfd4335903-config" seLinuxMountContext="" Feb 16 21:22:49.841103 master-0 kubenswrapper[38936]: I0216 21:22:49.840878 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b6be6de-6fcc-4f57-b163-fe8f970a01a4" volumeName="kubernetes.io/secret/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-serving-cert" seLinuxMountContext="" Feb 16 21:22:49.841103 master-0 
kubenswrapper[38936]: I0216 21:22:49.840911 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70d217a9-86b7-47b9-a7da-9ac920b9c7c2" volumeName="kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-service-ca" seLinuxMountContext="" Feb 16 21:22:49.841103 master-0 kubenswrapper[38936]: I0216 21:22:49.840958 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5e062e07-8076-444c-b476-4eb2848e9613" volumeName="kubernetes.io/empty-dir/5e062e07-8076-444c-b476-4eb2848e9613-operand-assets" seLinuxMountContext="" Feb 16 21:22:49.841103 master-0 kubenswrapper[38936]: I0216 21:22:49.840978 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="695549c8-d1fc-429d-9c9f-0a5915dc6074" volumeName="kubernetes.io/secret/695549c8-d1fc-429d-9c9f-0a5915dc6074-serving-cert" seLinuxMountContext="" Feb 16 21:22:49.841103 master-0 kubenswrapper[38936]: I0216 21:22:49.841000 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0b7a368-1408-4fc3-ae25-4613b74e7fca" volumeName="kubernetes.io/projected/a0b7a368-1408-4fc3-ae25-4613b74e7fca-kube-api-access-98n4h" seLinuxMountContext="" Feb 16 21:22:49.841103 master-0 kubenswrapper[38936]: I0216 21:22:49.841025 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b28234d1-1d9a-4d9f-9ad1-e3c682bed492" volumeName="kubernetes.io/configmap/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-trusted-ca" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841155 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba294358-051a-4f09-b182-710d3d6778c5" volumeName="kubernetes.io/configmap/ba294358-051a-4f09-b182-710d3d6778c5-config" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 
kubenswrapper[38936]: I0216 21:22:49.841182 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a9f4f96-ca31-4959-93fe-c094caf8e077" volumeName="kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-metrics-server-audit-profiles" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841244 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1489d1b6-d8a1-453a-bff3-8adfd4335903" volumeName="kubernetes.io/projected/1489d1b6-d8a1-453a-bff3-8adfd4335903-kube-api-access-xc47v" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841282 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a9f4f96-ca31-4959-93fe-c094caf8e077" volumeName="kubernetes.io/projected/4a9f4f96-ca31-4959-93fe-c094caf8e077-kube-api-access-xrc4z" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841314 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="913951bb-1702-4b71-862c-a166bc7a62e0" volumeName="kubernetes.io/secret/913951bb-1702-4b71-862c-a166bc7a62e0-node-bootstrap-token" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841338 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9bd1f48-6d45-4045-b18e-46ce3005d51d" volumeName="kubernetes.io/configmap/e9bd1f48-6d45-4045-b18e-46ce3005d51d-metrics-client-ca" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841363 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b02b740-5698-4e9a-90fe-2873bd0b0958" volumeName="kubernetes.io/secret/0b02b740-5698-4e9a-90fe-2873bd0b0958-serving-cert" seLinuxMountContext="" Feb 16 
21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841394 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="484154d0-66c8-4d0e-bf1b-f48d0abfe628" volumeName="kubernetes.io/projected/484154d0-66c8-4d0e-bf1b-f48d0abfe628-kube-api-access-b6wng" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841424 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55095f4f-cac0-456c-9ccc-45869392408c" volumeName="kubernetes.io/projected/55095f4f-cac0-456c-9ccc-45869392408c-kube-api-access-7hnc6" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841449 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" volumeName="kubernetes.io/secret/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-default-certificate" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841513 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff193060-a272-4e4e-990a-83ac410f523d" volumeName="kubernetes.io/secret/ff193060-a272-4e4e-990a-83ac410f523d-proxy-tls" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841542 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="408a9364-3730-4017-b1e4-c85d6a504168" volumeName="kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-proxy-ca-bundles" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841561 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="27c20f63-9bfb-4703-94d5-0c65475e08d1" volumeName="kubernetes.io/projected/27c20f63-9bfb-4703-94d5-0c65475e08d1-kube-api-access-hjsnz" 
seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841580 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b27e0202-8bdb-4a36-8c3e-0c203f7665b8" volumeName="kubernetes.io/configmap/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-daemon-config" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841598 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9d71a7a-a751-4de4-9c76-9bac85fe0177" volumeName="kubernetes.io/projected/d9d71a7a-a751-4de4-9c76-9bac85fe0177-kube-api-access-jkdzb" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841618 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da07cd48-b1e8-4ccc-b980-84702cedb042" volumeName="kubernetes.io/secret/da07cd48-b1e8-4ccc-b980-84702cedb042-tls-certificates" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841634 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d7d0416-5f50-42bd-826b-92eecf9adcec" volumeName="kubernetes.io/secret/1d7d0416-5f50-42bd-826b-92eecf9adcec-cert" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841694 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c7333319-3fe6-4b3f-b600-6b6df49fcaff" volumeName="kubernetes.io/projected/c7333319-3fe6-4b3f-b600-6b6df49fcaff-kube-api-access-qx2kd" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841717 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e618c5c-52be-4b52-b426-b92555dee9de" 
volumeName="kubernetes.io/projected/2e618c5c-52be-4b52-b426-b92555dee9de-kube-api-access-nrc7l" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841734 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="913951bb-1702-4b71-862c-a166bc7a62e0" volumeName="kubernetes.io/projected/913951bb-1702-4b71-862c-a166bc7a62e0-kube-api-access-pgvx2" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841766 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd49e653-3b42-4950-8f5f-2b2ecb683678" volumeName="kubernetes.io/secret/bd49e653-3b42-4950-8f5f-2b2ecb683678-encryption-config" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841787 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70d217a9-86b7-47b9-a7da-9ac920b9c7c2" volumeName="kubernetes.io/secret/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-client" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841844 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d6eb694-9a3d-49d1-bbc1-74ba4450d673" volumeName="kubernetes.io/projected/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-kube-api-access-6jh6l" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841898 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d8d648c7-b84b-4f43-84c9-903aead0891a" volumeName="kubernetes.io/empty-dir/d8d648c7-b84b-4f43-84c9-903aead0891a-utilities" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841912 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e7adbe32-b8b9-438e-a2e3-f93146a97424" volumeName="kubernetes.io/projected/e7adbe32-b8b9-438e-a2e3-f93146a97424-kube-api-access" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841926 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff193060-a272-4e4e-990a-83ac410f523d" volumeName="kubernetes.io/projected/ff193060-a272-4e4e-990a-83ac410f523d-kube-api-access-wmhq9" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.841993 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="230d9624-2d9d-4036-967b-b530347f05d5" volumeName="kubernetes.io/configmap/230d9624-2d9d-4036-967b-b530347f05d5-auth-proxy-config" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842013 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="302156cc-9dca-4a66-9e6a-ba2c7e738c92" volumeName="kubernetes.io/projected/302156cc-9dca-4a66-9e6a-ba2c7e738c92-kube-api-access-zxcg6" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842028 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59237aa6-6250-4619-8ee5-abae59f04b57" volumeName="kubernetes.io/empty-dir/59237aa6-6250-4619-8ee5-abae59f04b57-available-featuregates" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842068 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd49e653-3b42-4950-8f5f-2b2ecb683678" volumeName="kubernetes.io/configmap/bd49e653-3b42-4950-8f5f-2b2ecb683678-etcd-serving-ca" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842091 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="d2501eec-47c8-47bc-b0c9-28d94c06075b" volumeName="kubernetes.io/secret/d2501eec-47c8-47bc-b0c9-28d94c06075b-encryption-config" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842103 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9615af2-cad5-4705-9c2f-6f3c97026100" volumeName="kubernetes.io/configmap/e9615af2-cad5-4705-9c2f-6f3c97026100-trusted-ca-bundle" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842117 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1489d1b6-d8a1-453a-bff3-8adfd4335903" volumeName="kubernetes.io/configmap/1489d1b6-d8a1-453a-bff3-8adfd4335903-client-ca" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842172 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f275e79f-923c-4d3a-8ed4-084a122ddcf4" volumeName="kubernetes.io/empty-dir/f275e79f-923c-4d3a-8ed4-084a122ddcf4-utilities" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842249 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2506c282-0b37-4ece-8a0c-885d0b7f7901" volumeName="kubernetes.io/configmap/2506c282-0b37-4ece-8a0c-885d0b7f7901-trusted-ca" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842271 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59237aa6-6250-4619-8ee5-abae59f04b57" volumeName="kubernetes.io/projected/59237aa6-6250-4619-8ee5-abae59f04b57-kube-api-access-vklwz" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842284 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="853452fb-1035-4f22-8aeb-9043d150e8ca" volumeName="kubernetes.io/empty-dir/853452fb-1035-4f22-8aeb-9043d150e8ca-catalog-content" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842297 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99ab949e-bd0d-45a7-95d1-8381d9f1f5f3" volumeName="kubernetes.io/configmap/99ab949e-bd0d-45a7-95d1-8381d9f1f5f3-signing-cabundle" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842344 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="27c20f63-9bfb-4703-94d5-0c65475e08d1" volumeName="kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-trusted-ca-bundle" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842364 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b27de289-c0f9-47ff-aac6-15b7bc1b178a" volumeName="kubernetes.io/projected/b27de289-c0f9-47ff-aac6-15b7bc1b178a-kube-api-access-fx4tz" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842380 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cef33294-81fb-41a2-811d-2565f94514d1" volumeName="kubernetes.io/projected/cef33294-81fb-41a2-811d-2565f94514d1-bound-sa-token" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842413 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9bd1f48-6d45-4045-b18e-46ce3005d51d" volumeName="kubernetes.io/secret/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842438 38936 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="3e3ccb9a-4a5d-4a04-8334-b1e303b215a5" volumeName="kubernetes.io/empty-dir/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-ready" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842453 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="319dc882-e1f5-40f9-99f4-2bae028337e5" volumeName="kubernetes.io/projected/319dc882-e1f5-40f9-99f4-2bae028337e5-kube-api-access-mtrzq" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842465 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62935559-041f-4694-9d36-adc809d079b4" volumeName="kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-cni-binary-copy" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842508 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b6be6de-6fcc-4f57-b163-fe8f970a01a4" volumeName="kubernetes.io/configmap/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-config" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842524 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a4c9b781-14c0-469c-bb9e-0c3982a04520" volumeName="kubernetes.io/projected/a4c9b781-14c0-469c-bb9e-0c3982a04520-kube-api-access-8sd27" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842537 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2ab0a907-7abe-4808-ba21-bdda1506eae2" volumeName="kubernetes.io/secret/2ab0a907-7abe-4808-ba21-bdda1506eae2-serving-cert" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842580 38936 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="2e618c5c-52be-4b52-b426-b92555dee9de" volumeName="kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-profile-collector-cert" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842594 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="484154d0-66c8-4d0e-bf1b-f48d0abfe628" volumeName="kubernetes.io/configmap/484154d0-66c8-4d0e-bf1b-f48d0abfe628-env-overrides" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842608 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5e062e07-8076-444c-b476-4eb2848e9613" volumeName="kubernetes.io/projected/5e062e07-8076-444c-b476-4eb2848e9613-kube-api-access-dfmv6" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842621 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec7dd4ea-a139-45d4-96a4-506da1567292" volumeName="kubernetes.io/projected/ec7dd4ea-a139-45d4-96a4-506da1567292-kube-api-access-9jt7h" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842632 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="230d9624-2d9d-4036-967b-b530347f05d5" volumeName="kubernetes.io/configmap/230d9624-2d9d-4036-967b-b530347f05d5-images" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842693 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69785167-b4ae-415b-bdcb-029f62effe78" volumeName="kubernetes.io/projected/69785167-b4ae-415b-bdcb-029f62effe78-kube-api-access-dqm46" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842709 38936 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="70d217a9-86b7-47b9-a7da-9ac920b9c7c2" volumeName="kubernetes.io/projected/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-kube-api-access-ll4rg" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842721 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="88f19cea-60ed-4977-a906-75deec51fc3d" volumeName="kubernetes.io/projected/88f19cea-60ed-4977-a906-75deec51fc3d-kube-api-access-x85fb" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842733 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="302156cc-9dca-4a66-9e6a-ba2c7e738c92" volumeName="kubernetes.io/secret/302156cc-9dca-4a66-9e6a-ba2c7e738c92-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842769 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2ab0a907-7abe-4808-ba21-bdda1506eae2" volumeName="kubernetes.io/projected/2ab0a907-7abe-4808-ba21-bdda1506eae2-kube-api-access-9pw88" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842785 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a9f4f96-ca31-4959-93fe-c094caf8e077" volumeName="kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-client-ca-bundle" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842796 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="684a8167-6c5b-430f-979e-307e58487611" volumeName="kubernetes.io/projected/684a8167-6c5b-430f-979e-307e58487611-kube-api-access-s9w8k" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842807 38936 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b28234d1-1d9a-4d9f-9ad1-e3c682bed492" volumeName="kubernetes.io/projected/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-kube-api-access-67qzh" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842942 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba294358-051a-4f09-b182-710d3d6778c5" volumeName="kubernetes.io/configmap/ba294358-051a-4f09-b182-710d3d6778c5-images" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842964 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9615af2-cad5-4705-9c2f-6f3c97026100" volumeName="kubernetes.io/secret/e9615af2-cad5-4705-9c2f-6f3c97026100-serving-cert" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.842981 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17aaf0e1-e9c7-486c-83fc-47d71f5e1f64" volumeName="kubernetes.io/projected/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-kube-api-access-cdx88" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843017 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="319dc882-e1f5-40f9-99f4-2bae028337e5" volumeName="kubernetes.io/secret/319dc882-e1f5-40f9-99f4-2bae028337e5-apiservice-cert" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843030 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2506c282-0b37-4ece-8a0c-885d0b7f7901" volumeName="kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843048 38936 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e" volumeName="kubernetes.io/secret/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-serving-cert" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843061 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd49e653-3b42-4950-8f5f-2b2ecb683678" volumeName="kubernetes.io/secret/bd49e653-3b42-4950-8f5f-2b2ecb683678-serving-cert" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843116 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cef33294-81fb-41a2-811d-2565f94514d1" volumeName="kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843137 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="230d9624-2d9d-4036-967b-b530347f05d5" volumeName="kubernetes.io/projected/230d9624-2d9d-4036-967b-b530347f05d5-kube-api-access-vqkvs" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843153 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="319dc882-e1f5-40f9-99f4-2bae028337e5" volumeName="kubernetes.io/empty-dir/319dc882-e1f5-40f9-99f4-2bae028337e5-tmpfs" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843195 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="319dc882-e1f5-40f9-99f4-2bae028337e5" volumeName="kubernetes.io/secret/319dc882-e1f5-40f9-99f4-2bae028337e5-webhook-cert" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843210 38936 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="88f19cea-60ed-4977-a906-75deec51fc3d" volumeName="kubernetes.io/configmap/88f19cea-60ed-4977-a906-75deec51fc3d-env-overrides" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843221 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8d56b871-a53a-4928-8967-a33ea9dcec2a" volumeName="kubernetes.io/secret/8d56b871-a53a-4928-8967-a33ea9dcec2a-webhook-certs" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843235 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e0227bc-63f5-48be-95dc-1323a2b2e327" volumeName="kubernetes.io/projected/9e0227bc-63f5-48be-95dc-1323a2b2e327-bound-sa-token" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843246 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2501eec-47c8-47bc-b0c9-28d94c06075b" volumeName="kubernetes.io/projected/d2501eec-47c8-47bc-b0c9-28d94c06075b-kube-api-access-x4djt" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843287 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="230d9624-2d9d-4036-967b-b530347f05d5" volumeName="kubernetes.io/secret/230d9624-2d9d-4036-967b-b530347f05d5-cloud-controller-manager-operator-tls" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843300 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1b61063e-775e-421d-bf73-a6ef134293a0" volumeName="kubernetes.io/secret/1b61063e-775e-421d-bf73-a6ef134293a0-metrics-tls" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843312 38936 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="69785167-b4ae-415b-bdcb-029f62effe78" volumeName="kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-env-overrides" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843325 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7adbe32-b8b9-438e-a2e3-f93146a97424" volumeName="kubernetes.io/configmap/e7adbe32-b8b9-438e-a2e3-f93146a97424-config" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843336 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9bd1f48-6d45-4045-b18e-46ce3005d51d" volumeName="kubernetes.io/secret/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-tls" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843370 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec7dd4ea-a139-45d4-96a4-506da1567292" volumeName="kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843383 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a986ba3-2aea-4133-a05b-f69d4e0d8d3b" volumeName="kubernetes.io/projected/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-ca-certs" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843394 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b648d9e-a892-4951-b0e2-fed6b16273d4" volumeName="kubernetes.io/projected/8b648d9e-a892-4951-b0e2-fed6b16273d4-kube-api-access-sgj2q" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843406 38936 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a4c9b781-14c0-469c-bb9e-0c3982a04520" volumeName="kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-profile-collector-cert" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843417 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f275e79f-923c-4d3a-8ed4-084a122ddcf4" volumeName="kubernetes.io/projected/f275e79f-923c-4d3a-8ed4-084a122ddcf4-kube-api-access-cmn29" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843469 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b02b740-5698-4e9a-90fe-2873bd0b0958" volumeName="kubernetes.io/projected/0b02b740-5698-4e9a-90fe-2873bd0b0958-kube-api-access" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843480 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b648d9e-a892-4951-b0e2-fed6b16273d4" volumeName="kubernetes.io/configmap/8b648d9e-a892-4951-b0e2-fed6b16273d4-images" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843491 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba294358-051a-4f09-b182-710d3d6778c5" volumeName="kubernetes.io/secret/ba294358-051a-4f09-b182-710d3d6778c5-machine-api-operator-tls" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843524 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce229d27-837d-4a98-80fc-d56877ae39b8" volumeName="kubernetes.io/empty-dir/ce229d27-837d-4a98-80fc-d56877ae39b8-utilities" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843540 38936 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2501eec-47c8-47bc-b0c9-28d94c06075b" volumeName="kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-etcd-serving-ca" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843559 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9615af2-cad5-4705-9c2f-6f3c97026100" volumeName="kubernetes.io/empty-dir/e9615af2-cad5-4705-9c2f-6f3c97026100-snapshots" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843573 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb1eac23-18a5-4706-adcd-81d83e04cd12" volumeName="kubernetes.io/projected/fb1eac23-18a5-4706-adcd-81d83e04cd12-kube-api-access-8vcsp" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843583 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="408a9364-3730-4017-b1e4-c85d6a504168" volumeName="kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-client-ca" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843628 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4b035e85-b2b0-4dee-bb86-3465fc4b98a8" volumeName="kubernetes.io/projected/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-kube-api-access-g7nmb" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843723 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="88f19cea-60ed-4977-a906-75deec51fc3d" volumeName="kubernetes.io/configmap/88f19cea-60ed-4977-a906-75deec51fc3d-ovnkube-identity-cm" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843740 
38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e8194cdc-3133-49e2-9579-a747c0bf2b16" volumeName="kubernetes.io/projected/e8194cdc-3133-49e2-9579-a747c0bf2b16-ca-certs" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843788 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34743ce3-5eda-4c60-99cb-640dd067ebdf" volumeName="kubernetes.io/projected/34743ce3-5eda-4c60-99cb-640dd067ebdf-kube-api-access-vzm2t" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843804 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5d4ac48-aed3-46b9-9b2a-d741121e05b4" volumeName="kubernetes.io/projected/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-kube-api-access" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843818 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b28234d1-1d9a-4d9f-9ad1-e3c682bed492" volumeName="kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843849 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c7333319-3fe6-4b3f-b600-6b6df49fcaff" volumeName="kubernetes.io/secret/c7333319-3fe6-4b3f-b600-6b6df49fcaff-serving-cert" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843865 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7adbe32-b8b9-438e-a2e3-f93146a97424" volumeName="kubernetes.io/secret/e7adbe32-b8b9-438e-a2e3-f93146a97424-serving-cert" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843882 
38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e0227bc-63f5-48be-95dc-1323a2b2e327" volumeName="kubernetes.io/projected/9e0227bc-63f5-48be-95dc-1323a2b2e327-kube-api-access-z9vmp" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843893 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03a5021d-8a5c-4011-a9f9-c5eb38d5f236" volumeName="kubernetes.io/secret/03a5021d-8a5c-4011-a9f9-c5eb38d5f236-cloud-credential-operator-serving-cert" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843946 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="695549c8-d1fc-429d-9c9f-0a5915dc6074" volumeName="kubernetes.io/projected/695549c8-d1fc-429d-9c9f-0a5915dc6074-kube-api-access-7bcmr" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843959 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a4c9b781-14c0-469c-bb9e-0c3982a04520" volumeName="kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.843984 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5d4ac48-aed3-46b9-9b2a-d741121e05b4" volumeName="kubernetes.io/secret/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-serving-cert" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.844024 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce229d27-837d-4a98-80fc-d56877ae39b8" volumeName="kubernetes.io/projected/ce229d27-837d-4a98-80fc-d56877ae39b8-kube-api-access-dcwzq" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: 
I0216 21:22:49.844044 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="408a9364-3730-4017-b1e4-c85d6a504168" volumeName="kubernetes.io/projected/408a9364-3730-4017-b1e4-c85d6a504168-kube-api-access-lvw2m" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.844067 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99ab949e-bd0d-45a7-95d1-8381d9f1f5f3" volumeName="kubernetes.io/secret/99ab949e-bd0d-45a7-95d1-8381d9f1f5f3-signing-key" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.844123 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7b30888-5994-4968-9db6-9533ac60c92e" volumeName="kubernetes.io/configmap/f7b30888-5994-4968-9db6-9533ac60c92e-metrics-client-ca" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.844142 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62935559-041f-4694-9d36-adc809d079b4" volumeName="kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-cni-sysctl-allowlist" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.844155 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0b7a368-1408-4fc3-ae25-4613b74e7fca" volumeName="kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.844194 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" volumeName="kubernetes.io/configmap/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-service-ca-bundle" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 
kubenswrapper[38936]: I0216 21:22:49.844220 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f275e79f-923c-4d3a-8ed4-084a122ddcf4" volumeName="kubernetes.io/empty-dir/f275e79f-923c-4d3a-8ed4-084a122ddcf4-catalog-content" seLinuxMountContext="" Feb 16 21:22:49.844124 master-0 kubenswrapper[38936]: I0216 21:22:49.844235 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2506c282-0b37-4ece-8a0c-885d0b7f7901" volumeName="kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.844289 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17aaf0e1-e9c7-486c-83fc-47d71f5e1f64" volumeName="kubernetes.io/empty-dir/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-tuned" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.844302 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d7d0416-5f50-42bd-826b-92eecf9adcec" volumeName="kubernetes.io/configmap/1d7d0416-5f50-42bd-826b-92eecf9adcec-auth-proxy-config" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.844317 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b" volumeName="kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.844375 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1a986ba3-2aea-4133-a05b-f69d4e0d8d3b" volumeName="kubernetes.io/empty-dir/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-cache" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: 
I0216 21:22:49.844444 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="27c20f63-9bfb-4703-94d5-0c65475e08d1" volumeName="kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-service-ca-bundle" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.844478 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="695549c8-d1fc-429d-9c9f-0a5915dc6074" volumeName="kubernetes.io/configmap/695549c8-d1fc-429d-9c9f-0a5915dc6074-config" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.844517 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e0227bc-63f5-48be-95dc-1323a2b2e327" volumeName="kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.844555 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cef33294-81fb-41a2-811d-2565f94514d1" volumeName="kubernetes.io/projected/cef33294-81fb-41a2-811d-2565f94514d1-kube-api-access-5tklr" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.844611 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2501eec-47c8-47bc-b0c9-28d94c06075b" volumeName="kubernetes.io/secret/d2501eec-47c8-47bc-b0c9-28d94c06075b-etcd-client" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.844628 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff193060-a272-4e4e-990a-83ac410f523d" volumeName="kubernetes.io/configmap/ff193060-a272-4e4e-990a-83ac410f523d-auth-proxy-config" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 
kubenswrapper[38936]: I0216 21:22:49.844643 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b02b740-5698-4e9a-90fe-2873bd0b0958" volumeName="kubernetes.io/configmap/0b02b740-5698-4e9a-90fe-2873bd0b0958-config" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.844783 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c7333319-3fe6-4b3f-b600-6b6df49fcaff" volumeName="kubernetes.io/configmap/c7333319-3fe6-4b3f-b600-6b6df49fcaff-config" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.846151 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b27de289-c0f9-47ff-aac6-15b7bc1b178a" volumeName="kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.846166 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e" volumeName="kubernetes.io/projected/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-kube-api-access" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.846217 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd49e653-3b42-4950-8f5f-2b2ecb683678" volumeName="kubernetes.io/configmap/bd49e653-3b42-4950-8f5f-2b2ecb683678-trusted-ca-bundle" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.846228 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2501eec-47c8-47bc-b0c9-28d94c06075b" volumeName="kubernetes.io/secret/d2501eec-47c8-47bc-b0c9-28d94c06075b-serving-cert" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 
21:22:49.846249 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e8194cdc-3133-49e2-9579-a747c0bf2b16" volumeName="kubernetes.io/projected/e8194cdc-3133-49e2-9579-a747c0bf2b16-kube-api-access-hxvhm" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.846263 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd" volumeName="kubernetes.io/projected/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-kube-api-access-p7wrr" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.846293 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="88f19cea-60ed-4977-a906-75deec51fc3d" volumeName="kubernetes.io/secret/88f19cea-60ed-4977-a906-75deec51fc3d-webhook-cert" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.846308 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d8d648c7-b84b-4f43-84c9-903aead0891a" volumeName="kubernetes.io/empty-dir/d8d648c7-b84b-4f43-84c9-903aead0891a-catalog-content" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.846327 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7b30888-5994-4968-9db6-9533ac60c92e" volumeName="kubernetes.io/projected/f7b30888-5994-4968-9db6-9533ac60c92e-kube-api-access-fbfdg" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.846338 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b" volumeName="kubernetes.io/projected/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-kube-api-access-vddxb" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 
kubenswrapper[38936]: I0216 21:22:49.846400 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba294358-051a-4f09-b182-710d3d6778c5" volumeName="kubernetes.io/projected/ba294358-051a-4f09-b182-710d3d6778c5-kube-api-access-qf2w4" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.846412 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb1eac23-18a5-4706-adcd-81d83e04cd12" volumeName="kubernetes.io/configmap/fb1eac23-18a5-4706-adcd-81d83e04cd12-mcc-auth-proxy-config" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.846460 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5e062e07-8076-444c-b476-4eb2848e9613" volumeName="kubernetes.io/secret/5e062e07-8076-444c-b476-4eb2848e9613-cluster-olm-operator-serving-cert" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.846472 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e0227bc-63f5-48be-95dc-1323a2b2e327" volumeName="kubernetes.io/configmap/9e0227bc-63f5-48be-95dc-1323a2b2e327-trusted-ca" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.846483 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b6be6de-6fcc-4f57-b163-fe8f970a01a4" volumeName="kubernetes.io/projected/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-kube-api-access-mkz65" seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.846495 38936 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a9f4f96-ca31-4959-93fe-c094caf8e077" volumeName="kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-server-tls" 
seLinuxMountContext="" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.846550 38936 reconstruct.go:97] "Volume reconstruction finished" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.846564 38936 reconciler.go:26] "Reconciler: start to sync state" Feb 16 21:22:49.850910 master-0 kubenswrapper[38936]: I0216 21:22:49.849745 38936 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 16 21:22:49.870372 master-0 kubenswrapper[38936]: I0216 21:22:49.870280 38936 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 16 21:22:49.872704 master-0 kubenswrapper[38936]: I0216 21:22:49.872673 38936 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 16 21:22:49.872790 master-0 kubenswrapper[38936]: I0216 21:22:49.872713 38936 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 16 21:22:49.872790 master-0 kubenswrapper[38936]: I0216 21:22:49.872736 38936 kubelet.go:2335] "Starting kubelet main sync loop" Feb 16 21:22:49.872872 master-0 kubenswrapper[38936]: E0216 21:22:49.872800 38936 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 16 21:22:49.875717 master-0 kubenswrapper[38936]: I0216 21:22:49.875651 38936 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 21:22:49.884162 master-0 kubenswrapper[38936]: I0216 21:22:49.882385 38936 generic.go:334] "Generic (PLEG): container finished" podID="ebeb6876-0438-4961-a62a-68b41a676f17" containerID="ba4091698915c4aa641aec2c8b4b82e0a58aec68f9f33e7955121f8e822a443d" exitCode=0 Feb 16 21:22:49.896238 master-0 kubenswrapper[38936]: I0216 21:22:49.896179 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_7fc3abc9-3012-43bd-af84-fc65baf82801/installer/0.log" Feb 16 
21:22:49.896449 master-0 kubenswrapper[38936]: I0216 21:22:49.896241 38936 generic.go:334] "Generic (PLEG): container finished" podID="7fc3abc9-3012-43bd-af84-fc65baf82801" containerID="7705ab1783cfe260a257da3d99d4c43b8aa6602286bbd8b5854c2a525ae4f204" exitCode=1 Feb 16 21:22:49.907814 master-0 kubenswrapper[38936]: I0216 21:22:49.907496 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-7p9ft_7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e/kube-controller-manager-operator/5.log" Feb 16 21:22:49.907814 master-0 kubenswrapper[38936]: I0216 21:22:49.907564 38936 generic.go:334] "Generic (PLEG): container finished" podID="7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e" containerID="4b9eed56cd9de27df8732f0bf589198f3bec398bab1ee5d8d5d4047198bdc2b3" exitCode=1 Feb 16 21:22:49.916705 master-0 kubenswrapper[38936]: I0216 21:22:49.916626 38936 generic.go:334] "Generic (PLEG): container finished" podID="3d416d98-ee7c-4481-9721-861ccd91685d" containerID="8bbcb4e0fb94b168b2c18c0ad45486fda3e89c4340348d1ee5d8cea24b562c67" exitCode=0 Feb 16 21:22:49.921243 master-0 kubenswrapper[38936]: I0216 21:22:49.921210 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-k42w9_695549c8-d1fc-429d-9c9f-0a5915dc6074/openshift-controller-manager-operator/4.log" Feb 16 21:22:49.921345 master-0 kubenswrapper[38936]: I0216 21:22:49.921245 38936 generic.go:334] "Generic (PLEG): container finished" podID="695549c8-d1fc-429d-9c9f-0a5915dc6074" containerID="abce7c467580f27265b653bd89f53e6e0d6413f3687b039b9f58c8dd18d3f0ce" exitCode=255 Feb 16 21:22:49.926918 master-0 kubenswrapper[38936]: I0216 21:22:49.926875 38936 generic.go:334] "Generic (PLEG): container finished" podID="0cecc93e-bb0e-47da-903f-d0b63cce2b0d" containerID="8df27f209e925f58d0b4923f79cdb9bec01f45d38cbc22684566e7e609148bab" exitCode=0 Feb 16 21:22:49.929589 
master-0 kubenswrapper[38936]: I0216 21:22:49.929551 38936 generic.go:334] "Generic (PLEG): container finished" podID="1f8a26db-5a90-4da9-9074-33256ef17100" containerID="f3ca6870e03df61b2f0b4d124dc1734d96c0b5c71852fc980d271a8f385f1958" exitCode=0 Feb 16 21:22:49.932465 master-0 kubenswrapper[38936]: I0216 21:22:49.932442 38936 generic.go:334] "Generic (PLEG): container finished" podID="2cf5e26c-84a2-45c6-b7dc-ee96dad23175" containerID="912bdb89c47c0c84a626b5915d0082c84d6ad6cfcb759d646e64bf4849456d1f" exitCode=0 Feb 16 21:22:49.934933 master-0 kubenswrapper[38936]: I0216 21:22:49.934910 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-75b869db96-g4w5m_aa2e9bbc-3962-45f5-a7cc-2dc059409e70/cluster-storage-operator/2.log" Feb 16 21:22:49.935102 master-0 kubenswrapper[38936]: I0216 21:22:49.934939 38936 generic.go:334] "Generic (PLEG): container finished" podID="aa2e9bbc-3962-45f5-a7cc-2dc059409e70" containerID="86b2625e01e86e20ad843cc517b662e8d0574773dfe24c22fbbf50abc8c0ea7f" exitCode=255 Feb 16 21:22:49.946833 master-0 kubenswrapper[38936]: I0216 21:22:49.946793 38936 generic.go:334] "Generic (PLEG): container finished" podID="69785167-b4ae-415b-bdcb-029f62effe78" containerID="d7022d510b5111f523030386d2b2e3f81b8551ed9e8be0ecf6a80ac34378ca5e" exitCode=0 Feb 16 21:22:49.949315 master-0 kubenswrapper[38936]: I0216 21:22:49.949291 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c696dbdcd-9m94g_4b035e85-b2b0-4dee-bb86-3465fc4b98a8/package-server-manager/1.log" Feb 16 21:22:49.949623 master-0 kubenswrapper[38936]: I0216 21:22:49.949599 38936 generic.go:334] "Generic (PLEG): container finished" podID="4b035e85-b2b0-4dee-bb86-3465fc4b98a8" containerID="fa5e5b86ee6d022e914514c6e1b9bc40b0ded23b4d78a78dbc84ca8df5d3a2bd" exitCode=1 Feb 16 21:22:49.952707 master-0 kubenswrapper[38936]: I0216 21:22:49.952684 38936 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-xzww8_e7adbe32-b8b9-438e-a2e3-f93146a97424/kube-scheduler-operator-container/3.log" Feb 16 21:22:49.952778 master-0 kubenswrapper[38936]: I0216 21:22:49.952722 38936 generic.go:334] "Generic (PLEG): container finished" podID="e7adbe32-b8b9-438e-a2e3-f93146a97424" containerID="6a7d7b13e17869969e9d31d79faa72dfb3a8d8453f67a2323e3dc0a1300a1e65" exitCode=255 Feb 16 21:22:49.954603 master-0 kubenswrapper[38936]: I0216 21:22:49.954192 38936 generic.go:334] "Generic (PLEG): container finished" podID="1677883f-bae2-4b6e-9dfe-683a6d26f2c5" containerID="b251b8636a6a11ccf532a9af9a8852c95e1a7cdd48031754c8a88d40620a2450" exitCode=0 Feb 16 21:22:49.957550 master-0 kubenswrapper[38936]: I0216 21:22:49.957427 38936 generic.go:334] "Generic (PLEG): container finished" podID="9e0227bc-63f5-48be-95dc-1323a2b2e327" containerID="a7330b931340d1be5dba0fd54e8b246009c00f6e813142a46ee5264b4ff67461" exitCode=0 Feb 16 21:22:49.964420 master-0 kubenswrapper[38936]: I0216 21:22:49.964376 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/6.log" Feb 16 21:22:49.964776 master-0 kubenswrapper[38936]: I0216 21:22:49.964745 38936 generic.go:334] "Generic (PLEG): container finished" podID="cef33294-81fb-41a2-811d-2565f94514d1" containerID="cddc9c1d447dc5a0250ef24bddae48c93c58b480b6bca11a2ff7438d4148bf8f" exitCode=1 Feb 16 21:22:49.968800 master-0 kubenswrapper[38936]: I0216 21:22:49.968780 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-56v4p_c7333319-3fe6-4b3f-b600-6b6df49fcaff/kube-storage-version-migrator-operator/5.log" Feb 16 21:22:49.968864 master-0 kubenswrapper[38936]: I0216 21:22:49.968810 38936 generic.go:334] "Generic (PLEG): 
container finished" podID="c7333319-3fe6-4b3f-b600-6b6df49fcaff" containerID="220f76e0bb64fd419313cb573cd97bbb54f9d2b5998f9525c7d9045abc13cfb5" exitCode=1 Feb 16 21:22:49.972276 master-0 kubenswrapper[38936]: I0216 21:22:49.972256 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb_ba294358-051a-4f09-b182-710d3d6778c5/machine-api-operator/0.log" Feb 16 21:22:49.972514 master-0 kubenswrapper[38936]: I0216 21:22:49.972491 38936 generic.go:334] "Generic (PLEG): container finished" podID="ba294358-051a-4f09-b182-710d3d6778c5" containerID="c7880afa219acb0ac5e4138682f8fc8b3e3931790fad2a804808d6e2f5933f3f" exitCode=255 Feb 16 21:22:49.973150 master-0 kubenswrapper[38936]: E0216 21:22:49.973121 38936 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 16 21:22:49.978157 master-0 kubenswrapper[38936]: I0216 21:22:49.978138 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-tpj6f_88f19cea-60ed-4977-a906-75deec51fc3d/approver/1.log" Feb 16 21:22:49.978508 master-0 kubenswrapper[38936]: I0216 21:22:49.978465 38936 generic.go:334] "Generic (PLEG): container finished" podID="88f19cea-60ed-4977-a906-75deec51fc3d" containerID="035e7d01b329ab00b5fb0dd3b6a5b55ee6bd504dee86517456bdcc1b06cd6e19" exitCode=1 Feb 16 21:22:49.980480 master-0 kubenswrapper[38936]: I0216 21:22:49.980452 38936 generic.go:334] "Generic (PLEG): container finished" podID="d8d648c7-b84b-4f43-84c9-903aead0891a" containerID="8510067c1b5f7cbc40f7c23faf036a1b9404f3ea036ff9582a8f6c06389e7238" exitCode=0 Feb 16 21:22:49.980480 master-0 kubenswrapper[38936]: I0216 21:22:49.980478 38936 generic.go:334] "Generic (PLEG): container finished" podID="d8d648c7-b84b-4f43-84c9-903aead0891a" containerID="fa3ed852335cb1ddfb20c47ba698ccaa6874c674cd87c8ada57d89856c7d37fd" exitCode=0 Feb 16 21:22:49.982124 master-0 
kubenswrapper[38936]: I0216 21:22:49.982094 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-8gnq5_27c20f63-9bfb-4703-94d5-0c65475e08d1/authentication-operator/6.log" Feb 16 21:22:49.982178 master-0 kubenswrapper[38936]: I0216 21:22:49.982127 38936 generic.go:334] "Generic (PLEG): container finished" podID="27c20f63-9bfb-4703-94d5-0c65475e08d1" containerID="cbff59f9a87f22154ac16be0a1fd4153598047d145747da8c5ad418b6de5b9ba" exitCode=1 Feb 16 21:22:49.985743 master-0 kubenswrapper[38936]: I0216 21:22:49.985717 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-q5vjl_2ab0a907-7abe-4808-ba21-bdda1506eae2/service-ca-operator/3.log" Feb 16 21:22:49.985806 master-0 kubenswrapper[38936]: I0216 21:22:49.985758 38936 generic.go:334] "Generic (PLEG): container finished" podID="2ab0a907-7abe-4808-ba21-bdda1506eae2" containerID="715050d13195531641370ad04c7754b8cef8bb72e0896de25aaafb35a02054c9" exitCode=255 Feb 16 21:22:49.987116 master-0 kubenswrapper[38936]: I0216 21:22:49.987097 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_e300ec3a145c1339a627607b3c84b99d/kube-apiserver-check-endpoints/0.log" Feb 16 21:22:49.998357 master-0 kubenswrapper[38936]: I0216 21:22:49.998298 38936 generic.go:334] "Generic (PLEG): container finished" podID="e300ec3a145c1339a627607b3c84b99d" containerID="8a83fac7d6d5ae1a1f48df3b9f649957515ab488499c5a4e72d3372e82e2e891" exitCode=0 Feb 16 21:22:50.003887 master-0 kubenswrapper[38936]: I0216 21:22:50.003833 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-67bf55ccdd-8cllz_70d217a9-86b7-47b9-a7da-9ac920b9c7c2/etcd-operator/3.log" Feb 16 21:22:50.004043 master-0 kubenswrapper[38936]: I0216 21:22:50.003906 38936 generic.go:334] "Generic (PLEG): container finished" 
podID="70d217a9-86b7-47b9-a7da-9ac920b9c7c2" containerID="316bcd2b73e15fab60d8618d92eb77f101f2f53e423adb64b0f374a1f7fcda3a" exitCode=137 Feb 16 21:22:50.010194 master-0 kubenswrapper[38936]: I0216 21:22:50.010159 38936 generic.go:334] "Generic (PLEG): container finished" podID="f275e79f-923c-4d3a-8ed4-084a122ddcf4" containerID="8e09cadaa280b2142d1e553cf5915c3779b8daaeed82dcb8adbf18accee60298" exitCode=0 Feb 16 21:22:50.010194 master-0 kubenswrapper[38936]: I0216 21:22:50.010189 38936 generic.go:334] "Generic (PLEG): container finished" podID="f275e79f-923c-4d3a-8ed4-084a122ddcf4" containerID="a976e4b82843842a71c3126eb2ebdd642e517cc73242b40b185d375d47043cde" exitCode=0 Feb 16 21:22:50.011665 master-0 kubenswrapper[38936]: I0216 21:22:50.011625 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-ff6c9b66-kh4d4_2506c282-0b37-4ece-8a0c-885d0b7f7901/cluster-node-tuning-operator/1.log" Feb 16 21:22:50.011749 master-0 kubenswrapper[38936]: I0216 21:22:50.011677 38936 generic.go:334] "Generic (PLEG): container finished" podID="2506c282-0b37-4ece-8a0c-885d0b7f7901" containerID="c78e5502c7df20a63c6e359691ad6478f7f26c7822d2c31d3780654e26b107fb" exitCode=1 Feb 16 21:22:50.013294 master-0 kubenswrapper[38936]: I0216 21:22:50.013250 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-cl5ld_0b02b740-5698-4e9a-90fe-2873bd0b0958/kube-apiserver-operator/4.log" Feb 16 21:22:50.013359 master-0 kubenswrapper[38936]: I0216 21:22:50.013295 38936 generic.go:334] "Generic (PLEG): container finished" podID="0b02b740-5698-4e9a-90fe-2873bd0b0958" containerID="71d2f873a3383c5d4e4ea361c9b4723201e4600cb1f7ea3ef5cecd7778b39d86" exitCode=1 Feb 16 21:22:50.015033 master-0 kubenswrapper[38936]: I0216 21:22:50.014999 38936 generic.go:334] "Generic (PLEG): container finished" podID="319dc882-e1f5-40f9-99f4-2bae028337e5" 
containerID="203b091a662b4912838a798e07794a8caa755508028a6b4fa5f1ef8b83de89af" exitCode=0 Feb 16 21:22:50.016732 master-0 kubenswrapper[38936]: I0216 21:22:50.016713 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-tvzdw_6b6be6de-6fcc-4f57-b163-fe8f970a01a4/openshift-apiserver-operator/3.log" Feb 16 21:22:50.016783 master-0 kubenswrapper[38936]: I0216 21:22:50.016740 38936 generic.go:334] "Generic (PLEG): container finished" podID="6b6be6de-6fcc-4f57-b163-fe8f970a01a4" containerID="d0e5f8a907c4851af3bce655e141083b0f633fdfa41c5abacbb48a7df33f9e94" exitCode=255 Feb 16 21:22:50.018744 master-0 kubenswrapper[38936]: I0216 21:22:50.018705 38936 generic.go:334] "Generic (PLEG): container finished" podID="b8fa563c7331931f00ce0006e522f0f1" containerID="432794b20c117ef5563701790110e26447eca7921c053c44497fb8bd396c6901" exitCode=0 Feb 16 21:22:50.020332 master-0 kubenswrapper[38936]: I0216 21:22:50.020295 38936 generic.go:334] "Generic (PLEG): container finished" podID="b28234d1-1d9a-4d9f-9ad1-e3c682bed492" containerID="4255d701755ee16eefc4f64ff2a1d87789d35c023038a0daf9f7cd0b69fb26a7" exitCode=0 Feb 16 21:22:50.021943 master-0 kubenswrapper[38936]: I0216 21:22:50.021920 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965/installer/0.log" Feb 16 21:22:50.022032 master-0 kubenswrapper[38936]: I0216 21:22:50.021951 38936 generic.go:334] "Generic (PLEG): container finished" podID="9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965" containerID="5f4f1f7bf4711de84107b1c6040a91b2b71847aa5f151a70149a5a43fdbb16fc" exitCode=1 Feb 16 21:22:50.023904 master-0 kubenswrapper[38936]: I0216 21:22:50.023877 38936 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="6ae1597534c852a1aae5585dadba4c16b6d817d6984c35ca98940b0dfe1fcd77" exitCode=0 Feb 16 21:22:50.023904 master-0 
kubenswrapper[38936]: I0216 21:22:50.023904 38936 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="6dfa6b8d2b84acd49a7559619cbb2034fe2294937bd8d4e0f86679d02bd2078a" exitCode=0 Feb 16 21:22:50.029030 master-0 kubenswrapper[38936]: I0216 21:22:50.028958 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-d8bf84b88-8pqbl_302156cc-9dca-4a66-9e6a-ba2c7e738c92/control-plane-machine-set-operator/1.log" Feb 16 21:22:50.029030 master-0 kubenswrapper[38936]: I0216 21:22:50.028994 38936 generic.go:334] "Generic (PLEG): container finished" podID="302156cc-9dca-4a66-9e6a-ba2c7e738c92" containerID="cf5bd07d44ef1049857af620840ed7780e94db377ae50a689034fcd0589dd325" exitCode=1 Feb 16 21:22:50.030778 master-0 kubenswrapper[38936]: I0216 21:22:50.030756 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pc6x9_b1ac9776-54c4-46ce-b898-01c8cf35e593/snapshot-controller/5.log" Feb 16 21:22:50.030850 master-0 kubenswrapper[38936]: I0216 21:22:50.030791 38936 generic.go:334] "Generic (PLEG): container finished" podID="b1ac9776-54c4-46ce-b898-01c8cf35e593" containerID="473abb156ae2a59c96465c39d4a668c4215a0ddadc4067a2a5c3edc0e671f3a6" exitCode=1 Feb 16 21:22:50.034923 master-0 kubenswrapper[38936]: I0216 21:22:50.034900 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-6fcf4c966-n4hfs_1b61063e-775e-421d-bf73-a6ef134293a0/network-operator/4.log" Feb 16 21:22:50.034992 master-0 kubenswrapper[38936]: I0216 21:22:50.034939 38936 generic.go:334] "Generic (PLEG): container finished" podID="1b61063e-775e-421d-bf73-a6ef134293a0" containerID="aab44606d671f216ff3793ef915c84f815301082904e4bc4a12b70d23d7c13c3" exitCode=1 Feb 16 21:22:50.036671 master-0 kubenswrapper[38936]: I0216 21:22:50.036623 38936 generic.go:334] "Generic (PLEG): 
container finished" podID="7d6eb694-9a3d-49d1-bbc1-74ba4450d673" containerID="35aeddbd3b02ea16608fbe6dfea1fa7dc35fe8b876f2fa1fba3cfd614e5815c0" exitCode=0 Feb 16 21:22:50.038196 master-0 kubenswrapper[38936]: I0216 21:22:50.038172 38936 generic.go:334] "Generic (PLEG): container finished" podID="700bc24c-4b00-44f0-90b0-aa555fe5c7a8" containerID="fa302e5e493b2dfa58bae20f0ca7e4cc187d6d95bf769b99faf796dd889e114f" exitCode=0 Feb 16 21:22:50.040388 master-0 kubenswrapper[38936]: I0216 21:22:50.040354 38936 generic.go:334] "Generic (PLEG): container finished" podID="ff193060-a272-4e4e-990a-83ac410f523d" containerID="f5d1b2f95d0f407ab1fdd5eb9fe9deae1b8e8d536d017cfe9a03861815d4f96a" exitCode=0 Feb 16 21:22:50.042073 master-0 kubenswrapper[38936]: I0216 21:22:50.042052 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-xwptz_8b648d9e-a892-4951-b0e2-fed6b16273d4/cluster-baremetal-operator/5.log" Feb 16 21:22:50.042340 master-0 kubenswrapper[38936]: I0216 21:22:50.042317 38936 generic.go:334] "Generic (PLEG): container finished" podID="8b648d9e-a892-4951-b0e2-fed6b16273d4" containerID="6a46714853e2a885d7f0ea06667526f3f7b240b0bd635da8d5cae43fd1dadc87" exitCode=1 Feb 16 21:22:50.044149 master-0 kubenswrapper[38936]: I0216 21:22:50.044128 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-55b69c6c48-pdjn4_5e062e07-8076-444c-b476-4eb2848e9613/cluster-olm-operator/2.log" Feb 16 21:22:50.050429 master-0 kubenswrapper[38936]: I0216 21:22:50.050392 38936 generic.go:334] "Generic (PLEG): container finished" podID="5e062e07-8076-444c-b476-4eb2848e9613" containerID="b805375f7b42f31b0863c18246ff6bd98c4c77aa1ad1eb2b469a42772d48301d" exitCode=255 Feb 16 21:22:50.050429 master-0 kubenswrapper[38936]: I0216 21:22:50.050417 38936 generic.go:334] "Generic (PLEG): container finished" podID="5e062e07-8076-444c-b476-4eb2848e9613" 
containerID="9949cb3f0ffb40ac03674e827a655fd8962fd631e7432c2ead34043e0e4d8864" exitCode=0 Feb 16 21:22:50.050429 master-0 kubenswrapper[38936]: I0216 21:22:50.050426 38936 generic.go:334] "Generic (PLEG): container finished" podID="5e062e07-8076-444c-b476-4eb2848e9613" containerID="30eb3e8a1a561e4df2b728e0e98a6145e2dd7a64784f0071e688e9e9f5cc6bbc" exitCode=0 Feb 16 21:22:50.051876 master-0 kubenswrapper[38936]: I0216 21:22:50.051851 38936 generic.go:334] "Generic (PLEG): container finished" podID="4cc1da27-6eaf-4177-b2d8-1546a9d94f90" containerID="b5c9ef27352d95c27da1fd4de0d350f8371e4f69cc5b84960004238d748e1ab6" exitCode=0 Feb 16 21:22:50.056392 master-0 kubenswrapper[38936]: I0216 21:22:50.056359 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7c6bdb986f-xbd96_59237aa6-6250-4619-8ee5-abae59f04b57/openshift-config-operator/6.log" Feb 16 21:22:50.056689 master-0 kubenswrapper[38936]: I0216 21:22:50.056668 38936 generic.go:334] "Generic (PLEG): container finished" podID="59237aa6-6250-4619-8ee5-abae59f04b57" containerID="0715c2c6bc16d3adc1361563ad51b4de11f77937d1f51eb61f3cd34b96856d0c" exitCode=137 Feb 16 21:22:50.056689 master-0 kubenswrapper[38936]: I0216 21:22:50.056687 38936 generic.go:334] "Generic (PLEG): container finished" podID="59237aa6-6250-4619-8ee5-abae59f04b57" containerID="61defc533791601dd8ff505e57b675aac367c1fe0144fefa77509ab84c3b3331" exitCode=0 Feb 16 21:22:50.061370 master-0 kubenswrapper[38936]: I0216 21:22:50.061322 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_b09d3c16-18e3-45b3-9d39-949d2464b300/installer/0.log" Feb 16 21:22:50.061519 master-0 kubenswrapper[38936]: I0216 21:22:50.061386 38936 generic.go:334] "Generic (PLEG): container finished" podID="b09d3c16-18e3-45b3-9d39-949d2464b300" containerID="ab3f1bdaa87534b4aa1ea4a058dea3457c695cfe1da23ed41ae2ee089315bd08" exitCode=1 Feb 16 21:22:50.063398 
master-0 kubenswrapper[38936]: I0216 21:22:50.063374 38936 generic.go:334] "Generic (PLEG): container finished" podID="bd49e653-3b42-4950-8f5f-2b2ecb683678" containerID="68de2e1ab2cad0885d92d9f27ce9e9ae8699ab2a4e1f40736fffa8de720860f7" exitCode=0 Feb 16 21:22:50.067295 master-0 kubenswrapper[38936]: I0216 21:22:50.067229 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-67fd9768b5-557vd_1d7d0416-5f50-42bd-826b-92eecf9adcec/cluster-autoscaler-operator/0.log" Feb 16 21:22:50.072320 master-0 kubenswrapper[38936]: I0216 21:22:50.072258 38936 generic.go:334] "Generic (PLEG): container finished" podID="1d7d0416-5f50-42bd-826b-92eecf9adcec" containerID="2805492f11ff17f7e51a6fba30471dee89ec93e40bd6ce6db4b158be70c75964" exitCode=255 Feb 16 21:22:50.080145 master-0 kubenswrapper[38936]: I0216 21:22:50.080086 38936 generic.go:334] "Generic (PLEG): container finished" podID="484154d0-66c8-4d0e-bf1b-f48d0abfe628" containerID="784108aeefea86df821b8787cc4aa96e0a0d0b443e8ed52de36e36ad7f22bb5e" exitCode=0 Feb 16 21:22:50.081676 master-0 kubenswrapper[38936]: I0216 21:22:50.081620 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/2.log" Feb 16 21:22:50.082286 master-0 kubenswrapper[38936]: I0216 21:22:50.082259 38936 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="34cedb032f29de87a57c244cfdac89c6368a83bd489ea19dfd7e57624682d8a7" exitCode=1 Feb 16 21:22:50.082286 master-0 kubenswrapper[38936]: I0216 21:22:50.082279 38936 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="1c9bfe3aaee57fe250198f3484327052043637146bacc2e7c8dfb22afd3d4c6c" exitCode=0 Feb 16 21:22:50.088555 master-0 kubenswrapper[38936]: I0216 21:22:50.088529 38936 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-85c9b89969-qzs2g_1a986ba3-2aea-4133-a05b-f69d4e0d8d3b/manager/1.log" Feb 16 21:22:50.088974 master-0 kubenswrapper[38936]: I0216 21:22:50.088942 38936 generic.go:334] "Generic (PLEG): container finished" podID="1a986ba3-2aea-4133-a05b-f69d4e0d8d3b" containerID="073bfd97b3802cf7e422558b7f0d96ac1c7a887d6a785fb5000fa99850a0b06e" exitCode=1 Feb 16 21:22:50.090725 master-0 kubenswrapper[38936]: I0216 21:22:50.090698 38936 generic.go:334] "Generic (PLEG): container finished" podID="853452fb-1035-4f22-8aeb-9043d150e8ca" containerID="ffbe844a2ffc7eee14e6cfe4f85b6f3a2d4632e0cd257a400a32c1667a3dc025" exitCode=0 Feb 16 21:22:50.090725 master-0 kubenswrapper[38936]: I0216 21:22:50.090714 38936 generic.go:334] "Generic (PLEG): container finished" podID="853452fb-1035-4f22-8aeb-9043d150e8ca" containerID="8b2a92ef4f9f721811b4bae1b0d025f01e55ec1f259a078142245e8b2ab55dd5" exitCode=0 Feb 16 21:22:50.092600 master-0 kubenswrapper[38936]: I0216 21:22:50.092581 38936 generic.go:334] "Generic (PLEG): container finished" podID="d2501eec-47c8-47bc-b0c9-28d94c06075b" containerID="fac6599aca0de28d90bc133433b080122ce047275bd07a83287cf6be8f57463e" exitCode=0 Feb 16 21:22:50.094108 master-0 kubenswrapper[38936]: I0216 21:22:50.094088 38936 generic.go:334] "Generic (PLEG): container finished" podID="408a9364-3730-4017-b1e4-c85d6a504168" containerID="ec8ce2b77f9d3d1712f1d9e5d59ca2196200eb54635d01b0d1caf94494809751" exitCode=0 Feb 16 21:22:50.095823 master-0 kubenswrapper[38936]: I0216 21:22:50.095788 38936 generic.go:334] "Generic (PLEG): container finished" podID="b27de289-c0f9-47ff-aac6-15b7bc1b178a" containerID="7e2db6d71a3ac7629c39a027759be84deb42e9801284908e0ecc941bc1381254" exitCode=0 Feb 16 21:22:50.100192 master-0 kubenswrapper[38936]: I0216 21:22:50.100171 38936 generic.go:334] "Generic (PLEG): container finished" podID="a5d4ac48-aed3-46b9-9b2a-d741121e05b4" 
containerID="22be26c79a1d2adc3db5f6e113ba92cfcf47f9a286ce35fb6273d18f0ea1545e" exitCode=0 Feb 16 21:22:50.102488 master-0 kubenswrapper[38936]: I0216 21:22:50.102458 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-676cd8b9b5-cbj2r_99ab949e-bd0d-45a7-95d1-8381d9f1f5f3/service-ca-controller/1.log" Feb 16 21:22:50.102539 master-0 kubenswrapper[38936]: I0216 21:22:50.102493 38936 generic.go:334] "Generic (PLEG): container finished" podID="99ab949e-bd0d-45a7-95d1-8381d9f1f5f3" containerID="11a0f236b15a97d8bb8db30a3ecfba40559eb738b2fbad78fcc9824a0ec8620e" exitCode=255 Feb 16 21:22:50.104148 master-0 kubenswrapper[38936]: I0216 21:22:50.104120 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-67bc7c997f-8kdgg_e8194cdc-3133-49e2-9579-a747c0bf2b16/manager/1.log" Feb 16 21:22:50.104711 master-0 kubenswrapper[38936]: I0216 21:22:50.104674 38936 generic.go:334] "Generic (PLEG): container finished" podID="e8194cdc-3133-49e2-9579-a747c0bf2b16" containerID="4f5444c17822db01691b9d03f3dd6a819e814eea7a63f23ec45ece42ea5fba62" exitCode=1 Feb 16 21:22:50.107579 master-0 kubenswrapper[38936]: I0216 21:22:50.107554 38936 generic.go:334] "Generic (PLEG): container finished" podID="ce229d27-837d-4a98-80fc-d56877ae39b8" containerID="88247333b19116719c02e3337d53469a84d7c4cf04c7843a9226ea683ea58eef" exitCode=0 Feb 16 21:22:50.107579 master-0 kubenswrapper[38936]: I0216 21:22:50.107572 38936 generic.go:334] "Generic (PLEG): container finished" podID="ce229d27-837d-4a98-80fc-d56877ae39b8" containerID="4417baf2be8cb2785a3116c10e495e124305a7b9a9021ca81984fe0912c3ccfa" exitCode=0 Feb 16 21:22:50.109212 master-0 kubenswrapper[38936]: I0216 21:22:50.109183 38936 generic.go:334] "Generic (PLEG): container finished" podID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerID="2d8a3bac5bc14187e5d2a390ac77e494ae47030d02fa35967ecd1bb1934d32e8" exitCode=0 Feb 16 21:22:50.110598 master-0 
kubenswrapper[38936]: I0216 21:22:50.110580 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-7b87b97578-v7xdv_4085413c-9af1-4d2a-ba0f-33b42025cb7f/csi-snapshot-controller-operator/2.log" Feb 16 21:22:50.110662 master-0 kubenswrapper[38936]: I0216 21:22:50.110609 38936 generic.go:334] "Generic (PLEG): container finished" podID="4085413c-9af1-4d2a-ba0f-33b42025cb7f" containerID="5bb447e9b562fe2a3fcb45b723cffb38257ea64157f142954fe58414909efdd3" exitCode=255 Feb 16 21:22:50.113624 master-0 kubenswrapper[38936]: I0216 21:22:50.113599 38936 generic.go:334] "Generic (PLEG): container finished" podID="62935559-041f-4694-9d36-adc809d079b4" containerID="0213e2c5badfad1c445275191896cc5e9028427f3090c086deb48f44170a8559" exitCode=0 Feb 16 21:22:50.113624 master-0 kubenswrapper[38936]: I0216 21:22:50.113619 38936 generic.go:334] "Generic (PLEG): container finished" podID="62935559-041f-4694-9d36-adc809d079b4" containerID="c4606e99d38ef423f540d128546208027e050c83b7e8385117d1ac9efe8a49dd" exitCode=0 Feb 16 21:22:50.113748 master-0 kubenswrapper[38936]: I0216 21:22:50.113628 38936 generic.go:334] "Generic (PLEG): container finished" podID="62935559-041f-4694-9d36-adc809d079b4" containerID="4c7a7e08f576cfd5e11632a9ba0076da03fa44265bff3bddab5c897154cfdd10" exitCode=0 Feb 16 21:22:50.113748 master-0 kubenswrapper[38936]: I0216 21:22:50.113638 38936 generic.go:334] "Generic (PLEG): container finished" podID="62935559-041f-4694-9d36-adc809d079b4" containerID="181fe628d311f1cd1061bd5a4ed240a9f0bc9297d01fb093f8d0beb40911a4e0" exitCode=0 Feb 16 21:22:50.113748 master-0 kubenswrapper[38936]: I0216 21:22:50.113669 38936 generic.go:334] "Generic (PLEG): container finished" podID="62935559-041f-4694-9d36-adc809d079b4" containerID="764147f0ae46dce8cfdba6d43c9720c0e223cc03d6732303325fb33cc0d7abd0" exitCode=0 Feb 16 21:22:50.113748 master-0 kubenswrapper[38936]: I0216 21:22:50.113682 38936 generic.go:334] "Generic 
(PLEG): container finished" podID="62935559-041f-4694-9d36-adc809d079b4" containerID="2485cbe452aed6f7043c33dccc17caa48675a3e464f4b79370075f51c4973793" exitCode=0 Feb 16 21:22:50.118135 master-0 kubenswrapper[38936]: I0216 21:22:50.118102 38936 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="30c3311ac2594f90ee07f133990bc2e498e9439d4db71f3e17a8742c175c7b4f" exitCode=0 Feb 16 21:22:50.118200 master-0 kubenswrapper[38936]: I0216 21:22:50.118151 38936 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="3478db789e9371b7e1a20de102750814fbff190dbf9776351e2f462d389fbe58" exitCode=0 Feb 16 21:22:50.118200 master-0 kubenswrapper[38936]: I0216 21:22:50.118164 38936 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="c4633b0b299cd40e037bf321ae06c8806fedc4001bb393b919fc921dc3fe2902" exitCode=0 Feb 16 21:22:50.119938 master-0 kubenswrapper[38936]: I0216 21:22:50.119914 38936 generic.go:334] "Generic (PLEG): container finished" podID="e9615af2-cad5-4705-9c2f-6f3c97026100" containerID="43a48a6592fa00c02a3165bc38965569bd23dac45b30b2fdc517303872a72e62" exitCode=0 Feb 16 21:22:50.121421 master-0 kubenswrapper[38936]: I0216 21:22:50.121402 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-8569dd85ff-kvhs4_065fcd43-1572-4152-b77b-a6b7ab52a081/machine-approver-controller/0.log" Feb 16 21:22:50.121758 master-0 kubenswrapper[38936]: I0216 21:22:50.121727 38936 generic.go:334] "Generic (PLEG): container finished" podID="065fcd43-1572-4152-b77b-a6b7ab52a081" containerID="577a19cb609733c40b24d16a4cfb15f4698079667a2b3110eeef59cec7643dff" exitCode=255 Feb 16 21:22:50.174734 master-0 kubenswrapper[38936]: E0216 21:22:50.174064 38936 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 16 
21:22:50.574623 master-0 kubenswrapper[38936]: E0216 21:22:50.574545 38936 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 16 21:22:50.776722 master-0 kubenswrapper[38936]: I0216 21:22:50.776675 38936 apiserver.go:52] "Watching apiserver" Feb 16 21:22:50.799626 master-0 kubenswrapper[38936]: I0216 21:22:50.799558 38936 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 21:22:51.130080 master-0 kubenswrapper[38936]: I0216 21:22:51.129993 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_e300ec3a145c1339a627607b3c84b99d/kube-apiserver-check-endpoints/0.log" Feb 16 21:22:51.132206 master-0 kubenswrapper[38936]: I0216 21:22:51.132180 38936 generic.go:334] "Generic (PLEG): container finished" podID="e300ec3a145c1339a627607b3c84b99d" containerID="43047bae0f2dd351891e082f8932168325d435e7cb25fa3bae528c469bde358f" exitCode=255 Feb 16 21:22:51.374974 master-0 kubenswrapper[38936]: E0216 21:22:51.374904 38936 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 16 21:22:52.975766 master-0 kubenswrapper[38936]: E0216 21:22:52.975712 38936 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 16 21:22:55.322698 master-0 kubenswrapper[38936]: E0216 21:22:55.322632 38936 summary_sys_containers.go:89] "Failed to get system container stats" err="failed to get cgroup stats for \"/system.slice/crio.service\": failed to get container info for \"/system.slice/crio.service\": unknown container \"/system.slice/crio.service\"" containerName="/system.slice/crio.service" Feb 16 21:22:55.323241 master-0 kubenswrapper[38936]: E0216 21:22:55.323211 38936 summary_sys_containers.go:89] "Failed to get system container stats" err="failed to get cgroup stats for \"/system.slice\": failed to get container 
info for \"/system.slice\": unknown container \"/system.slice\"" containerName="/system.slice" Feb 16 21:22:55.323890 master-0 kubenswrapper[38936]: E0216 21:22:55.323860 38936 summary_sys_containers.go:89] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods.slice\": failed to get container info for \"/kubepods.slice\": unknown container \"/kubepods.slice\"" containerName="/kubepods.slice" Feb 16 21:22:56.172255 master-0 kubenswrapper[38936]: I0216 21:22:56.172174 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-k8h7h_3e3ccb9a-4a5d-4a04-8334-b1e303b215a5/kube-multus-additional-cni-plugins/0.log" Feb 16 21:22:56.172255 master-0 kubenswrapper[38936]: I0216 21:22:56.172248 38936 generic.go:334] "Generic (PLEG): container finished" podID="3e3ccb9a-4a5d-4a04-8334-b1e303b215a5" containerID="3f86128dc7a80bf0962766ba7f7979e170ef26e4e83c8289ef27c44072e56335" exitCode=137 Feb 16 21:22:56.176642 master-0 kubenswrapper[38936]: E0216 21:22:56.176592 38936 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 16 21:23:01.177073 master-0 kubenswrapper[38936]: E0216 21:23:01.177003 38936 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 16 21:23:03.219086 master-0 kubenswrapper[38936]: I0216 21:23:03.219034 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-7c64d55f8-z46jt_b27de289-c0f9-47ff-aac6-15b7bc1b178a/multus-admission-controller/0.log" Feb 16 21:23:03.219575 master-0 kubenswrapper[38936]: I0216 21:23:03.219095 38936 generic.go:334] "Generic (PLEG): container finished" podID="b27de289-c0f9-47ff-aac6-15b7bc1b178a" containerID="b6f9bd149e55332060a93dd1c773c869219679c9d52274540dd91f495e731934" exitCode=137 Feb 16 21:23:06.177545 master-0 kubenswrapper[38936]: E0216 21:23:06.177393 38936 kubelet.go:2359] 
"Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 16 21:23:06.689303 master-0 kubenswrapper[38936]: I0216 21:23:06.689264 38936 manager.go:324] Recovery completed Feb 16 21:23:06.772226 master-0 kubenswrapper[38936]: I0216 21:23:06.772040 38936 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 16 21:23:06.772226 master-0 kubenswrapper[38936]: I0216 21:23:06.772071 38936 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 16 21:23:06.772226 master-0 kubenswrapper[38936]: I0216 21:23:06.772090 38936 state_mem.go:36] "Initialized new in-memory state store" Feb 16 21:23:06.772532 master-0 kubenswrapper[38936]: I0216 21:23:06.772254 38936 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 16 21:23:06.772532 master-0 kubenswrapper[38936]: I0216 21:23:06.772266 38936 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 16 21:23:06.772532 master-0 kubenswrapper[38936]: I0216 21:23:06.772299 38936 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Feb 16 21:23:06.772532 master-0 kubenswrapper[38936]: I0216 21:23:06.772306 38936 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Feb 16 21:23:06.772532 master-0 kubenswrapper[38936]: I0216 21:23:06.772312 38936 policy_none.go:49] "None policy: Start" Feb 16 21:23:06.776452 master-0 kubenswrapper[38936]: I0216 21:23:06.776409 38936 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 16 21:23:06.776452 master-0 kubenswrapper[38936]: I0216 21:23:06.776449 38936 state_mem.go:35] "Initializing new in-memory state store" Feb 16 21:23:06.776790 master-0 kubenswrapper[38936]: I0216 21:23:06.776718 38936 state_mem.go:75] "Updated machine memory state" Feb 16 21:23:06.776790 master-0 kubenswrapper[38936]: I0216 21:23:06.776752 38936 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Feb 16 21:23:06.794455 master-0 kubenswrapper[38936]: I0216 21:23:06.794252 38936 
manager.go:334] "Starting Device Plugin manager" Feb 16 21:23:06.794455 master-0 kubenswrapper[38936]: I0216 21:23:06.794323 38936 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 16 21:23:06.794455 master-0 kubenswrapper[38936]: I0216 21:23:06.794341 38936 server.go:79] "Starting device plugin registration server" Feb 16 21:23:06.794835 master-0 kubenswrapper[38936]: I0216 21:23:06.794811 38936 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 16 21:23:06.794895 master-0 kubenswrapper[38936]: I0216 21:23:06.794835 38936 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 16 21:23:06.795221 master-0 kubenswrapper[38936]: I0216 21:23:06.795194 38936 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 16 21:23:06.795310 master-0 kubenswrapper[38936]: I0216 21:23:06.795284 38936 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 16 21:23:06.795310 master-0 kubenswrapper[38936]: I0216 21:23:06.795303 38936 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 16 21:23:06.895854 master-0 kubenswrapper[38936]: I0216 21:23:06.895778 38936 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 21:23:06.898147 master-0 kubenswrapper[38936]: I0216 21:23:06.898106 38936 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 21:23:06.898210 master-0 kubenswrapper[38936]: I0216 21:23:06.898161 38936 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 21:23:06.898210 master-0 kubenswrapper[38936]: I0216 21:23:06.898175 38936 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 21:23:06.898309 master-0 
kubenswrapper[38936]: I0216 21:23:06.898289 38936 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 21:23:06.910635 master-0 kubenswrapper[38936]: I0216 21:23:06.910573 38936 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Feb 16 21:23:06.911080 master-0 kubenswrapper[38936]: I0216 21:23:06.910700 38936 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Feb 16 21:23:07.247255 master-0 kubenswrapper[38936]: I0216 21:23:07.247195 38936 generic.go:334] "Generic (PLEG): container finished" podID="c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee" containerID="7eb9d606c0ba4432a3c104c5bb2952f3efa3dee4e29f1c0d81a5b0db607ceac8" exitCode=0 Feb 16 21:23:11.178119 master-0 kubenswrapper[38936]: I0216 21:23:11.178022 38936 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 16 21:23:11.179325 master-0 kubenswrapper[38936]: I0216 21:23:11.179206 38936 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-apiserver/apiserver-6bdb76b9b7-z46x6","openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m","openshift-controller-manager/controller-manager-6998cd96fb-bgcb2","openshift-dns/node-resolver-zfldn","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-network-node-identity/network-node-identity-tpj6f","openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96","openshift-ingress-canary/ingress-canary-l44qd","openshift-multus/network-metrics-daemon-42bw7","openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d","openshift-service-ca/service-ca-676cd8b9b5-cbj2r","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4","openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl","openshift-etcd/etcd-master-0","openshift-kube-storage-version-migrator/migrator-5bd989df77-kdb9d","openshift-marketplace/certified-operators-blw8x","openshift-monitoring/prometheus-operator-7485d645b8-9xc4n","openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4","openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d","openshift-kube-apiserver/installer-1-master-0","openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft","openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4","openshift-machine-config-operator/machine-config-daemon-jb6tl","openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd","assisted-installer/assisted-installer-controller-6llwf","openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9","openshift-insights/insights-operator-cb4f7b4cf-h8f7q","openshift-monitoring/prometheus-operator-admission-webhook-695b766898-hsz6m","openshift-multus/cni-sysctl-allowlist-ds-k8h7h","openshift-network-diagnostics/network-check-target-68c25","openshift-route-controller-ma
nager/route-controller-manager-85d99cfd66-kjw24","openshift-authentication-operator/authentication-operator-755d954778-8gnq5","openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf","openshift-marketplace/redhat-marketplace-sn2nh","openshift-monitoring/metrics-server-76c9c896c-pz2bk","openshift-multus/multus-admission-controller-7c64d55f8-z46jt","openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn","openshift-etcd/installer-1-master-0","openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld","openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz","openshift-multus/multus-additional-cni-plugins-8zsx4","openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw","openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv","openshift-ingress/router-default-864ddd5f56-z4bnk","openshift-kube-controller-manager/installer-2-master-0","openshift-kube-scheduler/installer-4-master-0","openshift-dns/dns-default-7bbrn","openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl","openshift-ovn-kubernetes/ovnkube-node-z8h4n","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/installer-4-retry-1-master-0","openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg","openshift-cluster-version/cluster-version-operator-649c4f5445-n994s","openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-apiserver/installer-1-retry-1-master-0","openshift-kube-controller-manager/installer-1-master-0","openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s","openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b","openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g"
,"openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p","openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb","openshift-monitoring/node-exporter-ctvb2","openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z","openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq","openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9","openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz","openshift-etcd/installer-2-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn","openshift-network-diagnostics/network-check-source-7d8f4c8c66-w6tqw","openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g","openshift-cluster-node-tuning-operator/tuned-llsw4","openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd","openshift-marketplace/community-operators-j5kwc","openshift-marketplace/redhat-operators-69wj8","openshift-monitoring/kube-state-metrics-7cc9598d54-n467n","openshift-network-operator/iptables-alerter-b68cj","openshift-dns-operator/dns-operator-86b8869b79-cdltb","openshift-multus/multus-65zz6","openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6","openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8","openshift-machine-config-operator/machine-config-server-qvctv","openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq","openshift-multus/multus-admission-controller-6d678b8d67-shtrw","openshift-network-operator/network-operator-6fcf4c966-n4hfs","openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4"] Feb 16 21:23:11.179564 master-0 kubenswrapper[38936]: I0216 21:23:11.179510 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-6llwf" Feb 16 21:23:11.190269 master-0 kubenswrapper[38936]: I0216 21:23:11.190211 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 21:23:11.190603 master-0 kubenswrapper[38936]: I0216 21:23:11.190534 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 21:23:11.190603 master-0 kubenswrapper[38936]: I0216 21:23:11.190576 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 21:23:11.190791 master-0 kubenswrapper[38936]: I0216 21:23:11.190676 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 21:23:11.190983 master-0 kubenswrapper[38936]: I0216 21:23:11.190942 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 21:23:11.191701 master-0 kubenswrapper[38936]: I0216 21:23:11.191527 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 21:23:11.191701 master-0 kubenswrapper[38936]: I0216 21:23:11.191566 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 21:23:11.203499 master-0 kubenswrapper[38936]: I0216 21:23:11.203414 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 21:23:11.204587 master-0 kubenswrapper[38936]: I0216 21:23:11.204548 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 21:23:11.205071 master-0 kubenswrapper[38936]: I0216 21:23:11.205020 38936 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 21:23:11.205274 master-0 kubenswrapper[38936]: I0216 21:23:11.205251 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 21:23:11.205380 master-0 kubenswrapper[38936]: I0216 21:23:11.205350 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 21:23:11.205556 master-0 kubenswrapper[38936]: I0216 21:23:11.205516 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 21:23:11.205694 master-0 kubenswrapper[38936]: I0216 21:23:11.205619 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 21:23:11.206057 master-0 kubenswrapper[38936]: I0216 21:23:11.206031 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 21:23:11.206176 master-0 kubenswrapper[38936]: I0216 21:23:11.206144 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 21:23:11.206273 master-0 kubenswrapper[38936]: I0216 21:23:11.206246 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 21:23:11.206446 master-0 kubenswrapper[38936]: I0216 21:23:11.206413 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 21:23:11.206446 master-0 kubenswrapper[38936]: I0216 21:23:11.206432 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 21:23:11.208494 master-0 kubenswrapper[38936]: I0216 21:23:11.206782 38936 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 21:23:11.208494 master-0 kubenswrapper[38936]: I0216 21:23:11.206796 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 21:23:11.208494 master-0 kubenswrapper[38936]: I0216 21:23:11.207209 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 21:23:11.208494 master-0 kubenswrapper[38936]: I0216 21:23:11.207584 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 21:23:11.208494 master-0 kubenswrapper[38936]: I0216 21:23:11.207586 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 21:23:11.210690 master-0 kubenswrapper[38936]: I0216 21:23:11.210500 38936 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="48e014e9-b22b-4fb1-a1eb-c3f7420740ad" Feb 16 21:23:11.219383 master-0 kubenswrapper[38936]: I0216 21:23:11.218882 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 16 21:23:11.223794 master-0 kubenswrapper[38936]: I0216 21:23:11.221615 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 21:23:11.223794 master-0 kubenswrapper[38936]: I0216 21:23:11.222083 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 21:23:11.223794 master-0 kubenswrapper[38936]: I0216 21:23:11.222717 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 21:23:11.224615 master-0 kubenswrapper[38936]: I0216 21:23:11.224569 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 21:23:11.226022 master-0 kubenswrapper[38936]: I0216 21:23:11.225093 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 21:23:11.226022 master-0 kubenswrapper[38936]: I0216 21:23:11.225801 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 16 21:23:11.226022 master-0 kubenswrapper[38936]: I0216 21:23:11.225927 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 21:23:11.227472 master-0 kubenswrapper[38936]: I0216 21:23:11.226108 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 21:23:11.227472 master-0 kubenswrapper[38936]: I0216 21:23:11.226154 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 21:23:11.227472 master-0 kubenswrapper[38936]: I0216 21:23:11.226292 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 16 21:23:11.227472 master-0 kubenswrapper[38936]: I0216 21:23:11.226334 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 
21:23:11.227472 master-0 kubenswrapper[38936]: I0216 21:23:11.226367 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 21:23:11.227472 master-0 kubenswrapper[38936]: I0216 21:23:11.226440 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 21:23:11.227472 master-0 kubenswrapper[38936]: I0216 21:23:11.226570 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 21:23:11.227472 master-0 kubenswrapper[38936]: E0216 21:23:11.226638 38936 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-startup-monitor-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:23:11.227472 master-0 kubenswrapper[38936]: E0216 21:23:11.226793 38936 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 21:23:11.227472 master-0 kubenswrapper[38936]: I0216 21:23:11.226896 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 16 21:23:11.227472 master-0 kubenswrapper[38936]: I0216 21:23:11.227024 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 21:23:11.227472 master-0 kubenswrapper[38936]: I0216 21:23:11.227107 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 21:23:11.227472 master-0 kubenswrapper[38936]: I0216 21:23:11.227427 38936 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 21:23:11.227472 master-0 kubenswrapper[38936]: I0216 21:23:11.227502 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 21:23:11.229512 master-0 kubenswrapper[38936]: I0216 21:23:11.227752 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 21:23:11.229512 master-0 kubenswrapper[38936]: I0216 21:23:11.228182 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 21:23:11.229512 master-0 kubenswrapper[38936]: I0216 21:23:11.228347 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 21:23:11.229512 master-0 kubenswrapper[38936]: I0216 21:23:11.228444 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 21:23:11.229512 master-0 kubenswrapper[38936]: I0216 21:23:11.228546 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 21:23:11.229512 master-0 kubenswrapper[38936]: I0216 21:23:11.228612 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 21:23:11.229512 master-0 kubenswrapper[38936]: I0216 21:23:11.228691 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 16 21:23:11.229512 master-0 kubenswrapper[38936]: I0216 21:23:11.228722 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 16 21:23:11.229512 master-0 kubenswrapper[38936]: I0216 21:23:11.228918 38936 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 16 21:23:11.229512 master-0 kubenswrapper[38936]: I0216 21:23:11.229395 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 21:23:11.229512 master-0 kubenswrapper[38936]: I0216 21:23:11.229513 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d" Feb 16 21:23:11.234797 master-0 kubenswrapper[38936]: I0216 21:23:11.234759 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 16 21:23:11.234973 master-0 kubenswrapper[38936]: I0216 21:23:11.234844 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 21:23:11.248791 master-0 kubenswrapper[38936]: I0216 21:23:11.248709 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 21:23:11.249044 master-0 kubenswrapper[38936]: I0216 21:23:11.248949 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 21:23:11.249044 master-0 kubenswrapper[38936]: I0216 21:23:11.249002 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 16 21:23:11.249044 master-0 kubenswrapper[38936]: I0216 21:23:11.249010 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 21:23:11.250468 master-0 kubenswrapper[38936]: I0216 21:23:11.249715 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 16 21:23:11.250468 master-0 
kubenswrapper[38936]: I0216 21:23:11.249930 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b" Feb 16 21:23:11.250468 master-0 kubenswrapper[38936]: I0216 21:23:11.250039 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Feb 16 21:23:11.254626 master-0 kubenswrapper[38936]: E0216 21:23:11.251220 38936 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Feb 16 21:23:11.254626 master-0 kubenswrapper[38936]: I0216 21:23:11.251321 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 21:23:11.254626 master-0 kubenswrapper[38936]: I0216 21:23:11.251457 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 21:23:11.254626 master-0 kubenswrapper[38936]: I0216 21:23:11.251529 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 21:23:11.254626 master-0 kubenswrapper[38936]: E0216 21:23:11.251640 38936 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:23:11.254626 master-0 kubenswrapper[38936]: I0216 21:23:11.251870 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Feb 16 21:23:11.254626 master-0 kubenswrapper[38936]: I0216 21:23:11.251959 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 21:23:11.254626 
master-0 kubenswrapper[38936]: I0216 21:23:11.252022 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 16 21:23:11.254626 master-0 kubenswrapper[38936]: I0216 21:23:11.252063 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 21:23:11.254626 master-0 kubenswrapper[38936]: I0216 21:23:11.252134 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 21:23:11.254626 master-0 kubenswrapper[38936]: I0216 21:23:11.252228 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 21:23:11.254626 master-0 kubenswrapper[38936]: I0216 21:23:11.252358 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 21:23:11.254626 master-0 kubenswrapper[38936]: I0216 21:23:11.252444 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 21:23:11.254626 master-0 kubenswrapper[38936]: I0216 21:23:11.252509 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 21:23:11.254626 master-0 kubenswrapper[38936]: I0216 21:23:11.252534 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 16 21:23:11.254626 master-0 kubenswrapper[38936]: I0216 21:23:11.252575 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 21:23:11.254626 master-0 kubenswrapper[38936]: I0216 21:23:11.252668 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 16 21:23:11.254626 
master-0 kubenswrapper[38936]: I0216 21:23:11.252732 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 21:23:11.254626 master-0 kubenswrapper[38936]: I0216 21:23:11.252814 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 21:23:11.254626 master-0 kubenswrapper[38936]: I0216 21:23:11.252883 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 21:23:11.254626 master-0 kubenswrapper[38936]: I0216 21:23:11.252980 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 21:23:11.254626 master-0 kubenswrapper[38936]: I0216 21:23:11.253258 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 21:23:11.254626 master-0 kubenswrapper[38936]: I0216 21:23:11.253777 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 21:23:11.254626 master-0 kubenswrapper[38936]: I0216 21:23:11.253881 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 21:23:11.254626 master-0 kubenswrapper[38936]: I0216 21:23:11.254320 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.255096 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.255293 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b" event={"ID":"ebeb6876-0438-4961-a62a-68b41a676f17","Type":"ContainerDied","Data":"ba4091698915c4aa641aec2c8b4b82e0a58aec68f9f33e7955121f8e822a443d"} Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.255358 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-7777d5cc66-fgr2n"] Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: E0216 21:23:11.255569 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fc3abc9-3012-43bd-af84-fc65baf82801" containerName="installer" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.255590 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc3abc9-3012-43bd-af84-fc65baf82801" containerName="installer" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: E0216 21:23:11.255624 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d416d98-ee7c-4481-9721-861ccd91685d" containerName="installer" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.255638 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d416d98-ee7c-4481-9721-861ccd91685d" containerName="installer" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: E0216 21:23:11.255679 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.255694 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: E0216 21:23:11.255713 38936 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="700bc24c-4b00-44f0-90b0-aa555fe5c7a8" containerName="assisted-installer-controller" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.255726 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="700bc24c-4b00-44f0-90b0-aa555fe5c7a8" containerName="assisted-installer-controller" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: E0216 21:23:11.255757 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1677883f-bae2-4b6e-9dfe-683a6d26f2c5" containerName="installer" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.255771 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="1677883f-bae2-4b6e-9dfe-683a6d26f2c5" containerName="installer" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: E0216 21:23:11.255784 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf5e26c-84a2-45c6-b7dc-ee96dad23175" containerName="installer" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.255796 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf5e26c-84a2-45c6-b7dc-ee96dad23175" containerName="installer" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: E0216 21:23:11.255817 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cc1da27-6eaf-4177-b2d8-1546a9d94f90" containerName="collect-profiles" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.255831 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cc1da27-6eaf-4177-b2d8-1546a9d94f90" containerName="collect-profiles" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: E0216 21:23:11.255851 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cecc93e-bb0e-47da-903f-d0b63cce2b0d" containerName="installer" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.255863 38936 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0cecc93e-bb0e-47da-903f-d0b63cce2b0d" containerName="installer" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: E0216 21:23:11.255875 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965" containerName="installer" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.255887 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965" containerName="installer" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: E0216 21:23:11.255905 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b09d3c16-18e3-45b3-9d39-949d2464b300" containerName="installer" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.255916 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="b09d3c16-18e3-45b3-9d39-949d2464b300" containerName="installer" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: E0216 21:23:11.255929 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebeb6876-0438-4961-a62a-68b41a676f17" containerName="collect-profiles" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.255940 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebeb6876-0438-4961-a62a-68b41a676f17" containerName="collect-profiles" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: E0216 21:23:11.255967 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.255978 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.256144 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 
21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.256164 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d416d98-ee7c-4481-9721-861ccd91685d" containerName="installer" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.256179 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="700bc24c-4b00-44f0-90b0-aa555fe5c7a8" containerName="assisted-installer-controller" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.256196 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965" containerName="installer" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.256225 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cc1da27-6eaf-4177-b2d8-1546a9d94f90" containerName="collect-profiles" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.256248 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebeb6876-0438-4961-a62a-68b41a676f17" containerName="collect-profiles" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.256263 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="b09d3c16-18e3-45b3-9d39-949d2464b300" containerName="installer" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.256292 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="1677883f-bae2-4b6e-9dfe-683a6d26f2c5" containerName="installer" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.256312 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.256339 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cecc93e-bb0e-47da-903f-d0b63cce2b0d" containerName="installer" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 
21:23:11.256361 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf5e26c-84a2-45c6-b7dc-ee96dad23175" containerName="installer" Feb 16 21:23:11.256597 master-0 kubenswrapper[38936]: I0216 21:23:11.256381 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fc3abc9-3012-43bd-af84-fc65baf82801" containerName="installer" Feb 16 21:23:11.264128 master-0 kubenswrapper[38936]: I0216 21:23:11.256792 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b" event={"ID":"ebeb6876-0438-4961-a62a-68b41a676f17","Type":"ContainerDied","Data":"83b0c5c9f9e9a6aa803d0e80eca18b14e4ab78d1317a06af8dc1b57da3bbd755"} Feb 16 21:23:11.264128 master-0 kubenswrapper[38936]: I0216 21:23:11.256822 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83b0c5c9f9e9a6aa803d0e80eca18b14e4ab78d1317a06af8dc1b57da3bbd755" Feb 16 21:23:11.264128 master-0 kubenswrapper[38936]: I0216 21:23:11.256841 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jb6tl" event={"ID":"88c9d2fb-763f-4405-8d1a-c39039b41d3b","Type":"ContainerStarted","Data":"356615340d1fa734068744b665275fc799de6e0bdf17935887ae6dfbf7e33582"} Feb 16 21:23:11.264128 master-0 kubenswrapper[38936]: I0216 21:23:11.256861 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jb6tl" event={"ID":"88c9d2fb-763f-4405-8d1a-c39039b41d3b","Type":"ContainerStarted","Data":"63dcd78e336e54b7c9dc9ab869c711c8a78fc93da330b9932ed7c66703f025a1"} Feb 16 21:23:11.264128 master-0 kubenswrapper[38936]: I0216 21:23:11.256884 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-7777d5cc66-fgr2n"] Feb 16 21:23:11.264128 master-0 kubenswrapper[38936]: I0216 21:23:11.256903 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-jb6tl" event={"ID":"88c9d2fb-763f-4405-8d1a-c39039b41d3b","Type":"ContainerStarted","Data":"acec58956615bf5fc5d4c728869e591e541d368aa9b045c7975cb5d8c938ff55"}
Feb 16 21:23:11.264128 master-0 kubenswrapper[38936]: I0216 21:23:11.256921 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf" event={"ID":"03a5021d-8a5c-4011-a9f9-c5eb38d5f236","Type":"ContainerStarted","Data":"79d00e7b83c00540b1c5d773a69fad9f225b26adf1e1722c924d805403fdfa8f"}
Feb 16 21:23:11.264128 master-0 kubenswrapper[38936]: I0216 21:23:11.256941 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf" event={"ID":"03a5021d-8a5c-4011-a9f9-c5eb38d5f236","Type":"ContainerStarted","Data":"82cd9aa58410168c822720c80bd115f16de52bc6d9131fe728eb5bdd7b5e78b0"}
Feb 16 21:23:11.264128 master-0 kubenswrapper[38936]: I0216 21:23:11.256960 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf" event={"ID":"03a5021d-8a5c-4011-a9f9-c5eb38d5f236","Type":"ContainerStarted","Data":"cc46ef0ea78121e3debb45555162f099169024a83053e72fed30ccbe4c22554d"}
Feb 16 21:23:11.264128 master-0 kubenswrapper[38936]: I0216 21:23:11.256977 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7bbrn" event={"ID":"2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf","Type":"ContainerStarted","Data":"9e2fd78f2965e851d3f9a8c562693cb34badc6c1a0ecf3b6d8362a8e34893103"}
Feb 16 21:23:11.264128 master-0 kubenswrapper[38936]: I0216 21:23:11.257033 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7bbrn" event={"ID":"2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf","Type":"ContainerStarted","Data":"a3c9bdb5c46c570dfeafe9033e115957d8dc64e9abc1e952434f1790e1d55ed5"}
Feb 16 21:23:11.264128 master-0 kubenswrapper[38936]: I0216 21:23:11.257123 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-749f8d8bbd-z9ndp"]
Feb 16 21:23:11.264128 master-0 kubenswrapper[38936]: I0216 21:23:11.257409 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 16 21:23:11.264128 master-0 kubenswrapper[38936]: I0216 21:23:11.260051 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 16 21:23:11.264128 master-0 kubenswrapper[38936]: I0216 21:23:11.260131 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-fgr2n"
Feb 16 21:23:11.264128 master-0 kubenswrapper[38936]: I0216 21:23:11.260169 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 16 21:23:11.264128 master-0 kubenswrapper[38936]: I0216 21:23:11.260347 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 16 21:23:11.264128 master-0 kubenswrapper[38936]: I0216 21:23:11.260458 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 16 21:23:11.264128 master-0 kubenswrapper[38936]: I0216 21:23:11.260831 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 16 21:23:11.264128 master-0 kubenswrapper[38936]: I0216 21:23:11.261043 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 16 21:23:11.264128 master-0 kubenswrapper[38936]: I0216 21:23:11.261110 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 16 21:23:11.264128 master-0 kubenswrapper[38936]: I0216 21:23:11.261191 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 16 21:23:11.265809 master-0 kubenswrapper[38936]: I0216 21:23:11.264536 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Feb 16 21:23:11.265809 master-0 kubenswrapper[38936]: I0216 21:23:11.264778 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 16 21:23:11.265809 master-0 kubenswrapper[38936]: I0216 21:23:11.264847 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 16 21:23:11.265809 master-0 kubenswrapper[38936]: I0216 21:23:11.264871 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 16 21:23:11.271909 master-0 kubenswrapper[38936]: I0216 21:23:11.271861 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 16 21:23:11.272491 master-0 kubenswrapper[38936]: I0216 21:23:11.272447 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-749f8d8bbd-z9ndp"]
Feb 16 21:23:11.272558 master-0 kubenswrapper[38936]: I0216 21:23:11.272495 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-7bbrn" event={"ID":"2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf","Type":"ContainerStarted","Data":"0334ad8c418e31c648e8c938f60c3ae9cf4f68761e776bef5ada2bade3f88833"}
Feb 16 21:23:11.272558 master-0 kubenswrapper[38936]: I0216 21:23:11.272535 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" event={"ID":"2e618c5c-52be-4b52-b426-b92555dee9de","Type":"ContainerStarted","Data":"89713a48ebda2d81dc73c8e6307d140eac3f186d0e349480425338bd881c9d90"}
Feb 16 21:23:11.272724 master-0 kubenswrapper[38936]: I0216 21:23:11.272559 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 16 21:23:11.272724 master-0 kubenswrapper[38936]: I0216 21:23:11.272602 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" event={"ID":"2e618c5c-52be-4b52-b426-b92555dee9de","Type":"ContainerStarted","Data":"d306354fd5d2178f348beb7a119f77d313ccc80e6928076b9869dfc8a33d0edf"}
Feb 16 21:23:11.272724 master-0 kubenswrapper[38936]: I0216 21:23:11.272628 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"7fc3abc9-3012-43bd-af84-fc65baf82801","Type":"ContainerDied","Data":"7705ab1783cfe260a257da3d99d4c43b8aa6602286bbd8b5854c2a525ae4f204"}
Feb 16 21:23:11.272724 master-0 kubenswrapper[38936]: I0216 21:23:11.272674 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"7fc3abc9-3012-43bd-af84-fc65baf82801","Type":"ContainerDied","Data":"e18212da3ba9255cc13862af9e868f85f8caf8c7478800353ac7a39fbc390fa8"}
Feb 16 21:23:11.272724 master-0 kubenswrapper[38936]: I0216 21:23:11.272694 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e18212da3ba9255cc13862af9e868f85f8caf8c7478800353ac7a39fbc390fa8"
Feb 16 21:23:11.272724 master-0 kubenswrapper[38936]: I0216 21:23:11.272715 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-kdb9d" event={"ID":"684a8167-6c5b-430f-979e-307e58487611","Type":"ContainerStarted","Data":"05dd664dbe24b23e49df336a132aa75287844fdfc867ac2f9b9486c0cca53e74"}
Feb 16 21:23:11.273030 master-0 kubenswrapper[38936]: I0216 21:23:11.272731 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-kdb9d" event={"ID":"684a8167-6c5b-430f-979e-307e58487611","Type":"ContainerStarted","Data":"f4d30cfe8bb36366ad4695d85f303021c475d8a0ec5ee46e2609d8eb9859e8ea"}
Feb 16 21:23:11.273030 master-0 kubenswrapper[38936]: I0216 21:23:11.272748 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-kdb9d" event={"ID":"684a8167-6c5b-430f-979e-307e58487611","Type":"ContainerStarted","Data":"f94d68e1b5a31fd6ac38d04b76b6e3ee908e79aa67afc23e7d2bf54001deb6f0"}
Feb 16 21:23:11.273030 master-0 kubenswrapper[38936]: I0216 21:23:11.272766 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn" event={"ID":"ec7dd4ea-a139-45d4-96a4-506da1567292","Type":"ContainerStarted","Data":"0af302812fd66c922e290b0e4c9c4e2ba2f2caf5d12a5744d3fbf47817459c17"}
Feb 16 21:23:11.273030 master-0 kubenswrapper[38936]: I0216 21:23:11.272792 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn" event={"ID":"ec7dd4ea-a139-45d4-96a4-506da1567292","Type":"ContainerStarted","Data":"b2fa0e56a1525a9dc4cb1eed44cc6376b6ac0d1c2fab2be1bd2cb007a4f90f8a"}
Feb 16 21:23:11.273030 master-0 kubenswrapper[38936]: I0216 21:23:11.272808 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" event={"ID":"f7b30888-5994-4968-9db6-9533ac60c92e","Type":"ContainerStarted","Data":"017b5416f64a5dc2aea1499757bc37cb7845a0c20f820608b04adf898a0fbb42"}
Feb 16 21:23:11.273030 master-0 kubenswrapper[38936]: I0216 21:23:11.272827 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" event={"ID":"f7b30888-5994-4968-9db6-9533ac60c92e","Type":"ContainerStarted","Data":"9022c7d25901706a3a4753f177445a986f505ff90538968ff9843de9d6c65ab8"}
Feb 16 21:23:11.273030 master-0 kubenswrapper[38936]: I0216 21:23:11.272846 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" event={"ID":"f7b30888-5994-4968-9db6-9533ac60c92e","Type":"ContainerStarted","Data":"9304d668e7785195dde35507d3b853217dd541218a54b7914dda3723dea0b360"}
Feb 16 21:23:11.273030 master-0 kubenswrapper[38936]: I0216 21:23:11.272993 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" event={"ID":"f7b30888-5994-4968-9db6-9533ac60c92e","Type":"ContainerStarted","Data":"98ea530a3e85a55d27f014bb670a7b7e4444aedc192a8b2618c4f1830394b65c"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273045 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" event={"ID":"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e","Type":"ContainerStarted","Data":"f967af0fcd187eeafd04691b96ae014e22fb86716fe0ba66d9ce5f55dd5c8b91"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273064 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" event={"ID":"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e","Type":"ContainerDied","Data":"4b9eed56cd9de27df8732f0bf589198f3bec398bab1ee5d8d5d4047198bdc2b3"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273091 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" event={"ID":"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e","Type":"ContainerStarted","Data":"db18d33d279edf734f31d955c318fccdcbf15241593b0786bf92a199ab2a428f"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273111 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"3d416d98-ee7c-4481-9721-861ccd91685d","Type":"ContainerDied","Data":"8bbcb4e0fb94b168b2c18c0ad45486fda3e89c4340348d1ee5d8cea24b562c67"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273131 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"3d416d98-ee7c-4481-9721-861ccd91685d","Type":"ContainerDied","Data":"363e6d9151e8f74d699facea1b9fd8436a80e76af370ce89bfd959fd35f30873"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273147 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="363e6d9151e8f74d699facea1b9fd8436a80e76af370ce89bfd959fd35f30873"
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273163 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" event={"ID":"695549c8-d1fc-429d-9c9f-0a5915dc6074","Type":"ContainerStarted","Data":"b759be244b2ba22ad1884f9e0274ee8a722d66b1e8a5b2b9389cb48c9ae341b5"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273180 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" event={"ID":"695549c8-d1fc-429d-9c9f-0a5915dc6074","Type":"ContainerDied","Data":"abce7c467580f27265b653bd89f53e6e0d6413f3687b039b9f58c8dd18d3f0ce"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273197 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" event={"ID":"695549c8-d1fc-429d-9c9f-0a5915dc6074","Type":"ContainerStarted","Data":"b3fc27d6f88f12abb0f4db12508672dcd9584ab10707e7cd6f06dcebac1bbaa8"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273223 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-llsw4" event={"ID":"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64","Type":"ContainerStarted","Data":"5884bfcb6287b88109ccc8e0fa31ce71e568dd6b555e6cc855d0ca5064eb69cf"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273242 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-llsw4" event={"ID":"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64","Type":"ContainerStarted","Data":"8dea330d1b36a07c27afdc45034426f3e213a02e1b037be44563d4a3b9efc359"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273258 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"0cecc93e-bb0e-47da-903f-d0b63cce2b0d","Type":"ContainerDied","Data":"8df27f209e925f58d0b4923f79cdb9bec01f45d38cbc22684566e7e609148bab"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273278 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"0cecc93e-bb0e-47da-903f-d0b63cce2b0d","Type":"ContainerDied","Data":"5957534d0a5a6e1efe8a36af49bc53825aaeb991657eddb8f9392f7c762a0cd8"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273294 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5957534d0a5a6e1efe8a36af49bc53825aaeb991657eddb8f9392f7c762a0cd8"
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273310 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zfldn" event={"ID":"34743ce3-5eda-4c60-99cb-640dd067ebdf","Type":"ContainerStarted","Data":"43f0dafaf40b3911a88955e81edf78115668a44abe374303b3f2243aa138791a"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273326 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zfldn" event={"ID":"34743ce3-5eda-4c60-99cb-640dd067ebdf","Type":"ContainerStarted","Data":"cb7c3bcdaae372d84aa4e8a539ce094d23c02279631a56da69b150d86b62b5a5"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273350 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"1f8a26db-5a90-4da9-9074-33256ef17100","Type":"ContainerDied","Data":"f3ca6870e03df61b2f0b4d124dc1734d96c0b5c71852fc980d271a8f385f1958"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273369 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"1f8a26db-5a90-4da9-9074-33256ef17100","Type":"ContainerDied","Data":"84e6aa889c12b8f7b2d22b8b4cf46eee861623c6ee8d3fefb323875fd5efaa27"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273383 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84e6aa889c12b8f7b2d22b8b4cf46eee861623c6ee8d3fefb323875fd5efaa27"
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273397 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qvctv" event={"ID":"913951bb-1702-4b71-862c-a166bc7a62e0","Type":"ContainerStarted","Data":"9774080b01608f0a21e73d69c46adab19d9597a4bd78784da71dd2c1e0272836"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273421 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qvctv" event={"ID":"913951bb-1702-4b71-862c-a166bc7a62e0","Type":"ContainerStarted","Data":"404fdd69be202f40aeca377d1ba146b346077a53f8e7897ed4e324403366c1bf"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273438 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"2cf5e26c-84a2-45c6-b7dc-ee96dad23175","Type":"ContainerDied","Data":"912bdb89c47c0c84a626b5915d0082c84d6ad6cfcb759d646e64bf4849456d1f"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273456 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"2cf5e26c-84a2-45c6-b7dc-ee96dad23175","Type":"ContainerDied","Data":"530378b0633d960adbb9dbb3d961b5d62ae93d6f5ce44d7b8788383b67a4c0a0"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273479 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="530378b0633d960adbb9dbb3d961b5d62ae93d6f5ce44d7b8788383b67a4c0a0"
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273495 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m" event={"ID":"aa2e9bbc-3962-45f5-a7cc-2dc059409e70","Type":"ContainerStarted","Data":"e121208e065bd981ec8f120b4bddfef2011a7578aefea2e29754d83b50431d3d"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273511 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m" event={"ID":"aa2e9bbc-3962-45f5-a7cc-2dc059409e70","Type":"ContainerDied","Data":"86b2625e01e86e20ad843cc517b662e8d0574773dfe24c22fbbf50abc8c0ea7f"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273528 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m" event={"ID":"aa2e9bbc-3962-45f5-a7cc-2dc059409e70","Type":"ContainerStarted","Data":"e1d55dfca25559f503e3ffffa2f5f036874c5ff002f21e1743ae94ece4a5c2a9"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273545 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb" event={"ID":"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd","Type":"ContainerStarted","Data":"2386de6d7e3957c25a5bbdd2f9defa96eb2766f1baca6f041fdfd46d769c8ff9"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273562 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb" event={"ID":"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd","Type":"ContainerStarted","Data":"75dd3f7c4a14726f013a3bf4f169a8056c56991ba5e679317594055334246207"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273577 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb" event={"ID":"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd","Type":"ContainerStarted","Data":"cedd6b186b2f683612167b71883ce9d5bac09eb1edd2f0cb1e7e8286188d3035"}
Feb 16 21:23:11.273568 master-0 kubenswrapper[38936]: I0216 21:23:11.273600 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" event={"ID":"a0b7a368-1408-4fc3-ae25-4613b74e7fca","Type":"ContainerStarted","Data":"bdd8652a441643f0683ae4b00f3e1deedc584be862f8396218f05b664f2dabba"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.273616 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" event={"ID":"a0b7a368-1408-4fc3-ae25-4613b74e7fca","Type":"ContainerStarted","Data":"0072cc6faa68db02c6729fe365e61ad88f628eb88cc1288a9c6b0491a85473a4"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.273632 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" event={"ID":"a0b7a368-1408-4fc3-ae25-4613b74e7fca","Type":"ContainerStarted","Data":"a99765f7253d989ecd2ebab9422f8bd50f36c587e8b7eca1057d0e88a540b814"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.273668 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" event={"ID":"69785167-b4ae-415b-bdcb-029f62effe78","Type":"ContainerStarted","Data":"655a7675b66526d164c70e9b200b05c778827418e4c84a28b4e335f8dfc72ff8"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.273688 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" event={"ID":"69785167-b4ae-415b-bdcb-029f62effe78","Type":"ContainerStarted","Data":"3ded81fde954498e8c659f19e567426ed192fd804a885a7ac139c978535050d2"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.273706 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" event={"ID":"69785167-b4ae-415b-bdcb-029f62effe78","Type":"ContainerStarted","Data":"c630a4ba806244d201f002a158513dc016fe5c3b6daba273e1f23f6333686b88"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.273728 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" event={"ID":"69785167-b4ae-415b-bdcb-029f62effe78","Type":"ContainerStarted","Data":"7939368a66df752fec666f55357f94fd22b560b8a120e0b62d09790f086413b5"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.273746 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" event={"ID":"69785167-b4ae-415b-bdcb-029f62effe78","Type":"ContainerStarted","Data":"a3f154ed1fbadfde6c14b6c55646e156b1487b3d1f2a2888af5abc441cb159f2"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.273762 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" event={"ID":"69785167-b4ae-415b-bdcb-029f62effe78","Type":"ContainerStarted","Data":"c109743f9e34f4b558b3bf44dfa939dc541314f56b2be407503cf4a64de5777a"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.273777 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" event={"ID":"69785167-b4ae-415b-bdcb-029f62effe78","Type":"ContainerStarted","Data":"a6cd57c4b0fc1e7c2d930e1dc1ce1a766a873d8f44fcc9636a87f988589d8813"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.273793 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" event={"ID":"69785167-b4ae-415b-bdcb-029f62effe78","Type":"ContainerStarted","Data":"4f1cce04d21916f0d92a98ba8e6b09901027aaa8cc2b129f507dcc8d25ef4a4d"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.273809 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" event={"ID":"69785167-b4ae-415b-bdcb-029f62effe78","Type":"ContainerDied","Data":"d7022d510b5111f523030386d2b2e3f81b8551ed9e8be0ecf6a80ac34378ca5e"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.273831 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" event={"ID":"69785167-b4ae-415b-bdcb-029f62effe78","Type":"ContainerStarted","Data":"9e9fb9a8fc61dba0936cd38d7b843d3efbdecc6ba9ec73f7423569f6305a4740"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.273847 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" event={"ID":"4b035e85-b2b0-4dee-bb86-3465fc4b98a8","Type":"ContainerStarted","Data":"a6a2fb20def4cbde7b9bb47cdfdc79049f26b1950e4d47cb988ac8e11854652c"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.273864 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" event={"ID":"4b035e85-b2b0-4dee-bb86-3465fc4b98a8","Type":"ContainerDied","Data":"fa5e5b86ee6d022e914514c6e1b9bc40b0ded23b4d78a78dbc84ca8df5d3a2bd"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.273880 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" event={"ID":"4b035e85-b2b0-4dee-bb86-3465fc4b98a8","Type":"ContainerStarted","Data":"74bcb9c0e0e6190f4682d2a1f22029d9499551420f56ffed526a997deaabbd90"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.273896 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" event={"ID":"4b035e85-b2b0-4dee-bb86-3465fc4b98a8","Type":"ContainerStarted","Data":"d731a0126023b327423b0d92ac9091c1188b42fa4686eb6ad7cba3b766448624"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.273916 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" event={"ID":"e7adbe32-b8b9-438e-a2e3-f93146a97424","Type":"ContainerStarted","Data":"47333b4dc4c4506a75d09ea9dbae2fc9aaa9a5e9656c7290cd679c62408950cd"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.273937 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" event={"ID":"e7adbe32-b8b9-438e-a2e3-f93146a97424","Type":"ContainerDied","Data":"6a7d7b13e17869969e9d31d79faa72dfb3a8d8453f67a2323e3dc0a1300a1e65"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.273965 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" event={"ID":"e7adbe32-b8b9-438e-a2e3-f93146a97424","Type":"ContainerStarted","Data":"105b1eab12eec1f672058dc0900e8488b8bcca272b3ac3b2441b242d73128d7a"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.273982 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"1677883f-bae2-4b6e-9dfe-683a6d26f2c5","Type":"ContainerDied","Data":"b251b8636a6a11ccf532a9af9a8852c95e1a7cdd48031754c8a88d40620a2450"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274000 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"1677883f-bae2-4b6e-9dfe-683a6d26f2c5","Type":"ContainerDied","Data":"7f9adda37238ede86f88cbac2c999b2aa463809256c6a93ac9e769608706a215"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274014 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f9adda37238ede86f88cbac2c999b2aa463809256c6a93ac9e769608706a215"
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274030 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" event={"ID":"9e0227bc-63f5-48be-95dc-1323a2b2e327","Type":"ContainerStarted","Data":"f0f2142d7c75b9cb3d050ab9fd78b4ffcf397bc951f0081263a6ec6726c5bac7"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274047 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" event={"ID":"9e0227bc-63f5-48be-95dc-1323a2b2e327","Type":"ContainerDied","Data":"a7330b931340d1be5dba0fd54e8b246009c00f6e813142a46ee5264b4ff67461"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274065 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" event={"ID":"9e0227bc-63f5-48be-95dc-1323a2b2e327","Type":"ContainerStarted","Data":"0855efbb779255fb187bac22b944f8f2035fd58838e6517844db44571c397aae"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274091 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" event={"ID":"cef33294-81fb-41a2-811d-2565f94514d1","Type":"ContainerDied","Data":"cddc9c1d447dc5a0250ef24bddae48c93c58b480b6bca11a2ff7438d4148bf8f"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274111 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" event={"ID":"cef33294-81fb-41a2-811d-2565f94514d1","Type":"ContainerStarted","Data":"d15df1caa93fcce85a632cd318aaf9104964d846efd2e5a897c570b4ebb61cb3"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274130 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" event={"ID":"cef33294-81fb-41a2-811d-2565f94514d1","Type":"ContainerStarted","Data":"99134c6775f2c1522a1480fdf36e455e0ea6704e4324711468efadafd1a4b744"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274147 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4" event={"ID":"fb1eac23-18a5-4706-adcd-81d83e04cd12","Type":"ContainerStarted","Data":"c28f67ef999b31c369d4692770123408f63a141b8851d50df01e2ab0b1a89e5e"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274164 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4" event={"ID":"fb1eac23-18a5-4706-adcd-81d83e04cd12","Type":"ContainerStarted","Data":"8fffe565463ba118729e6d7e82e27ca24bae5e89a802ccdfc1edf0108bcb41ce"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274213 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4" event={"ID":"fb1eac23-18a5-4706-adcd-81d83e04cd12","Type":"ContainerStarted","Data":"6caed68f3fc79ebb1ed9e5bfd3e9f6a4bad90b8a5cdeab5884b6fd52a2305c16"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274238 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" event={"ID":"c7333319-3fe6-4b3f-b600-6b6df49fcaff","Type":"ContainerStarted","Data":"d5f7b3fcfb5c9f94add7386a8d0fa1915b7e46a3ef046408fb3358fa3cd8f9a5"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274299 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" event={"ID":"c7333319-3fe6-4b3f-b600-6b6df49fcaff","Type":"ContainerDied","Data":"220f76e0bb64fd419313cb573cd97bbb54f9d2b5998f9525c7d9045abc13cfb5"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274319 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" event={"ID":"c7333319-3fe6-4b3f-b600-6b6df49fcaff","Type":"ContainerStarted","Data":"d84a6211eba3f66c2ce7e68ab1344f23f51a23b55442aa18fdabbc1b25bc9adb"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274337 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" event={"ID":"230d9624-2d9d-4036-967b-b530347f05d5","Type":"ContainerStarted","Data":"28726678c7ba973c7a8d12bd4e7dd23ac1f0cc7291e6d51f4f07e0ddb5f2952b"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274355 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" event={"ID":"230d9624-2d9d-4036-967b-b530347f05d5","Type":"ContainerStarted","Data":"c6a10327cd99b8e79080c80497f813f0d306c1ac1675a6ef75f827c739b664b0"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274370 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" event={"ID":"230d9624-2d9d-4036-967b-b530347f05d5","Type":"ContainerStarted","Data":"e5e5cf205d35c77f7135aae32a2a2b5d93190fd24142a46403057a66617d7317"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274388 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" event={"ID":"230d9624-2d9d-4036-967b-b530347f05d5","Type":"ContainerStarted","Data":"edc9559c5a629f79661ac5fd3b656fc66e5b478f6eb97f32c266188a17c0e747"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274411 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" event={"ID":"ba294358-051a-4f09-b182-710d3d6778c5","Type":"ContainerStarted","Data":"7e9f03ac4e3d4bf6f1a92c87252a343c03624e9e2d9c4c0aa92f759bfcd3bf24"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274428 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" event={"ID":"ba294358-051a-4f09-b182-710d3d6778c5","Type":"ContainerDied","Data":"c7880afa219acb0ac5e4138682f8fc8b3e3931790fad2a804808d6e2f5933f3f"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274445 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" event={"ID":"ba294358-051a-4f09-b182-710d3d6778c5","Type":"ContainerStarted","Data":"aed7b29fd5a17d326bf662963e39c91ff6d183ab7d2ccddb9bff04832a578f45"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274465 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" event={"ID":"ba294358-051a-4f09-b182-710d3d6778c5","Type":"ContainerStarted","Data":"ad196ac4d2e3966bfb26599fb699f9a38a58beb4f2a551485dd0f16fe14d30d3"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274482 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-68c25" event={"ID":"0d903d23-8e0b-424b-bcd0-e0a00f306e49","Type":"ContainerStarted","Data":"63057ac92dec2fa9c7d10c67e0bccd3d3eb946a1626faaf9fb6f3de715241845"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274499 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-68c25" event={"ID":"0d903d23-8e0b-424b-bcd0-e0a00f306e49","Type":"ContainerStarted","Data":"dbf32b84ea4131f980c7517f9adf09ab0debbea21b7d7312f8107de5103e23bd"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274524 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-b68cj" event={"ID":"d9d71a7a-a751-4de4-9c76-9bac85fe0177","Type":"ContainerStarted","Data":"905fc5a621203c91395d6216f060ca53794b0ecb7785c24aec6c41ecccc20912"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274542 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-b68cj" event={"ID":"d9d71a7a-a751-4de4-9c76-9bac85fe0177","Type":"ContainerStarted","Data":"abcd1a63f33b879c154e1f80fc5ea3f4b46d9d1e7d2159b6ce5ac662b670e5ff"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274559 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91a4c15bb67084035c73bb065892be1c9d73ba9204c94c99f7433a6c3008aaff"
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274574 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-tpj6f" event={"ID":"88f19cea-60ed-4977-a906-75deec51fc3d","Type":"ContainerStarted","Data":"d9983e5644ba5577e1eefab6fb7488cd7e2a9580d6b33554cb3e17eb89d03fd5"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274589 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-tpj6f" event={"ID":"88f19cea-60ed-4977-a906-75deec51fc3d","Type":"ContainerDied","Data":"035e7d01b329ab00b5fb0dd3b6a5b55ee6bd504dee86517456bdcc1b06cd6e19"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274607 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-tpj6f" event={"ID":"88f19cea-60ed-4977-a906-75deec51fc3d","Type":"ContainerStarted","Data":"f0fb0335aec7d732c2c504647e8162c4e320963f1f173436478e3f5209ced684"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274622 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-tpj6f" event={"ID":"88f19cea-60ed-4977-a906-75deec51fc3d","Type":"ContainerStarted","Data":"76e543cc5345eb5c53417c9f0b565400b03593c03aa3a1637483c029bb868ef3"}
Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274645 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-marketplace/redhat-operators-69wj8" event={"ID":"d8d648c7-b84b-4f43-84c9-903aead0891a","Type":"ContainerStarted","Data":"86c3b8b66a0663232311a42e0fdf88ea8134666f5448e623a713c72172a5c7cb"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274683 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69wj8" event={"ID":"d8d648c7-b84b-4f43-84c9-903aead0891a","Type":"ContainerDied","Data":"8510067c1b5f7cbc40f7c23faf036a1b9404f3ea036ff9582a8f6c06389e7238"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274700 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69wj8" event={"ID":"d8d648c7-b84b-4f43-84c9-903aead0891a","Type":"ContainerDied","Data":"fa3ed852335cb1ddfb20c47ba698ccaa6874c674cd87c8ada57d89856c7d37fd"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274717 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-69wj8" event={"ID":"d8d648c7-b84b-4f43-84c9-903aead0891a","Type":"ContainerStarted","Data":"385456702c716ef5052af7ff4f8c1f6423867ff9037ec0352d3bef2843cc7641"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274733 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" event={"ID":"27c20f63-9bfb-4703-94d5-0c65475e08d1","Type":"ContainerStarted","Data":"472d6ea4b832d6dda5b947964aa6ee6e541f575109f7f54f510a3c8f6075fe63"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274749 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" event={"ID":"27c20f63-9bfb-4703-94d5-0c65475e08d1","Type":"ContainerDied","Data":"cbff59f9a87f22154ac16be0a1fd4153598047d145747da8c5ad418b6de5b9ba"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 
21:23:11.274774 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" event={"ID":"27c20f63-9bfb-4703-94d5-0c65475e08d1","Type":"ContainerStarted","Data":"4ff1d9141076f81759691d94a098009541c5d2c236ef8864f1522766d2980580"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274791 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-w6tqw" event={"ID":"3403d2bf-b093-4f2e-80aa-73a3d6bcaffb","Type":"ContainerStarted","Data":"9baa14160e479c5229671fa47f287578de3e20925684ba77f76de501a6cd0a4b"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274809 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-w6tqw" event={"ID":"3403d2bf-b093-4f2e-80aa-73a3d6bcaffb","Type":"ContainerStarted","Data":"75d47673076de0f457cf43f09abae17f313fa42a6b18d0c5e8749dffb9564806"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274825 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" event={"ID":"2ab0a907-7abe-4808-ba21-bdda1506eae2","Type":"ContainerStarted","Data":"957672c63eb7430bfeb7424cf0d3c859bba34c6e865fdeff7ddd7689e1cdc21a"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274841 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" event={"ID":"2ab0a907-7abe-4808-ba21-bdda1506eae2","Type":"ContainerDied","Data":"715050d13195531641370ad04c7754b8cef8bb72e0896de25aaafb35a02054c9"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274859 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" 
event={"ID":"2ab0a907-7abe-4808-ba21-bdda1506eae2","Type":"ContainerStarted","Data":"2dfa08dcecf95c49e6db650a7dbdf117c27ed644f23ff4e264133dd36a509d3c"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274882 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"e300ec3a145c1339a627607b3c84b99d","Type":"ContainerStarted","Data":"43047bae0f2dd351891e082f8932168325d435e7cb25fa3bae528c469bde358f"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274898 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"e300ec3a145c1339a627607b3c84b99d","Type":"ContainerStarted","Data":"bfe9ba5fbd345f504666307fee0f4efea9887cea358915d2cd30f77f36401ef0"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274914 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"e300ec3a145c1339a627607b3c84b99d","Type":"ContainerStarted","Data":"fa4ce6271b82f17286a47605f4c5e94255ab02a39e6bf3a19833f194eb3c8cf9"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274930 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"e300ec3a145c1339a627607b3c84b99d","Type":"ContainerStarted","Data":"8b155d07f9276ca9dee1a2c069bd169ef79dcdd4f2443697c8d7415636c8e58c"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274945 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"e300ec3a145c1339a627607b3c84b99d","Type":"ContainerStarted","Data":"e606b2dabd52c10f2beae5590e83886f4cb1a2570803dbd7c5fe0c5d33fc926e"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274960 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"e300ec3a145c1339a627607b3c84b99d","Type":"ContainerDied","Data":"8a83fac7d6d5ae1a1f48df3b9f649957515ab488499c5a4e72d3372e82e2e891"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.274979 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"e300ec3a145c1339a627607b3c84b99d","Type":"ContainerStarted","Data":"62b7693910cb02952d8855d0ec6b5ec30d5524abd40344dea37279d475bce731"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.275001 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" event={"ID":"70d217a9-86b7-47b9-a7da-9ac920b9c7c2","Type":"ContainerStarted","Data":"4ac247b9876f21c966ab93ed72aa48642f97c92d4ad20edb90a8d4785ced5367"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.275016 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" event={"ID":"70d217a9-86b7-47b9-a7da-9ac920b9c7c2","Type":"ContainerDied","Data":"316bcd2b73e15fab60d8618d92eb77f101f2f53e423adb64b0f374a1f7fcda3a"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.275032 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" event={"ID":"70d217a9-86b7-47b9-a7da-9ac920b9c7c2","Type":"ContainerStarted","Data":"d1ce8d9ee7cab12610683fbe9731b9ea4f3d71878c552326acd5722dd5f1b61a"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.275048 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"72ee9e35c766aea904898f2e9f2ffaca","Type":"ContainerStarted","Data":"0a662b88d01e2a6c7840550eedccdbaad4f0955066a41fc813a25bc7970213e5"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.275068 
38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"72ee9e35c766aea904898f2e9f2ffaca","Type":"ContainerStarted","Data":"bd383c7f3493b77aa39a71f0c59c6ca2af1cb84a3dcd17da7deffd0c9f13279e"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.275085 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"72ee9e35c766aea904898f2e9f2ffaca","Type":"ContainerStarted","Data":"93e4248b433133e3c151d7b3b51df468e545cf503f72fd69fa418801f9123776"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.275110 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"72ee9e35c766aea904898f2e9f2ffaca","Type":"ContainerStarted","Data":"bdfde90f893f521a930ff809d7a19e8600359a70b3e19bbbef0735c23b65d26d"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.275127 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"72ee9e35c766aea904898f2e9f2ffaca","Type":"ContainerStarted","Data":"18445cef4b6797ad657a965be9f13f99564dcc29dc7e932a9b359ffe1a1aa1ce"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.275143 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sn2nh" event={"ID":"f275e79f-923c-4d3a-8ed4-084a122ddcf4","Type":"ContainerStarted","Data":"174ef56d9a0731e870098210dbe7db94e7668c7396c38469aae8bfc88af93da5"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.275161 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sn2nh" 
event={"ID":"f275e79f-923c-4d3a-8ed4-084a122ddcf4","Type":"ContainerDied","Data":"8e09cadaa280b2142d1e553cf5915c3779b8daaeed82dcb8adbf18accee60298"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.275178 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sn2nh" event={"ID":"f275e79f-923c-4d3a-8ed4-084a122ddcf4","Type":"ContainerDied","Data":"a976e4b82843842a71c3126eb2ebdd642e517cc73242b40b185d375d47043cde"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.275195 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sn2nh" event={"ID":"f275e79f-923c-4d3a-8ed4-084a122ddcf4","Type":"ContainerStarted","Data":"8e70ffdd495dcdb270b1f5bf74d98194840c0bb5429461a2cbed334f4538aeec"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.275218 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" event={"ID":"2506c282-0b37-4ece-8a0c-885d0b7f7901","Type":"ContainerStarted","Data":"9f90d50c443b02c7e534aaa4189343a67e0f379619e2d5c07740a2f0b49e9999"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.275235 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" event={"ID":"2506c282-0b37-4ece-8a0c-885d0b7f7901","Type":"ContainerDied","Data":"c78e5502c7df20a63c6e359691ad6478f7f26c7822d2c31d3780654e26b107fb"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.275251 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" event={"ID":"2506c282-0b37-4ece-8a0c-885d0b7f7901","Type":"ContainerStarted","Data":"0c4934055dbc002aad718ae831c2d636c9e3bd49545da85cae7eace9dea452ac"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 
21:23:11.275266 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" event={"ID":"0b02b740-5698-4e9a-90fe-2873bd0b0958","Type":"ContainerStarted","Data":"9a4fbebd80c93d723f4b6793cf7b0ccb622b9b9b4616c52e1479c9e9afb211d0"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.275282 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" event={"ID":"0b02b740-5698-4e9a-90fe-2873bd0b0958","Type":"ContainerDied","Data":"71d2f873a3383c5d4e4ea361c9b4723201e4600cb1f7ea3ef5cecd7778b39d86"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.275299 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" event={"ID":"0b02b740-5698-4e9a-90fe-2873bd0b0958","Type":"ContainerStarted","Data":"6d07de2e0be321a3aec4da12f4f04e483d7ebf0407264e8a59f6674bcacef82d"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.275315 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" event={"ID":"319dc882-e1f5-40f9-99f4-2bae028337e5","Type":"ContainerStarted","Data":"b014fde00d656d88f73bc5afec71e6ac7dc4f1b7fdabe71571471749b0f80f22"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.275339 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" event={"ID":"319dc882-e1f5-40f9-99f4-2bae028337e5","Type":"ContainerDied","Data":"203b091a662b4912838a798e07794a8caa755508028a6b4fa5f1ef8b83de89af"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.275356 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" 
event={"ID":"319dc882-e1f5-40f9-99f4-2bae028337e5","Type":"ContainerStarted","Data":"a5c8e6b51575e43d26e0817313f1ec460f29cff6ceb6629a7a5e2f186f585513"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.275373 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw" event={"ID":"6b6be6de-6fcc-4f57-b163-fe8f970a01a4","Type":"ContainerStarted","Data":"c2a09a3b4592efd5c3950579bb4aaa5d970beb72eb354639340f9f2327450863"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.275391 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw" event={"ID":"6b6be6de-6fcc-4f57-b163-fe8f970a01a4","Type":"ContainerDied","Data":"d0e5f8a907c4851af3bce655e141083b0f633fdfa41c5abacbb48a7df33f9e94"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.275408 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw" event={"ID":"6b6be6de-6fcc-4f57-b163-fe8f970a01a4","Type":"ContainerStarted","Data":"75ca3e4fc5da353a0ea31c674632f3429b17eb41f067d771200d9b0aea75af5d"} Feb 16 21:23:11.275344 master-0 kubenswrapper[38936]: I0216 21:23:11.275424 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"3ebe9b7d8ce03b2c6ab5c8d3215470f47595c89ae74952d5865ce15e1874a8ee"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275447 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"d608a5d9652a3c6ba32e1dcd56710fee04c37ee22144db45ecd5fe5c524c9a31"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 
21:23:11.275463 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"6435ebb5f02081a9ce4ce936a293eb7bb3bd2de40c50e78a8a1e337141307f75"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275479 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerDied","Data":"432794b20c117ef5563701790110e26447eca7921c053c44497fb8bd396c6901"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275497 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"401dbdafe44d87ba9ccf2adf090a2c537b4f84058eb049f0f6795c6752a1a8d0"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275514 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" event={"ID":"b28234d1-1d9a-4d9f-9ad1-e3c682bed492","Type":"ContainerStarted","Data":"ad82b639a997ed0e5d8b2861e9f7c244d5b1a24c830d1de71432866846084c10"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275530 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" event={"ID":"b28234d1-1d9a-4d9f-9ad1-e3c682bed492","Type":"ContainerDied","Data":"4255d701755ee16eefc4f64ff2a1d87789d35c023038a0daf9f7cd0b69fb26a7"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275553 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" 
event={"ID":"b28234d1-1d9a-4d9f-9ad1-e3c682bed492","Type":"ContainerStarted","Data":"07e2ee4df3da5cd46dd10fb4afd51a212c46737743b9be4c1d162a76d568a6fd"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275569 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965","Type":"ContainerDied","Data":"5f4f1f7bf4711de84107b1c6040a91b2b71847aa5f151a70149a5a43fdbb16fc"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275587 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"9ecf0a0a-f55d-47f0-9fcd-6a53edf2e965","Type":"ContainerDied","Data":"b0c2e1a17593c2d9cad62fca4b76d1bcb53b42211c4063cb3d0e8c42005672a2"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275602 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0c2e1a17593c2d9cad62fca4b76d1bcb53b42211c4063cb3d0e8c42005672a2" Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275624 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76dbaddee4470107b39590128f61476392182af8f7359d5ef8d2efc6c99ae59e" Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275637 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"5b26dae9694224e04f0cdc3841408c63","Type":"ContainerStarted","Data":"1a635028f55042697d014855fe31fff8d153cd9f1c72d44b806de44a3d1bef89"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275674 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"5b26dae9694224e04f0cdc3841408c63","Type":"ContainerStarted","Data":"6484af368276a809cf9fc113e39e94b58a7e749f404b7ad55bc0ffd6db6821c5"} Feb 16 
21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275702 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" event={"ID":"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5","Type":"ContainerStarted","Data":"3f86128dc7a80bf0962766ba7f7979e170ef26e4e83c8289ef27c44072e56335"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275722 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" event={"ID":"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5","Type":"ContainerStarted","Data":"95bb21eb958017bb1c79698309b67c3682dcd7011e9d5aacdb4e7366e93203b8"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275737 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl" event={"ID":"302156cc-9dca-4a66-9e6a-ba2c7e738c92","Type":"ContainerStarted","Data":"f78d754f1df309b0cad8a0e20f5eb08891911c8e6d19e1d3fa298a8f6933a83c"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275777 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl" event={"ID":"302156cc-9dca-4a66-9e6a-ba2c7e738c92","Type":"ContainerDied","Data":"cf5bd07d44ef1049857af620840ed7780e94db377ae50a689034fcd0589dd325"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275796 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl" event={"ID":"302156cc-9dca-4a66-9e6a-ba2c7e738c92","Type":"ContainerStarted","Data":"d2b7935cea946c9f051bb808d0bcec166c533127cc006510308f2ece80cabd7f"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275813 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" 
event={"ID":"b1ac9776-54c4-46ce-b898-01c8cf35e593","Type":"ContainerStarted","Data":"dbf27d3e8d5c7c62e35cbb6423a4806befc25edd2d78c5f0092f98b1bff2b619"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275831 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" event={"ID":"b1ac9776-54c4-46ce-b898-01c8cf35e593","Type":"ContainerDied","Data":"473abb156ae2a59c96465c39d4a668c4215a0ddadc4067a2a5c3edc0e671f3a6"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275856 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" event={"ID":"b1ac9776-54c4-46ce-b898-01c8cf35e593","Type":"ContainerStarted","Data":"d3647391d6c6aea748cff19ab3829b4c4308cc4ee2ef9a5eb37149acfef03e2f"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275876 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-65zz6" event={"ID":"b27e0202-8bdb-4a36-8c3e-0c203f7665b8","Type":"ContainerStarted","Data":"911511d61b149b2a70f165a79454e8a52d97f53e4b9bed2f57b34efa4fd727a0"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275893 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-65zz6" event={"ID":"b27e0202-8bdb-4a36-8c3e-0c203f7665b8","Type":"ContainerStarted","Data":"c8c3670530b0c671383aade45325850e12f9fcf9f76178c2929f043d5a9b72a3"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275909 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-42bw7" event={"ID":"1d453639-52ed-4a14-a2ee-02cf9acc2f7c","Type":"ContainerStarted","Data":"8bdc75ad4a8097f8c772c54e1b21d47936cb39929b68bda0391b951d52990de1"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275926 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-42bw7" event={"ID":"1d453639-52ed-4a14-a2ee-02cf9acc2f7c","Type":"ContainerStarted","Data":"b92936634fddc60909dc2fadd6f7f16c08dc7c7fd8fa03f673db3212a3c8c3fa"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275942 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-42bw7" event={"ID":"1d453639-52ed-4a14-a2ee-02cf9acc2f7c","Type":"ContainerStarted","Data":"0048dbcae18fdbd149a49da2679d70bbb9de5e907689064aaea0ab32348a1024"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275964 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" event={"ID":"1b61063e-775e-421d-bf73-a6ef134293a0","Type":"ContainerStarted","Data":"8fc2cca192f72b63cdb1729b01edf727b51348c41be1bedd0f2a185d025ba61f"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275982 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" event={"ID":"1b61063e-775e-421d-bf73-a6ef134293a0","Type":"ContainerDied","Data":"aab44606d671f216ff3793ef915c84f815301082904e4bc4a12b70d23d7c13c3"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.276000 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" event={"ID":"1b61063e-775e-421d-bf73-a6ef134293a0","Type":"ContainerStarted","Data":"957c111d10e2d292281a50f8cc278f441c1f3165b491de07cd91b63ab4d96530"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.276071 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-ctvb2" event={"ID":"7d6eb694-9a3d-49d1-bbc1-74ba4450d673","Type":"ContainerStarted","Data":"3fa85c5bdf337a4669f23966505c1f564020ce2b287a6714bc11d7cbcb4be1af"} Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.276088 38936 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-ctvb2" event={"ID":"7d6eb694-9a3d-49d1-bbc1-74ba4450d673","Type":"ContainerStarted","Data":"6f6509f6290e5127bfe082132c0bf6a45571e4de7a324345b01c47d3586455c4"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.276105 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-ctvb2" event={"ID":"7d6eb694-9a3d-49d1-bbc1-74ba4450d673","Type":"ContainerDied","Data":"35aeddbd3b02ea16608fbe6dfea1fa7dc35fe8b876f2fa1fba3cfd614e5815c0"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.276179 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-ctvb2" event={"ID":"7d6eb694-9a3d-49d1-bbc1-74ba4450d673","Type":"ContainerStarted","Data":"aed3d22aa5c102de3c056d7b1148ad38dc8f06e42bff2232e153f1a44338819c"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.273228 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.276301 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-6llwf" event={"ID":"700bc24c-4b00-44f0-90b0-aa555fe5c7a8","Type":"ContainerDied","Data":"fa302e5e493b2dfa58bae20f0ca7e4cc187d6d95bf769b99faf796dd889e114f"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.276337 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-6llwf" event={"ID":"700bc24c-4b00-44f0-90b0-aa555fe5c7a8","Type":"ContainerDied","Data":"2e5b179a0033062cd2b178034bcb5784ab1edcaef771f5cac5fd7b9ba67359d1"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.276360 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e5b179a0033062cd2b178034bcb5784ab1edcaef771f5cac5fd7b9ba67359d1"
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.276383 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" event={"ID":"ff193060-a272-4e4e-990a-83ac410f523d","Type":"ContainerStarted","Data":"ef43fbfc945aa678d642581bba1ac8119a0675069fc72b0537960c8e21934061"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.276407 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" event={"ID":"ff193060-a272-4e4e-990a-83ac410f523d","Type":"ContainerStarted","Data":"9bb864e89f3ac9ffa49c4c67ddca01cba021221f4cf7bc201c305a5969704be4"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.273281 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.276473 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" event={"ID":"ff193060-a272-4e4e-990a-83ac410f523d","Type":"ContainerDied","Data":"f5d1b2f95d0f407ab1fdd5eb9fe9deae1b8e8d536d017cfe9a03861815d4f96a"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.276516 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-749f8d8bbd-z9ndp"
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.273509 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.275709 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.276709 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" event={"ID":"ff193060-a272-4e4e-990a-83ac410f523d","Type":"ContainerStarted","Data":"3edd59cb6b6314e671425a245027b79b2d561376466e447c62b29ac14f08bcff"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.276750 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" event={"ID":"8b648d9e-a892-4951-b0e2-fed6b16273d4","Type":"ContainerStarted","Data":"127d340a22fe8099cebc2264bacf3eeab221a7653bb8d4c8d30630cf81318a3f"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.276765 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" event={"ID":"8b648d9e-a892-4951-b0e2-fed6b16273d4","Type":"ContainerDied","Data":"6a46714853e2a885d7f0ea06667526f3f7b240b0bd635da8d5cae43fd1dadc87"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.276776 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" event={"ID":"8b648d9e-a892-4951-b0e2-fed6b16273d4","Type":"ContainerStarted","Data":"9c3555dd069a7df80fae789b4b23ce84596b7c133210eeb7b11b618ce5d733b4"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.276789 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" event={"ID":"8b648d9e-a892-4951-b0e2-fed6b16273d4","Type":"ContainerStarted","Data":"74e6be5033443384ea4bd5754c8e506826ab77e1e025ae4e7b5a3735350d70f2"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.276892 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" event={"ID":"5e062e07-8076-444c-b476-4eb2848e9613","Type":"ContainerStarted","Data":"75c97c8fc1fe4bc7ed998eb0ff8eb423dc36feffc10982a1abea2a451f308726"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.276910 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" event={"ID":"5e062e07-8076-444c-b476-4eb2848e9613","Type":"ContainerDied","Data":"b805375f7b42f31b0863c18246ff6bd98c4c77aa1ad1eb2b469a42772d48301d"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.276920 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" event={"ID":"5e062e07-8076-444c-b476-4eb2848e9613","Type":"ContainerDied","Data":"9949cb3f0ffb40ac03674e827a655fd8962fd631e7432c2ead34043e0e4d8864"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.279357 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" event={"ID":"5e062e07-8076-444c-b476-4eb2848e9613","Type":"ContainerDied","Data":"30eb3e8a1a561e4df2b728e0e98a6145e2dd7a64784f0071e688e9e9f5cc6bbc"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.279612 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" event={"ID":"5e062e07-8076-444c-b476-4eb2848e9613","Type":"ContainerStarted","Data":"1e734464d78209c21a7a9eb2f6d22c8584997def010318f287f0cb7c28b7390b"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.279624 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d" event={"ID":"4cc1da27-6eaf-4177-b2d8-1546a9d94f90","Type":"ContainerDied","Data":"b5c9ef27352d95c27da1fd4de0d350f8371e4f69cc5b84960004238d748e1ab6"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.279635 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d" event={"ID":"4cc1da27-6eaf-4177-b2d8-1546a9d94f90","Type":"ContainerDied","Data":"8e8b059d73a2c8e5ffd1f224f2251f2554ce00c13864a77bc4bd0d65d3713e02"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.279665 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e8b059d73a2c8e5ffd1f224f2251f2554ce00c13864a77bc4bd0d65d3713e02"
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.279676 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" event={"ID":"59237aa6-6250-4619-8ee5-abae59f04b57","Type":"ContainerStarted","Data":"302bf12f6109c01eb273603d5fb2413e60f821dd662712bbc7e00c4eafc2b54f"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280036 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" event={"ID":"59237aa6-6250-4619-8ee5-abae59f04b57","Type":"ContainerDied","Data":"0715c2c6bc16d3adc1361563ad51b4de11f77937d1f51eb61f3cd34b96856d0c"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280176 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" event={"ID":"59237aa6-6250-4619-8ee5-abae59f04b57","Type":"ContainerDied","Data":"61defc533791601dd8ff505e57b675aac367c1fe0144fefa77509ab84c3b3331"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280238 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" event={"ID":"59237aa6-6250-4619-8ee5-abae59f04b57","Type":"ContainerStarted","Data":"4f2c49b4aa155e075775a0da6ce790eafb2a3d3e88c9dbca188493bbec98d810"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280261 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" event={"ID":"a4c9b781-14c0-469c-bb9e-0c3982a04520","Type":"ContainerStarted","Data":"6040ea1798d8d929c837e96747106c868fc9107367ded8384ee5318d0125dfe3"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280281 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" event={"ID":"a4c9b781-14c0-469c-bb9e-0c3982a04520","Type":"ContainerStarted","Data":"27e39bf106b6e002c0125d685214889286fc25d34ba09141b24632bec0751f4d"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280300 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"b09d3c16-18e3-45b3-9d39-949d2464b300","Type":"ContainerDied","Data":"ab3f1bdaa87534b4aa1ea4a058dea3457c695cfe1da23ed41ae2ee089315bd08"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280319 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"b09d3c16-18e3-45b3-9d39-949d2464b300","Type":"ContainerDied","Data":"a1a7ba08e2cc5089762afc7ce295fbadf271a58f2006a34cf3be8f3b16ca4e70"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280346 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1a7ba08e2cc5089762afc7ce295fbadf271a58f2006a34cf3be8f3b16ca4e70"
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280364 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" event={"ID":"bd49e653-3b42-4950-8f5f-2b2ecb683678","Type":"ContainerStarted","Data":"f3fdacbd5a024a974deabef99786f889a735274aa45efb3c455cc2939dd440eb"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280379 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" event={"ID":"bd49e653-3b42-4950-8f5f-2b2ecb683678","Type":"ContainerDied","Data":"68de2e1ab2cad0885d92d9f27ce9e9ae8699ab2a4e1f40736fffa8de720860f7"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280395 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" event={"ID":"bd49e653-3b42-4950-8f5f-2b2ecb683678","Type":"ContainerStarted","Data":"02b45fb8e619cea5ccaf6f782fba75e7a7903a3e4348fde89b8d1bc48406b6c9"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280419 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" event={"ID":"1d7d0416-5f50-42bd-826b-92eecf9adcec","Type":"ContainerStarted","Data":"62b487940e9059c7edfccc46f4b46f6733b0bfea4f437b53500d0c8a0ca74fd9"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280442 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" event={"ID":"1d7d0416-5f50-42bd-826b-92eecf9adcec","Type":"ContainerDied","Data":"2805492f11ff17f7e51a6fba30471dee89ec93e40bd6ce6db4b158be70c75964"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280463 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" event={"ID":"1d7d0416-5f50-42bd-826b-92eecf9adcec","Type":"ContainerStarted","Data":"292b6b8cf180e68ad44412d08d309be8106bcaf05b10681c44231906c9b5f8fa"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280482 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" event={"ID":"1d7d0416-5f50-42bd-826b-92eecf9adcec","Type":"ContainerStarted","Data":"1ff8802ad134d499fee700156b80ec71b617c31ecfda4162eeae2f5521b198f8"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280496 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" event={"ID":"1489d1b6-d8a1-453a-bff3-8adfd4335903","Type":"ContainerStarted","Data":"25ee620a91a11cdfcf10f317458e9833777a7250c9af0cd0962ed366c5d07a92"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280511 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" event={"ID":"1489d1b6-d8a1-453a-bff3-8adfd4335903","Type":"ContainerStarted","Data":"d3122711a170f449cbae155070984deb894c3febeb5926b33f03b31158614e34"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280527 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" event={"ID":"4a9f4f96-ca31-4959-93fe-c094caf8e077","Type":"ContainerStarted","Data":"717811e555354f498448a1f9bf3201dfc3fcf0b7778c716a1769b62e1e6022c7"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280540 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" event={"ID":"4a9f4f96-ca31-4959-93fe-c094caf8e077","Type":"ContainerStarted","Data":"b4ab6f7d6521695677ac09385923bea0cfde2c320361c5f6cbe98ce64b7475b2"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280553 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd" event={"ID":"484154d0-66c8-4d0e-bf1b-f48d0abfe628","Type":"ContainerStarted","Data":"51a19c0d4f3c8ae263edbdd5efb421daa153d0d3395961b41e2e334207be4195"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280574 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd" event={"ID":"484154d0-66c8-4d0e-bf1b-f48d0abfe628","Type":"ContainerDied","Data":"784108aeefea86df821b8787cc4aa96e0a0d0b443e8ed52de36e36ad7f22bb5e"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280589 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd" event={"ID":"484154d0-66c8-4d0e-bf1b-f48d0abfe628","Type":"ContainerStarted","Data":"886e279fd9c1934388e680cd4a0350ba2f292d514ac9e97bbae0f912d11a2b10"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280601 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd" event={"ID":"484154d0-66c8-4d0e-bf1b-f48d0abfe628","Type":"ContainerStarted","Data":"ebc8d1a24100c636c9029b0eba8d5b6521b906cdbb84675057a80b42a0273bbc"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280614 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"9ea7eb4c5b7177a7e2ac3c5dca26fbf5f811d30a8d29e8b826572146fe10d264"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280628 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"34cedb032f29de87a57c244cfdac89c6368a83bd489ea19dfd7e57624682d8a7"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280644 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"1c9bfe3aaee57fe250198f3484327052043637146bacc2e7c8dfb22afd3d4c6c"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280720 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"f04bc2a9a7b0a2ad7783338e4d002aabfd3d03dc3ab93d584acf59a1f159b65a"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280740 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-l44qd" event={"ID":"0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b","Type":"ContainerStarted","Data":"8d552db0837fc540893f8ec713b54b574ad04cadc36ab9823266c8e56b9e7a86"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280754 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-l44qd" event={"ID":"0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b","Type":"ContainerStarted","Data":"9b7b734a04c19ca82d24b6113d7260320b0a9c95bbc6375cd7e4100f7054eb3f"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280767 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl" event={"ID":"55095f4f-cac0-456c-9ccc-45869392408c","Type":"ContainerStarted","Data":"d226aa39dd648190d8ac3bff9e2c7d5ebce52835f391db09e2359a199061478a"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.280867 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl" event={"ID":"55095f4f-cac0-456c-9ccc-45869392408c","Type":"ContainerStarted","Data":"bed0c408affb572fccef4fee0aeb682072b214b567b0eac51edbbb5af21c22d5"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.281877 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl" event={"ID":"55095f4f-cac0-456c-9ccc-45869392408c","Type":"ContainerStarted","Data":"846c42631e11b31d77d6f927ca22e80b7cd7d920231f1d2b9f1cfa12101d157e"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.281927 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-shtrw" event={"ID":"8d56b871-a53a-4928-8967-a33ea9dcec2a","Type":"ContainerStarted","Data":"7d4587438925e95ef133aa70ffd5cc5c95285a91547249dafb4e5e010a318487"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.281944 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-shtrw" event={"ID":"8d56b871-a53a-4928-8967-a33ea9dcec2a","Type":"ContainerStarted","Data":"095da5d3f3a8d574558c5e1ced05aba1aaa62dc2ea675395d13a40ca2c30a60c"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.281956 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-shtrw" event={"ID":"8d56b871-a53a-4928-8967-a33ea9dcec2a","Type":"ContainerStarted","Data":"017b12ba663cae17ffc7b3e8cac380511c7277e4c495d7f5a091fa50febd2724"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.281970 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" event={"ID":"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b","Type":"ContainerStarted","Data":"cfd3ff2ce35aabfb3b796de6fbfb52e6ac44fbba7a139e8b846a35594c70ba5c"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.281986 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" event={"ID":"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b","Type":"ContainerDied","Data":"073bfd97b3802cf7e422558b7f0d96ac1c7a887d6a785fb5000fa99850a0b06e"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.281999 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" event={"ID":"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b","Type":"ContainerStarted","Data":"3ca24ad1f8d41b0227373cdca70f4d0ead865f343ffe91de92638dd9fb5c6f20"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282013 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" event={"ID":"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b","Type":"ContainerStarted","Data":"33442d22098554ef2512c5bbab1d4a284aed4856345ee1eb8654ba065012ab94"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282025 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blw8x" event={"ID":"853452fb-1035-4f22-8aeb-9043d150e8ca","Type":"ContainerStarted","Data":"a8ce4d1d9c38bbcf9596ec468f2d5d035d849fec8079c99788efd1e0bbd3eacd"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282039 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blw8x" event={"ID":"853452fb-1035-4f22-8aeb-9043d150e8ca","Type":"ContainerDied","Data":"ffbe844a2ffc7eee14e6cfe4f85b6f3a2d4632e0cd257a400a32c1667a3dc025"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282053 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blw8x" event={"ID":"853452fb-1035-4f22-8aeb-9043d150e8ca","Type":"ContainerDied","Data":"8b2a92ef4f9f721811b4bae1b0d025f01e55ec1f259a078142245e8b2ab55dd5"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282067 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blw8x" event={"ID":"853452fb-1035-4f22-8aeb-9043d150e8ca","Type":"ContainerStarted","Data":"89fb595810896fd574764c1b2babfd4babc84a77caf787d5018047df10f3ac86"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282081 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" event={"ID":"d2501eec-47c8-47bc-b0c9-28d94c06075b","Type":"ContainerStarted","Data":"0856f0c22b60435b85fb84c2015179efe4d4434ed38a1e900790fcd7531c6189"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282094 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" event={"ID":"d2501eec-47c8-47bc-b0c9-28d94c06075b","Type":"ContainerStarted","Data":"4e6698adeb0259c7abcd8ca7be9fcd53fc2f448ac8a7d94023fba495185a15f8"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282101 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282106 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" event={"ID":"d2501eec-47c8-47bc-b0c9-28d94c06075b","Type":"ContainerDied","Data":"fac6599aca0de28d90bc133433b080122ce047275bd07a83287cf6be8f57463e"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282217 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" event={"ID":"d2501eec-47c8-47bc-b0c9-28d94c06075b","Type":"ContainerStarted","Data":"db0925be9adc52361772ef921815ff9b0ca5417617347a7d9e8f0049e699014a"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282232 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" event={"ID":"408a9364-3730-4017-b1e4-c85d6a504168","Type":"ContainerStarted","Data":"998c9ae589b8ae43e110fa0bf1929dd53f4179a605ee219bd9e74970ce1b2465"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282245 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" event={"ID":"408a9364-3730-4017-b1e4-c85d6a504168","Type":"ContainerDied","Data":"ec8ce2b77f9d3d1712f1d9e5d59ca2196200eb54635d01b0d1caf94494809751"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282258 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" event={"ID":"408a9364-3730-4017-b1e4-c85d6a504168","Type":"ContainerStarted","Data":"f6ba9fbde2ec0f2099ab53176d9410c4bf53a78507ca46eeb7e91c2f36c118ed"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282271 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" event={"ID":"b27de289-c0f9-47ff-aac6-15b7bc1b178a","Type":"ContainerDied","Data":"7e2db6d71a3ac7629c39a027759be84deb42e9801284908e0ecc941bc1381254"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282291 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" event={"ID":"b27de289-c0f9-47ff-aac6-15b7bc1b178a","Type":"ContainerStarted","Data":"b6f9bd149e55332060a93dd1c773c869219679c9d52274540dd91f495e731934"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282303 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" event={"ID":"b27de289-c0f9-47ff-aac6-15b7bc1b178a","Type":"ContainerStarted","Data":"7836160a631ad4fabd13fade7e117d0a195ed40a8c1f33bde283fef44ab0f21f"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282314 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-hsz6m" event={"ID":"da07cd48-b1e8-4ccc-b980-84702cedb042","Type":"ContainerStarted","Data":"3f85217164f33ae361d727e56edd219159b638f9f5baaf529b0f66b008d3e74b"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282328 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-hsz6m" event={"ID":"da07cd48-b1e8-4ccc-b980-84702cedb042","Type":"ContainerStarted","Data":"1befa239880012918c5014596ebf2ea1e19a17105f1c62212a86bd3326b1986f"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282341 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" event={"ID":"e9bd1f48-6d45-4045-b18e-46ce3005d51d","Type":"ContainerStarted","Data":"14a257c4d30feb322bf947d285b2761bc04202993600aeef5d6a83b601417e29"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282354 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" event={"ID":"e9bd1f48-6d45-4045-b18e-46ce3005d51d","Type":"ContainerStarted","Data":"418ed93e2d97b302c27aa5bd16b20d2ee3b92954aa28e01a918f46e4ccd79241"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282368 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" event={"ID":"e9bd1f48-6d45-4045-b18e-46ce3005d51d","Type":"ContainerStarted","Data":"ae4b728d26d2235e9c2481e97c712ffb552d7c0d29beb5a7141bb97993e8cb8c"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282379 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" event={"ID":"e9bd1f48-6d45-4045-b18e-46ce3005d51d","Type":"ContainerStarted","Data":"cb99eaa7ceffb734068bb188738c361f8400867f02f0acef09f3dcc317540b0e"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282392 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" event={"ID":"a5d4ac48-aed3-46b9-9b2a-d741121e05b4","Type":"ContainerStarted","Data":"6343280e0df4085e2272811bcce84fa21c423071562a8310728970f3dd76b136"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282405 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" event={"ID":"a5d4ac48-aed3-46b9-9b2a-d741121e05b4","Type":"ContainerDied","Data":"22be26c79a1d2adc3db5f6e113ba92cfcf47f9a286ce35fb6273d18f0ea1545e"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282418 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" event={"ID":"a5d4ac48-aed3-46b9-9b2a-d741121e05b4","Type":"ContainerStarted","Data":"b1c5e0970049830739dbde889218d9f83f1d9720ddba4de32c1b5bd6626ed51d"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282433 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" event={"ID":"99ab949e-bd0d-45a7-95d1-8381d9f1f5f3","Type":"ContainerStarted","Data":"5d2c22738802536774d55c1e4c6c8ed59ce5c575ebb78dadfcb7c71eb7f34d22"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282446 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" event={"ID":"99ab949e-bd0d-45a7-95d1-8381d9f1f5f3","Type":"ContainerDied","Data":"11a0f236b15a97d8bb8db30a3ecfba40559eb738b2fbad78fcc9824a0ec8620e"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282461 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" event={"ID":"99ab949e-bd0d-45a7-95d1-8381d9f1f5f3","Type":"ContainerStarted","Data":"c4765e33cdc956d84e8349da9b28a001d07fad6c39b6a113416bb9d1d1ae88dd"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282474 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" event={"ID":"e8194cdc-3133-49e2-9579-a747c0bf2b16","Type":"ContainerStarted","Data":"e04872ea2c764c93d171f84352e60786a5be1d211e2a3194644c313a82c96c0c"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282488 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" event={"ID":"e8194cdc-3133-49e2-9579-a747c0bf2b16","Type":"ContainerDied","Data":"4f5444c17822db01691b9d03f3dd6a819e814eea7a63f23ec45ece42ea5fba62"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282503 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" event={"ID":"e8194cdc-3133-49e2-9579-a747c0bf2b16","Type":"ContainerStarted","Data":"22968a9882928f70bec5424cc2346763d1decd6df62181dc2fb45946d7faa2c0"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282516 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" event={"ID":"e8194cdc-3133-49e2-9579-a747c0bf2b16","Type":"ContainerStarted","Data":"c6c5fc997a3d90f0f136390ca95bcbc1e110994ac3cdfcc2e3e8e90f78ca1dd9"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282529 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5kwc" event={"ID":"ce229d27-837d-4a98-80fc-d56877ae39b8","Type":"ContainerStarted","Data":"c1f8fde78fdcda9989a4c5f1c082c78ebc7c4aa51b02befd18293e11e9bd341a"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282542 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5kwc" event={"ID":"ce229d27-837d-4a98-80fc-d56877ae39b8","Type":"ContainerDied","Data":"88247333b19116719c02e3337d53469a84d7c4cf04c7843a9226ea683ea58eef"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282556 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5kwc" event={"ID":"ce229d27-837d-4a98-80fc-d56877ae39b8","Type":"ContainerDied","Data":"4417baf2be8cb2785a3116c10e495e124305a7b9a9021ca81984fe0912c3ccfa"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282567 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5kwc" event={"ID":"ce229d27-837d-4a98-80fc-d56877ae39b8","Type":"ContainerStarted","Data":"03ed4454e9c6237b864a1dab6c209256c79b0a72cb535e51a70e7b99d3f0689e"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282580 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" event={"ID":"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee","Type":"ContainerStarted","Data":"7eb9d606c0ba4432a3c104c5bb2952f3efa3dee4e29f1c0d81a5b0db607ceac8"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282598 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" event={"ID":"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee","Type":"ContainerDied","Data":"2d8a3bac5bc14187e5d2a390ac77e494ae47030d02fa35967ecd1bb1934d32e8"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282623 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" event={"ID":"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee","Type":"ContainerStarted","Data":"1d4599582332a100db8555ba006867716892ce1ecdd5b2f904cbee81575c2c2d"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282640 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv" event={"ID":"4085413c-9af1-4d2a-ba0f-33b42025cb7f","Type":"ContainerStarted","Data":"a7f8b5655aa5f928db7106989ad4301d85bb293edb63d14ebb1059dcd9ca8910"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282673 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv" event={"ID":"4085413c-9af1-4d2a-ba0f-33b42025cb7f","Type":"ContainerDied","Data":"5bb447e9b562fe2a3fcb45b723cffb38257ea64157f142954fe58414909efdd3"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282687 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv" event={"ID":"4085413c-9af1-4d2a-ba0f-33b42025cb7f","Type":"ContainerStarted","Data":"c073f224d2a8cc60c80044d595d19260d941f19b426f78dc52e84033ff1afedc"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282690 38936 scope.go:117] "RemoveContainer" containerID="2d8a3bac5bc14187e5d2a390ac77e494ae47030d02fa35967ecd1bb1934d32e8"
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282700 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8zsx4" event={"ID":"62935559-041f-4694-9d36-adc809d079b4","Type":"ContainerStarted","Data":"f002cef497bb8bbd088c37fab5b84fc213593b368b6c57fc1b2ebfc210f79c29"}
Feb 16 21:23:11.282679 master-0 kubenswrapper[38936]: I0216 21:23:11.282821 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8zsx4" event={"ID":"62935559-041f-4694-9d36-adc809d079b4","Type":"ContainerDied","Data":"0213e2c5badfad1c445275191896cc5e9028427f3090c086deb48f44170a8559"}
Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.282837 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8zsx4" event={"ID":"62935559-041f-4694-9d36-adc809d079b4","Type":"ContainerDied","Data":"c4606e99d38ef423f540d128546208027e050c83b7e8385117d1ac9efe8a49dd"}
Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.283355 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8zsx4" event={"ID":"62935559-041f-4694-9d36-adc809d079b4","Type":"ContainerDied","Data":"4c7a7e08f576cfd5e11632a9ba0076da03fa44265bff3bddab5c897154cfdd10"}
Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.283386 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8zsx4" event={"ID":"62935559-041f-4694-9d36-adc809d079b4","Type":"ContainerDied","Data":"181fe628d311f1cd1061bd5a4ed240a9f0bc9297d01fb093f8d0beb40911a4e0"}
Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.283403 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8zsx4" event={"ID":"62935559-041f-4694-9d36-adc809d079b4","Type":"ContainerDied","Data":"764147f0ae46dce8cfdba6d43c9720c0e223cc03d6732303325fb33cc0d7abd0"}
Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.283419 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8zsx4" event={"ID":"62935559-041f-4694-9d36-adc809d079b4","Type":"ContainerDied","Data":"2485cbe452aed6f7043c33dccc17caa48675a3e464f4b79370075f51c4973793"}
Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.283536 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8zsx4" event={"ID":"62935559-041f-4694-9d36-adc809d079b4","Type":"ContainerStarted","Data":"0dfbee9f7528fe042540e180164336ecf2ece621fbebd18d9dde03c5a49a8d3a"}
Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.283591 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"e441fffa8b12dc73c314b5893d29a697010cb53854ce90d32eb7b68a2f5ca29e"}
Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.283614 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"863a190d51b525e4103773cf5a7867cf67cf97e7a4a1ede81363f11e4c1dd6b7"}
Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.283629 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"d8eac33db7a92bab03def14450dd1750a954d1d9b9cc124c7deead003bb6996a"}
Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.283643 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"8fc7fdd0d480b1fd68681ee30d8785c154cbf24f0c4e8319840eb7818ec82950"} Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.283682 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"f05e9f801d429c919b941187b2782d4308239d42ccb37b0311a3c95f1e719297"} Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.283701 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"30c3311ac2594f90ee07f133990bc2e498e9439d4db71f3e17a8742c175c7b4f"} Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.283720 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"3478db789e9371b7e1a20de102750814fbff190dbf9776351e2f462d389fbe58"} Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.283939 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"c4633b0b299cd40e037bf321ae06c8806fedc4001bb393b919fc921dc3fe2902"} Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.283959 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"611833cac10a2c7b92f524745bb3d40c37badfe83dfcc13e97aefe053823dfb9"} Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.283971 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" 
event={"ID":"e9615af2-cad5-4705-9c2f-6f3c97026100","Type":"ContainerStarted","Data":"731bee714e1ed342758024ac0402e898ea440d14a35645160149416223c075e2"} Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.283984 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" event={"ID":"e9615af2-cad5-4705-9c2f-6f3c97026100","Type":"ContainerDied","Data":"43a48a6592fa00c02a3165bc38965569bd23dac45b30b2fdc517303872a72e62"} Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.283999 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" event={"ID":"e9615af2-cad5-4705-9c2f-6f3c97026100","Type":"ContainerStarted","Data":"db8564acd67a0d7a69c00ddf2a89b541dc8e61594341a8f533db80c14da1c414"} Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.284016 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" event={"ID":"065fcd43-1572-4152-b77b-a6b7ab52a081","Type":"ContainerStarted","Data":"6fa5335e554ef3afb4d68268a5f6f2e23524b3ac6a1926bda3c2a121662cce25"} Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.284034 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" event={"ID":"065fcd43-1572-4152-b77b-a6b7ab52a081","Type":"ContainerDied","Data":"577a19cb609733c40b24d16a4cfb15f4698079667a2b3110eeef59cec7643dff"} Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.284053 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" event={"ID":"065fcd43-1572-4152-b77b-a6b7ab52a081","Type":"ContainerStarted","Data":"09791bd713ecaeccf489060fc2fec30269d2977979f66329e6c0231f6abbbe33"} Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.284101 38936 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" event={"ID":"065fcd43-1572-4152-b77b-a6b7ab52a081","Type":"ContainerStarted","Data":"b9312957dc15df5de566304a0d01d6c55a3f6333b95b61734ba1c6f29131877b"} Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.284116 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"e300ec3a145c1339a627607b3c84b99d","Type":"ContainerDied","Data":"43047bae0f2dd351891e082f8932168325d435e7cb25fa3bae528c469bde358f"} Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.284130 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" event={"ID":"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5","Type":"ContainerDied","Data":"3f86128dc7a80bf0962766ba7f7979e170ef26e4e83c8289ef27c44072e56335"} Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.284145 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" event={"ID":"b27de289-c0f9-47ff-aac6-15b7bc1b178a","Type":"ContainerDied","Data":"b6f9bd149e55332060a93dd1c773c869219679c9d52274540dd91f495e731934"} Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.284158 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" event={"ID":"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee","Type":"ContainerDied","Data":"7eb9d606c0ba4432a3c104c5bb2952f3efa3dee4e29f1c0d81a5b0db607ceac8"} Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.287091 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d2501eec-47c8-47bc-b0c9-28d94c06075b-encryption-config\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " 
pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.287137 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.287184 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-systemd\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.287287 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/484154d0-66c8-4d0e-bf1b-f48d0abfe628-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.287312 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-hostroot\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.287351 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/e8194cdc-3133-49e2-9579-a747c0bf2b16-etc-containers\") pod 
\"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.287377 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-config\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.287404 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-os-release\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.287423 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.287538 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/484154d0-66c8-4d0e-bf1b-f48d0abfe628-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.287428 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7adbe32-b8b9-438e-a2e3-f93146a97424-config\") pod \"openshift-kube-scheduler-operator-7485d55966-xzww8\" (UID: \"e7adbe32-b8b9-438e-a2e3-f93146a97424\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.287639 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjsnz\" (UniqueName: \"kubernetes.io/projected/27c20f63-9bfb-4703-94d5-0c65475e08d1-kube-api-access-hjsnz\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.287696 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: \"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.287745 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-socket-dir-parent\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.287765 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmvtk\" (UniqueName: \"kubernetes.io/projected/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-kube-api-access-zmvtk\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.287800 38936 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-sysconfig\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.287822 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e0227bc-63f5-48be-95dc-1323a2b2e327-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.287854 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7adbe32-b8b9-438e-a2e3-f93146a97424-config\") pod \"openshift-kube-scheduler-operator-7485d55966-xzww8\" (UID: \"e7adbe32-b8b9-438e-a2e3-f93146a97424\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.287876 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.287921 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-cni-netd\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" 
Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.287976 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-config\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.288055 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf4qg\" (UniqueName: \"kubernetes.io/projected/bd49e653-3b42-4950-8f5f-2b2ecb683678-kube-api-access-kf4qg\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.288355 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/695549c8-d1fc-429d-9c9f-0a5915dc6074-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-k42w9\" (UID: \"695549c8-d1fc-429d-9c9f-0a5915dc6074\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.288441 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.288528 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-profile-collector-cert\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.288610 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.288699 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.288775 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bd49e653-3b42-4950-8f5f-2b2ecb683678-etcd-client\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.288805 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-service-ca\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 21:23:11.292863 master-0 
kubenswrapper[38936]: I0216 21:23:11.288810 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-profile-collector-cert\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.288861 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-image-import-ca\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.288889 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqm46\" (UniqueName: \"kubernetes.io/projected/69785167-b4ae-415b-bdcb-029f62effe78-kube-api-access-dqm46\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.288939 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bd49e653-3b42-4950-8f5f-2b2ecb683678-etcd-serving-ca\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.288970 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: 
\"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289056 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ab0a907-7abe-4808-ba21-bdda1506eae2-serving-cert\") pod \"service-ca-operator-5dc4688546-q5vjl\" (UID: \"2ab0a907-7abe-4808-ba21-bdda1506eae2\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289059 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/695549c8-d1fc-429d-9c9f-0a5915dc6074-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-k42w9\" (UID: \"695549c8-d1fc-429d-9c9f-0a5915dc6074\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289129 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/69785167-b4ae-415b-bdcb-029f62effe78-ovn-node-metrics-cert\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289159 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289208 38936 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a4c9b781-14c0-469c-bb9e-0c3982a04520-srv-cert\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289215 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289248 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-whereabouts-configmap\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289391 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-z46jt\" (UID: \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289415 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-var-lib-kubelet\") pod \"tuned-llsw4\" (UID: 
\"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289411 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ab0a907-7abe-4808-ba21-bdda1506eae2-serving-cert\") pod \"service-ca-operator-5dc4688546-q5vjl\" (UID: \"2ab0a907-7abe-4808-ba21-bdda1506eae2\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289435 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bd49e653-3b42-4950-8f5f-2b2ecb683678-encryption-config\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289454 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/302156cc-9dca-4a66-9e6a-ba2c7e738c92-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-8pqbl\" (UID: \"302156cc-9dca-4a66-9e6a-ba2c7e738c92\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289476 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-systemd-units\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289493 38936 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dw9lp\" (UniqueName: \"kubernetes.io/projected/4085413c-9af1-4d2a-ba0f-33b42025cb7f-kube-api-access-dw9lp\") pod \"csi-snapshot-controller-operator-7b87b97578-v7xdv\" (UID: \"4085413c-9af1-4d2a-ba0f-33b42025cb7f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289512 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxcg6\" (UniqueName: \"kubernetes.io/projected/302156cc-9dca-4a66-9e6a-ba2c7e738c92-kube-api-access-zxcg6\") pod \"control-plane-machine-set-operator-d8bf84b88-8pqbl\" (UID: \"302156cc-9dca-4a66-9e6a-ba2c7e738c92\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289530 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-run\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289548 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-tuned\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289563 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: 
\"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289581 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7adbe32-b8b9-438e-a2e3-f93146a97424-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-xzww8\" (UID: \"e7adbe32-b8b9-438e-a2e3-f93146a97424\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289635 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-cni-bin\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289510 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-whereabouts-configmap\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289694 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289704 38936 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec7dd4ea-a139-45d4-96a4-506da1567292-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289801 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/99ab949e-bd0d-45a7-95d1-8381d9f1f5f3-signing-cabundle\") pod \"service-ca-676cd8b9b5-cbj2r\" (UID: \"99ab949e-bd0d-45a7-95d1-8381d9f1f5f3\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289825 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd49e653-3b42-4950-8f5f-2b2ecb683678-serving-cert\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289844 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289875 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-profile-collector-cert\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289898 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-tuned\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289924 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-ovn\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.289982 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-node-log\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290030 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll4rg\" (UniqueName: \"kubernetes.io/projected/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-kube-api-access-ll4rg\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290111 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod 
\"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290145 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-7p9ft\" (UID: \"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290155 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-profile-collector-cert\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290177 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-run-netns\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290207 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-run-ovn-kubernetes\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290241 38936 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-cni-binary-copy\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290273 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bcmr\" (UniqueName: \"kubernetes.io/projected/695549c8-d1fc-429d-9c9f-0a5915dc6074-kube-api-access-7bcmr\") pod \"openshift-controller-manager-operator-5f5f84757d-k42w9\" (UID: \"695549c8-d1fc-429d-9c9f-0a5915dc6074\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290312 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7wrr\" (UniqueName: \"kubernetes.io/projected/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-kube-api-access-p7wrr\") pod \"dns-operator-86b8869b79-cdltb\" (UID: \"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd\") " pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290358 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-trusted-ca-bundle\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290391 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-7p9ft\" (UID: 
\"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290391 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x85fb\" (UniqueName: \"kubernetes.io/projected/88f19cea-60ed-4977-a906-75deec51fc3d-kube-api-access-x85fb\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290436 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d9d71a7a-a751-4de4-9c76-9bac85fe0177-host-slash\") pod \"iptables-alerter-b68cj\" (UID: \"d9d71a7a-a751-4de4-9c76-9bac85fe0177\") " pod="openshift-network-operator/iptables-alerter-b68cj" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290465 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290511 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-cni-binary-copy\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290624 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c7333319-3fe6-4b3f-b600-6b6df49fcaff-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-56v4p\" (UID: \"c7333319-3fe6-4b3f-b600-6b6df49fcaff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290637 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cef33294-81fb-41a2-811d-2565f94514d1-metrics-tls\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290683 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzm2t\" (UniqueName: \"kubernetes.io/projected/34743ce3-5eda-4c60-99cb-640dd067ebdf-kube-api-access-vzm2t\") pod \"node-resolver-zfldn\" (UID: \"34743ce3-5eda-4c60-99cb-640dd067ebdf\") " pod="openshift-dns/node-resolver-zfldn" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290719 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/88f19cea-60ed-4977-a906-75deec51fc3d-env-overrides\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290747 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9vmp\" (UniqueName: \"kubernetes.io/projected/9e0227bc-63f5-48be-95dc-1323a2b2e327-kube-api-access-z9vmp\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " 
pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290839 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290852 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7333319-3fe6-4b3f-b600-6b6df49fcaff-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-56v4p\" (UID: \"c7333319-3fe6-4b3f-b600-6b6df49fcaff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290878 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-modprobe-d\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290942 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/88f19cea-60ed-4977-a906-75deec51fc3d-env-overrides\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290943 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.290982 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.291010 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-multus-certs\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.291023 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-trusted-ca-bundle\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.291046 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bd49e653-3b42-4950-8f5f-2b2ecb683678-audit-policies\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 21:23:11.292863 master-0 
kubenswrapper[38936]: I0216 21:23:11.291089 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-serving-cert\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.291112 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9e0227bc-63f5-48be-95dc-1323a2b2e327-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.291114 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.291146 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-system-cni-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.291241 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-os-release\") pod \"multus-65zz6\" (UID: 
\"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.291262 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-cni-bin\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.291277 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-conf-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.291275 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.291298 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e0227bc-63f5-48be-95dc-1323a2b2e327-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.291335 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/d9d71a7a-a751-4de4-9c76-9bac85fe0177-iptables-alerter-script\") pod \"iptables-alerter-b68cj\" (UID: \"d9d71a7a-a751-4de4-9c76-9bac85fe0177\") " pod="openshift-network-operator/iptables-alerter-b68cj" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.291356 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.291374 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-serving-cert\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.291448 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2501eec-47c8-47bc-b0c9-28d94c06075b-audit-dir\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.291495 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxtft\" (UniqueName: \"kubernetes.io/projected/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-kube-api-access-vxtft\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: \"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.291610 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdx88\" (UniqueName: \"kubernetes.io/projected/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-kube-api-access-cdx88\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.291658 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.291668 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/88f19cea-60ed-4977-a906-75deec51fc3d-ovnkube-identity-cm\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f" Feb 16 21:23:11.292863 master-0 kubenswrapper[38936]: I0216 21:23:11.291718 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e0227bc-63f5-48be-95dc-1323a2b2e327-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.291730 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-9m94g\" (UID: \"4b035e85-b2b0-4dee-bb86-3465fc4b98a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.291838 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.291898 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b61063e-775e-421d-bf73-a6ef134293a0-metrics-tls\") pod \"network-operator-6fcf4c966-n4hfs\" (UID: \"1b61063e-775e-421d-bf73-a6ef134293a0\") " pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.291939 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-tvzdw\" (UID: \"6b6be6de-6fcc-4f57-b163-fe8f970a01a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.291979 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/72ee9e35c766aea904898f2e9f2ffaca-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: 
\"72ee9e35c766aea904898f2e9f2ffaca\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292005 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292014 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64qvl\" (UniqueName: \"kubernetes.io/projected/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf-kube-api-access-64qvl\") pod \"dns-default-7bbrn\" (UID: \"2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf\") " pod="openshift-dns/dns-default-7bbrn" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292005 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/88f19cea-60ed-4977-a906-75deec51fc3d-ovnkube-identity-cm\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292046 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-9m94g\" (UID: \"4b035e85-b2b0-4dee-bb86-3465fc4b98a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292053 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-59kpw\" (UniqueName: \"kubernetes.io/projected/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-kube-api-access-59kpw\") pod \"network-metrics-daemon-42bw7\" (UID: \"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292094 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-audit\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292129 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292161 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27c20f63-9bfb-4703-94d5-0c65475e08d1-serving-cert\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292193 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/72ee9e35c766aea904898f2e9f2ffaca-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"72ee9e35c766aea904898f2e9f2ffaca\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 
21:23:11.292198 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b61063e-775e-421d-bf73-a6ef134293a0-metrics-tls\") pod \"network-operator-6fcf4c966-n4hfs\" (UID: \"1b61063e-775e-421d-bf73-a6ef134293a0\") " pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292252 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-tvzdw\" (UID: \"6b6be6de-6fcc-4f57-b163-fe8f970a01a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292463 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls\") pod \"dns-operator-86b8869b79-cdltb\" (UID: \"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd\") " pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292486 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292502 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-sysctl-conf\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292520 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/da07cd48-b1e8-4ccc-b980-84702cedb042-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-hsz6m\" (UID: \"da07cd48-b1e8-4ccc-b980-84702cedb042\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-hsz6m"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292538 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/484154d0-66c8-4d0e-bf1b-f48d0abfe628-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292562 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292597 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27c20f63-9bfb-4703-94d5-0c65475e08d1-serving-cert\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292684 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b02b740-5698-4e9a-90fe-2873bd0b0958-serving-cert\") pod \"kube-apiserver-operator-54984b6678-cl5ld\" (UID: \"0b02b740-5698-4e9a-90fe-2873bd0b0958\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292733 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-config\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292760 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-kube-api-access\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292812 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-metrics-tls\") pod \"dns-operator-86b8869b79-cdltb\" (UID: \"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd\") " pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292844 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67qzh\" (UniqueName: \"kubernetes.io/projected/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-kube-api-access-67qzh\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292876 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/484154d0-66c8-4d0e-bf1b-f48d0abfe628-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292894 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv45g\" (UniqueName: \"kubernetes.io/projected/99ab949e-bd0d-45a7-95d1-8381d9f1f5f3-kube-api-access-hv45g\") pod \"service-ca-676cd8b9b5-cbj2r\" (UID: \"99ab949e-bd0d-45a7-95d1-8381d9f1f5f3\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292959 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b02b740-5698-4e9a-90fe-2873bd0b0958-serving-cert\") pod \"kube-apiserver-operator-54984b6678-cl5ld\" (UID: \"0b02b740-5698-4e9a-90fe-2873bd0b0958\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.292976 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-metrics-certs\") pod \"router-default-864ddd5f56-z4bnk\" (UID: \"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee\") " pod="openshift-ingress/router-default-864ddd5f56-z4bnk"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.293062 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-k8s-cni-cncf-io\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.293134 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxvhm\" (UniqueName: \"kubernetes.io/projected/e8194cdc-3133-49e2-9579-a747c0bf2b16-kube-api-access-hxvhm\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.293222 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkz65\" (UniqueName: \"kubernetes.io/projected/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-kube-api-access-mkz65\") pod \"openshift-apiserver-operator-6d4655d9cf-tvzdw\" (UID: \"6b6be6de-6fcc-4f57-b163-fe8f970a01a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.293273 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.293337 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pw88\" (UniqueName: \"kubernetes.io/projected/2ab0a907-7abe-4808-ba21-bdda1506eae2-kube-api-access-9pw88\") pod \"service-ca-operator-5dc4688546-q5vjl\" (UID: \"2ab0a907-7abe-4808-ba21-bdda1506eae2\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.293429 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7pk6\" (UniqueName: \"kubernetes.io/projected/1b61063e-775e-421d-bf73-a6ef134293a0-kube-api-access-x7pk6\") pod \"network-operator-6fcf4c966-n4hfs\" (UID: \"1b61063e-775e-421d-bf73-a6ef134293a0\") " pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.293473 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrc7l\" (UniqueName: \"kubernetes.io/projected/2e618c5c-52be-4b52-b426-b92555dee9de-kube-api-access-nrc7l\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.293518 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-kubernetes\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.293556 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-serving-cert\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.293592 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.293671 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf-config-volume\") pod \"dns-default-7bbrn\" (UID: \"2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf\") " pod="openshift-dns/dns-default-7bbrn"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.293695 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7adbe32-b8b9-438e-a2e3-f93146a97424-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-xzww8\" (UID: \"e7adbe32-b8b9-438e-a2e3-f93146a97424\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.293740 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1b61063e-775e-421d-bf73-a6ef134293a0-host-etc-kube\") pod \"network-operator-6fcf4c966-n4hfs\" (UID: \"1b61063e-775e-421d-bf73-a6ef134293a0\") " pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.293779 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-7p9ft\" (UID: \"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.293801 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7nmb\" (UniqueName: \"kubernetes.io/projected/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-kube-api-access-g7nmb\") pod \"package-server-manager-5c696dbdcd-9m94g\" (UID: \"4b035e85-b2b0-4dee-bb86-3465fc4b98a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.293821 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7333319-3fe6-4b3f-b600-6b6df49fcaff-config\") pod \"kube-storage-version-migrator-operator-cd5474998-56v4p\" (UID: \"c7333319-3fe6-4b3f-b600-6b6df49fcaff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.294014 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/e8194cdc-3133-49e2-9579-a747c0bf2b16-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.294015 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.294106 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxhfs\" (UniqueName: \"kubernetes.io/projected/3403d2bf-b093-4f2e-80aa-73a3d6bcaffb-kube-api-access-gxhfs\") pod \"network-check-source-7d8f4c8c66-w6tqw\" (UID: \"3403d2bf-b093-4f2e-80aa-73a3d6bcaffb\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-w6tqw"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.294152 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7333319-3fe6-4b3f-b600-6b6df49fcaff-config\") pod \"kube-storage-version-migrator-operator-cd5474998-56v4p\" (UID: \"c7333319-3fe6-4b3f-b600-6b6df49fcaff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.294152 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.294198 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-env-overrides\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.294231 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-stats-auth\") pod \"router-default-864ddd5f56-z4bnk\" (UID: \"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee\") " pod="openshift-ingress/router-default-864ddd5f56-z4bnk"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.294252 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkdzb\" (UniqueName: \"kubernetes.io/projected/d9d71a7a-a751-4de4-9c76-9bac85fe0177-kube-api-access-jkdzb\") pod \"iptables-alerter-b68cj\" (UID: \"d9d71a7a-a751-4de4-9c76-9bac85fe0177\") " pod="openshift-network-operator/iptables-alerter-b68cj"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.294271 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/e8194cdc-3133-49e2-9579-a747c0bf2b16-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.294288 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sd27\" (UniqueName: \"kubernetes.io/projected/a4c9b781-14c0-469c-bb9e-0c3982a04520-kube-api-access-8sd27\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.294306 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/34743ce3-5eda-4c60-99cb-640dd067ebdf-hosts-file\") pod \"node-resolver-zfldn\" (UID: \"34743ce3-5eda-4c60-99cb-640dd067ebdf\") " pod="openshift-dns/node-resolver-zfldn"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.294320 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-env-overrides\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.294359 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d2501eec-47c8-47bc-b0c9-28d94c06075b-etcd-client\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.294384 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx2kd\" (UniqueName: \"kubernetes.io/projected/c7333319-3fe6-4b3f-b600-6b6df49fcaff-kube-api-access-qx2kd\") pod \"kube-storage-version-migrator-operator-cd5474998-56v4p\" (UID: \"c7333319-3fe6-4b3f-b600-6b6df49fcaff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.294425 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.294460 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cef33294-81fb-41a2-811d-2565f94514d1-trusted-ca\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.294563 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-config\") pod \"kube-controller-manager-operator-78ff47c7c5-7p9ft\" (UID: \"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.294596 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf-metrics-tls\") pod \"dns-default-7bbrn\" (UID: \"2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf\") " pod="openshift-dns/dns-default-7bbrn"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.294623 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx4tz\" (UniqueName: \"kubernetes.io/projected/b27de289-c0f9-47ff-aac6-15b7bc1b178a-kube-api-access-fx4tz\") pod \"multus-admission-controller-7c64d55f8-z46jt\" (UID: \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.294696 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7adbe32-b8b9-438e-a2e3-f93146a97424-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-xzww8\" (UID: \"e7adbe32-b8b9-438e-a2e3-f93146a97424\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.294823 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-config\") pod \"kube-controller-manager-operator-78ff47c7c5-7p9ft\" (UID: \"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.294828 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cef33294-81fb-41a2-811d-2565f94514d1-trusted-ca\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.295272 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vklwz\" (UniqueName: \"kubernetes.io/projected/59237aa6-6250-4619-8ee5-abae59f04b57-kube-api-access-vklwz\") pod \"openshift-config-operator-7c6bdb986f-xbd96\" (UID: \"59237aa6-6250-4619-8ee5-abae59f04b57\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.295300 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e062e07-8076-444c-b476-4eb2848e9613-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-pdjn4\" (UID: \"5e062e07-8076-444c-b476-4eb2848e9613\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.295325 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.295343 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-sys\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.295358 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-lib-modules\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.295376 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab0a907-7abe-4808-ba21-bdda1506eae2-config\") pod \"service-ca-operator-5dc4688546-q5vjl\" (UID: \"2ab0a907-7abe-4808-ba21-bdda1506eae2\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.295394 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-ovnkube-config\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.295547 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ab0a907-7abe-4808-ba21-bdda1506eae2-config\") pod \"service-ca-operator-5dc4688546-q5vjl\" (UID: \"2ab0a907-7abe-4808-ba21-bdda1506eae2\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.295563 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-ovnkube-config\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.295581 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/484154d0-66c8-4d0e-bf1b-f48d0abfe628-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.295630 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d2501eec-47c8-47bc-b0c9-28d94c06075b-node-pullsecrets\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.295637 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.295691 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/695549c8-d1fc-429d-9c9f-0a5915dc6074-config\") pod \"openshift-controller-manager-operator-5f5f84757d-k42w9\" (UID: \"695549c8-d1fc-429d-9c9f-0a5915dc6074\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.295753 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-default-certificate\") pod \"router-default-864ddd5f56-z4bnk\" (UID: \"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee\") " pod="openshift-ingress/router-default-864ddd5f56-z4bnk"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.295789 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9w8k\" (UniqueName: \"kubernetes.io/projected/684a8167-6c5b-430f-979e-307e58487611-kube-api-access-s9w8k\") pod \"migrator-5bd989df77-kdb9d\" (UID: \"684a8167-6c5b-430f-979e-307e58487611\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-kdb9d"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.295818 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/484154d0-66c8-4d0e-bf1b-f48d0abfe628-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.295755 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e062e07-8076-444c-b476-4eb2848e9613-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-pdjn4\" (UID: \"5e062e07-8076-444c-b476-4eb2848e9613\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.295818 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bd49e653-3b42-4950-8f5f-2b2ecb683678-audit-dir\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.295875 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-config\") pod \"openshift-apiserver-operator-6d4655d9cf-tvzdw\" (UID: \"6b6be6de-6fcc-4f57-b163-fe8f970a01a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.295925 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcp5t\" (UniqueName: \"kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t\") pod \"network-check-target-68c25\" (UID: \"0d903d23-8e0b-424b-bcd0-e0a00f306e49\") " pod="openshift-network-diagnostics/network-check-target-68c25"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.295945 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/695549c8-d1fc-429d-9c9f-0a5915dc6074-config\") pod \"openshift-controller-manager-operator-5f5f84757d-k42w9\" (UID: \"695549c8-d1fc-429d-9c9f-0a5915dc6074\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.296041 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-config\") pod \"openshift-apiserver-operator-6d4655d9cf-tvzdw\" (UID: \"6b6be6de-6fcc-4f57-b163-fe8f970a01a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.296042 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-config\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.296154 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-var-lib-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.296176 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-etc-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.296199 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-systemd\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.296240 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-config\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.296243 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-daemon-config\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.296360 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.296369 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-daemon-config\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.296390 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.296452 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-log-socket\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.296506 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b02b740-5698-4e9a-90fe-2873bd0b0958-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-cl5ld\" (UID: \"0b02b740-5698-4e9a-90fe-2873bd0b0958\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.296531 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sq4t\" (UniqueName: \"kubernetes.io/projected/62935559-041f-4694-9d36-adc809d079b4-kube-api-access-6sq4t\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.296580 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs\") pod \"network-metrics-daemon-42bw7\" (UID: \"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.296633 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2501eec-47c8-47bc-b0c9-28d94c06075b-serving-cert\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: E0216 21:23:11.296779 38936 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.296834 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jt7h\" (UniqueName: \"kubernetes.io/projected/ec7dd4ea-a139-45d4-96a4-506da1567292-kube-api-access-9jt7h\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.296863 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/59237aa6-6250-4619-8ee5-abae59f04b57-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-xbd96\" (UID: \"59237aa6-6250-4619-8ee5-abae59f04b57\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.296884 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.296905 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-x4djt\" (UniqueName: \"kubernetes.io/projected/d2501eec-47c8-47bc-b0c9-28d94c06075b-kube-api-access-x4djt\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.296934 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-cni-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.296993 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/59237aa6-6250-4619-8ee5-abae59f04b57-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-xbd96\" (UID: \"59237aa6-6250-4619-8ee5-abae59f04b57\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.297002 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-metrics-certs\") pod \"network-metrics-daemon-42bw7\" (UID: \"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.297067 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 
21:23:11.297079 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2e618c5c-52be-4b52-b426-b92555dee9de-srv-cert\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.297120 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e8194cdc-3133-49e2-9579-a747c0bf2b16-cache\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.297162 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/99ab949e-bd0d-45a7-95d1-8381d9f1f5f3-signing-key\") pod \"service-ca-676cd8b9b5-cbj2r\" (UID: \"99ab949e-bd0d-45a7-95d1-8381d9f1f5f3\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.297198 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-host\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.297241 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cef33294-81fb-41a2-811d-2565f94514d1-bound-sa-token\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " 
pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.297269 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e8194cdc-3133-49e2-9579-a747c0bf2b16-cache\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.297288 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tklr\" (UniqueName: \"kubernetes.io/projected/cef33294-81fb-41a2-811d-2565f94514d1-kube-api-access-5tklr\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.297319 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-cache\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: \"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.297391 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: \"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.297479 38936 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-sysctl-d\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.297481 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-cache\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: \"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.297527 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzx4s\" (UniqueName: \"kubernetes.io/projected/b1ac9776-54c4-46ce-b898-01c8cf35e593-kube-api-access-vzx4s\") pod \"csi-snapshot-controller-74b6595c6d-pc6x9\" (UID: \"b1ac9776-54c4-46ce-b898-01c8cf35e593\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.297558 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:23:11.301837 master-0 kubenswrapper[38936]: I0216 21:23:11.297582 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-kubelet\") pod \"multus-65zz6\" (UID: 
\"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.297628 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xgcn\" (UniqueName: \"kubernetes.io/projected/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-kube-api-access-7xgcn\") pod \"router-default-864ddd5f56-z4bnk\" (UID: \"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee\") " pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.297765 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2506c282-0b37-4ece-8a0c-885d0b7f7901-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.297803 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-client\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.297829 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.297877 38936 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-b6wng\" (UniqueName: \"kubernetes.io/projected/484154d0-66c8-4d0e-bf1b-f48d0abfe628-kube-api-access-b6wng\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.297902 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-kubelet\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.297924 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-slash\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.297962 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-trusted-ca-bundle\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.297989 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ec7dd4ea-a139-45d4-96a4-506da1567292-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn" 
Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298014 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298117 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-cni-binary-copy\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298145 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2506c282-0b37-4ece-8a0c-885d0b7f7901-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298158 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-client\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298153 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-service-ca-bundle\") 
pod \"router-default-864ddd5f56-z4bnk\" (UID: \"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee\") " pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298218 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ec7dd4ea-a139-45d4-96a4-506da1567292-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298234 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59237aa6-6250-4619-8ee5-abae59f04b57-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-xbd96\" (UID: \"59237aa6-6250-4619-8ee5-abae59f04b57\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298278 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-ca\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298312 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/62935559-041f-4694-9d36-adc809d079b4-cni-binary-copy\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298315 38936 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-etcd-serving-ca\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298369 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-etc-kubernetes\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298400 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/88f19cea-60ed-4977-a906-75deec51fc3d-webhook-cert\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298428 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfmv6\" (UniqueName: \"kubernetes.io/projected/5e062e07-8076-444c-b476-4eb2848e9613-kube-api-access-dfmv6\") pod \"cluster-olm-operator-55b69c6c48-pdjn4\" (UID: \"5e062e07-8076-444c-b476-4eb2848e9613\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298456 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qd6r\" (UniqueName: \"kubernetes.io/projected/2506c282-0b37-4ece-8a0c-885d0b7f7901-kube-api-access-6qd6r\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298480 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-tmp\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298506 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/5e062e07-8076-444c-b476-4eb2848e9613-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-pdjn4\" (UID: \"5e062e07-8076-444c-b476-4eb2848e9613\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298527 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59237aa6-6250-4619-8ee5-abae59f04b57-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-xbd96\" (UID: \"59237aa6-6250-4619-8ee5-abae59f04b57\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298530 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298603 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" 
(UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-cnibin\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298636 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-cni-multus\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298684 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-etcd-ca\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298792 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-tmp\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298834 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2506c282-0b37-4ece-8a0c-885d0b7f7901-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298833 38936 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/5e062e07-8076-444c-b476-4eb2848e9613-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-pdjn4\" (UID: \"5e062e07-8076-444c-b476-4eb2848e9613\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298848 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/e8194cdc-3133-49e2-9579-a747c0bf2b16-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298900 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-ovnkube-script-lib\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298940 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b02b740-5698-4e9a-90fe-2873bd0b0958-config\") pod \"kube-apiserver-operator-54984b6678-cl5ld\" (UID: \"0b02b740-5698-4e9a-90fe-2873bd0b0958\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.298981 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-cnibin\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " 
pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.299012 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-netns\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.299050 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd49e653-3b42-4950-8f5f-2b2ecb683678-trusted-ca-bundle\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.299089 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-service-ca-bundle\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.299053 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/88f19cea-60ed-4977-a906-75deec51fc3d-webhook-cert\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f" Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.299123 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-system-cni-dir\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4"
Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.299221 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b02b740-5698-4e9a-90fe-2873bd0b0958-config\") pod \"kube-apiserver-operator-54984b6678-cl5ld\" (UID: \"0b02b740-5698-4e9a-90fe-2873bd0b0958\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld"
Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.299285 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27c20f63-9bfb-4703-94d5-0c65475e08d1-service-ca-bundle\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5"
Feb 16 21:23:11.307562 master-0 kubenswrapper[38936]: I0216 21:23:11.302134 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 16 21:23:11.310291 master-0 kubenswrapper[38936]: I0216 21:23:11.310246 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/69785167-b4ae-415b-bdcb-029f62effe78-ovn-node-metrics-cert\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 21:23:11.311418 master-0 kubenswrapper[38936]: I0216 21:23:11.311388 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-k8h7h_3e3ccb9a-4a5d-4a04-8334-b1e303b215a5/kube-multus-additional-cni-plugins/0.log"
Feb 16 21:23:11.311482 master-0 kubenswrapper[38936]: I0216 21:23:11.311465 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 16 21:23:11.311580 master-0 kubenswrapper[38936]: I0216 21:23:11.311549 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" event={"ID":"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5","Type":"ContainerDied","Data":"95bb21eb958017bb1c79698309b67c3682dcd7011e9d5aacdb4e7366e93203b8"}
Feb 16 21:23:11.311619 master-0 kubenswrapper[38936]: I0216 21:23:11.311579 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95bb21eb958017bb1c79698309b67c3682dcd7011e9d5aacdb4e7366e93203b8"
Feb 16 21:23:11.332247 master-0 kubenswrapper[38936]: I0216 21:23:11.321697 38936 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Feb 16 21:23:11.339506 master-0 kubenswrapper[38936]: I0216 21:23:11.339459 38936 scope.go:117] "RemoveContainer" containerID="2d8a3bac5bc14187e5d2a390ac77e494ae47030d02fa35967ecd1bb1934d32e8"
Feb 16 21:23:11.340144 master-0 kubenswrapper[38936]: E0216 21:23:11.340105 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d8a3bac5bc14187e5d2a390ac77e494ae47030d02fa35967ecd1bb1934d32e8\": container with ID starting with 2d8a3bac5bc14187e5d2a390ac77e494ae47030d02fa35967ecd1bb1934d32e8 not found: ID does not exist" containerID="2d8a3bac5bc14187e5d2a390ac77e494ae47030d02fa35967ecd1bb1934d32e8"
Feb 16 21:23:11.340283 master-0 kubenswrapper[38936]: I0216 21:23:11.340156 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d8a3bac5bc14187e5d2a390ac77e494ae47030d02fa35967ecd1bb1934d32e8"} err="failed to get container status \"2d8a3bac5bc14187e5d2a390ac77e494ae47030d02fa35967ecd1bb1934d32e8\": rpc error: code = NotFound desc = could not find container \"2d8a3bac5bc14187e5d2a390ac77e494ae47030d02fa35967ecd1bb1934d32e8\": container with ID starting with 2d8a3bac5bc14187e5d2a390ac77e494ae47030d02fa35967ecd1bb1934d32e8 not found: ID does not exist"
Feb 16 21:23:11.341838 master-0 kubenswrapper[38936]: I0216 21:23:11.341804 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 16 21:23:11.344599 master-0 kubenswrapper[38936]: I0216 21:23:11.344559 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-k8h7h_3e3ccb9a-4a5d-4a04-8334-b1e303b215a5/kube-multus-additional-cni-plugins/0.log"
Feb 16 21:23:11.344699 master-0 kubenswrapper[38936]: I0216 21:23:11.344624 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h"
Feb 16 21:23:11.350612 master-0 kubenswrapper[38936]: I0216 21:23:11.350286 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/69785167-b4ae-415b-bdcb-029f62effe78-ovnkube-script-lib\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 21:23:11.360960 master-0 kubenswrapper[38936]: I0216 21:23:11.360918 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 16 21:23:11.365412 master-0 kubenswrapper[38936]: I0216 21:23:11.365365 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d9d71a7a-a751-4de4-9c76-9bac85fe0177-iptables-alerter-script\") pod \"iptables-alerter-b68cj\" (UID: \"d9d71a7a-a751-4de4-9c76-9bac85fe0177\") " pod="openshift-network-operator/iptables-alerter-b68cj"
Feb 16 21:23:11.372837 master-0 kubenswrapper[38936]: I0216 21:23:11.372800 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 16 21:23:11.381174 master-0 kubenswrapper[38936]: I0216 21:23:11.381134 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 16 21:23:11.400224 master-0 kubenswrapper[38936]: I0216 21:23:11.400170 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 21:23:11.400224 master-0 kubenswrapper[38936]: I0216 21:23:11.400218 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-log-socket\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 21:23:11.400416 master-0 kubenswrapper[38936]: I0216 21:23:11.400254 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/853452fb-1035-4f22-8aeb-9043d150e8ca-catalog-content\") pod \"certified-operators-blw8x\" (UID: \"853452fb-1035-4f22-8aeb-9043d150e8ca\") " pod="openshift-marketplace/certified-operators-blw8x"
Feb 16 21:23:11.400416 master-0 kubenswrapper[38936]: I0216 21:23:11.400377 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 21:23:11.400416 master-0 kubenswrapper[38936]: I0216 21:23:11.400403 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8d648c7-b84b-4f43-84c9-903aead0891a-utilities\") pod \"redhat-operators-69wj8\" (UID: \"d8d648c7-b84b-4f43-84c9-903aead0891a\") " pod="openshift-marketplace/redhat-operators-69wj8"
Feb 16 21:23:11.400595 master-0 kubenswrapper[38936]: I0216 21:23:11.400382 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/853452fb-1035-4f22-8aeb-9043d150e8ca-catalog-content\") pod \"certified-operators-blw8x\" (UID: \"853452fb-1035-4f22-8aeb-9043d150e8ca\") " pod="openshift-marketplace/certified-operators-blw8x"
Feb 16 21:23:11.400595 master-0 kubenswrapper[38936]: I0216 21:23:11.400470 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-log-socket\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 21:23:11.400595 master-0 kubenswrapper[38936]: I0216 21:23:11.400482 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk"
Feb 16 21:23:11.400595 master-0 kubenswrapper[38936]: I0216 21:23:11.400538 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-cni-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 21:23:11.400595 master-0 kubenswrapper[38936]: I0216 21:23:11.400550 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8d648c7-b84b-4f43-84c9-903aead0891a-utilities\") pod \"redhat-operators-69wj8\" (UID: \"d8d648c7-b84b-4f43-84c9-903aead0891a\") " pod="openshift-marketplace/redhat-operators-69wj8"
Feb 16 21:23:11.400595 master-0 kubenswrapper[38936]: I0216 21:23:11.400560 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 21:23:11.400874 master-0 kubenswrapper[38936]: I0216 21:23:11.400749 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-k8h7h\" (UID: \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h"
Feb 16 21:23:11.400874 master-0 kubenswrapper[38936]: I0216 21:23:11.400848 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-cni-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 21:23:11.400874 master-0 kubenswrapper[38936]: I0216 21:23:11.400845 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 16 21:23:11.401008 master-0 kubenswrapper[38936]: I0216 21:23:11.400880 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 16 21:23:11.401008 master-0 kubenswrapper[38936]: I0216 21:23:11.400881 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/319dc882-e1f5-40f9-99f4-2bae028337e5-webhook-cert\") pod \"packageserver-78d4b6b677-npmx4\" (UID: \"319dc882-e1f5-40f9-99f4-2bae028337e5\") " pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4"
Feb 16 21:23:11.401008 master-0 kubenswrapper[38936]: I0216 21:23:11.400775 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 21:23:11.401008 master-0 kubenswrapper[38936]: I0216 21:23:11.400948 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0186fdbf-d367-4bc6-816a-bda2816b599e-trusted-ca\") pod \"console-operator-7777d5cc66-fgr2n\" (UID: \"0186fdbf-d367-4bc6-816a-bda2816b599e\") " pod="openshift-console-operator/console-operator-7777d5cc66-fgr2n"
Feb 16 21:23:11.401008 master-0 kubenswrapper[38936]: I0216 21:23:11.400988 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/065fcd43-1572-4152-b77b-a6b7ab52a081-auth-proxy-config\") pod \"machine-approver-8569dd85ff-kvhs4\" (UID: \"065fcd43-1572-4152-b77b-a6b7ab52a081\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4"
Feb 16 21:23:11.401257 master-0 kubenswrapper[38936]: I0216 21:23:11.401020 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldzxc\" (UniqueName: \"kubernetes.io/projected/03a5021d-8a5c-4011-a9f9-c5eb38d5f236-kube-api-access-ldzxc\") pod \"cloud-credential-operator-595c8f9ff-7mpsf\" (UID: \"03a5021d-8a5c-4011-a9f9-c5eb38d5f236\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf"
Feb 16 21:23:11.401257 master-0 kubenswrapper[38936]: I0216 21:23:11.401052 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx8bf\" (UniqueName: \"kubernetes.io/projected/aa2e9bbc-3962-45f5-a7cc-2dc059409e70-kube-api-access-wx8bf\") pod \"cluster-storage-operator-75b869db96-g4w5m\" (UID: \"aa2e9bbc-3962-45f5-a7cc-2dc059409e70\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m"
Feb 16 21:23:11.401257 master-0 kubenswrapper[38936]: I0216 21:23:11.401144 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0186fdbf-d367-4bc6-816a-bda2816b599e-serving-cert\") pod \"console-operator-7777d5cc66-fgr2n\" (UID: \"0186fdbf-d367-4bc6-816a-bda2816b599e\") " pod="openshift-console-operator/console-operator-7777d5cc66-fgr2n"
Feb 16 21:23:11.401257 master-0 kubenswrapper[38936]: I0216 21:23:11.401172 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n"
Feb 16 21:23:11.401257 master-0 kubenswrapper[38936]: I0216 21:23:11.401199 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ff193060-a272-4e4e-990a-83ac410f523d-images\") pod \"machine-config-operator-84976bb859-jwh5s\" (UID: \"ff193060-a272-4e4e-990a-83ac410f523d\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s"
Feb 16 21:23:11.401464 master-0 kubenswrapper[38936]: I0216 21:23:11.401278 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-host\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4"
Feb 16 21:23:11.401464 master-0 kubenswrapper[38936]: I0216 21:23:11.401331 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 21:23:11.401464 master-0 kubenswrapper[38936]: I0216 21:23:11.401355 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-textfile\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2"
Feb 16 21:23:11.401464 master-0 kubenswrapper[38936]: I0216 21:23:11.401384 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/230d9624-2d9d-4036-967b-b530347f05d5-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn"
Feb 16 21:23:11.401464 master-0 kubenswrapper[38936]: I0216 21:23:11.401401 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 21:23:11.401464 master-0 kubenswrapper[38936]: I0216 21:23:11.401407 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-sysctl-d\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4"
Feb 16 21:23:11.401464 master-0 kubenswrapper[38936]: I0216 21:23:11.401460 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-textfile\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2"
Feb 16 21:23:11.401464 master-0 kubenswrapper[38936]: I0216 21:23:11.401470 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-sysctl-d\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4"
Feb 16 21:23:11.401789 master-0 kubenswrapper[38936]: I0216 21:23:11.401434 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-host\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4"
Feb 16 21:23:11.401982 master-0 kubenswrapper[38936]: I0216 21:23:11.401901 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert\") pod \"ingress-canary-l44qd\" (UID: \"0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b\") " pod="openshift-ingress-canary/ingress-canary-l44qd"
Feb 16 21:23:11.402046 master-0 kubenswrapper[38936]: I0216 21:23:11.401993 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-kubelet\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 21:23:11.402046 master-0 kubenswrapper[38936]: I0216 21:23:11.402023 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s"
Feb 16 21:23:11.402131 master-0 kubenswrapper[38936]: I0216 21:23:11.402055 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-kubelet\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 21:23:11.402131 master-0 kubenswrapper[38936]: I0216 21:23:11.402077 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-slash\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 21:23:11.402131 master-0 kubenswrapper[38936]: I0216 21:23:11.402103 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-wtmp\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2"
Feb 16 21:23:11.402131 master-0 kubenswrapper[38936]: I0216 21:23:11.402129 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22pl9\" (UniqueName: \"kubernetes.io/projected/8d56b871-a53a-4928-8967-a33ea9dcec2a-kube-api-access-22pl9\") pod \"multus-admission-controller-6d678b8d67-shtrw\" (UID: \"8d56b871-a53a-4928-8967-a33ea9dcec2a\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-shtrw"
Feb 16 21:23:11.402303 master-0 kubenswrapper[38936]: I0216 21:23:11.402154 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4"
Feb 16 21:23:11.402303 master-0 kubenswrapper[38936]: I0216 21:23:11.402178 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1f8a26db-5a90-4da9-9074-33256ef17100-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 16 21:23:11.402303 master-0 kubenswrapper[38936]: I0216 21:23:11.402145 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s"
Feb 16 21:23:11.402303 master-0 kubenswrapper[38936]: I0216 21:23:11.402238 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-kubelet\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 21:23:11.402303 master-0 kubenswrapper[38936]: I0216 21:23:11.402226 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq9c5\" (UniqueName: \"kubernetes.io/projected/d8d648c7-b84b-4f43-84c9-903aead0891a-kube-api-access-nq9c5\") pod \"redhat-operators-69wj8\" (UID: \"d8d648c7-b84b-4f43-84c9-903aead0891a\") " pod="openshift-marketplace/redhat-operators-69wj8"
Feb 16 21:23:11.402509 master-0 kubenswrapper[38936]: I0216 21:23:11.402325 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-slash\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 21:23:11.402509 master-0 kubenswrapper[38936]: I0216 21:23:11.402361 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/230d9624-2d9d-4036-967b-b530347f05d5-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn"
Feb 16 21:23:11.402509 master-0 kubenswrapper[38936]: I0216 21:23:11.402384 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqkvs\" (UniqueName: \"kubernetes.io/projected/230d9624-2d9d-4036-967b-b530347f05d5-kube-api-access-vqkvs\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn"
Feb 16 21:23:11.402509 master-0 kubenswrapper[38936]: I0216 21:23:11.402368 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 16 21:23:11.402509 master-0 kubenswrapper[38936]: I0216 21:23:11.402433 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgvx2\" (UniqueName: \"kubernetes.io/projected/913951bb-1702-4b71-862c-a166bc7a62e0-kube-api-access-pgvx2\") pod \"machine-config-server-qvctv\" (UID: \"913951bb-1702-4b71-862c-a166bc7a62e0\") " pod="openshift-machine-config-operator/machine-config-server-qvctv"
Feb 16 21:23:11.402509 master-0 kubenswrapper[38936]: I0216 21:23:11.402447 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4"
Feb 16 21:23:11.402509 master-0 kubenswrapper[38936]: I0216 21:23:11.402459 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-kubelet\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 21:23:11.402509 master-0 kubenswrapper[38936]: I0216 21:23:11.402467 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9615af2-cad5-4705-9c2f-6f3c97026100-serving-cert\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q"
Feb 16 21:23:11.402509 master-0 kubenswrapper[38936]: I0216 21:23:11.402495 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-etc-kubernetes\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 21:23:11.402902 master-0 kubenswrapper[38936]: I0216 21:23:11.402539 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/4a9f4f96-ca31-4959-93fe-c094caf8e077-audit-log\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk"
Feb 16 21:23:11.402902 master-0 kubenswrapper[38936]: I0216 21:23:11.402592 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-etc-kubernetes\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 21:23:11.402902 master-0 kubenswrapper[38936]: I0216 21:23:11.402626 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/065fcd43-1572-4152-b77b-a6b7ab52a081-machine-approver-tls\") pod \"machine-approver-8569dd85ff-kvhs4\" (UID: \"065fcd43-1572-4152-b77b-a6b7ab52a081\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4"
Feb 16 21:23:11.402902 master-0 kubenswrapper[38936]: I0216 21:23:11.402626 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/4a9f4f96-ca31-4959-93fe-c094caf8e077-audit-log\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk"
Feb 16 21:23:11.402902 master-0 kubenswrapper[38936]: I0216 21:23:11.402676 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trcfg\" (UniqueName: \"kubernetes.io/projected/065fcd43-1572-4152-b77b-a6b7ab52a081-kube-api-access-trcfg\") pod \"machine-approver-8569dd85ff-kvhs4\" (UID: \"065fcd43-1572-4152-b77b-a6b7ab52a081\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4"
Feb 16 21:23:11.402902 master-0 kubenswrapper[38936]: I0216 21:23:11.402711 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgjlj\" (UniqueName: \"kubernetes.io/projected/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-kube-api-access-dgjlj\") pod \"cni-sysctl-allowlist-ds-k8h7h\" (UID: \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h"
Feb 16 21:23:11.402902 master-0 kubenswrapper[38936]: I0216 21:23:11.402732 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-cnibin\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4"
Feb 16 21:23:11.402902 master-0 kubenswrapper[38936]: I0216 21:23:11.402754 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-cnibin\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4"
Feb 16 21:23:11.402902 master-0 kubenswrapper[38936]: I0216 21:23:11.402785 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/e8194cdc-3133-49e2-9579-a747c0bf2b16-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg"
Feb 16 21:23:11.402902 master-0 kubenswrapper[38936]: I0216 21:23:11.402820 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065fcd43-1572-4152-b77b-a6b7ab52a081-config\") pod \"machine-approver-8569dd85ff-kvhs4\" (UID: \"065fcd43-1572-4152-b77b-a6b7ab52a081\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4"
Feb 16 21:23:11.402902 master-0 kubenswrapper[38936]: I0216 21:23:11.402837 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/e8194cdc-3133-49e2-9579-a747c0bf2b16-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg"
Feb 16 21:23:11.402902 master-0 kubenswrapper[38936]: I0216 21:23:11.402868 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-cni-multus\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 21:23:11.402902 master-0 kubenswrapper[38936]: I0216 21:23:11.402843 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-cni-multus\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 21:23:11.402902 master-0 kubenswrapper[38936]: I0216 21:23:11.402901 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-netns\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 21:23:11.403402 master-0 kubenswrapper[38936]: I0216 21:23:11.402940 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-system-cni-dir\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4"
Feb 16 21:23:11.403402 master-0 kubenswrapper[38936]: I0216 21:23:11.402973 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-config\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2"
Feb 16 21:23:11.403402 master-0 kubenswrapper[38936]: I0216 21:23:11.402972 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-netns\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 21:23:11.403402 master-0 kubenswrapper[38936]: I0216 21:23:11.402995 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-system-cni-dir\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4"
Feb 16 21:23:11.403402 master-0 kubenswrapper[38936]: I0216 21:23:11.403010 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9615af2-cad5-4705-9c2f-6f3c97026100-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q"
Feb 16 21:23:11.403402 master-0 kubenswrapper[38936]: I0216 21:23:11.403039 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-cnibin\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 21:23:11.403402 master-0 kubenswrapper[38936]: I0216 21:23:11.403066 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qcq9\" (UniqueName: \"kubernetes.io/projected/88c9d2fb-763f-4405-8d1a-c39039b41d3b-kube-api-access-8qcq9\") pod \"machine-config-daemon-jb6tl\" (UID: \"88c9d2fb-763f-4405-8d1a-c39039b41d3b\") " pod="openshift-machine-config-operator/machine-config-daemon-jb6tl"
Feb 16 21:23:11.403402 master-0 kubenswrapper[38936]: I0216 21:23:11.403092 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 21:23:11.403402 master-0 kubenswrapper[38936]: I0216 21:23:11.403108 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-cnibin\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6"
Feb 16 21:23:11.403402 master-0 kubenswrapper[38936]: I0216 21:23:11.403115 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-systemd\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 21:23:11.403402 master-0 kubenswrapper[38936]: I0216 21:23:11.403153 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/e8194cdc-3133-49e2-9579-a747c0bf2b16-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg"
Feb 16 21:23:11.403402 master-0 kubenswrapper[38936]: I0216 21:23:11.403150 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-systemd\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n"
Feb 16 21:23:11.403402 master-0 kubenswrapper[38936]: I0216 21:23:11.403188 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 21:23:11.403402 master-0 kubenswrapper[38936]: I0216 21:23:11.403202 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/e8194cdc-3133-49e2-9579-a747c0bf2b16-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg"
Feb 16 21:23:11.403402 master-0 kubenswrapper[38936]: I0216 21:23:11.403244 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-os-release\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4"
Feb 16 21:23:11.403402 master-0 kubenswrapper[38936]: I0216 21:23:11.403300 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-sys\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2"
Feb 16 21:23:11.403402 master-0 kubenswrapper[38936]: I0216 21:23:11.403324 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/88c9d2fb-763f-4405-8d1a-c39039b41d3b-mcd-auth-proxy-config\") pod \"machine-config-daemon-jb6tl\" (UID: \"88c9d2fb-763f-4405-8d1a-c39039b41d3b\") " pod="openshift-machine-config-operator/machine-config-daemon-jb6tl"
Feb 16 21:23:11.403402 master-0 kubenswrapper[38936]: I0216 21:23:11.403326 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/62935559-041f-4694-9d36-adc809d079b4-os-release\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4"
Feb 16 21:23:11.403402 master-0 kubenswrapper[38936]: I0216 21:23:11.403373 38936
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-hostroot\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.403999 master-0 kubenswrapper[38936]: I0216 21:23:11.403706 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-sysconfig\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.403999 master-0 kubenswrapper[38936]: I0216 21:23:11.403781 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-hostroot\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.403999 master-0 kubenswrapper[38936]: I0216 21:23:11.403817 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-sysconfig\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.403999 master-0 kubenswrapper[38936]: I0216 21:23:11.403830 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 21:23:11.403999 master-0 kubenswrapper[38936]: I0216 21:23:11.403871 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: 
\"kubernetes.io/host-path/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: \"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 21:23:11.403999 master-0 kubenswrapper[38936]: I0216 21:23:11.403895 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ba294358-051a-4f09-b182-710d3d6778c5-images\") pod \"machine-api-operator-bd7dd5c46-27jwb\" (UID: \"ba294358-051a-4f09-b182-710d3d6778c5\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 21:23:11.403999 master-0 kubenswrapper[38936]: I0216 21:23:11.403927 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-socket-dir-parent\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.403999 master-0 kubenswrapper[38936]: I0216 21:23:11.403945 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: \"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 21:23:11.404243 master-0 kubenswrapper[38936]: I0216 21:23:11.404099 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 21:23:11.404243 master-0 kubenswrapper[38936]: I0216 21:23:11.404129 38936 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a0b7a368-1408-4fc3-ae25-4613b74e7fca-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:23:11.404243 master-0 kubenswrapper[38936]: I0216 21:23:11.404174 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vddxb\" (UniqueName: \"kubernetes.io/projected/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-kube-api-access-vddxb\") pod \"ingress-canary-l44qd\" (UID: \"0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b\") " pod="openshift-ingress-canary/ingress-canary-l44qd" Feb 16 21:23:11.404243 master-0 kubenswrapper[38936]: I0216 21:23:11.404199 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0186fdbf-d367-4bc6-816a-bda2816b599e-config\") pod \"console-operator-7777d5cc66-fgr2n\" (UID: \"0186fdbf-d367-4bc6-816a-bda2816b599e\") " pod="openshift-console-operator/console-operator-7777d5cc66-fgr2n" Feb 16 21:23:11.404243 master-0 kubenswrapper[38936]: I0216 21:23:11.404223 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:23:11.404243 master-0 kubenswrapper[38936]: I0216 21:23:11.404245 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-cni-netd\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.404443 master-0 kubenswrapper[38936]: I0216 21:23:11.404266 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbb86\" (UniqueName: \"kubernetes.io/projected/0186fdbf-d367-4bc6-816a-bda2816b599e-kube-api-access-nbb86\") pod \"console-operator-7777d5cc66-fgr2n\" (UID: \"0186fdbf-d367-4bc6-816a-bda2816b599e\") " pod="openshift-console-operator/console-operator-7777d5cc66-fgr2n" Feb 16 21:23:11.404443 master-0 kubenswrapper[38936]: I0216 21:23:11.404268 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-socket-dir-parent\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.404443 master-0 kubenswrapper[38936]: I0216 21:23:11.404321 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:23:11.404443 master-0 kubenswrapper[38936]: I0216 21:23:11.404382 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-cni-netd\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.404443 master-0 kubenswrapper[38936]: I0216 21:23:11.404384 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-tls\") pod 
\"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:23:11.404725 master-0 kubenswrapper[38936]: I0216 21:23:11.404494 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vcsp\" (UniqueName: \"kubernetes.io/projected/fb1eac23-18a5-4706-adcd-81d83e04cd12-kube-api-access-8vcsp\") pod \"machine-config-controller-686c884b4d-6j2l4\" (UID: \"fb1eac23-18a5-4706-adcd-81d83e04cd12\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4" Feb 16 21:23:11.404725 master-0 kubenswrapper[38936]: I0216 21:23:11.404535 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:23:11.404725 master-0 kubenswrapper[38936]: I0216 21:23:11.404568 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/853452fb-1035-4f22-8aeb-9043d150e8ca-utilities\") pod \"certified-operators-blw8x\" (UID: \"853452fb-1035-4f22-8aeb-9043d150e8ca\") " pod="openshift-marketplace/certified-operators-blw8x" Feb 16 21:23:11.404725 master-0 kubenswrapper[38936]: I0216 21:23:11.404590 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/e9bd1f48-6d45-4045-b18e-46ce3005d51d-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:23:11.404725 master-0 kubenswrapper[38936]: I0216 21:23:11.404618 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:23:11.404725 master-0 kubenswrapper[38936]: I0216 21:23:11.404717 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/e9bd1f48-6d45-4045-b18e-46ce3005d51d-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:23:11.404977 master-0 kubenswrapper[38936]: I0216 21:23:11.404766 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 21:23:11.404977 master-0 kubenswrapper[38936]: I0216 21:23:11.404793 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/853452fb-1035-4f22-8aeb-9043d150e8ca-utilities\") pod \"certified-operators-blw8x\" (UID: \"853452fb-1035-4f22-8aeb-9043d150e8ca\") " pod="openshift-marketplace/certified-operators-blw8x" Feb 16 21:23:11.404977 master-0 kubenswrapper[38936]: I0216 21:23:11.404816 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 21:23:11.404977 master-0 kubenswrapper[38936]: I0216 21:23:11.404827 38936 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b648d9e-a892-4951-b0e2-fed6b16273d4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" Feb 16 21:23:11.404977 master-0 kubenswrapper[38936]: I0216 21:23:11.404862 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/913951bb-1702-4b71-862c-a166bc7a62e0-certs\") pod \"machine-config-server-qvctv\" (UID: \"913951bb-1702-4b71-862c-a166bc7a62e0\") " pod="openshift-machine-config-operator/machine-config-server-qvctv" Feb 16 21:23:11.404977 master-0 kubenswrapper[38936]: I0216 21:23:11.404957 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-root\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:23:11.404977 master-0 kubenswrapper[38936]: I0216 21:23:11.404981 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce229d27-837d-4a98-80fc-d56877ae39b8-catalog-content\") pod \"community-operators-j5kwc\" (UID: \"ce229d27-837d-4a98-80fc-d56877ae39b8\") " pod="openshift-marketplace/community-operators-j5kwc" Feb 16 21:23:11.405281 master-0 kubenswrapper[38936]: I0216 21:23:11.405037 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/230d9624-2d9d-4036-967b-b530347f05d5-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: 
\"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:23:11.405281 master-0 kubenswrapper[38936]: I0216 21:23:11.405053 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce229d27-837d-4a98-80fc-d56877ae39b8-catalog-content\") pod \"community-operators-j5kwc\" (UID: \"ce229d27-837d-4a98-80fc-d56877ae39b8\") " pod="openshift-marketplace/community-operators-j5kwc" Feb 16 21:23:11.405281 master-0 kubenswrapper[38936]: I0216 21:23:11.405072 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/319dc882-e1f5-40f9-99f4-2bae028337e5-apiservice-cert\") pod \"packageserver-78d4b6b677-npmx4\" (UID: \"319dc882-e1f5-40f9-99f4-2bae028337e5\") " pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" Feb 16 21:23:11.405281 master-0 kubenswrapper[38936]: I0216 21:23:11.405110 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wckst\" (UniqueName: \"kubernetes.io/projected/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-api-access-wckst\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:23:11.405281 master-0 kubenswrapper[38936]: I0216 21:23:11.405168 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa2e9bbc-3962-45f5-a7cc-2dc059409e70-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-g4w5m\" (UID: \"aa2e9bbc-3962-45f5-a7cc-2dc059409e70\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m" Feb 16 21:23:11.405281 master-0 kubenswrapper[38936]: I0216 
21:23:11.405194 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 21:23:11.405281 master-0 kubenswrapper[38936]: I0216 21:23:11.405230 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f8a26db-5a90-4da9-9074-33256ef17100-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 21:23:11.405281 master-0 kubenswrapper[38936]: I0216 21:23:11.405267 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ff193060-a272-4e4e-990a-83ac410f523d-auth-proxy-config\") pod \"machine-config-operator-84976bb859-jwh5s\" (UID: \"ff193060-a272-4e4e-990a-83ac410f523d\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" Feb 16 21:23:11.405596 master-0 kubenswrapper[38936]: I0216 21:23:11.405287 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 21:23:11.405596 master-0 kubenswrapper[38936]: I0216 21:23:11.405329 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-var-lib-kubelet\") pod \"tuned-llsw4\" 
(UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.405596 master-0 kubenswrapper[38936]: I0216 21:23:11.405369 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-systemd-units\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.405596 master-0 kubenswrapper[38936]: I0216 21:23:11.405382 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-var-lib-kubelet\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.405596 master-0 kubenswrapper[38936]: I0216 21:23:11.405390 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqkgp\" (UniqueName: \"kubernetes.io/projected/853452fb-1035-4f22-8aeb-9043d150e8ca-kube-api-access-zqkgp\") pod \"certified-operators-blw8x\" (UID: \"853452fb-1035-4f22-8aeb-9043d150e8ca\") " pod="openshift-marketplace/certified-operators-blw8x" Feb 16 21:23:11.405596 master-0 kubenswrapper[38936]: I0216 21:23:11.405397 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-systemd-units\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.405596 master-0 kubenswrapper[38936]: I0216 21:23:11.405429 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/f7b30888-5994-4968-9db6-9533ac60c92e-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-s4j9z\" (UID: \"f7b30888-5994-4968-9db6-9533ac60c92e\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 21:23:11.405596 master-0 kubenswrapper[38936]: I0216 21:23:11.405473 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: \"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 21:23:11.405596 master-0 kubenswrapper[38936]: I0216 21:23:11.405522 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-cni-bin\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.405596 master-0 kubenswrapper[38936]: I0216 21:23:11.405554 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgj2q\" (UniqueName: \"kubernetes.io/projected/8b648d9e-a892-4951-b0e2-fed6b16273d4-kube-api-access-sgj2q\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" Feb 16 21:23:11.405596 master-0 kubenswrapper[38936]: I0216 21:23:11.405556 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: \"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 21:23:11.405596 master-0 kubenswrapper[38936]: I0216 21:23:11.405578 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-cni-bin\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.405596 master-0 kubenswrapper[38936]: I0216 21:23:11.405584 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-run\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.406135 master-0 kubenswrapper[38936]: I0216 21:23:11.405613 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbfdg\" (UniqueName: \"kubernetes.io/projected/f7b30888-5994-4968-9db6-9533ac60c92e-kube-api-access-fbfdg\") pod \"openshift-state-metrics-546cc7d765-s4j9z\" (UID: \"f7b30888-5994-4968-9db6-9533ac60c92e\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 21:23:11.406135 master-0 kubenswrapper[38936]: I0216 21:23:11.405641 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f275e79f-923c-4d3a-8ed4-084a122ddcf4-utilities\") pod \"redhat-marketplace-sn2nh\" (UID: \"f275e79f-923c-4d3a-8ed4-084a122ddcf4\") " pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 21:23:11.406135 master-0 kubenswrapper[38936]: I0216 21:23:11.405721 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-run\") pod \"tuned-llsw4\" (UID: 
\"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.406135 master-0 kubenswrapper[38936]: I0216 21:23:11.405744 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f275e79f-923c-4d3a-8ed4-084a122ddcf4-utilities\") pod \"redhat-marketplace-sn2nh\" (UID: \"f275e79f-923c-4d3a-8ed4-084a122ddcf4\") " pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 21:23:11.406135 master-0 kubenswrapper[38936]: I0216 21:23:11.405786 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 21:23:11.406135 master-0 kubenswrapper[38936]: I0216 21:23:11.405823 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:23:11.406135 master-0 kubenswrapper[38936]: I0216 21:23:11.405864 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 21:23:11.406135 master-0 kubenswrapper[38936]: I0216 21:23:11.405878 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-cert-dir\") pod \"kube-apiserver-master-0\" 
(UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:23:11.406135 master-0 kubenswrapper[38936]: I0216 21:23:11.405907 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f275e79f-923c-4d3a-8ed4-084a122ddcf4-catalog-content\") pod \"redhat-marketplace-sn2nh\" (UID: \"f275e79f-923c-4d3a-8ed4-084a122ddcf4\") " pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 21:23:11.406135 master-0 kubenswrapper[38936]: I0216 21:23:11.405929 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8d648c7-b84b-4f43-84c9-903aead0891a-catalog-content\") pod \"redhat-operators-69wj8\" (UID: \"d8d648c7-b84b-4f43-84c9-903aead0891a\") " pod="openshift-marketplace/redhat-operators-69wj8" Feb 16 21:23:11.406135 master-0 kubenswrapper[38936]: I0216 21:23:11.405948 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7b30888-5994-4968-9db6-9533ac60c92e-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-s4j9z\" (UID: \"f7b30888-5994-4968-9db6-9533ac60c92e\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 21:23:11.406135 master-0 kubenswrapper[38936]: I0216 21:23:11.406013 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8d648c7-b84b-4f43-84c9-903aead0891a-catalog-content\") pod \"redhat-operators-69wj8\" (UID: \"d8d648c7-b84b-4f43-84c9-903aead0891a\") " pod="openshift-marketplace/redhat-operators-69wj8" Feb 16 21:23:11.406135 master-0 kubenswrapper[38936]: I0216 21:23:11.406018 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e9615af2-cad5-4705-9c2f-6f3c97026100-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" Feb 16 21:23:11.406135 master-0 kubenswrapper[38936]: I0216 21:23:11.406062 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-ovn\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.406135 master-0 kubenswrapper[38936]: I0216 21:23:11.406086 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 21:23:11.406135 master-0 kubenswrapper[38936]: I0216 21:23:11.406101 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f275e79f-923c-4d3a-8ed4-084a122ddcf4-catalog-content\") pod \"redhat-marketplace-sn2nh\" (UID: \"f275e79f-923c-4d3a-8ed4-084a122ddcf4\") " pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 21:23:11.406135 master-0 kubenswrapper[38936]: I0216 21:23:11.406120 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-run-ovn\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.406135 master-0 kubenswrapper[38936]: I0216 21:23:11.406133 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 21:23:11.406822 master-0 kubenswrapper[38936]: I0216 21:23:11.406173 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-run-netns\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.406822 master-0 kubenswrapper[38936]: I0216 21:23:11.406200 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-run-netns\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.406822 master-0 kubenswrapper[38936]: I0216 21:23:11.406209 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-node-log\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.406822 master-0 kubenswrapper[38936]: I0216 21:23:11.406231 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-node-log\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.406822 master-0 kubenswrapper[38936]: I0216 21:23:11.406260 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcwzq\" (UniqueName: 
\"kubernetes.io/projected/ce229d27-837d-4a98-80fc-d56877ae39b8-kube-api-access-dcwzq\") pod \"community-operators-j5kwc\" (UID: \"ce229d27-837d-4a98-80fc-d56877ae39b8\") " pod="openshift-marketplace/community-operators-j5kwc" Feb 16 21:23:11.406822 master-0 kubenswrapper[38936]: I0216 21:23:11.406305 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:23:11.406822 master-0 kubenswrapper[38936]: I0216 21:23:11.406357 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:23:11.406822 master-0 kubenswrapper[38936]: I0216 21:23:11.406444 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/88c9d2fb-763f-4405-8d1a-c39039b41d3b-proxy-tls\") pod \"machine-config-daemon-jb6tl\" (UID: \"88c9d2fb-763f-4405-8d1a-c39039b41d3b\") " pod="openshift-machine-config-operator/machine-config-daemon-jb6tl" Feb 16 21:23:11.406822 master-0 kubenswrapper[38936]: I0216 21:23:11.406472 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-metrics-client-ca\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " 
pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:23:11.406822 master-0 kubenswrapper[38936]: I0216 21:23:11.406491 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d7d0416-5f50-42bd-826b-92eecf9adcec-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-557vd\" (UID: \"1d7d0416-5f50-42bd-826b-92eecf9adcec\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" Feb 16 21:23:11.406822 master-0 kubenswrapper[38936]: I0216 21:23:11.406512 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtrzq\" (UniqueName: \"kubernetes.io/projected/319dc882-e1f5-40f9-99f4-2bae028337e5-kube-api-access-mtrzq\") pod \"packageserver-78d4b6b677-npmx4\" (UID: \"319dc882-e1f5-40f9-99f4-2bae028337e5\") " pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" Feb 16 21:23:11.406822 master-0 kubenswrapper[38936]: I0216 21:23:11.406544 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-run-ovn-kubernetes\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.406822 master-0 kubenswrapper[38936]: I0216 21:23:11.406578 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-run-ovn-kubernetes\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.406822 master-0 kubenswrapper[38936]: I0216 21:23:11.406608 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/1489d1b6-d8a1-453a-bff3-8adfd4335903-client-ca\") pod \"route-controller-manager-85d99cfd66-kjw24\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") " pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" Feb 16 21:23:11.406822 master-0 kubenswrapper[38936]: I0216 21:23:11.406643 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-metrics-server-audit-profiles\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:23:11.406822 master-0 kubenswrapper[38936]: I0216 21:23:11.406705 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/913951bb-1702-4b71-862c-a166bc7a62e0-node-bootstrap-token\") pod \"machine-config-server-qvctv\" (UID: \"913951bb-1702-4b71-862c-a166bc7a62e0\") " pod="openshift-machine-config-operator/machine-config-server-qvctv" Feb 16 21:23:11.406822 master-0 kubenswrapper[38936]: I0216 21:23:11.406728 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-ready\") pod \"cni-sysctl-allowlist-ds-k8h7h\" (UID: \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" Feb 16 21:23:11.406822 master-0 kubenswrapper[38936]: I0216 21:23:11.406748 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d9d71a7a-a751-4de4-9c76-9bac85fe0177-host-slash\") pod \"iptables-alerter-b68cj\" (UID: \"d9d71a7a-a751-4de4-9c76-9bac85fe0177\") " pod="openshift-network-operator/iptables-alerter-b68cj" Feb 16 21:23:11.406822 master-0 kubenswrapper[38936]: I0216 
21:23:11.406812 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d7d0416-5f50-42bd-826b-92eecf9adcec-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-557vd\" (UID: \"1d7d0416-5f50-42bd-826b-92eecf9adcec\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" Feb 16 21:23:11.406822 master-0 kubenswrapper[38936]: I0216 21:23:11.406812 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d9d71a7a-a751-4de4-9c76-9bac85fe0177-host-slash\") pod \"iptables-alerter-b68cj\" (UID: \"d9d71a7a-a751-4de4-9c76-9bac85fe0177\") " pod="openshift-network-operator/iptables-alerter-b68cj" Feb 16 21:23:11.406822 master-0 kubenswrapper[38936]: I0216 21:23:11.406840 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-ready\") pod \"cni-sysctl-allowlist-ds-k8h7h\" (UID: \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" Feb 16 21:23:11.407575 master-0 kubenswrapper[38936]: I0216 21:23:11.406891 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 21:23:11.407575 master-0 kubenswrapper[38936]: I0216 21:23:11.406918 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98n4h\" (UniqueName: \"kubernetes.io/projected/a0b7a368-1408-4fc3-ae25-4613b74e7fca-kube-api-access-98n4h\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:23:11.407575 master-0 
kubenswrapper[38936]: I0216 21:23:11.406981 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 21:23:11.407575 master-0 kubenswrapper[38936]: I0216 21:23:11.407046 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-modprobe-d\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.407575 master-0 kubenswrapper[38936]: I0216 21:23:11.407072 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:23:11.407575 master-0 kubenswrapper[38936]: I0216 21:23:11.407090 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrc4z\" (UniqueName: \"kubernetes.io/projected/4a9f4f96-ca31-4959-93fe-c094caf8e077-kube-api-access-xrc4z\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:23:11.407575 master-0 kubenswrapper[38936]: I0216 21:23:11.407108 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce229d27-837d-4a98-80fc-d56877ae39b8-utilities\") pod \"community-operators-j5kwc\" (UID: \"ce229d27-837d-4a98-80fc-d56877ae39b8\") " pod="openshift-marketplace/community-operators-j5kwc" Feb 16 21:23:11.407575 master-0 
kubenswrapper[38936]: I0216 21:23:11.407128 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-multus-certs\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.407575 master-0 kubenswrapper[38936]: I0216 21:23:11.407144 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-os-release\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.407575 master-0 kubenswrapper[38936]: I0216 21:23:11.407149 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:23:11.407575 master-0 kubenswrapper[38936]: I0216 21:23:11.407295 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-modprobe-d\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.407575 master-0 kubenswrapper[38936]: I0216 21:23:11.407311 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-os-release\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.407575 master-0 kubenswrapper[38936]: I0216 21:23:11.407375 38936 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce229d27-837d-4a98-80fc-d56877ae39b8-utilities\") pod \"community-operators-j5kwc\" (UID: \"ce229d27-837d-4a98-80fc-d56877ae39b8\") " pod="openshift-marketplace/community-operators-j5kwc" Feb 16 21:23:11.407575 master-0 kubenswrapper[38936]: I0216 21:23:11.407399 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-cni-bin\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.407575 master-0 kubenswrapper[38936]: I0216 21:23:11.407443 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-conf-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.407575 master-0 kubenswrapper[38936]: I0216 21:23:11.407476 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-multus-conf-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.407575 master-0 kubenswrapper[38936]: I0216 21:23:11.407482 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-var-lib-cni-bin\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.407575 master-0 kubenswrapper[38936]: I0216 21:23:11.407495 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" 
(UniqueName: \"kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:23:11.407575 master-0 kubenswrapper[38936]: I0216 21:23:11.407456 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-multus-certs\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.407575 master-0 kubenswrapper[38936]: I0216 21:23:11.407523 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmn29\" (UniqueName: \"kubernetes.io/projected/f275e79f-923c-4d3a-8ed4-084a122ddcf4-kube-api-access-cmn29\") pod \"redhat-marketplace-sn2nh\" (UID: \"f275e79f-923c-4d3a-8ed4-084a122ddcf4\") " pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 21:23:11.407575 master-0 kubenswrapper[38936]: I0216 21:23:11.407549 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-system-cni-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.407575 master-0 kubenswrapper[38936]: I0216 21:23:11.407568 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmhq9\" (UniqueName: \"kubernetes.io/projected/ff193060-a272-4e4e-990a-83ac410f523d-kube-api-access-wmhq9\") pod \"machine-config-operator-84976bb859-jwh5s\" (UID: \"ff193060-a272-4e4e-990a-83ac410f523d\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" Feb 16 21:23:11.407575 master-0 kubenswrapper[38936]: I0216 
21:23:11.407589 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 21:23:11.408384 master-0 kubenswrapper[38936]: I0216 21:23:11.407614 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8d56b871-a53a-4928-8967-a33ea9dcec2a-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-shtrw\" (UID: \"8d56b871-a53a-4928-8967-a33ea9dcec2a\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-shtrw" Feb 16 21:23:11.408384 master-0 kubenswrapper[38936]: I0216 21:23:11.407635 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25mkq\" (UniqueName: \"kubernetes.io/projected/1d7d0416-5f50-42bd-826b-92eecf9adcec-kube-api-access-25mkq\") pod \"cluster-autoscaler-operator-67fd9768b5-557vd\" (UID: \"1d7d0416-5f50-42bd-826b-92eecf9adcec\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" Feb 16 21:23:11.408384 master-0 kubenswrapper[38936]: I0216 21:23:11.407781 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-system-cni-dir\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.408384 master-0 kubenswrapper[38936]: I0216 21:23:11.407808 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2501eec-47c8-47bc-b0c9-28d94c06075b-audit-dir\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " 
pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 21:23:11.408384 master-0 kubenswrapper[38936]: I0216 21:23:11.407837 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1489d1b6-d8a1-453a-bff3-8adfd4335903-serving-cert\") pod \"route-controller-manager-85d99cfd66-kjw24\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") " pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" Feb 16 21:23:11.408384 master-0 kubenswrapper[38936]: I0216 21:23:11.407863 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/72ee9e35c766aea904898f2e9f2ffaca-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"72ee9e35c766aea904898f2e9f2ffaca\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:23:11.408384 master-0 kubenswrapper[38936]: I0216 21:23:11.407883 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b648d9e-a892-4951-b0e2-fed6b16273d4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" Feb 16 21:23:11.408384 master-0 kubenswrapper[38936]: I0216 21:23:11.407938 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d2501eec-47c8-47bc-b0c9-28d94c06075b-audit-dir\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 21:23:11.408384 master-0 kubenswrapper[38936]: I0216 21:23:11.407971 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/72ee9e35c766aea904898f2e9f2ffaca-cert-dir\") pod 
\"kube-controller-manager-master-0\" (UID: \"72ee9e35c766aea904898f2e9f2ffaca\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:23:11.408384 master-0 kubenswrapper[38936]: I0216 21:23:11.407970 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/277c6354-bff9-407b-ad97-5fdfc7f43116-monitoring-plugin-cert\") pod \"monitoring-plugin-749f8d8bbd-z9ndp\" (UID: \"277c6354-bff9-407b-ad97-5fdfc7f43116\") " pod="openshift-monitoring/monitoring-plugin-749f8d8bbd-z9ndp" Feb 16 21:23:11.408384 master-0 kubenswrapper[38936]: I0216 21:23:11.408011 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 21:23:11.408384 master-0 kubenswrapper[38936]: I0216 21:23:11.408041 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 21:23:11.408384 master-0 kubenswrapper[38936]: I0216 21:23:11.408061 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/72ee9e35c766aea904898f2e9f2ffaca-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"72ee9e35c766aea904898f2e9f2ffaca\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:23:11.408384 master-0 kubenswrapper[38936]: I0216 21:23:11.408123 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/8b648d9e-a892-4951-b0e2-fed6b16273d4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" Feb 16 21:23:11.408384 master-0 kubenswrapper[38936]: I0216 21:23:11.408144 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/72ee9e35c766aea904898f2e9f2ffaca-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"72ee9e35c766aea904898f2e9f2ffaca\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:23:11.408384 master-0 kubenswrapper[38936]: I0216 21:23:11.408173 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf2w4\" (UniqueName: \"kubernetes.io/projected/ba294358-051a-4f09-b182-710d3d6778c5-kube-api-access-qf2w4\") pod \"machine-api-operator-bd7dd5c46-27jwb\" (UID: \"ba294358-051a-4f09-b182-710d3d6778c5\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 21:23:11.408384 master-0 kubenswrapper[38936]: I0216 21:23:11.408210 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:23:11.408384 master-0 kubenswrapper[38936]: I0216 21:23:11.408251 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npfk7\" (UniqueName: \"kubernetes.io/projected/e9615af2-cad5-4705-9c2f-6f3c97026100-kube-api-access-npfk7\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" Feb 16 
21:23:11.408384 master-0 kubenswrapper[38936]: I0216 21:23:11.408288 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03a5021d-8a5c-4011-a9f9-c5eb38d5f236-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-7mpsf\" (UID: \"03a5021d-8a5c-4011-a9f9-c5eb38d5f236\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf" Feb 16 21:23:11.408384 master-0 kubenswrapper[38936]: I0216 21:23:11.408324 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:23:11.408384 master-0 kubenswrapper[38936]: I0216 21:23:11.408372 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.408384 master-0 kubenswrapper[38936]: I0216 21:23:11.408408 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-proxy-ca-bundles\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" Feb 16 21:23:11.409357 master-0 kubenswrapper[38936]: I0216 21:23:11.408438 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:23:11.409357 master-0 kubenswrapper[38936]: I0216 21:23:11.408443 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.409357 master-0 kubenswrapper[38936]: I0216 21:23:11.408464 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc47v\" (UniqueName: \"kubernetes.io/projected/1489d1b6-d8a1-453a-bff3-8adfd4335903-kube-api-access-xc47v\") pod \"route-controller-manager-85d99cfd66-kjw24\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") " pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" Feb 16 21:23:11.409357 master-0 kubenswrapper[38936]: I0216 21:23:11.408495 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-sysctl-conf\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.409357 master-0 kubenswrapper[38936]: I0216 21:23:11.408516 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fb1eac23-18a5-4706-adcd-81d83e04cd12-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-6j2l4\" (UID: \"fb1eac23-18a5-4706-adcd-81d83e04cd12\") " 
pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4" Feb 16 21:23:11.409357 master-0 kubenswrapper[38936]: I0216 21:23:11.408687 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-sysctl-conf\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.409357 master-0 kubenswrapper[38936]: I0216 21:23:11.408698 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-server-tls\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:23:11.409357 master-0 kubenswrapper[38936]: I0216 21:23:11.408799 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 21:23:11.409357 master-0 kubenswrapper[38936]: I0216 21:23:11.408828 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-client-certs\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:23:11.409357 master-0 kubenswrapper[38936]: I0216 21:23:11.408845 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: 
\"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 21:23:11.409357 master-0 kubenswrapper[38936]: I0216 21:23:11.408848 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-client-ca\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" Feb 16 21:23:11.409357 master-0 kubenswrapper[38936]: I0216 21:23:11.408899 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:23:11.409357 master-0 kubenswrapper[38936]: I0216 21:23:11.408931 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-k8s-cni-cncf-io\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.409357 master-0 kubenswrapper[38936]: I0216 21:23:11.408974 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-host-run-k8s-cni-cncf-io\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:11.409357 master-0 kubenswrapper[38936]: I0216 21:23:11.409014 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jh6l\" (UniqueName: 
\"kubernetes.io/projected/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-kube-api-access-6jh6l\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:23:11.409357 master-0 kubenswrapper[38936]: I0216 21:23:11.409045 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e9bd1f48-6d45-4045-b18e-46ce3005d51d-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:23:11.409357 master-0 kubenswrapper[38936]: I0216 21:23:11.409071 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fb1eac23-18a5-4706-adcd-81d83e04cd12-proxy-tls\") pod \"machine-config-controller-686c884b4d-6j2l4\" (UID: \"fb1eac23-18a5-4706-adcd-81d83e04cd12\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4" Feb 16 21:23:11.409357 master-0 kubenswrapper[38936]: I0216 21:23:11.409155 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvw2m\" (UniqueName: \"kubernetes.io/projected/408a9364-3730-4017-b1e4-c85d6a504168-kube-api-access-lvw2m\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" Feb 16 21:23:11.409357 master-0 kubenswrapper[38936]: I0216 21:23:11.409183 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/88c9d2fb-763f-4405-8d1a-c39039b41d3b-rootfs\") pod \"machine-config-daemon-jb6tl\" (UID: \"88c9d2fb-763f-4405-8d1a-c39039b41d3b\") " pod="openshift-machine-config-operator/machine-config-daemon-jb6tl" Feb 16 21:23:11.409357 master-0 
kubenswrapper[38936]: I0216 21:23:11.409210 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-kubernetes\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.409357 master-0 kubenswrapper[38936]: I0216 21:23:11.409292 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-kubernetes\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.409357 master-0 kubenswrapper[38936]: I0216 21:23:11.409315 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1b61063e-775e-421d-bf73-a6ef134293a0-host-etc-kube\") pod \"network-operator-6fcf4c966-n4hfs\" (UID: \"1b61063e-775e-421d-bf73-a6ef134293a0\") " pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" Feb 16 21:23:11.409357 master-0 kubenswrapper[38936]: I0216 21:23:11.409366 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff193060-a272-4e4e-990a-83ac410f523d-proxy-tls\") pod \"machine-config-operator-84976bb859-jwh5s\" (UID: \"ff193060-a272-4e4e-990a-83ac410f523d\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" Feb 16 21:23:11.410593 master-0 kubenswrapper[38936]: I0216 21:23:11.409388 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:23:11.410593 master-0 kubenswrapper[38936]: I0216 21:23:11.409420 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/34743ce3-5eda-4c60-99cb-640dd067ebdf-hosts-file\") pod \"node-resolver-zfldn\" (UID: \"34743ce3-5eda-4c60-99cb-640dd067ebdf\") " pod="openshift-dns/node-resolver-zfldn" Feb 16 21:23:11.410593 master-0 kubenswrapper[38936]: I0216 21:23:11.409420 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/1b61063e-775e-421d-bf73-a6ef134293a0-host-etc-kube\") pod \"network-operator-6fcf4c966-n4hfs\" (UID: \"1b61063e-775e-421d-bf73-a6ef134293a0\") " pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" Feb 16 21:23:11.410593 master-0 kubenswrapper[38936]: I0216 21:23:11.409464 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:23:11.410593 master-0 kubenswrapper[38936]: I0216 21:23:11.409472 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/34743ce3-5eda-4c60-99cb-640dd067ebdf-hosts-file\") pod \"node-resolver-zfldn\" (UID: \"34743ce3-5eda-4c60-99cb-640dd067ebdf\") " pod="openshift-dns/node-resolver-zfldn" Feb 16 21:23:11.410593 master-0 kubenswrapper[38936]: I0216 21:23:11.409485 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: 
\"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:23:11.410593 master-0 kubenswrapper[38936]: I0216 21:23:11.409495 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b648d9e-a892-4951-b0e2-fed6b16273d4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" Feb 16 21:23:11.410593 master-0 kubenswrapper[38936]: I0216 21:23:11.409503 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:23:11.410593 master-0 kubenswrapper[38936]: I0216 21:23:11.409572 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ba294358-051a-4f09-b182-710d3d6778c5-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-27jwb\" (UID: \"ba294358-051a-4f09-b182-710d3d6778c5\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 21:23:11.410593 master-0 kubenswrapper[38936]: I0216 21:23:11.409605 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/55095f4f-cac0-456c-9ccc-45869392408c-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-d7lfl\" (UID: \"55095f4f-cac0-456c-9ccc-45869392408c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl" Feb 16 21:23:11.410593 master-0 kubenswrapper[38936]: I0216 21:23:11.409634 38936 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f7b30888-5994-4968-9db6-9533ac60c92e-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-s4j9z\" (UID: \"f7b30888-5994-4968-9db6-9533ac60c92e\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 21:23:11.410593 master-0 kubenswrapper[38936]: I0216 21:23:11.409687 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/319dc882-e1f5-40f9-99f4-2bae028337e5-tmpfs\") pod \"packageserver-78d4b6b677-npmx4\" (UID: \"319dc882-e1f5-40f9-99f4-2bae028337e5\") " pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" Feb 16 21:23:11.410593 master-0 kubenswrapper[38936]: I0216 21:23:11.409715 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba294358-051a-4f09-b182-710d3d6778c5-config\") pod \"machine-api-operator-bd7dd5c46-27jwb\" (UID: \"ba294358-051a-4f09-b182-710d3d6778c5\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 21:23:11.410593 master-0 kubenswrapper[38936]: I0216 21:23:11.409743 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-lib-modules\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.410593 master-0 kubenswrapper[38936]: I0216 21:23:11.409768 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/319dc882-e1f5-40f9-99f4-2bae028337e5-tmpfs\") pod \"packageserver-78d4b6b677-npmx4\" (UID: \"319dc882-e1f5-40f9-99f4-2bae028337e5\") " pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" Feb 16 21:23:11.410593 master-0 
kubenswrapper[38936]: I0216 21:23:11.409825 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-sys\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.410593 master-0 kubenswrapper[38936]: I0216 21:23:11.409843 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-lib-modules\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.410593 master-0 kubenswrapper[38936]: I0216 21:23:11.409850 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/e9615af2-cad5-4705-9c2f-6f3c97026100-snapshots\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" Feb 16 21:23:11.410593 master-0 kubenswrapper[38936]: I0216 21:23:11.409939 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/e9615af2-cad5-4705-9c2f-6f3c97026100-snapshots\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" Feb 16 21:23:11.410593 master-0 kubenswrapper[38936]: I0216 21:23:11.409950 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-sys\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.410593 master-0 kubenswrapper[38936]: I0216 21:23:11.409995 38936 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/03a5021d-8a5c-4011-a9f9-c5eb38d5f236-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-7mpsf\" (UID: \"03a5021d-8a5c-4011-a9f9-c5eb38d5f236\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf" Feb 16 21:23:11.410593 master-0 kubenswrapper[38936]: I0216 21:23:11.410036 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1489d1b6-d8a1-453a-bff3-8adfd4335903-config\") pod \"route-controller-manager-85d99cfd66-kjw24\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") " pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" Feb 16 21:23:11.410593 master-0 kubenswrapper[38936]: I0216 21:23:11.410060 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d2501eec-47c8-47bc-b0c9-28d94c06075b-node-pullsecrets\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 21:23:11.411542 master-0 kubenswrapper[38936]: I0216 21:23:11.410109 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-var-lib-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.411542 master-0 kubenswrapper[38936]: I0216 21:23:11.410125 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d2501eec-47c8-47bc-b0c9-28d94c06075b-node-pullsecrets\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: 
\"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 21:23:11.411542 master-0 kubenswrapper[38936]: I0216 21:23:11.411335 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-k8h7h\" (UID: \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" Feb 16 21:23:11.411542 master-0 kubenswrapper[38936]: I0216 21:23:11.411452 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bd49e653-3b42-4950-8f5f-2b2ecb683678-audit-dir\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 21:23:11.411542 master-0 kubenswrapper[38936]: I0216 21:23:11.410161 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-var-lib-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.411542 master-0 kubenswrapper[38936]: E0216 21:23:11.411476 38936 configmap.go:193] Couldn't get configMap openshift-multus/cni-sysctl-allowlist: object "openshift-multus"/"cni-sysctl-allowlist" not registered Feb 16 21:23:11.411542 master-0 kubenswrapper[38936]: E0216 21:23:11.411534 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-cni-sysctl-allowlist podName:3e3ccb9a-4a5d-4a04-8334-b1e303b215a5 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:11.91151911 +0000 UTC m=+22.263522462 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-cni-sysctl-allowlist") pod "cni-sysctl-allowlist-ds-k8h7h" (UID: "3e3ccb9a-4a5d-4a04-8334-b1e303b215a5") : object "openshift-multus"/"cni-sysctl-allowlist" not registered Feb 16 21:23:11.411542 master-0 kubenswrapper[38936]: I0216 21:23:11.411529 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bd49e653-3b42-4950-8f5f-2b2ecb683678-audit-dir\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 21:23:11.411880 master-0 kubenswrapper[38936]: I0216 21:23:11.411483 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hnc6\" (UniqueName: \"kubernetes.io/projected/55095f4f-cac0-456c-9ccc-45869392408c-kube-api-access-7hnc6\") pod \"cluster-samples-operator-f8cbff74c-d7lfl\" (UID: \"55095f4f-cac0-456c-9ccc-45869392408c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl" Feb 16 21:23:11.411880 master-0 kubenswrapper[38936]: I0216 21:23:11.411583 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-etc-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.411880 master-0 kubenswrapper[38936]: I0216 21:23:11.411607 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-client-ca-bundle\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 
16 21:23:11.411880 master-0 kubenswrapper[38936]: I0216 21:23:11.411629 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/408a9364-3730-4017-b1e4-c85d6a504168-serving-cert\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" Feb 16 21:23:11.411880 master-0 kubenswrapper[38936]: I0216 21:23:11.411673 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/230d9624-2d9d-4036-967b-b530347f05d5-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:23:11.411880 master-0 kubenswrapper[38936]: I0216 21:23:11.411686 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/69785167-b4ae-415b-bdcb-029f62effe78-etc-openvswitch\") pod \"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:11.413211 master-0 kubenswrapper[38936]: I0216 21:23:11.413165 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-systemd\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.413382 master-0 kubenswrapper[38936]: I0216 21:23:11.413334 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-etc-systemd\") pod \"tuned-llsw4\" 
(UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:11.423996 master-0 kubenswrapper[38936]: I0216 21:23:11.421377 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 21:23:11.430405 master-0 kubenswrapper[38936]: I0216 21:23:11.430283 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/99ab949e-bd0d-45a7-95d1-8381d9f1f5f3-signing-cabundle\") pod \"service-ca-676cd8b9b5-cbj2r\" (UID: \"99ab949e-bd0d-45a7-95d1-8381d9f1f5f3\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" Feb 16 21:23:11.441838 master-0 kubenswrapper[38936]: I0216 21:23:11.441778 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 21:23:11.448470 master-0 kubenswrapper[38936]: I0216 21:23:11.448433 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/99ab949e-bd0d-45a7-95d1-8381d9f1f5f3-signing-key\") pod \"service-ca-676cd8b9b5-cbj2r\" (UID: \"99ab949e-bd0d-45a7-95d1-8381d9f1f5f3\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" Feb 16 21:23:11.461407 master-0 kubenswrapper[38936]: I0216 21:23:11.461374 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 21:23:11.481244 master-0 kubenswrapper[38936]: I0216 21:23:11.481165 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 21:23:11.505694 master-0 kubenswrapper[38936]: I0216 21:23:11.501887 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 21:23:11.509320 master-0 kubenswrapper[38936]: I0216 21:23:11.509290 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-etcd-serving-ca\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 21:23:11.514370 master-0 kubenswrapper[38936]: I0216 21:23:11.514343 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-cni-sysctl-allowlist\") pod \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\" (UID: \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\") " Feb 16 21:23:11.514504 master-0 kubenswrapper[38936]: I0216 21:23:11.514483 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-ready\") pod \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\" (UID: \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\") " Feb 16 21:23:11.514699 master-0 kubenswrapper[38936]: I0216 21:23:11.514680 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/88c9d2fb-763f-4405-8d1a-c39039b41d3b-rootfs\") pod \"machine-config-daemon-jb6tl\" (UID: \"88c9d2fb-763f-4405-8d1a-c39039b41d3b\") " pod="openshift-machine-config-operator/machine-config-daemon-jb6tl" Feb 16 21:23:11.514886 master-0 kubenswrapper[38936]: I0216 21:23:11.514871 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/230d9624-2d9d-4036-967b-b530347f05d5-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:23:11.514935 master-0 kubenswrapper[38936]: I0216 21:23:11.514927 38936 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-k8h7h\" (UID: \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" Feb 16 21:23:11.514974 master-0 kubenswrapper[38936]: I0216 21:23:11.514949 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0186fdbf-d367-4bc6-816a-bda2816b599e-trusted-ca\") pod \"console-operator-7777d5cc66-fgr2n\" (UID: \"0186fdbf-d367-4bc6-816a-bda2816b599e\") " pod="openshift-console-operator/console-operator-7777d5cc66-fgr2n" Feb 16 21:23:11.515006 master-0 kubenswrapper[38936]: I0216 21:23:11.514994 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0186fdbf-d367-4bc6-816a-bda2816b599e-serving-cert\") pod \"console-operator-7777d5cc66-fgr2n\" (UID: \"0186fdbf-d367-4bc6-816a-bda2816b599e\") " pod="openshift-console-operator/console-operator-7777d5cc66-fgr2n" Feb 16 21:23:11.515073 master-0 kubenswrapper[38936]: I0216 21:23:11.515059 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-wtmp\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:23:11.515111 master-0 kubenswrapper[38936]: I0216 21:23:11.515087 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1f8a26db-5a90-4da9-9074-33256ef17100-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 
21:23:11.515215 master-0 kubenswrapper[38936]: I0216 21:23:11.515203 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-sys\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:23:11.515271 master-0 kubenswrapper[38936]: I0216 21:23:11.515260 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0186fdbf-d367-4bc6-816a-bda2816b599e-config\") pod \"console-operator-7777d5cc66-fgr2n\" (UID: \"0186fdbf-d367-4bc6-816a-bda2816b599e\") " pod="openshift-console-operator/console-operator-7777d5cc66-fgr2n" Feb 16 21:23:11.515411 master-0 kubenswrapper[38936]: I0216 21:23:11.515386 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-wtmp\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:23:11.515444 master-0 kubenswrapper[38936]: I0216 21:23:11.515425 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1f8a26db-5a90-4da9-9074-33256ef17100-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 21:23:11.515472 master-0 kubenswrapper[38936]: I0216 21:23:11.515448 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-sys\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:23:11.515578 master-0 kubenswrapper[38936]: 
I0216 21:23:11.515560 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-k8h7h\" (UID: \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" Feb 16 21:23:11.515625 master-0 kubenswrapper[38936]: I0216 21:23:11.515602 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/88c9d2fb-763f-4405-8d1a-c39039b41d3b-rootfs\") pod \"machine-config-daemon-jb6tl\" (UID: \"88c9d2fb-763f-4405-8d1a-c39039b41d3b\") " pod="openshift-machine-config-operator/machine-config-daemon-jb6tl" Feb 16 21:23:11.515719 master-0 kubenswrapper[38936]: I0216 21:23:11.515679 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "3e3ccb9a-4a5d-4a04-8334-b1e303b215a5" (UID: "3e3ccb9a-4a5d-4a04-8334-b1e303b215a5"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:23:11.515900 master-0 kubenswrapper[38936]: I0216 21:23:11.515877 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/230d9624-2d9d-4036-967b-b530347f05d5-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:23:11.515948 master-0 kubenswrapper[38936]: I0216 21:23:11.515929 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbb86\" (UniqueName: \"kubernetes.io/projected/0186fdbf-d367-4bc6-816a-bda2816b599e-kube-api-access-nbb86\") pod \"console-operator-7777d5cc66-fgr2n\" (UID: \"0186fdbf-d367-4bc6-816a-bda2816b599e\") " pod="openshift-console-operator/console-operator-7777d5cc66-fgr2n" Feb 16 21:23:11.516104 master-0 kubenswrapper[38936]: I0216 21:23:11.516074 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-root\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:23:11.516146 master-0 kubenswrapper[38936]: I0216 21:23:11.516109 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-ready" (OuterVolumeSpecName: "ready") pod "3e3ccb9a-4a5d-4a04-8334-b1e303b215a5" (UID: "3e3ccb9a-4a5d-4a04-8334-b1e303b215a5"). InnerVolumeSpecName "ready". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:23:11.516197 master-0 kubenswrapper[38936]: I0216 21:23:11.516176 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-root\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:23:11.516237 master-0 kubenswrapper[38936]: I0216 21:23:11.516224 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f8a26db-5a90-4da9-9074-33256ef17100-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 21:23:11.516272 master-0 kubenswrapper[38936]: I0216 21:23:11.516255 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f8a26db-5a90-4da9-9074-33256ef17100-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 21:23:11.516506 master-0 kubenswrapper[38936]: I0216 21:23:11.516484 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 21:23:11.516562 master-0 kubenswrapper[38936]: I0216 21:23:11.516548 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/277c6354-bff9-407b-ad97-5fdfc7f43116-monitoring-plugin-cert\") pod \"monitoring-plugin-749f8d8bbd-z9ndp\" (UID: 
\"277c6354-bff9-407b-ad97-5fdfc7f43116\") " pod="openshift-monitoring/monitoring-plugin-749f8d8bbd-z9ndp" Feb 16 21:23:11.516874 master-0 kubenswrapper[38936]: I0216 21:23:11.516852 38936 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-ready\") on node \"master-0\" DevicePath \"\"" Feb 16 21:23:11.516913 master-0 kubenswrapper[38936]: I0216 21:23:11.516874 38936 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Feb 16 21:23:11.521246 master-0 kubenswrapper[38936]: I0216 21:23:11.521200 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 21:23:11.527046 master-0 kubenswrapper[38936]: I0216 21:23:11.527011 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2501eec-47c8-47bc-b0c9-28d94c06075b-serving-cert\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 21:23:11.543231 master-0 kubenswrapper[38936]: I0216 21:23:11.543168 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 21:23:11.548221 master-0 kubenswrapper[38936]: I0216 21:23:11.548189 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d2501eec-47c8-47bc-b0c9-28d94c06075b-encryption-config\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 21:23:11.551813 master-0 kubenswrapper[38936]: I0216 21:23:11.551780 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-etcd/etcd-master-0" Feb 16 21:23:11.551894 master-0 kubenswrapper[38936]: I0216 21:23:11.551849 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Feb 16 21:23:11.552457 master-0 kubenswrapper[38936]: I0216 21:23:11.552422 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:23:11.558202 master-0 kubenswrapper[38936]: I0216 21:23:11.558123 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:23:11.561362 master-0 kubenswrapper[38936]: I0216 21:23:11.561319 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 21:23:11.563366 master-0 kubenswrapper[38936]: I0216 21:23:11.563332 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:23:11.563366 master-0 kubenswrapper[38936]: I0216 21:23:11.563351 38936 scope.go:117] "RemoveContainer" containerID="43047bae0f2dd351891e082f8932168325d435e7cb25fa3bae528c469bde358f" Feb 16 21:23:11.563471 master-0 kubenswrapper[38936]: I0216 21:23:11.563390 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:23:11.563471 master-0 kubenswrapper[38936]: I0216 21:23:11.563400 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:23:11.563471 master-0 kubenswrapper[38936]: I0216 21:23:11.563434 38936 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:23:11.565456 master-0 kubenswrapper[38936]: I0216 21:23:11.565427 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/d2501eec-47c8-47bc-b0c9-28d94c06075b-etcd-client\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 21:23:11.580763 master-0 kubenswrapper[38936]: I0216 21:23:11.580719 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 21:23:11.583256 master-0 kubenswrapper[38936]: I0216 21:23:11.583209 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-audit\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 21:23:11.597784 master-0 kubenswrapper[38936]: I0216 21:23:11.597739 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:23:11.597921 master-0 kubenswrapper[38936]: I0216 21:23:11.597797 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:23:11.597921 master-0 kubenswrapper[38936]: I0216 21:23:11.597874 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:23:11.598009 master-0 kubenswrapper[38936]: I0216 21:23:11.597932 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:23:11.610270 master-0 kubenswrapper[38936]: I0216 21:23:11.610210 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 21:23:11.611323 master-0 kubenswrapper[38936]: I0216 21:23:11.611289 38936 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-trusted-ca-bundle\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 21:23:11.618422 master-0 kubenswrapper[38936]: I0216 21:23:11.617561 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f8a26db-5a90-4da9-9074-33256ef17100-kubelet-dir\") pod \"1f8a26db-5a90-4da9-9074-33256ef17100\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " Feb 16 21:23:11.618422 master-0 kubenswrapper[38936]: I0216 21:23:11.617626 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-tuning-conf-dir\") pod \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\" (UID: \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\") " Feb 16 21:23:11.618422 master-0 kubenswrapper[38936]: I0216 21:23:11.617685 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1f8a26db-5a90-4da9-9074-33256ef17100-var-lock\") pod \"1f8a26db-5a90-4da9-9074-33256ef17100\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " Feb 16 21:23:11.618422 master-0 kubenswrapper[38936]: I0216 21:23:11.617683 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f8a26db-5a90-4da9-9074-33256ef17100-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1f8a26db-5a90-4da9-9074-33256ef17100" (UID: "1f8a26db-5a90-4da9-9074-33256ef17100"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:23:11.618422 master-0 kubenswrapper[38936]: I0216 21:23:11.617748 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "3e3ccb9a-4a5d-4a04-8334-b1e303b215a5" (UID: "3e3ccb9a-4a5d-4a04-8334-b1e303b215a5"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:23:11.618422 master-0 kubenswrapper[38936]: I0216 21:23:11.617847 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f8a26db-5a90-4da9-9074-33256ef17100-var-lock" (OuterVolumeSpecName: "var-lock") pod "1f8a26db-5a90-4da9-9074-33256ef17100" (UID: "1f8a26db-5a90-4da9-9074-33256ef17100"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:23:11.624536 master-0 kubenswrapper[38936]: I0216 21:23:11.624504 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 21:23:11.626159 master-0 kubenswrapper[38936]: I0216 21:23:11.626131 38936 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1f8a26db-5a90-4da9-9074-33256ef17100-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:23:11.626235 master-0 kubenswrapper[38936]: I0216 21:23:11.626168 38936 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:23:11.626235 master-0 kubenswrapper[38936]: I0216 21:23:11.626183 38936 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1f8a26db-5a90-4da9-9074-33256ef17100-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 
21:23:11.629594 master-0 kubenswrapper[38936]: I0216 21:23:11.629566 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-image-import-ca\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 21:23:11.634697 master-0 kubenswrapper[38936]: I0216 21:23:11.634621 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-7c64d55f8-z46jt_b27de289-c0f9-47ff-aac6-15b7bc1b178a/multus-admission-controller/0.log" Feb 16 21:23:11.634807 master-0 kubenswrapper[38936]: I0216 21:23:11.634722 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" Feb 16 21:23:11.641751 master-0 kubenswrapper[38936]: I0216 21:23:11.641719 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 21:23:11.644004 master-0 kubenswrapper[38936]: I0216 21:23:11.643983 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf-config-volume\") pod \"dns-default-7bbrn\" (UID: \"2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf\") " pod="openshift-dns/dns-default-7bbrn" Feb 16 21:23:11.661036 master-0 kubenswrapper[38936]: I0216 21:23:11.661001 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 21:23:11.665505 master-0 kubenswrapper[38936]: I0216 21:23:11.665425 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf-metrics-tls\") pod \"dns-default-7bbrn\" (UID: \"2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf\") " 
pod="openshift-dns/dns-default-7bbrn" Feb 16 21:23:11.681811 master-0 kubenswrapper[38936]: I0216 21:23:11.681732 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 21:23:11.701309 master-0 kubenswrapper[38936]: I0216 21:23:11.701251 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 21:23:11.721422 master-0 kubenswrapper[38936]: I0216 21:23:11.721347 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 21:23:11.741699 master-0 kubenswrapper[38936]: I0216 21:23:11.741632 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 21:23:11.762541 master-0 kubenswrapper[38936]: I0216 21:23:11.761026 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 16 21:23:11.765484 master-0 kubenswrapper[38936]: I0216 21:23:11.765443 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/e8194cdc-3133-49e2-9579-a747c0bf2b16-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 21:23:11.780878 master-0 kubenswrapper[38936]: I0216 21:23:11.780352 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 21:23:11.784105 master-0 kubenswrapper[38936]: I0216 21:23:11.784057 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2501eec-47c8-47bc-b0c9-28d94c06075b-config\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 
21:23:11.803105 master-0 kubenswrapper[38936]: I0216 21:23:11.802762 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Feb 16 21:23:11.824496 master-0 kubenswrapper[38936]: I0216 21:23:11.824397 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 21:23:11.826820 master-0 kubenswrapper[38936]: I0216 21:23:11.826766 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-default-certificate\") pod \"router-default-864ddd5f56-z4bnk\" (UID: \"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee\") " pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:23:11.848991 master-0 kubenswrapper[38936]: I0216 21:23:11.848905 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Feb 16 21:23:11.856687 master-0 kubenswrapper[38936]: I0216 21:23:11.856609 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/e8194cdc-3133-49e2-9579-a747c0bf2b16-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 21:23:11.861806 master-0 kubenswrapper[38936]: I0216 21:23:11.861755 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 21:23:11.864195 master-0 kubenswrapper[38936]: I0216 21:23:11.864135 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-metrics-certs\") pod \"router-default-864ddd5f56-z4bnk\" (UID: \"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee\") " 
pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:23:11.881168 master-0 kubenswrapper[38936]: I0216 21:23:11.881102 38936 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 21:23:11.881452 master-0 kubenswrapper[38936]: I0216 21:23:11.881395 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Feb 16 21:23:11.901242 master-0 kubenswrapper[38936]: I0216 21:23:11.901194 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 21:23:11.905301 master-0 kubenswrapper[38936]: I0216 21:23:11.905261 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-stats-auth\") pod \"router-default-864ddd5f56-z4bnk\" (UID: \"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee\") " pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:23:11.921539 master-0 kubenswrapper[38936]: I0216 21:23:11.921493 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 21:23:11.929986 master-0 kubenswrapper[38936]: I0216 21:23:11.929226 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-service-ca-bundle\") pod \"router-default-864ddd5f56-z4bnk\" (UID: \"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee\") " pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:23:11.941339 master-0 kubenswrapper[38936]: I0216 21:23:11.941256 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 21:23:11.961956 master-0 kubenswrapper[38936]: I0216 21:23:11.961880 38936 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 21:23:11.982474 master-0 kubenswrapper[38936]: I0216 21:23:11.982434 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 16 21:23:11.983161 master-0 kubenswrapper[38936]: I0216 21:23:11.983107 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/da07cd48-b1e8-4ccc-b980-84702cedb042-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-hsz6m\" (UID: \"da07cd48-b1e8-4ccc-b980-84702cedb042\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-hsz6m" Feb 16 21:23:12.000965 master-0 kubenswrapper[38936]: I0216 21:23:12.000905 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 21:23:12.010814 master-0 kubenswrapper[38936]: I0216 21:23:12.010770 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/302156cc-9dca-4a66-9e6a-ba2c7e738c92-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-8pqbl\" (UID: \"302156cc-9dca-4a66-9e6a-ba2c7e738c92\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl" Feb 16 21:23:12.021475 master-0 kubenswrapper[38936]: I0216 21:23:12.021442 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 21:23:12.041330 master-0 kubenswrapper[38936]: I0216 21:23:12.041271 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 21:23:12.085320 master-0 kubenswrapper[38936]: I0216 21:23:12.085251 38936 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-controller"/"openshift-service-ca.crt" Feb 16 21:23:12.090608 master-0 kubenswrapper[38936]: I0216 21:23:12.090556 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 16 21:23:12.099358 master-0 kubenswrapper[38936]: I0216 21:23:12.099282 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: \"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 21:23:12.102702 master-0 kubenswrapper[38936]: I0216 21:23:12.102630 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Feb 16 21:23:12.120629 master-0 kubenswrapper[38936]: I0216 21:23:12.120564 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 21:23:12.126021 master-0 kubenswrapper[38936]: I0216 21:23:12.125949 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-serving-cert\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 21:23:12.141864 master-0 kubenswrapper[38936]: I0216 21:23:12.141801 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 21:23:12.165865 master-0 kubenswrapper[38936]: I0216 21:23:12.165730 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 21:23:12.171771 
master-0 kubenswrapper[38936]: I0216 21:23:12.171619 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd49e653-3b42-4950-8f5f-2b2ecb683678-serving-cert\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 21:23:12.181429 master-0 kubenswrapper[38936]: I0216 21:23:12.181354 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 21:23:12.190256 master-0 kubenswrapper[38936]: I0216 21:23:12.190069 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bd49e653-3b42-4950-8f5f-2b2ecb683678-encryption-config\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 21:23:12.201958 master-0 kubenswrapper[38936]: I0216 21:23:12.201769 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 21:23:12.214693 master-0 kubenswrapper[38936]: I0216 21:23:12.214568 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bd49e653-3b42-4950-8f5f-2b2ecb683678-etcd-client\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 21:23:12.222028 master-0 kubenswrapper[38936]: I0216 21:23:12.221930 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 21:23:12.231802 master-0 kubenswrapper[38936]: I0216 21:23:12.231726 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/bd49e653-3b42-4950-8f5f-2b2ecb683678-audit-policies\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 21:23:12.239035 master-0 kubenswrapper[38936]: I0216 21:23:12.238967 38936 request.go:700] Waited for 1.014701271s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&limit=500&resourceVersion=0 Feb 16 21:23:12.241338 master-0 kubenswrapper[38936]: I0216 21:23:12.241271 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 21:23:12.249775 master-0 kubenswrapper[38936]: I0216 21:23:12.249604 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bd49e653-3b42-4950-8f5f-2b2ecb683678-etcd-serving-ca\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 21:23:12.260809 master-0 kubenswrapper[38936]: I0216 21:23:12.260739 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 21:23:12.270001 master-0 kubenswrapper[38936]: I0216 21:23:12.269927 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd49e653-3b42-4950-8f5f-2b2ecb683678-trusted-ca-bundle\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 21:23:12.280947 master-0 kubenswrapper[38936]: I0216 21:23:12.280867 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 
21:23:12.289963 master-0 kubenswrapper[38936]: E0216 21:23:12.289846 38936 configmap.go:193] Couldn't get configMap openshift-cluster-version/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.289963 master-0 kubenswrapper[38936]: E0216 21:23:12.289846 38936 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.289963 master-0 kubenswrapper[38936]: E0216 21:23:12.290000 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-service-ca podName:a5d4ac48-aed3-46b9-9b2a-d741121e05b4 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.789973552 +0000 UTC m=+23.141976914 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-service-ca") pod "cluster-version-operator-649c4f5445-n994s" (UID: "a5d4ac48-aed3-46b9-9b2a-d741121e05b4") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.290307 master-0 kubenswrapper[38936]: E0216 21:23:12.290048 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs podName:b27de289-c0f9-47ff-aac6-15b7bc1b178a nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.790034525 +0000 UTC m=+23.142037887 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs") pod "multus-admission-controller-7c64d55f8-z46jt" (UID: "b27de289-c0f9-47ff-aac6-15b7bc1b178a") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.301283 master-0 kubenswrapper[38936]: I0216 21:23:12.301226 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 21:23:12.320013 master-0 kubenswrapper[38936]: I0216 21:23:12.319914 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-7c64d55f8-z46jt_b27de289-c0f9-47ff-aac6-15b7bc1b178a/multus-admission-controller/0.log" Feb 16 21:23:12.321036 master-0 kubenswrapper[38936]: I0216 21:23:12.320231 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" Feb 16 21:23:12.322632 master-0 kubenswrapper[38936]: I0216 21:23:12.322581 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 21:23:12.322800 master-0 kubenswrapper[38936]: I0216 21:23:12.322769 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_e300ec3a145c1339a627607b3c84b99d/kube-apiserver-check-endpoints/0.log" Feb 16 21:23:12.328541 master-0 kubenswrapper[38936]: I0216 21:23:12.328496 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" Feb 16 21:23:12.328753 master-0 kubenswrapper[38936]: I0216 21:23:12.328669 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 21:23:12.341126 master-0 kubenswrapper[38936]: I0216 21:23:12.341066 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 21:23:12.343731 master-0 kubenswrapper[38936]: I0216 21:23:12.343697 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs\") pod \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\" (UID: \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\") " Feb 16 21:23:12.347512 master-0 kubenswrapper[38936]: I0216 21:23:12.347360 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "b27de289-c0f9-47ff-aac6-15b7bc1b178a" (UID: "b27de289-c0f9-47ff-aac6-15b7bc1b178a"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:23:12.370344 master-0 kubenswrapper[38936]: I0216 21:23:12.370271 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/55095f4f-cac0-456c-9ccc-45869392408c-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-d7lfl\" (UID: \"55095f4f-cac0-456c-9ccc-45869392408c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl" Feb 16 21:23:12.372489 master-0 kubenswrapper[38936]: I0216 21:23:12.371455 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 21:23:12.390324 master-0 kubenswrapper[38936]: I0216 21:23:12.390250 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Feb 16 21:23:12.399161 master-0 kubenswrapper[38936]: I0216 21:23:12.399104 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03a5021d-8a5c-4011-a9f9-c5eb38d5f236-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-7mpsf\" (UID: \"03a5021d-8a5c-4011-a9f9-c5eb38d5f236\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf" Feb 16 21:23:12.401091 master-0 kubenswrapper[38936]: E0216 21:23:12.400977 38936 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.401682 master-0 kubenswrapper[38936]: E0216 21:23:12.401603 38936 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.401682 master-0 kubenswrapper[38936]: E0216 21:23:12.401614 38936 secret.go:189] Couldn't get secret 
openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.401682 master-0 kubenswrapper[38936]: E0216 21:23:12.401590 38936 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.401682 master-0 kubenswrapper[38936]: E0216 21:23:12.401692 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-tls podName:e9bd1f48-6d45-4045-b18e-46ce3005d51d nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.901672829 +0000 UTC m=+23.253676191 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-tls") pod "kube-state-metrics-7cc9598d54-n467n" (UID: "e9bd1f48-6d45-4045-b18e-46ce3005d51d") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.401682 master-0 kubenswrapper[38936]: E0216 21:23:12.401577 38936 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.402196 master-0 kubenswrapper[38936]: E0216 21:23:12.401710 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/319dc882-e1f5-40f9-99f4-2bae028337e5-webhook-cert podName:319dc882-e1f5-40f9-99f4-2bae028337e5 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.901703779 +0000 UTC m=+23.253707141 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/319dc882-e1f5-40f9-99f4-2bae028337e5-webhook-cert") pod "packageserver-78d4b6b677-npmx4" (UID: "319dc882-e1f5-40f9-99f4-2bae028337e5") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.402196 master-0 kubenswrapper[38936]: E0216 21:23:12.401515 38936 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.402196 master-0 kubenswrapper[38936]: E0216 21:23:12.401725 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/230d9624-2d9d-4036-967b-b530347f05d5-images podName:230d9624-2d9d-4036-967b-b530347f05d5 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.90171984 +0000 UTC m=+23.253723202 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/230d9624-2d9d-4036-967b-b530347f05d5-images") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" (UID: "230d9624-2d9d-4036-967b-b530347f05d5") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.402196 master-0 kubenswrapper[38936]: E0216 21:23:12.401872 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff193060-a272-4e4e-990a-83ac410f523d-images podName:ff193060-a272-4e4e-990a-83ac410f523d nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.901843923 +0000 UTC m=+23.253847325 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/ff193060-a272-4e4e-990a-83ac410f523d-images") pod "machine-config-operator-84976bb859-jwh5s" (UID: "ff193060-a272-4e4e-990a-83ac410f523d") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.402196 master-0 kubenswrapper[38936]: E0216 21:23:12.401910 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/065fcd43-1572-4152-b77b-a6b7ab52a081-auth-proxy-config podName:065fcd43-1572-4152-b77b-a6b7ab52a081 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.901897024 +0000 UTC m=+23.253900426 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/065fcd43-1572-4152-b77b-a6b7ab52a081-auth-proxy-config") pod "machine-approver-8569dd85ff-kvhs4" (UID: "065fcd43-1572-4152-b77b-a6b7ab52a081") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.402698 master-0 kubenswrapper[38936]: E0216 21:23:12.402223 38936 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.402698 master-0 kubenswrapper[38936]: E0216 21:23:12.402293 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert podName:0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.902275135 +0000 UTC m=+23.254278537 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert") pod "ingress-canary-l44qd" (UID: "0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.402698 master-0 kubenswrapper[38936]: I0216 21:23:12.402228 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Feb 16 21:23:12.402698 master-0 kubenswrapper[38936]: E0216 21:23:12.402542 38936 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.402698 master-0 kubenswrapper[38936]: E0216 21:23:12.402600 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/230d9624-2d9d-4036-967b-b530347f05d5-auth-proxy-config podName:230d9624-2d9d-4036-967b-b530347f05d5 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.902587613 +0000 UTC m=+23.254590995 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/230d9624-2d9d-4036-967b-b530347f05d5-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" (UID: "230d9624-2d9d-4036-967b-b530347f05d5") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.402698 master-0 kubenswrapper[38936]: E0216 21:23:12.402611 38936 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.402698 master-0 kubenswrapper[38936]: E0216 21:23:12.402709 38936 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.403203 master-0 kubenswrapper[38936]: E0216 21:23:12.402713 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9615af2-cad5-4705-9c2f-6f3c97026100-serving-cert podName:e9615af2-cad5-4705-9c2f-6f3c97026100 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.902695006 +0000 UTC m=+23.254698408 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e9615af2-cad5-4705-9c2f-6f3c97026100-serving-cert") pod "insights-operator-cb4f7b4cf-h8f7q" (UID: "e9615af2-cad5-4705-9c2f-6f3c97026100") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.403203 master-0 kubenswrapper[38936]: E0216 21:23:12.402783 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/065fcd43-1572-4152-b77b-a6b7ab52a081-machine-approver-tls podName:065fcd43-1572-4152-b77b-a6b7ab52a081 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.902774498 +0000 UTC m=+23.254777860 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/065fcd43-1572-4152-b77b-a6b7ab52a081-machine-approver-tls") pod "machine-approver-8569dd85ff-kvhs4" (UID: "065fcd43-1572-4152-b77b-a6b7ab52a081") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.403453 master-0 kubenswrapper[38936]: E0216 21:23:12.403331 38936 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.403453 master-0 kubenswrapper[38936]: E0216 21:23:12.403376 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-config podName:408a9364-3730-4017-b1e4-c85d6a504168 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.903363894 +0000 UTC m=+23.255367476 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-config") pod "controller-manager-6998cd96fb-bgcb2" (UID: "408a9364-3730-4017-b1e4-c85d6a504168") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.403453 master-0 kubenswrapper[38936]: E0216 21:23:12.403391 38936 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.403453 master-0 kubenswrapper[38936]: E0216 21:23:12.403396 38936 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.403453 master-0 kubenswrapper[38936]: E0216 21:23:12.403420 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/065fcd43-1572-4152-b77b-a6b7ab52a081-config podName:065fcd43-1572-4152-b77b-a6b7ab52a081 nodeName:}" failed. 
No retries permitted until 2026-02-16 21:23:12.903411915 +0000 UTC m=+23.255415277 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/065fcd43-1572-4152-b77b-a6b7ab52a081-config") pod "machine-approver-8569dd85ff-kvhs4" (UID: "065fcd43-1572-4152-b77b-a6b7ab52a081") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.403453 master-0 kubenswrapper[38936]: E0216 21:23:12.403462 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9615af2-cad5-4705-9c2f-6f3c97026100-trusted-ca-bundle podName:e9615af2-cad5-4705-9c2f-6f3c97026100 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.903446307 +0000 UTC m=+23.255449709 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/e9615af2-cad5-4705-9c2f-6f3c97026100-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-h8f7q" (UID: "e9615af2-cad5-4705-9c2f-6f3c97026100") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.404006 master-0 kubenswrapper[38936]: E0216 21:23:12.403748 38936 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.404006 master-0 kubenswrapper[38936]: E0216 21:23:12.403816 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/88c9d2fb-763f-4405-8d1a-c39039b41d3b-mcd-auth-proxy-config podName:88c9d2fb-763f-4405-8d1a-c39039b41d3b nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.903800436 +0000 UTC m=+23.255803808 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/88c9d2fb-763f-4405-8d1a-c39039b41d3b-mcd-auth-proxy-config") pod "machine-config-daemon-jb6tl" (UID: "88c9d2fb-763f-4405-8d1a-c39039b41d3b") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.404458 master-0 kubenswrapper[38936]: E0216 21:23:12.404417 38936 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.404578 master-0 kubenswrapper[38936]: E0216 21:23:12.404496 38936 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.404578 master-0 kubenswrapper[38936]: E0216 21:23:12.404531 38936 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.404818 master-0 kubenswrapper[38936]: E0216 21:23:12.404421 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-configmap-kubelet-serving-ca-bundle podName:4a9f4f96-ca31-4959-93fe-c094caf8e077 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.904377351 +0000 UTC m=+23.256380753 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-configmap-kubelet-serving-ca-bundle") pod "metrics-server-76c9c896c-pz2bk" (UID: "4a9f4f96-ca31-4959-93fe-c094caf8e077") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.405004 master-0 kubenswrapper[38936]: E0216 21:23:12.404974 38936 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.405116 master-0 kubenswrapper[38936]: E0216 21:23:12.405007 38936 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.405223 master-0 kubenswrapper[38936]: E0216 21:23:12.404973 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a0b7a368-1408-4fc3-ae25-4613b74e7fca-metrics-client-ca podName:a0b7a368-1408-4fc3-ae25-4613b74e7fca nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.904947847 +0000 UTC m=+23.256951259 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/a0b7a368-1408-4fc3-ae25-4613b74e7fca-metrics-client-ca") pod "prometheus-operator-7485d645b8-9xc4n" (UID: "a0b7a368-1408-4fc3-ae25-4613b74e7fca") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.405393 master-0 kubenswrapper[38936]: E0216 21:23:12.405370 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-tls podName:7d6eb694-9a3d-49d1-bbc1-74ba4450d673 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.905348707 +0000 UTC m=+23.257352119 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-tls") pod "node-exporter-ctvb2" (UID: "7d6eb694-9a3d-49d1-bbc1-74ba4450d673") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.405566 master-0 kubenswrapper[38936]: E0216 21:23:12.405417 38936 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.405566 master-0 kubenswrapper[38936]: E0216 21:23:12.405548 38936 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.405844 master-0 kubenswrapper[38936]: E0216 21:23:12.405540 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba294358-051a-4f09-b182-710d3d6778c5-images podName:ba294358-051a-4f09-b182-710d3d6778c5 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.905523782 +0000 UTC m=+23.257527184 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/ba294358-051a-4f09-b182-710d3d6778c5-images") pod "machine-api-operator-bd7dd5c46-27jwb" (UID: "ba294358-051a-4f09-b182-710d3d6778c5") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.406019 master-0 kubenswrapper[38936]: E0216 21:23:12.405431 38936 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.406141 master-0 kubenswrapper[38936]: E0216 21:23:12.405485 38936 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.406141 master-0 kubenswrapper[38936]: E0216 21:23:12.406062 38936 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.406141 master-0 kubenswrapper[38936]: E0216 21:23:12.405491 38936 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.406141 master-0 kubenswrapper[38936]: E0216 21:23:12.406126 38936 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.406494 master-0 kubenswrapper[38936]: E0216 21:23:12.406466 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b648d9e-a892-4951-b0e2-fed6b16273d4-cluster-baremetal-operator-tls podName:8b648d9e-a892-4951-b0e2-fed6b16273d4 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.905992294 +0000 UTC m=+23.257995696 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/8b648d9e-a892-4951-b0e2-fed6b16273d4-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-xwptz" (UID: "8b648d9e-a892-4951-b0e2-fed6b16273d4") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.406706 master-0 kubenswrapper[38936]: E0216 21:23:12.406680 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/913951bb-1702-4b71-862c-a166bc7a62e0-certs podName:913951bb-1702-4b71-862c-a166bc7a62e0 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.906635852 +0000 UTC m=+23.258639254 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/913951bb-1702-4b71-862c-a166bc7a62e0-certs") pod "machine-config-server-qvctv" (UID: "913951bb-1702-4b71-862c-a166bc7a62e0") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.406910 master-0 kubenswrapper[38936]: E0216 21:23:12.406699 38936 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.406910 master-0 kubenswrapper[38936]: E0216 21:23:12.406903 38936 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.407100 master-0 kubenswrapper[38936]: E0216 21:23:12.406884 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7b30888-5994-4968-9db6-9533ac60c92e-openshift-state-metrics-kube-rbac-proxy-config podName:f7b30888-5994-4968-9db6-9533ac60c92e nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.906865488 +0000 UTC m=+23.258868890 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/f7b30888-5994-4968-9db6-9533ac60c92e-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-546cc7d765-s4j9z" (UID: "f7b30888-5994-4968-9db6-9533ac60c92e") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.407281 master-0 kubenswrapper[38936]: E0216 21:23:12.406720 38936 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.407378 master-0 kubenswrapper[38936]: E0216 21:23:12.406736 38936 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.407378 master-0 kubenswrapper[38936]: E0216 21:23:12.406755 38936 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.407378 master-0 kubenswrapper[38936]: E0216 21:23:12.406793 38936 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.407378 master-0 kubenswrapper[38936]: E0216 21:23:12.406796 38936 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.407639 master-0 kubenswrapper[38936]: E0216 21:23:12.406801 38936 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.407639 master-0 kubenswrapper[38936]: E0216 21:23:12.406825 38936 secret.go:189] Couldn't get secret 
openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.407639 master-0 kubenswrapper[38936]: E0216 21:23:12.407248 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa2e9bbc-3962-45f5-a7cc-2dc059409e70-cluster-storage-operator-serving-cert podName:aa2e9bbc-3962-45f5-a7cc-2dc059409e70 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.907229237 +0000 UTC m=+23.259232639 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/aa2e9bbc-3962-45f5-a7cc-2dc059409e70-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-g4w5m" (UID: "aa2e9bbc-3962-45f5-a7cc-2dc059409e70") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.407639 master-0 kubenswrapper[38936]: E0216 21:23:12.407474 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/319dc882-e1f5-40f9-99f4-2bae028337e5-apiservice-cert podName:319dc882-e1f5-40f9-99f4-2bae028337e5 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.907459013 +0000 UTC m=+23.259462605 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/319dc882-e1f5-40f9-99f4-2bae028337e5-apiservice-cert") pod "packageserver-78d4b6b677-npmx4" (UID: "319dc882-e1f5-40f9-99f4-2bae028337e5") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.407639 master-0 kubenswrapper[38936]: E0216 21:23:12.407492 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7b30888-5994-4968-9db6-9533ac60c92e-openshift-state-metrics-tls podName:f7b30888-5994-4968-9db6-9533ac60c92e nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.907483734 +0000 UTC m=+23.259487346 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/f7b30888-5994-4968-9db6-9533ac60c92e-openshift-state-metrics-tls") pod "openshift-state-metrics-546cc7d765-s4j9z" (UID: "f7b30888-5994-4968-9db6-9533ac60c92e") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.407639 master-0 kubenswrapper[38936]: E0216 21:23:12.407513 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/230d9624-2d9d-4036-967b-b530347f05d5-cloud-controller-manager-operator-tls podName:230d9624-2d9d-4036-967b-b530347f05d5 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.907501634 +0000 UTC m=+23.259505256 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/230d9624-2d9d-4036-967b-b530347f05d5-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" (UID: "230d9624-2d9d-4036-967b-b530347f05d5") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.407639 master-0 kubenswrapper[38936]: E0216 21:23:12.407531 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff193060-a272-4e4e-990a-83ac410f523d-auth-proxy-config podName:ff193060-a272-4e4e-990a-83ac410f523d nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.907522805 +0000 UTC m=+23.259526407 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ff193060-a272-4e4e-990a-83ac410f523d-auth-proxy-config") pod "machine-config-operator-84976bb859-jwh5s" (UID: "ff193060-a272-4e4e-990a-83ac410f523d") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.407639 master-0 kubenswrapper[38936]: E0216 21:23:12.407551 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9615af2-cad5-4705-9c2f-6f3c97026100-service-ca-bundle podName:e9615af2-cad5-4705-9c2f-6f3c97026100 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.907542365 +0000 UTC m=+23.259545987 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/e9615af2-cad5-4705-9c2f-6f3c97026100-service-ca-bundle") pod "insights-operator-cb4f7b4cf-h8f7q" (UID: "e9615af2-cad5-4705-9c2f-6f3c97026100") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.407639 master-0 kubenswrapper[38936]: E0216 21:23:12.407570 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-metrics-client-ca podName:7d6eb694-9a3d-49d1-bbc1-74ba4450d673 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.907561076 +0000 UTC m=+23.259564678 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-metrics-client-ca") pod "node-exporter-ctvb2" (UID: "7d6eb694-9a3d-49d1-bbc1-74ba4450d673") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.407639 master-0 kubenswrapper[38936]: E0216 21:23:12.407589 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d7d0416-5f50-42bd-826b-92eecf9adcec-cert podName:1d7d0416-5f50-42bd-826b-92eecf9adcec nodeName:}" failed. 
No retries permitted until 2026-02-16 21:23:12.907579876 +0000 UTC m=+23.259583478 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1d7d0416-5f50-42bd-826b-92eecf9adcec-cert") pod "cluster-autoscaler-operator-67fd9768b5-557vd" (UID: "1d7d0416-5f50-42bd-826b-92eecf9adcec") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.407639 master-0 kubenswrapper[38936]: E0216 21:23:12.407604 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88c9d2fb-763f-4405-8d1a-c39039b41d3b-proxy-tls podName:88c9d2fb-763f-4405-8d1a-c39039b41d3b nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.907597297 +0000 UTC m=+23.259600889 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/88c9d2fb-763f-4405-8d1a-c39039b41d3b-proxy-tls") pod "machine-config-daemon-jb6tl" (UID: "88c9d2fb-763f-4405-8d1a-c39039b41d3b") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.407639 master-0 kubenswrapper[38936]: E0216 21:23:12.407620 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d7d0416-5f50-42bd-826b-92eecf9adcec-auth-proxy-config podName:1d7d0416-5f50-42bd-826b-92eecf9adcec nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.907613177 +0000 UTC m=+23.259616769 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/1d7d0416-5f50-42bd-826b-92eecf9adcec-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-557vd" (UID: "1d7d0416-5f50-42bd-826b-92eecf9adcec") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.407639 master-0 kubenswrapper[38936]: E0216 21:23:12.407636 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-custom-resource-state-configmap podName:e9bd1f48-6d45-4045-b18e-46ce3005d51d nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.907627788 +0000 UTC m=+23.259631390 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7cc9598d54-n467n" (UID: "e9bd1f48-6d45-4045-b18e-46ce3005d51d") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.407639 master-0 kubenswrapper[38936]: E0216 21:23:12.407671 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-metrics-server-audit-profiles podName:4a9f4f96-ca31-4959-93fe-c094caf8e077 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.907663939 +0000 UTC m=+23.259667531 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-metrics-server-audit-profiles") pod "metrics-server-76c9c896c-pz2bk" (UID: "4a9f4f96-ca31-4959-93fe-c094caf8e077") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.407639 master-0 kubenswrapper[38936]: E0216 21:23:12.407689 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-kube-rbac-proxy-config podName:e9bd1f48-6d45-4045-b18e-46ce3005d51d nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.90768255 +0000 UTC m=+23.259686182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7cc9598d54-n467n" (UID: "e9bd1f48-6d45-4045-b18e-46ce3005d51d") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.408760 master-0 kubenswrapper[38936]: E0216 21:23:12.407705 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1489d1b6-d8a1-453a-bff3-8adfd4335903-client-ca podName:1489d1b6-d8a1-453a-bff3-8adfd4335903 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.90769907 +0000 UTC m=+23.259702692 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1489d1b6-d8a1-453a-bff3-8adfd4335903-client-ca") pod "route-controller-manager-85d99cfd66-kjw24" (UID: "1489d1b6-d8a1-453a-bff3-8adfd4335903") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.408760 master-0 kubenswrapper[38936]: E0216 21:23:12.407722 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/913951bb-1702-4b71-862c-a166bc7a62e0-node-bootstrap-token podName:913951bb-1702-4b71-862c-a166bc7a62e0 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.907714851 +0000 UTC m=+23.259718453 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/913951bb-1702-4b71-862c-a166bc7a62e0-node-bootstrap-token") pod "machine-config-server-qvctv" (UID: "913951bb-1702-4b71-862c-a166bc7a62e0") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.409046 master-0 kubenswrapper[38936]: E0216 21:23:12.409013 38936 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.409046 master-0 kubenswrapper[38936]: E0216 21:23:12.409043 38936 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.409212 master-0 kubenswrapper[38936]: E0216 21:23:12.409059 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b648d9e-a892-4951-b0e2-fed6b16273d4-config podName:8b648d9e-a892-4951-b0e2-fed6b16273d4 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.909048687 +0000 UTC m=+23.261052049 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8b648d9e-a892-4951-b0e2-fed6b16273d4-config") pod "cluster-baremetal-operator-7bc947fc7d-xwptz" (UID: "8b648d9e-a892-4951-b0e2-fed6b16273d4") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.409212 master-0 kubenswrapper[38936]: E0216 21:23:12.409087 38936 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.409212 master-0 kubenswrapper[38936]: E0216 21:23:12.409091 38936 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.409212 master-0 kubenswrapper[38936]: E0216 21:23:12.409097 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d56b871-a53a-4928-8967-a33ea9dcec2a-webhook-certs podName:8d56b871-a53a-4928-8967-a33ea9dcec2a nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.909082767 +0000 UTC m=+23.261086139 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8d56b871-a53a-4928-8967-a33ea9dcec2a-webhook-certs") pod "multus-admission-controller-6d678b8d67-shtrw" (UID: "8d56b871-a53a-4928-8967-a33ea9dcec2a") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.409212 master-0 kubenswrapper[38936]: E0216 21:23:12.409137 38936 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.409212 master-0 kubenswrapper[38936]: E0216 21:23:12.409156 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-proxy-ca-bundles podName:408a9364-3730-4017-b1e4-c85d6a504168 nodeName:}" failed. 
No retries permitted until 2026-02-16 21:23:12.909145059 +0000 UTC m=+23.261148651 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-proxy-ca-bundles") pod "controller-manager-6998cd96fb-bgcb2" (UID: "408a9364-3730-4017-b1e4-c85d6a504168") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.409212 master-0 kubenswrapper[38936]: E0216 21:23:12.409174 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls podName:a0b7a368-1408-4fc3-ae25-4613b74e7fca nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.9091665 +0000 UTC m=+23.261170122 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-9xc4n" (UID: "a0b7a368-1408-4fc3-ae25-4613b74e7fca") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.409212 master-0 kubenswrapper[38936]: E0216 21:23:12.409194 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fb1eac23-18a5-4706-adcd-81d83e04cd12-mcc-auth-proxy-config podName:fb1eac23-18a5-4706-adcd-81d83e04cd12 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.90918466 +0000 UTC m=+23.261188262 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "mcc-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/fb1eac23-18a5-4706-adcd-81d83e04cd12-mcc-auth-proxy-config") pod "machine-config-controller-686c884b4d-6j2l4" (UID: "fb1eac23-18a5-4706-adcd-81d83e04cd12") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.409212 master-0 kubenswrapper[38936]: E0216 21:23:12.409199 38936 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.409212 master-0 kubenswrapper[38936]: E0216 21:23:12.409196 38936 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.409212 master-0 kubenswrapper[38936]: E0216 21:23:12.409199 38936 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.410275 master-0 kubenswrapper[38936]: E0216 21:23:12.409253 38936 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.410275 master-0 kubenswrapper[38936]: E0216 21:23:12.409257 38936 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.410275 master-0 kubenswrapper[38936]: E0216 21:23:12.409208 38936 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.410275 master-0 kubenswrapper[38936]: E0216 21:23:12.409221 38936 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 
21:23:12.410275 master-0 kubenswrapper[38936]: E0216 21:23:12.409220 38936 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.410275 master-0 kubenswrapper[38936]: E0216 21:23:12.409266 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-kube-rbac-proxy-config podName:7d6eb694-9a3d-49d1-bbc1-74ba4450d673 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.909227371 +0000 UTC m=+23.261230743 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-kube-rbac-proxy-config") pod "node-exporter-ctvb2" (UID: "7d6eb694-9a3d-49d1-bbc1-74ba4450d673") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.410275 master-0 kubenswrapper[38936]: E0216 21:23:12.409380 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b648d9e-a892-4951-b0e2-fed6b16273d4-images podName:8b648d9e-a892-4951-b0e2-fed6b16273d4 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.909359865 +0000 UTC m=+23.261363277 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/8b648d9e-a892-4951-b0e2-fed6b16273d4-images") pod "cluster-baremetal-operator-7bc947fc7d-xwptz" (UID: "8b648d9e-a892-4951-b0e2-fed6b16273d4") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.410275 master-0 kubenswrapper[38936]: E0216 21:23:12.409405 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-client-certs podName:4a9f4f96-ca31-4959-93fe-c094caf8e077 nodeName:}" failed. 
No retries permitted until 2026-02-16 21:23:12.909392806 +0000 UTC m=+23.261396208 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-client-certs") pod "metrics-server-76c9c896c-pz2bk" (UID: "4a9f4f96-ca31-4959-93fe-c094caf8e077") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.410275 master-0 kubenswrapper[38936]: E0216 21:23:12.409430 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9bd1f48-6d45-4045-b18e-46ce3005d51d-metrics-client-ca podName:e9bd1f48-6d45-4045-b18e-46ce3005d51d nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.909416716 +0000 UTC m=+23.261420118 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/e9bd1f48-6d45-4045-b18e-46ce3005d51d-metrics-client-ca") pod "kube-state-metrics-7cc9598d54-n467n" (UID: "e9bd1f48-6d45-4045-b18e-46ce3005d51d") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.410275 master-0 kubenswrapper[38936]: E0216 21:23:12.409453 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb1eac23-18a5-4706-adcd-81d83e04cd12-proxy-tls podName:fb1eac23-18a5-4706-adcd-81d83e04cd12 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.909441187 +0000 UTC m=+23.261444589 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/fb1eac23-18a5-4706-adcd-81d83e04cd12-proxy-tls") pod "machine-config-controller-686c884b4d-6j2l4" (UID: "fb1eac23-18a5-4706-adcd-81d83e04cd12") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.410275 master-0 kubenswrapper[38936]: E0216 21:23:12.409476 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-kube-rbac-proxy-config podName:a0b7a368-1408-4fc3-ae25-4613b74e7fca nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.909465687 +0000 UTC m=+23.261469089 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-7485d645b8-9xc4n" (UID: "a0b7a368-1408-4fc3-ae25-4613b74e7fca") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.410275 master-0 kubenswrapper[38936]: E0216 21:23:12.409505 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-client-ca podName:408a9364-3730-4017-b1e4-c85d6a504168 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.909492168 +0000 UTC m=+23.261495570 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-client-ca") pod "controller-manager-6998cd96fb-bgcb2" (UID: "408a9364-3730-4017-b1e4-c85d6a504168") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.410275 master-0 kubenswrapper[38936]: E0216 21:23:12.409527 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1489d1b6-d8a1-453a-bff3-8adfd4335903-serving-cert podName:1489d1b6-d8a1-453a-bff3-8adfd4335903 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.909517829 +0000 UTC m=+23.261521231 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1489d1b6-d8a1-453a-bff3-8adfd4335903-serving-cert") pod "route-controller-manager-85d99cfd66-kjw24" (UID: "1489d1b6-d8a1-453a-bff3-8adfd4335903") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.410275 master-0 kubenswrapper[38936]: E0216 21:23:12.409560 38936 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.410275 master-0 kubenswrapper[38936]: E0216 21:23:12.409606 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff193060-a272-4e4e-990a-83ac410f523d-proxy-tls podName:ff193060-a272-4e4e-990a-83ac410f523d nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.909592291 +0000 UTC m=+23.261595693 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/ff193060-a272-4e4e-990a-83ac410f523d-proxy-tls") pod "machine-config-operator-84976bb859-jwh5s" (UID: "ff193060-a272-4e4e-990a-83ac410f523d") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.410275 master-0 kubenswrapper[38936]: E0216 21:23:12.409639 38936 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.410275 master-0 kubenswrapper[38936]: E0216 21:23:12.409711 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b648d9e-a892-4951-b0e2-fed6b16273d4-cert podName:8b648d9e-a892-4951-b0e2-fed6b16273d4 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.909699433 +0000 UTC m=+23.261702835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8b648d9e-a892-4951-b0e2-fed6b16273d4-cert") pod "cluster-baremetal-operator-7bc947fc7d-xwptz" (UID: "8b648d9e-a892-4951-b0e2-fed6b16273d4") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.410275 master-0 kubenswrapper[38936]: E0216 21:23:12.409746 38936 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.410275 master-0 kubenswrapper[38936]: E0216 21:23:12.409793 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba294358-051a-4f09-b182-710d3d6778c5-machine-api-operator-tls podName:ba294358-051a-4f09-b182-710d3d6778c5 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.909780806 +0000 UTC m=+23.261784208 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/ba294358-051a-4f09-b182-710d3d6778c5-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-27jwb" (UID: "ba294358-051a-4f09-b182-710d3d6778c5") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.410275 master-0 kubenswrapper[38936]: E0216 21:23:12.409852 38936 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.410275 master-0 kubenswrapper[38936]: E0216 21:23:12.409891 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba294358-051a-4f09-b182-710d3d6778c5-config podName:ba294358-051a-4f09-b182-710d3d6778c5 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.909878088 +0000 UTC m=+23.261881500 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ba294358-051a-4f09-b182-710d3d6778c5-config") pod "machine-api-operator-bd7dd5c46-27jwb" (UID: "ba294358-051a-4f09-b182-710d3d6778c5") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.410275 master-0 kubenswrapper[38936]: E0216 21:23:12.409933 38936 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.410275 master-0 kubenswrapper[38936]: E0216 21:23:12.410017 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f7b30888-5994-4968-9db6-9533ac60c92e-metrics-client-ca podName:f7b30888-5994-4968-9db6-9533ac60c92e nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.90996392 +0000 UTC m=+23.261967562 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/f7b30888-5994-4968-9db6-9533ac60c92e-metrics-client-ca") pod "openshift-state-metrics-546cc7d765-s4j9z" (UID: "f7b30888-5994-4968-9db6-9533ac60c92e") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.412068 master-0 kubenswrapper[38936]: E0216 21:23:12.410480 38936 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.412068 master-0 kubenswrapper[38936]: E0216 21:23:12.410535 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1489d1b6-d8a1-453a-bff3-8adfd4335903-config podName:1489d1b6-d8a1-453a-bff3-8adfd4335903 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.910522576 +0000 UTC m=+23.262525948 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1489d1b6-d8a1-453a-bff3-8adfd4335903-config") pod "route-controller-manager-85d99cfd66-kjw24" (UID: "1489d1b6-d8a1-453a-bff3-8adfd4335903") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.412068 master-0 kubenswrapper[38936]: I0216 21:23:12.410823 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/03a5021d-8a5c-4011-a9f9-c5eb38d5f236-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-7mpsf\" (UID: \"03a5021d-8a5c-4011-a9f9-c5eb38d5f236\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf" Feb 16 21:23:12.412510 master-0 kubenswrapper[38936]: E0216 21:23:12.412343 38936 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.412510 master-0 
kubenswrapper[38936]: E0216 21:23:12.412464 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-server-tls podName:4a9f4f96-ca31-4959-93fe-c094caf8e077 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.912436517 +0000 UTC m=+23.264440069 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-server-tls") pod "metrics-server-76c9c896c-pz2bk" (UID: "4a9f4f96-ca31-4959-93fe-c094caf8e077") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.413551 master-0 kubenswrapper[38936]: E0216 21:23:12.413475 38936 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-6thqgv1l637aa: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.413695 master-0 kubenswrapper[38936]: E0216 21:23:12.413589 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-client-ca-bundle podName:4a9f4f96-ca31-4959-93fe-c094caf8e077 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.913564077 +0000 UTC m=+23.265567449 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-client-ca-bundle") pod "metrics-server-76c9c896c-pz2bk" (UID: "4a9f4f96-ca31-4959-93fe-c094caf8e077") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.415183 master-0 kubenswrapper[38936]: E0216 21:23:12.414712 38936 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.415183 master-0 kubenswrapper[38936]: E0216 21:23:12.414787 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/408a9364-3730-4017-b1e4-c85d6a504168-serving-cert podName:408a9364-3730-4017-b1e4-c85d6a504168 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:12.914774259 +0000 UTC m=+23.266777631 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/408a9364-3730-4017-b1e4-c85d6a504168-serving-cert") pod "controller-manager-6998cd96fb-bgcb2" (UID: "408a9364-3730-4017-b1e4-c85d6a504168") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.422318 master-0 kubenswrapper[38936]: I0216 21:23:12.422152 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Feb 16 21:23:12.442780 master-0 kubenswrapper[38936]: I0216 21:23:12.442719 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Feb 16 21:23:12.449448 master-0 kubenswrapper[38936]: I0216 21:23:12.449355 38936 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b27de289-c0f9-47ff-aac6-15b7bc1b178a-webhook-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:23:12.471255 master-0 kubenswrapper[38936]: I0216 21:23:12.461498 38936 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Feb 16 21:23:12.482174 master-0 kubenswrapper[38936]: I0216 21:23:12.482112 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 21:23:12.500583 master-0 kubenswrapper[38936]: I0216 21:23:12.500515 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 16 21:23:12.515678 master-0 kubenswrapper[38936]: E0216 21:23:12.515602 38936 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.515678 master-0 kubenswrapper[38936]: E0216 21:23:12.515625 38936 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.516007 master-0 kubenswrapper[38936]: E0216 21:23:12.515725 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0186fdbf-d367-4bc6-816a-bda2816b599e-config podName:0186fdbf-d367-4bc6-816a-bda2816b599e nodeName:}" failed. No retries permitted until 2026-02-16 21:23:13.015701487 +0000 UTC m=+23.367704859 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0186fdbf-d367-4bc6-816a-bda2816b599e-config") pod "console-operator-7777d5cc66-fgr2n" (UID: "0186fdbf-d367-4bc6-816a-bda2816b599e") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.516007 master-0 kubenswrapper[38936]: E0216 21:23:12.515784 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0186fdbf-d367-4bc6-816a-bda2816b599e-trusted-ca podName:0186fdbf-d367-4bc6-816a-bda2816b599e nodeName:}" failed. 
No retries permitted until 2026-02-16 21:23:13.015743889 +0000 UTC m=+23.367747291 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/0186fdbf-d367-4bc6-816a-bda2816b599e-trusted-ca") pod "console-operator-7777d5cc66-fgr2n" (UID: "0186fdbf-d367-4bc6-816a-bda2816b599e") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:12.516177 master-0 kubenswrapper[38936]: E0216 21:23:12.516143 38936 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.516342 master-0 kubenswrapper[38936]: E0216 21:23:12.516322 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0186fdbf-d367-4bc6-816a-bda2816b599e-serving-cert podName:0186fdbf-d367-4bc6-816a-bda2816b599e nodeName:}" failed. No retries permitted until 2026-02-16 21:23:13.016293853 +0000 UTC m=+23.368297285 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0186fdbf-d367-4bc6-816a-bda2816b599e-serving-cert") pod "console-operator-7777d5cc66-fgr2n" (UID: "0186fdbf-d367-4bc6-816a-bda2816b599e") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.516751 master-0 kubenswrapper[38936]: E0216 21:23:12.516709 38936 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.516854 master-0 kubenswrapper[38936]: E0216 21:23:12.516792 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/277c6354-bff9-407b-ad97-5fdfc7f43116-monitoring-plugin-cert podName:277c6354-bff9-407b-ad97-5fdfc7f43116 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:13.016781556 +0000 UTC m=+23.368784918 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/277c6354-bff9-407b-ad97-5fdfc7f43116-monitoring-plugin-cert") pod "monitoring-plugin-749f8d8bbd-z9ndp" (UID: "277c6354-bff9-407b-ad97-5fdfc7f43116") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:12.520917 master-0 kubenswrapper[38936]: I0216 21:23:12.520881 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Feb 16 21:23:12.543119 master-0 kubenswrapper[38936]: I0216 21:23:12.543078 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 16 21:23:12.560988 master-0 kubenswrapper[38936]: I0216 21:23:12.560948 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Feb 16 21:23:12.582305 master-0 kubenswrapper[38936]: I0216 21:23:12.582205 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Feb 16 21:23:12.601574 master-0 kubenswrapper[38936]: I0216 21:23:12.601502 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Feb 16 21:23:12.620727 master-0 kubenswrapper[38936]: I0216 21:23:12.620633 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Feb 16 21:23:12.641032 master-0 kubenswrapper[38936]: I0216 21:23:12.640960 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 21:23:12.660590 master-0 kubenswrapper[38936]: I0216 21:23:12.660556 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 21:23:12.680769 master-0 kubenswrapper[38936]: I0216 21:23:12.680721 38936 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-f8prb" Feb 16 21:23:12.706532 master-0 kubenswrapper[38936]: I0216 21:23:12.706486 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Feb 16 21:23:12.722032 master-0 kubenswrapper[38936]: I0216 21:23:12.721852 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-mz2hl" Feb 16 21:23:12.741069 master-0 kubenswrapper[38936]: I0216 21:23:12.740967 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Feb 16 21:23:12.762305 master-0 kubenswrapper[38936]: I0216 21:23:12.762195 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 21:23:12.780763 master-0 kubenswrapper[38936]: I0216 21:23:12.780704 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Feb 16 21:23:12.801899 master-0 kubenswrapper[38936]: I0216 21:23:12.801858 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 21:23:12.821282 master-0 kubenswrapper[38936]: I0216 21:23:12.821214 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-6xcjr" Feb 16 21:23:12.841371 master-0 kubenswrapper[38936]: I0216 21:23:12.841093 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 21:23:12.859868 master-0 kubenswrapper[38936]: I0216 21:23:12.859781 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-service-ca\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 21:23:12.860529 master-0 kubenswrapper[38936]: I0216 21:23:12.860450 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-service-ca\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 21:23:12.861833 master-0 kubenswrapper[38936]: I0216 21:23:12.861789 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 21:23:12.881877 master-0 kubenswrapper[38936]: I0216 21:23:12.881798 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-xg8bz" Feb 16 21:23:12.902183 master-0 kubenswrapper[38936]: I0216 21:23:12.902116 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x77sl" Feb 16 21:23:12.921359 master-0 kubenswrapper[38936]: I0216 21:23:12.921294 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Feb 16 21:23:12.941470 master-0 kubenswrapper[38936]: I0216 21:23:12.941391 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 21:23:12.961191 master-0 kubenswrapper[38936]: I0216 21:23:12.961135 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 21:23:12.961782 master-0 kubenswrapper[38936]: I0216 21:23:12.961732 38936 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/230d9624-2d9d-4036-967b-b530347f05d5-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:23:12.961861 master-0 kubenswrapper[38936]: I0216 21:23:12.961815 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9615af2-cad5-4705-9c2f-6f3c97026100-serving-cert\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" Feb 16 21:23:12.962208 master-0 kubenswrapper[38936]: I0216 21:23:12.962116 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/065fcd43-1572-4152-b77b-a6b7ab52a081-machine-approver-tls\") pod \"machine-approver-8569dd85ff-kvhs4\" (UID: \"065fcd43-1572-4152-b77b-a6b7ab52a081\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" Feb 16 21:23:12.962266 master-0 kubenswrapper[38936]: I0216 21:23:12.962234 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9615af2-cad5-4705-9c2f-6f3c97026100-serving-cert\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" Feb 16 21:23:12.962358 master-0 kubenswrapper[38936]: I0216 21:23:12.962314 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065fcd43-1572-4152-b77b-a6b7ab52a081-config\") pod \"machine-approver-8569dd85ff-kvhs4\" (UID: 
\"065fcd43-1572-4152-b77b-a6b7ab52a081\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" Feb 16 21:23:12.962450 master-0 kubenswrapper[38936]: I0216 21:23:12.962392 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-config\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" Feb 16 21:23:12.962513 master-0 kubenswrapper[38936]: I0216 21:23:12.962486 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9615af2-cad5-4705-9c2f-6f3c97026100-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" Feb 16 21:23:12.962616 master-0 kubenswrapper[38936]: I0216 21:23:12.962577 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/88c9d2fb-763f-4405-8d1a-c39039b41d3b-mcd-auth-proxy-config\") pod \"machine-config-daemon-jb6tl\" (UID: \"88c9d2fb-763f-4405-8d1a-c39039b41d3b\") " pod="openshift-machine-config-operator/machine-config-daemon-jb6tl" Feb 16 21:23:12.962776 master-0 kubenswrapper[38936]: I0216 21:23:12.962734 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ba294358-051a-4f09-b182-710d3d6778c5-images\") pod \"machine-api-operator-bd7dd5c46-27jwb\" (UID: \"ba294358-051a-4f09-b182-710d3d6778c5\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 21:23:12.962893 master-0 kubenswrapper[38936]: I0216 21:23:12.962869 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/88c9d2fb-763f-4405-8d1a-c39039b41d3b-mcd-auth-proxy-config\") pod \"machine-config-daemon-jb6tl\" (UID: \"88c9d2fb-763f-4405-8d1a-c39039b41d3b\") " pod="openshift-machine-config-operator/machine-config-daemon-jb6tl" Feb 16 21:23:12.962945 master-0 kubenswrapper[38936]: I0216 21:23:12.962876 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a0b7a368-1408-4fc3-ae25-4613b74e7fca-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:23:12.962985 master-0 kubenswrapper[38936]: I0216 21:23:12.962969 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ba294358-051a-4f09-b182-710d3d6778c5-images\") pod \"machine-api-operator-bd7dd5c46-27jwb\" (UID: \"ba294358-051a-4f09-b182-710d3d6778c5\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 21:23:12.963072 master-0 kubenswrapper[38936]: I0216 21:23:12.963027 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9615af2-cad5-4705-9c2f-6f3c97026100-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" Feb 16 21:23:12.963233 master-0 kubenswrapper[38936]: I0216 21:23:12.963189 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-tls\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:23:12.963415 master-0 kubenswrapper[38936]: I0216 21:23:12.963368 38936 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b648d9e-a892-4951-b0e2-fed6b16273d4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" Feb 16 21:23:12.963524 master-0 kubenswrapper[38936]: I0216 21:23:12.963490 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/913951bb-1702-4b71-862c-a166bc7a62e0-certs\") pod \"machine-config-server-qvctv\" (UID: \"913951bb-1702-4b71-862c-a166bc7a62e0\") " pod="openshift-machine-config-operator/machine-config-server-qvctv" Feb 16 21:23:12.963591 master-0 kubenswrapper[38936]: I0216 21:23:12.963568 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b648d9e-a892-4951-b0e2-fed6b16273d4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" Feb 16 21:23:12.964126 master-0 kubenswrapper[38936]: I0216 21:23:12.963994 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/230d9624-2d9d-4036-967b-b530347f05d5-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:23:12.964874 master-0 kubenswrapper[38936]: I0216 21:23:12.964212 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/319dc882-e1f5-40f9-99f4-2bae028337e5-apiservice-cert\") pod \"packageserver-78d4b6b677-npmx4\" (UID: \"319dc882-e1f5-40f9-99f4-2bae028337e5\") " pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" Feb 16 21:23:12.964993 master-0 kubenswrapper[38936]: I0216 21:23:12.964814 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/319dc882-e1f5-40f9-99f4-2bae028337e5-apiservice-cert\") pod \"packageserver-78d4b6b677-npmx4\" (UID: \"319dc882-e1f5-40f9-99f4-2bae028337e5\") " pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" Feb 16 21:23:12.965101 master-0 kubenswrapper[38936]: I0216 21:23:12.965049 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa2e9bbc-3962-45f5-a7cc-2dc059409e70-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-g4w5m\" (UID: \"aa2e9bbc-3962-45f5-a7cc-2dc059409e70\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m" Feb 16 21:23:12.965222 master-0 kubenswrapper[38936]: I0216 21:23:12.965181 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ff193060-a272-4e4e-990a-83ac410f523d-auth-proxy-config\") pod \"machine-config-operator-84976bb859-jwh5s\" (UID: \"ff193060-a272-4e4e-990a-83ac410f523d\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" Feb 16 21:23:12.965329 master-0 kubenswrapper[38936]: I0216 21:23:12.965262 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f7b30888-5994-4968-9db6-9533ac60c92e-openshift-state-metrics-kube-rbac-proxy-config\") pod 
\"openshift-state-metrics-546cc7d765-s4j9z\" (UID: \"f7b30888-5994-4968-9db6-9533ac60c92e\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 21:23:12.965445 master-0 kubenswrapper[38936]: I0216 21:23:12.965306 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa2e9bbc-3962-45f5-a7cc-2dc059409e70-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-g4w5m\" (UID: \"aa2e9bbc-3962-45f5-a7cc-2dc059409e70\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m" Feb 16 21:23:12.965495 master-0 kubenswrapper[38936]: I0216 21:23:12.965474 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ff193060-a272-4e4e-990a-83ac410f523d-auth-proxy-config\") pod \"machine-config-operator-84976bb859-jwh5s\" (UID: \"ff193060-a272-4e4e-990a-83ac410f523d\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" Feb 16 21:23:12.965561 master-0 kubenswrapper[38936]: I0216 21:23:12.965528 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7b30888-5994-4968-9db6-9533ac60c92e-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-s4j9z\" (UID: \"f7b30888-5994-4968-9db6-9533ac60c92e\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 21:23:12.965609 master-0 kubenswrapper[38936]: I0216 21:23:12.965586 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9615af2-cad5-4705-9c2f-6f3c97026100-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" Feb 16 
21:23:12.965831 master-0 kubenswrapper[38936]: I0216 21:23:12.965785 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:23:12.965951 master-0 kubenswrapper[38936]: I0216 21:23:12.965804 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9615af2-cad5-4705-9c2f-6f3c97026100-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" Feb 16 21:23:12.966004 master-0 kubenswrapper[38936]: I0216 21:23:12.965899 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:23:12.966086 master-0 kubenswrapper[38936]: I0216 21:23:12.966055 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/88c9d2fb-763f-4405-8d1a-c39039b41d3b-proxy-tls\") pod \"machine-config-daemon-jb6tl\" (UID: \"88c9d2fb-763f-4405-8d1a-c39039b41d3b\") " pod="openshift-machine-config-operator/machine-config-daemon-jb6tl" Feb 16 21:23:12.966238 master-0 kubenswrapper[38936]: I0216 21:23:12.966206 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-metrics-client-ca\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:23:12.966901 master-0 kubenswrapper[38936]: I0216 21:23:12.966850 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d7d0416-5f50-42bd-826b-92eecf9adcec-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-557vd\" (UID: \"1d7d0416-5f50-42bd-826b-92eecf9adcec\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" Feb 16 21:23:12.966965 master-0 kubenswrapper[38936]: I0216 21:23:12.966420 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d7d0416-5f50-42bd-826b-92eecf9adcec-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-557vd\" (UID: \"1d7d0416-5f50-42bd-826b-92eecf9adcec\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" Feb 16 21:23:12.967071 master-0 kubenswrapper[38936]: I0216 21:23:12.967031 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1489d1b6-d8a1-453a-bff3-8adfd4335903-client-ca\") pod \"route-controller-manager-85d99cfd66-kjw24\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") " pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" Feb 16 21:23:12.967277 master-0 kubenswrapper[38936]: I0216 21:23:12.967235 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-metrics-server-audit-profiles\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" 
Feb 16 21:23:12.967324 master-0 kubenswrapper[38936]: I0216 21:23:12.967290 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/913951bb-1702-4b71-862c-a166bc7a62e0-node-bootstrap-token\") pod \"machine-config-server-qvctv\" (UID: \"913951bb-1702-4b71-862c-a166bc7a62e0\") " pod="openshift-machine-config-operator/machine-config-server-qvctv" Feb 16 21:23:12.967463 master-0 kubenswrapper[38936]: I0216 21:23:12.967387 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d7d0416-5f50-42bd-826b-92eecf9adcec-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-557vd\" (UID: \"1d7d0416-5f50-42bd-826b-92eecf9adcec\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" Feb 16 21:23:12.969874 master-0 kubenswrapper[38936]: I0216 21:23:12.969820 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1d7d0416-5f50-42bd-826b-92eecf9adcec-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-557vd\" (UID: \"1d7d0416-5f50-42bd-826b-92eecf9adcec\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" Feb 16 21:23:12.970086 master-0 kubenswrapper[38936]: I0216 21:23:12.970041 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:23:12.970319 master-0 kubenswrapper[38936]: I0216 21:23:12.970279 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/8d56b871-a53a-4928-8967-a33ea9dcec2a-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-shtrw\" (UID: \"8d56b871-a53a-4928-8967-a33ea9dcec2a\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-shtrw" Feb 16 21:23:12.970576 master-0 kubenswrapper[38936]: I0216 21:23:12.970527 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1489d1b6-d8a1-453a-bff3-8adfd4335903-serving-cert\") pod \"route-controller-manager-85d99cfd66-kjw24\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") " pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" Feb 16 21:23:12.970879 master-0 kubenswrapper[38936]: I0216 21:23:12.970805 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b648d9e-a892-4951-b0e2-fed6b16273d4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" Feb 16 21:23:12.971574 master-0 kubenswrapper[38936]: I0216 21:23:12.971237 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b648d9e-a892-4951-b0e2-fed6b16273d4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" Feb 16 21:23:12.971855 master-0 kubenswrapper[38936]: I0216 21:23:12.971811 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b648d9e-a892-4951-b0e2-fed6b16273d4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" Feb 16 21:23:12.971932 
master-0 kubenswrapper[38936]: I0216 21:23:12.971901 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:23:12.972007 master-0 kubenswrapper[38936]: I0216 21:23:12.971977 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-proxy-ca-bundles\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" Feb 16 21:23:12.972125 master-0 kubenswrapper[38936]: I0216 21:23:12.972058 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fb1eac23-18a5-4706-adcd-81d83e04cd12-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-6j2l4\" (UID: \"fb1eac23-18a5-4706-adcd-81d83e04cd12\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4" Feb 16 21:23:12.972125 master-0 kubenswrapper[38936]: I0216 21:23:12.972103 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b648d9e-a892-4951-b0e2-fed6b16273d4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" Feb 16 21:23:12.972216 master-0 kubenswrapper[38936]: I0216 21:23:12.972161 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: 
\"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-server-tls\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:23:12.972286 master-0 kubenswrapper[38936]: I0216 21:23:12.972252 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fb1eac23-18a5-4706-adcd-81d83e04cd12-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-6j2l4\" (UID: \"fb1eac23-18a5-4706-adcd-81d83e04cd12\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4" Feb 16 21:23:12.972336 master-0 kubenswrapper[38936]: I0216 21:23:12.972270 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-client-certs\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:23:12.972383 master-0 kubenswrapper[38936]: I0216 21:23:12.972348 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-client-ca\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" Feb 16 21:23:12.972427 master-0 kubenswrapper[38936]: I0216 21:23:12.972379 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " 
pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:23:12.972467 master-0 kubenswrapper[38936]: I0216 21:23:12.972431 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e9bd1f48-6d45-4045-b18e-46ce3005d51d-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:23:12.972467 master-0 kubenswrapper[38936]: I0216 21:23:12.972454 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fb1eac23-18a5-4706-adcd-81d83e04cd12-proxy-tls\") pod \"machine-config-controller-686c884b4d-6j2l4\" (UID: \"fb1eac23-18a5-4706-adcd-81d83e04cd12\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4" Feb 16 21:23:12.972658 master-0 kubenswrapper[38936]: I0216 21:23:12.972607 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff193060-a272-4e4e-990a-83ac410f523d-proxy-tls\") pod \"machine-config-operator-84976bb859-jwh5s\" (UID: \"ff193060-a272-4e4e-990a-83ac410f523d\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" Feb 16 21:23:12.972745 master-0 kubenswrapper[38936]: I0216 21:23:12.972719 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b648d9e-a892-4951-b0e2-fed6b16273d4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" Feb 16 21:23:12.972796 master-0 kubenswrapper[38936]: I0216 21:23:12.972781 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/ba294358-051a-4f09-b182-710d3d6778c5-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-27jwb\" (UID: \"ba294358-051a-4f09-b182-710d3d6778c5\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 21:23:12.972844 master-0 kubenswrapper[38936]: I0216 21:23:12.972828 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f7b30888-5994-4968-9db6-9533ac60c92e-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-s4j9z\" (UID: \"f7b30888-5994-4968-9db6-9533ac60c92e\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 21:23:12.972890 master-0 kubenswrapper[38936]: I0216 21:23:12.972856 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff193060-a272-4e4e-990a-83ac410f523d-proxy-tls\") pod \"machine-config-operator-84976bb859-jwh5s\" (UID: \"ff193060-a272-4e4e-990a-83ac410f523d\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" Feb 16 21:23:12.972890 master-0 kubenswrapper[38936]: I0216 21:23:12.972860 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba294358-051a-4f09-b182-710d3d6778c5-config\") pod \"machine-api-operator-bd7dd5c46-27jwb\" (UID: \"ba294358-051a-4f09-b182-710d3d6778c5\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 21:23:12.972975 master-0 kubenswrapper[38936]: I0216 21:23:12.972922 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1489d1b6-d8a1-453a-bff3-8adfd4335903-config\") pod \"route-controller-manager-85d99cfd66-kjw24\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") " pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" Feb 16 21:23:12.973023 
master-0 kubenswrapper[38936]: I0216 21:23:12.973003 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b648d9e-a892-4951-b0e2-fed6b16273d4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" Feb 16 21:23:12.973085 master-0 kubenswrapper[38936]: I0216 21:23:12.973058 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba294358-051a-4f09-b182-710d3d6778c5-config\") pod \"machine-api-operator-bd7dd5c46-27jwb\" (UID: \"ba294358-051a-4f09-b182-710d3d6778c5\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 21:23:12.973134 master-0 kubenswrapper[38936]: I0216 21:23:12.973085 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ba294358-051a-4f09-b182-710d3d6778c5-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-27jwb\" (UID: \"ba294358-051a-4f09-b182-710d3d6778c5\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 21:23:12.973134 master-0 kubenswrapper[38936]: I0216 21:23:12.973089 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-client-ca-bundle\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:23:12.973213 master-0 kubenswrapper[38936]: I0216 21:23:12.973175 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/408a9364-3730-4017-b1e4-c85d6a504168-serving-cert\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: 
\"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" Feb 16 21:23:12.973336 master-0 kubenswrapper[38936]: I0216 21:23:12.973291 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:23:12.973407 master-0 kubenswrapper[38936]: I0216 21:23:12.973373 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/319dc882-e1f5-40f9-99f4-2bae028337e5-webhook-cert\") pod \"packageserver-78d4b6b677-npmx4\" (UID: \"319dc882-e1f5-40f9-99f4-2bae028337e5\") " pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" Feb 16 21:23:12.973474 master-0 kubenswrapper[38936]: I0216 21:23:12.973444 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/065fcd43-1572-4152-b77b-a6b7ab52a081-auth-proxy-config\") pod \"machine-approver-8569dd85ff-kvhs4\" (UID: \"065fcd43-1572-4152-b77b-a6b7ab52a081\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" Feb 16 21:23:12.973611 master-0 kubenswrapper[38936]: I0216 21:23:12.973576 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:23:12.973673 master-0 kubenswrapper[38936]: I0216 21:23:12.973629 38936 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ff193060-a272-4e4e-990a-83ac410f523d-images\") pod \"machine-config-operator-84976bb859-jwh5s\" (UID: \"ff193060-a272-4e4e-990a-83ac410f523d\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" Feb 16 21:23:12.973720 master-0 kubenswrapper[38936]: I0216 21:23:12.973692 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/319dc882-e1f5-40f9-99f4-2bae028337e5-webhook-cert\") pod \"packageserver-78d4b6b677-npmx4\" (UID: \"319dc882-e1f5-40f9-99f4-2bae028337e5\") " pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" Feb 16 21:23:12.973762 master-0 kubenswrapper[38936]: I0216 21:23:12.973729 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/230d9624-2d9d-4036-967b-b530347f05d5-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:23:12.973834 master-0 kubenswrapper[38936]: I0216 21:23:12.973802 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert\") pod \"ingress-canary-l44qd\" (UID: \"0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b\") " pod="openshift-ingress-canary/ingress-canary-l44qd" Feb 16 21:23:12.981777 master-0 kubenswrapper[38936]: I0216 21:23:12.981718 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-457l2" Feb 16 21:23:13.003191 master-0 kubenswrapper[38936]: I0216 21:23:13.003106 38936 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"redhat-operators-dockercfg-knxzz" Feb 16 21:23:13.022956 master-0 kubenswrapper[38936]: I0216 21:23:13.022866 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 21:23:13.027074 master-0 kubenswrapper[38936]: I0216 21:23:13.027000 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/88c9d2fb-763f-4405-8d1a-c39039b41d3b-proxy-tls\") pod \"machine-config-daemon-jb6tl\" (UID: \"88c9d2fb-763f-4405-8d1a-c39039b41d3b\") " pod="openshift-machine-config-operator/machine-config-daemon-jb6tl" Feb 16 21:23:13.041467 master-0 kubenswrapper[38936]: I0216 21:23:13.041402 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 21:23:13.044804 master-0 kubenswrapper[38936]: I0216 21:23:13.044743 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ff193060-a272-4e4e-990a-83ac410f523d-images\") pod \"machine-config-operator-84976bb859-jwh5s\" (UID: \"ff193060-a272-4e4e-990a-83ac410f523d\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" Feb 16 21:23:13.062009 master-0 kubenswrapper[38936]: I0216 21:23:13.061928 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-2t7md" Feb 16 21:23:13.074684 master-0 kubenswrapper[38936]: I0216 21:23:13.074621 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0186fdbf-d367-4bc6-816a-bda2816b599e-trusted-ca\") pod \"console-operator-7777d5cc66-fgr2n\" (UID: \"0186fdbf-d367-4bc6-816a-bda2816b599e\") " pod="openshift-console-operator/console-operator-7777d5cc66-fgr2n" Feb 16 21:23:13.074882 master-0 
kubenswrapper[38936]: I0216 21:23:13.074861 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0186fdbf-d367-4bc6-816a-bda2816b599e-serving-cert\") pod \"console-operator-7777d5cc66-fgr2n\" (UID: \"0186fdbf-d367-4bc6-816a-bda2816b599e\") " pod="openshift-console-operator/console-operator-7777d5cc66-fgr2n" Feb 16 21:23:13.075282 master-0 kubenswrapper[38936]: I0216 21:23:13.075253 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0186fdbf-d367-4bc6-816a-bda2816b599e-config\") pod \"console-operator-7777d5cc66-fgr2n\" (UID: \"0186fdbf-d367-4bc6-816a-bda2816b599e\") " pod="openshift-console-operator/console-operator-7777d5cc66-fgr2n" Feb 16 21:23:13.076062 master-0 kubenswrapper[38936]: I0216 21:23:13.076033 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/277c6354-bff9-407b-ad97-5fdfc7f43116-monitoring-plugin-cert\") pod \"monitoring-plugin-749f8d8bbd-z9ndp\" (UID: \"277c6354-bff9-407b-ad97-5fdfc7f43116\") " pod="openshift-monitoring/monitoring-plugin-749f8d8bbd-z9ndp" Feb 16 21:23:13.082011 master-0 kubenswrapper[38936]: I0216 21:23:13.081959 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 21:23:13.085208 master-0 kubenswrapper[38936]: I0216 21:23:13.085130 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fb1eac23-18a5-4706-adcd-81d83e04cd12-proxy-tls\") pod \"machine-config-controller-686c884b4d-6j2l4\" (UID: \"fb1eac23-18a5-4706-adcd-81d83e04cd12\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4" Feb 16 21:23:13.102249 master-0 kubenswrapper[38936]: I0216 21:23:13.102193 38936 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-gpdzh" Feb 16 21:23:13.121414 master-0 kubenswrapper[38936]: I0216 21:23:13.121334 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 21:23:13.140973 master-0 kubenswrapper[38936]: I0216 21:23:13.140902 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 21:23:13.145988 master-0 kubenswrapper[38936]: I0216 21:23:13.144702 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-cert\") pod \"ingress-canary-l44qd\" (UID: \"0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b\") " pod="openshift-ingress-canary/ingress-canary-l44qd" Feb 16 21:23:13.161395 master-0 kubenswrapper[38936]: I0216 21:23:13.161345 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 21:23:13.181115 master-0 kubenswrapper[38936]: I0216 21:23:13.181060 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-lhkmd" Feb 16 21:23:13.201599 master-0 kubenswrapper[38936]: I0216 21:23:13.201537 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 16 21:23:13.203298 master-0 kubenswrapper[38936]: I0216 21:23:13.203232 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e9bd1f48-6d45-4045-b18e-46ce3005d51d-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:23:13.203298 master-0 kubenswrapper[38936]: I0216 21:23:13.203283 38936 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f7b30888-5994-4968-9db6-9533ac60c92e-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-s4j9z\" (UID: \"f7b30888-5994-4968-9db6-9533ac60c92e\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 21:23:13.203478 master-0 kubenswrapper[38936]: I0216 21:23:13.203423 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a0b7a368-1408-4fc3-ae25-4613b74e7fca-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:23:13.207414 master-0 kubenswrapper[38936]: I0216 21:23:13.207370 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-metrics-client-ca\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:23:13.222408 master-0 kubenswrapper[38936]: I0216 21:23:13.222269 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 21:23:13.228486 master-0 kubenswrapper[38936]: I0216 21:23:13.228402 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/913951bb-1702-4b71-862c-a166bc7a62e0-node-bootstrap-token\") pod \"machine-config-server-qvctv\" (UID: \"913951bb-1702-4b71-862c-a166bc7a62e0\") " pod="openshift-machine-config-operator/machine-config-server-qvctv" Feb 16 21:23:13.239186 master-0 kubenswrapper[38936]: I0216 21:23:13.239127 38936 request.go:700] Waited for 2.003335635s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&limit=500&resourceVersion=0 Feb 16 21:23:13.241218 master-0 kubenswrapper[38936]: I0216 21:23:13.241181 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 21:23:13.244696 master-0 kubenswrapper[38936]: I0216 21:23:13.244633 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/913951bb-1702-4b71-862c-a166bc7a62e0-certs\") pod \"machine-config-server-qvctv\" (UID: \"913951bb-1702-4b71-862c-a166bc7a62e0\") " pod="openshift-machine-config-operator/machine-config-server-qvctv" Feb 16 21:23:13.262049 master-0 kubenswrapper[38936]: I0216 21:23:13.261960 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-pt7pr" Feb 16 21:23:13.281328 master-0 kubenswrapper[38936]: I0216 21:23:13.281258 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 16 21:23:13.291196 master-0 kubenswrapper[38936]: I0216 21:23:13.291142 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:23:13.301254 master-0 kubenswrapper[38936]: I0216 21:23:13.301174 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 16 21:23:13.303111 master-0 kubenswrapper[38936]: I0216 21:23:13.303053 38936 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0b7a368-1408-4fc3-ae25-4613b74e7fca-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:23:13.322112 master-0 kubenswrapper[38936]: I0216 21:23:13.322054 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 21:23:13.324474 master-0 kubenswrapper[38936]: I0216 21:23:13.324387 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/065fcd43-1572-4152-b77b-a6b7ab52a081-auth-proxy-config\") pod \"machine-approver-8569dd85ff-kvhs4\" (UID: \"065fcd43-1572-4152-b77b-a6b7ab52a081\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" Feb 16 21:23:13.342685 master-0 kubenswrapper[38936]: I0216 21:23:13.342582 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-vqmt8" Feb 16 21:23:13.361761 master-0 kubenswrapper[38936]: I0216 21:23:13.361697 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 21:23:13.363174 master-0 kubenswrapper[38936]: I0216 21:23:13.363118 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/065fcd43-1572-4152-b77b-a6b7ab52a081-machine-approver-tls\") pod \"machine-approver-8569dd85ff-kvhs4\" (UID: \"065fcd43-1572-4152-b77b-a6b7ab52a081\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" Feb 16 21:23:13.381214 master-0 kubenswrapper[38936]: I0216 21:23:13.381151 38936 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 21:23:13.383013 master-0 kubenswrapper[38936]: I0216 21:23:13.382953 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065fcd43-1572-4152-b77b-a6b7ab52a081-config\") pod \"machine-approver-8569dd85ff-kvhs4\" (UID: \"065fcd43-1572-4152-b77b-a6b7ab52a081\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" Feb 16 21:23:13.401104 master-0 kubenswrapper[38936]: I0216 21:23:13.401062 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 21:23:13.421065 master-0 kubenswrapper[38936]: I0216 21:23:13.421014 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 21:23:13.442155 master-0 kubenswrapper[38936]: I0216 21:23:13.442048 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 21:23:13.443210 master-0 kubenswrapper[38936]: I0216 21:23:13.443151 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-config\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" Feb 16 21:23:13.461380 master-0 kubenswrapper[38936]: I0216 21:23:13.461316 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 21:23:13.481756 master-0 kubenswrapper[38936]: I0216 21:23:13.481625 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 21:23:13.502103 master-0 kubenswrapper[38936]: I0216 21:23:13.502056 38936 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 21:23:13.503147 master-0 kubenswrapper[38936]: I0216 21:23:13.503079 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-client-ca\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" Feb 16 21:23:13.520942 master-0 kubenswrapper[38936]: I0216 21:23:13.520884 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 21:23:13.523752 master-0 kubenswrapper[38936]: I0216 21:23:13.523700 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/408a9364-3730-4017-b1e4-c85d6a504168-serving-cert\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" Feb 16 21:23:13.541151 master-0 kubenswrapper[38936]: I0216 21:23:13.541082 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-jswsr" Feb 16 21:23:13.561028 master-0 kubenswrapper[38936]: I0216 21:23:13.560983 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-zlh9q" Feb 16 21:23:13.580734 master-0 kubenswrapper[38936]: I0216 21:23:13.580693 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 16 21:23:13.584699 master-0 kubenswrapper[38936]: I0216 21:23:13.584628 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/230d9624-2d9d-4036-967b-b530347f05d5-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:23:13.601041 master-0 kubenswrapper[38936]: I0216 21:23:13.600795 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 16 21:23:13.606272 master-0 kubenswrapper[38936]: I0216 21:23:13.606212 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/230d9624-2d9d-4036-967b-b530347f05d5-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:23:13.621204 master-0 kubenswrapper[38936]: I0216 21:23:13.621128 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 16 21:23:13.622522 master-0 kubenswrapper[38936]: I0216 21:23:13.622471 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/230d9624-2d9d-4036-967b-b530347f05d5-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:23:13.640331 master-0 kubenswrapper[38936]: I0216 21:23:13.640274 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 16 
21:23:13.667277 master-0 kubenswrapper[38936]: I0216 21:23:13.667218 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 21:23:13.673142 master-0 kubenswrapper[38936]: I0216 21:23:13.673072 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-proxy-ca-bundles\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" Feb 16 21:23:13.681898 master-0 kubenswrapper[38936]: I0216 21:23:13.681825 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 21:23:13.701456 master-0 kubenswrapper[38936]: I0216 21:23:13.701396 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 21:23:13.704191 master-0 kubenswrapper[38936]: I0216 21:23:13.704124 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1489d1b6-d8a1-453a-bff3-8adfd4335903-config\") pod \"route-controller-manager-85d99cfd66-kjw24\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") " pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" Feb 16 21:23:13.721068 master-0 kubenswrapper[38936]: I0216 21:23:13.721012 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 21:23:13.732294 master-0 kubenswrapper[38936]: I0216 21:23:13.732149 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1489d1b6-d8a1-453a-bff3-8adfd4335903-serving-cert\") pod \"route-controller-manager-85d99cfd66-kjw24\" (UID: 
\"1489d1b6-d8a1-453a-bff3-8adfd4335903\") " pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" Feb 16 21:23:13.740809 master-0 kubenswrapper[38936]: I0216 21:23:13.740751 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 21:23:13.761542 master-0 kubenswrapper[38936]: I0216 21:23:13.761457 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 21:23:13.782757 master-0 kubenswrapper[38936]: I0216 21:23:13.781191 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 21:23:13.788133 master-0 kubenswrapper[38936]: I0216 21:23:13.788077 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1489d1b6-d8a1-453a-bff3-8adfd4335903-client-ca\") pod \"route-controller-manager-85d99cfd66-kjw24\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") " pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" Feb 16 21:23:13.801246 master-0 kubenswrapper[38936]: I0216 21:23:13.801181 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-vvh6n" Feb 16 21:23:13.821633 master-0 kubenswrapper[38936]: I0216 21:23:13.821542 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 16 21:23:13.824398 master-0 kubenswrapper[38936]: I0216 21:23:13.824343 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " 
pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:23:13.841182 master-0 kubenswrapper[38936]: I0216 21:23:13.841141 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-5tbmx" Feb 16 21:23:13.860779 master-0 kubenswrapper[38936]: I0216 21:23:13.860723 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 16 21:23:13.866571 master-0 kubenswrapper[38936]: I0216 21:23:13.866534 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:23:13.882023 master-0 kubenswrapper[38936]: I0216 21:23:13.881922 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 16 21:23:13.886528 master-0 kubenswrapper[38936]: I0216 21:23:13.886494 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7b30888-5994-4968-9db6-9533ac60c92e-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-s4j9z\" (UID: \"f7b30888-5994-4968-9db6-9533ac60c92e\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 21:23:13.903021 master-0 kubenswrapper[38936]: I0216 21:23:13.902959 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 16 21:23:13.907086 master-0 kubenswrapper[38936]: I0216 21:23:13.907029 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:23:13.921684 master-0 kubenswrapper[38936]: I0216 21:23:13.921611 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 16 21:23:13.926577 master-0 kubenswrapper[38936]: I0216 21:23:13.926517 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f7b30888-5994-4968-9db6-9533ac60c92e-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-s4j9z\" (UID: \"f7b30888-5994-4968-9db6-9533ac60c92e\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 21:23:13.941364 master-0 kubenswrapper[38936]: I0216 21:23:13.941281 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-2mlkm" Feb 16 21:23:13.961204 master-0 kubenswrapper[38936]: I0216 21:23:13.961114 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-lbttq" Feb 16 21:23:13.963503 master-0 kubenswrapper[38936]: E0216 21:23:13.963435 38936 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:13.963687 master-0 kubenswrapper[38936]: E0216 21:23:13.963591 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-tls podName:7d6eb694-9a3d-49d1-bbc1-74ba4450d673 nodeName:}" failed. 
No retries permitted until 2026-02-16 21:23:14.963551203 +0000 UTC m=+25.315554605 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-tls") pod "node-exporter-ctvb2" (UID: "7d6eb694-9a3d-49d1-bbc1-74ba4450d673") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:13.968550 master-0 kubenswrapper[38936]: E0216 21:23:13.968482 38936 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:13.968748 master-0 kubenswrapper[38936]: E0216 21:23:13.968578 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-metrics-server-audit-profiles podName:4a9f4f96-ca31-4959-93fe-c094caf8e077 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:14.968558056 +0000 UTC m=+25.320561448 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-metrics-server-audit-profiles") pod "metrics-server-76c9c896c-pz2bk" (UID: "4a9f4f96-ca31-4959-93fe-c094caf8e077") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:13.970792 master-0 kubenswrapper[38936]: E0216 21:23:13.970695 38936 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:13.970792 master-0 kubenswrapper[38936]: E0216 21:23:13.970785 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d56b871-a53a-4928-8967-a33ea9dcec2a-webhook-certs podName:8d56b871-a53a-4928-8967-a33ea9dcec2a nodeName:}" failed. 
No retries permitted until 2026-02-16 21:23:14.970764146 +0000 UTC m=+25.322767538 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8d56b871-a53a-4928-8967-a33ea9dcec2a-webhook-certs") pod "multus-admission-controller-6d678b8d67-shtrw" (UID: "8d56b871-a53a-4928-8967-a33ea9dcec2a") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:13.972943 master-0 kubenswrapper[38936]: E0216 21:23:13.972878 38936 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:13.973068 master-0 kubenswrapper[38936]: E0216 21:23:13.972973 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-server-tls podName:4a9f4f96-ca31-4959-93fe-c094caf8e077 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:14.972952764 +0000 UTC m=+25.324956136 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-server-tls") pod "metrics-server-76c9c896c-pz2bk" (UID: "4a9f4f96-ca31-4959-93fe-c094caf8e077") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:13.973068 master-0 kubenswrapper[38936]: E0216 21:23:13.972982 38936 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:13.973068 master-0 kubenswrapper[38936]: E0216 21:23:13.973023 38936 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:13.973068 master-0 kubenswrapper[38936]: E0216 21:23:13.973063 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-client-certs podName:4a9f4f96-ca31-4959-93fe-c094caf8e077 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:14.973053817 +0000 UTC m=+25.325057189 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-client-certs") pod "metrics-server-76c9c896c-pz2bk" (UID: "4a9f4f96-ca31-4959-93fe-c094caf8e077") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:13.973707 master-0 kubenswrapper[38936]: E0216 21:23:13.973145 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-kube-rbac-proxy-config podName:7d6eb694-9a3d-49d1-bbc1-74ba4450d673 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:14.973105759 +0000 UTC m=+25.325109151 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-kube-rbac-proxy-config") pod "node-exporter-ctvb2" (UID: "7d6eb694-9a3d-49d1-bbc1-74ba4450d673") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:13.974056 master-0 kubenswrapper[38936]: E0216 21:23:13.973999 38936 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-6thqgv1l637aa: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:13.974056 master-0 kubenswrapper[38936]: E0216 21:23:13.974017 38936 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:13.974247 master-0 kubenswrapper[38936]: E0216 21:23:13.974096 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-client-ca-bundle podName:4a9f4f96-ca31-4959-93fe-c094caf8e077 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:14.974077484 +0000 UTC m=+25.326080886 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-client-ca-bundle") pod "metrics-server-76c9c896c-pz2bk" (UID: "4a9f4f96-ca31-4959-93fe-c094caf8e077") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:13.974247 master-0 kubenswrapper[38936]: E0216 21:23:13.974146 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-configmap-kubelet-serving-ca-bundle podName:4a9f4f96-ca31-4959-93fe-c094caf8e077 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:14.974117975 +0000 UTC m=+25.326121517 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-configmap-kubelet-serving-ca-bundle") pod "metrics-server-76c9c896c-pz2bk" (UID: "4a9f4f96-ca31-4959-93fe-c094caf8e077") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:13.980792 master-0 kubenswrapper[38936]: I0216 21:23:13.980743 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 16 21:23:14.000634 master-0 kubenswrapper[38936]: I0216 21:23:14.000576 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 16 21:23:14.023223 master-0 kubenswrapper[38936]: I0216 21:23:14.023118 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 21:23:14.041702 master-0 kubenswrapper[38936]: I0216 21:23:14.041566 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 16 21:23:14.061762 master-0 kubenswrapper[38936]: I0216 21:23:14.061699 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-4brnj" Feb 16 21:23:14.075350 master-0 kubenswrapper[38936]: E0216 21:23:14.075272 38936 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:14.075461 master-0 kubenswrapper[38936]: E0216 21:23:14.075426 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0186fdbf-d367-4bc6-816a-bda2816b599e-serving-cert podName:0186fdbf-d367-4bc6-816a-bda2816b599e nodeName:}" failed. No retries permitted until 2026-02-16 21:23:15.075391163 +0000 UTC m=+25.427394555 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0186fdbf-d367-4bc6-816a-bda2816b599e-serving-cert") pod "console-operator-7777d5cc66-fgr2n" (UID: "0186fdbf-d367-4bc6-816a-bda2816b599e") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:14.075461 master-0 kubenswrapper[38936]: E0216 21:23:14.075276 38936 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:14.075560 master-0 kubenswrapper[38936]: E0216 21:23:14.075528 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0186fdbf-d367-4bc6-816a-bda2816b599e-trusted-ca podName:0186fdbf-d367-4bc6-816a-bda2816b599e nodeName:}" failed. No retries permitted until 2026-02-16 21:23:15.075504486 +0000 UTC m=+25.427507878 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/0186fdbf-d367-4bc6-816a-bda2816b599e-trusted-ca") pod "console-operator-7777d5cc66-fgr2n" (UID: "0186fdbf-d367-4bc6-816a-bda2816b599e") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:14.075617 master-0 kubenswrapper[38936]: E0216 21:23:14.075588 38936 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:14.075781 master-0 kubenswrapper[38936]: E0216 21:23:14.075725 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0186fdbf-d367-4bc6-816a-bda2816b599e-config podName:0186fdbf-d367-4bc6-816a-bda2816b599e nodeName:}" failed. No retries permitted until 2026-02-16 21:23:15.075632469 +0000 UTC m=+25.427636071 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0186fdbf-d367-4bc6-816a-bda2816b599e-config") pod "console-operator-7777d5cc66-fgr2n" (UID: "0186fdbf-d367-4bc6-816a-bda2816b599e") : failed to sync configmap cache: timed out waiting for the condition Feb 16 21:23:14.076547 master-0 kubenswrapper[38936]: E0216 21:23:14.076481 38936 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:14.076607 master-0 kubenswrapper[38936]: E0216 21:23:14.076577 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/277c6354-bff9-407b-ad97-5fdfc7f43116-monitoring-plugin-cert podName:277c6354-bff9-407b-ad97-5fdfc7f43116 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:15.076559153 +0000 UTC m=+25.428562545 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/277c6354-bff9-407b-ad97-5fdfc7f43116-monitoring-plugin-cert") pod "monitoring-plugin-749f8d8bbd-z9ndp" (UID: "277c6354-bff9-407b-ad97-5fdfc7f43116") : failed to sync secret cache: timed out waiting for the condition Feb 16 21:23:14.082171 master-0 kubenswrapper[38936]: I0216 21:23:14.082136 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-rmw54" Feb 16 21:23:14.101644 master-0 kubenswrapper[38936]: I0216 21:23:14.101566 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-6thqgv1l637aa" Feb 16 21:23:14.121507 master-0 kubenswrapper[38936]: I0216 21:23:14.121434 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 16 21:23:14.143918 master-0 kubenswrapper[38936]: I0216 21:23:14.143866 38936 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 16 21:23:14.161262 master-0 kubenswrapper[38936]: I0216 21:23:14.161179 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 16 21:23:14.181254 master-0 kubenswrapper[38936]: I0216 21:23:14.181208 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 21:23:14.202350 master-0 kubenswrapper[38936]: I0216 21:23:14.202303 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 21:23:14.240817 master-0 kubenswrapper[38936]: I0216 21:23:14.239461 38936 request.go:700] Waited for 2.978925398s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 16 21:23:14.243372 master-0 kubenswrapper[38936]: I0216 21:23:14.243307 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 21:23:14.244746 master-0 kubenswrapper[38936]: I0216 21:23:14.243804 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 21:23:14.263610 master-0 kubenswrapper[38936]: I0216 21:23:14.263323 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 21:23:14.281523 master-0 kubenswrapper[38936]: I0216 21:23:14.281444 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-sg7xc" Feb 16 21:23:14.301973 master-0 kubenswrapper[38936]: I0216 21:23:14.301886 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 16 
21:23:14.344876 master-0 kubenswrapper[38936]: I0216 21:23:14.344768 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjsnz\" (UniqueName: \"kubernetes.io/projected/27c20f63-9bfb-4703-94d5-0c65475e08d1-kube-api-access-hjsnz\") pod \"authentication-operator-755d954778-8gnq5\" (UID: \"27c20f63-9bfb-4703-94d5-0c65475e08d1\") " pod="openshift-authentication-operator/authentication-operator-755d954778-8gnq5" Feb 16 21:23:14.358925 master-0 kubenswrapper[38936]: I0216 21:23:14.358721 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmvtk\" (UniqueName: \"kubernetes.io/projected/b27e0202-8bdb-4a36-8c3e-0c203f7665b8-kube-api-access-zmvtk\") pod \"multus-65zz6\" (UID: \"b27e0202-8bdb-4a36-8c3e-0c203f7665b8\") " pod="openshift-multus/multus-65zz6" Feb 16 21:23:14.379830 master-0 kubenswrapper[38936]: I0216 21:23:14.379691 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e0227bc-63f5-48be-95dc-1323a2b2e327-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" Feb 16 21:23:14.396775 master-0 kubenswrapper[38936]: I0216 21:23:14.396127 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf4qg\" (UniqueName: \"kubernetes.io/projected/bd49e653-3b42-4950-8f5f-2b2ecb683678-kube-api-access-kf4qg\") pod \"apiserver-64f7f8746f-xj7z6\" (UID: \"bd49e653-3b42-4950-8f5f-2b2ecb683678\") " pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 21:23:14.415185 master-0 kubenswrapper[38936]: I0216 21:23:14.415135 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqm46\" (UniqueName: \"kubernetes.io/projected/69785167-b4ae-415b-bdcb-029f62effe78-kube-api-access-dqm46\") pod 
\"ovnkube-node-z8h4n\" (UID: \"69785167-b4ae-415b-bdcb-029f62effe78\") " pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:14.432570 master-0 kubenswrapper[38936]: I0216 21:23:14.432512 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxcg6\" (UniqueName: \"kubernetes.io/projected/302156cc-9dca-4a66-9e6a-ba2c7e738c92-kube-api-access-zxcg6\") pod \"control-plane-machine-set-operator-d8bf84b88-8pqbl\" (UID: \"302156cc-9dca-4a66-9e6a-ba2c7e738c92\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl" Feb 16 21:23:14.453380 master-0 kubenswrapper[38936]: I0216 21:23:14.453326 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7adbe32-b8b9-438e-a2e3-f93146a97424-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-xzww8\" (UID: \"e7adbe32-b8b9-438e-a2e3-f93146a97424\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8" Feb 16 21:23:14.473908 master-0 kubenswrapper[38936]: I0216 21:23:14.473852 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw9lp\" (UniqueName: \"kubernetes.io/projected/4085413c-9af1-4d2a-ba0f-33b42025cb7f-kube-api-access-dw9lp\") pod \"csi-snapshot-controller-operator-7b87b97578-v7xdv\" (UID: \"4085413c-9af1-4d2a-ba0f-33b42025cb7f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv" Feb 16 21:23:14.491632 master-0 kubenswrapper[38936]: I0216 21:23:14.491555 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll4rg\" (UniqueName: \"kubernetes.io/projected/70d217a9-86b7-47b9-a7da-9ac920b9c7c2-kube-api-access-ll4rg\") pod \"etcd-operator-67bf55ccdd-8cllz\" (UID: \"70d217a9-86b7-47b9-a7da-9ac920b9c7c2\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz" Feb 16 21:23:14.511303 master-0 
kubenswrapper[38936]: I0216 21:23:14.511230 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bcmr\" (UniqueName: \"kubernetes.io/projected/695549c8-d1fc-429d-9c9f-0a5915dc6074-kube-api-access-7bcmr\") pod \"openshift-controller-manager-operator-5f5f84757d-k42w9\" (UID: \"695549c8-d1fc-429d-9c9f-0a5915dc6074\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9" Feb 16 21:23:14.535781 master-0 kubenswrapper[38936]: I0216 21:23:14.535515 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7wrr\" (UniqueName: \"kubernetes.io/projected/456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd-kube-api-access-p7wrr\") pod \"dns-operator-86b8869b79-cdltb\" (UID: \"456d6155-7e1c-48d5-a3b3-4ec3bac6cdcd\") " pod="openshift-dns-operator/dns-operator-86b8869b79-cdltb" Feb 16 21:23:14.557496 master-0 kubenswrapper[38936]: I0216 21:23:14.557437 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x85fb\" (UniqueName: \"kubernetes.io/projected/88f19cea-60ed-4977-a906-75deec51fc3d-kube-api-access-x85fb\") pod \"network-node-identity-tpj6f\" (UID: \"88f19cea-60ed-4977-a906-75deec51fc3d\") " pod="openshift-network-node-identity/network-node-identity-tpj6f" Feb 16 21:23:14.576297 master-0 kubenswrapper[38936]: I0216 21:23:14.576245 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzm2t\" (UniqueName: \"kubernetes.io/projected/34743ce3-5eda-4c60-99cb-640dd067ebdf-kube-api-access-vzm2t\") pod \"node-resolver-zfldn\" (UID: \"34743ce3-5eda-4c60-99cb-640dd067ebdf\") " pod="openshift-dns/node-resolver-zfldn" Feb 16 21:23:14.591797 master-0 kubenswrapper[38936]: I0216 21:23:14.591749 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9vmp\" (UniqueName: \"kubernetes.io/projected/9e0227bc-63f5-48be-95dc-1323a2b2e327-kube-api-access-z9vmp\") pod 
\"cluster-image-registry-operator-96c8c64b8-4gczb\" (UID: \"9e0227bc-63f5-48be-95dc-1323a2b2e327\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb" Feb 16 21:23:14.618778 master-0 kubenswrapper[38936]: I0216 21:23:14.618719 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxtft\" (UniqueName: \"kubernetes.io/projected/1a986ba3-2aea-4133-a05b-f69d4e0d8d3b-kube-api-access-vxtft\") pod \"operator-controller-controller-manager-85c9b89969-qzs2g\" (UID: \"1a986ba3-2aea-4133-a05b-f69d4e0d8d3b\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g" Feb 16 21:23:14.630727 master-0 kubenswrapper[38936]: I0216 21:23:14.630676 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdx88\" (UniqueName: \"kubernetes.io/projected/17aaf0e1-e9c7-486c-83fc-47d71f5e1f64-kube-api-access-cdx88\") pod \"tuned-llsw4\" (UID: \"17aaf0e1-e9c7-486c-83fc-47d71f5e1f64\") " pod="openshift-cluster-node-tuning-operator/tuned-llsw4" Feb 16 21:23:14.653396 master-0 kubenswrapper[38936]: I0216 21:23:14.653333 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64qvl\" (UniqueName: \"kubernetes.io/projected/2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf-kube-api-access-64qvl\") pod \"dns-default-7bbrn\" (UID: \"2dcfb4b8-1d96-4597-8e76-5c0c3a47c4cf\") " pod="openshift-dns/dns-default-7bbrn" Feb 16 21:23:14.675687 master-0 kubenswrapper[38936]: I0216 21:23:14.675616 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59kpw\" (UniqueName: \"kubernetes.io/projected/1d453639-52ed-4a14-a2ee-02cf9acc2f7c-kube-api-access-59kpw\") pod \"network-metrics-daemon-42bw7\" (UID: \"1d453639-52ed-4a14-a2ee-02cf9acc2f7c\") " pod="openshift-multus/network-metrics-daemon-42bw7" Feb 16 21:23:14.698004 master-0 kubenswrapper[38936]: I0216 21:23:14.697940 38936 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a5d4ac48-aed3-46b9-9b2a-d741121e05b4-kube-api-access\") pod \"cluster-version-operator-649c4f5445-n994s\" (UID: \"a5d4ac48-aed3-46b9-9b2a-d741121e05b4\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-n994s" Feb 16 21:23:14.715118 master-0 kubenswrapper[38936]: I0216 21:23:14.715062 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67qzh\" (UniqueName: \"kubernetes.io/projected/b28234d1-1d9a-4d9f-9ad1-e3c682bed492-kube-api-access-67qzh\") pod \"marketplace-operator-6cc5b65c6b-6rmhq\" (UID: \"b28234d1-1d9a-4d9f-9ad1-e3c682bed492\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 21:23:14.738127 master-0 kubenswrapper[38936]: I0216 21:23:14.738067 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv45g\" (UniqueName: \"kubernetes.io/projected/99ab949e-bd0d-45a7-95d1-8381d9f1f5f3-kube-api-access-hv45g\") pod \"service-ca-676cd8b9b5-cbj2r\" (UID: \"99ab949e-bd0d-45a7-95d1-8381d9f1f5f3\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cbj2r" Feb 16 21:23:14.757305 master-0 kubenswrapper[38936]: I0216 21:23:14.757228 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxvhm\" (UniqueName: \"kubernetes.io/projected/e8194cdc-3133-49e2-9579-a747c0bf2b16-kube-api-access-hxvhm\") pod \"catalogd-controller-manager-67bc7c997f-8kdgg\" (UID: \"e8194cdc-3133-49e2-9579-a747c0bf2b16\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 21:23:14.777957 master-0 kubenswrapper[38936]: I0216 21:23:14.777870 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkz65\" (UniqueName: \"kubernetes.io/projected/6b6be6de-6fcc-4f57-b163-fe8f970a01a4-kube-api-access-mkz65\") pod \"openshift-apiserver-operator-6d4655d9cf-tvzdw\" (UID: 
\"6b6be6de-6fcc-4f57-b163-fe8f970a01a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw" Feb 16 21:23:14.797076 master-0 kubenswrapper[38936]: I0216 21:23:14.796971 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pw88\" (UniqueName: \"kubernetes.io/projected/2ab0a907-7abe-4808-ba21-bdda1506eae2-kube-api-access-9pw88\") pod \"service-ca-operator-5dc4688546-q5vjl\" (UID: \"2ab0a907-7abe-4808-ba21-bdda1506eae2\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl" Feb 16 21:23:14.818688 master-0 kubenswrapper[38936]: I0216 21:23:14.818615 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrc7l\" (UniqueName: \"kubernetes.io/projected/2e618c5c-52be-4b52-b426-b92555dee9de-kube-api-access-nrc7l\") pod \"catalog-operator-588944557d-h7xl6\" (UID: \"2e618c5c-52be-4b52-b426-b92555dee9de\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6" Feb 16 21:23:14.836720 master-0 kubenswrapper[38936]: I0216 21:23:14.836630 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-7p9ft\" (UID: \"7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft" Feb 16 21:23:14.858947 master-0 kubenswrapper[38936]: I0216 21:23:14.858875 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7nmb\" (UniqueName: \"kubernetes.io/projected/4b035e85-b2b0-4dee-bb86-3465fc4b98a8-kube-api-access-g7nmb\") pod \"package-server-manager-5c696dbdcd-9m94g\" (UID: \"4b035e85-b2b0-4dee-bb86-3465fc4b98a8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g" Feb 16 21:23:14.874064 master-0 kubenswrapper[38936]: 
I0216 21:23:14.874003 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7pk6\" (UniqueName: \"kubernetes.io/projected/1b61063e-775e-421d-bf73-a6ef134293a0-kube-api-access-x7pk6\") pod \"network-operator-6fcf4c966-n4hfs\" (UID: \"1b61063e-775e-421d-bf73-a6ef134293a0\") " pod="openshift-network-operator/network-operator-6fcf4c966-n4hfs" Feb 16 21:23:14.894039 master-0 kubenswrapper[38936]: I0216 21:23:14.893942 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxhfs\" (UniqueName: \"kubernetes.io/projected/3403d2bf-b093-4f2e-80aa-73a3d6bcaffb-kube-api-access-gxhfs\") pod \"network-check-source-7d8f4c8c66-w6tqw\" (UID: \"3403d2bf-b093-4f2e-80aa-73a3d6bcaffb\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-w6tqw" Feb 16 21:23:14.917681 master-0 kubenswrapper[38936]: I0216 21:23:14.917594 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkdzb\" (UniqueName: \"kubernetes.io/projected/d9d71a7a-a751-4de4-9c76-9bac85fe0177-kube-api-access-jkdzb\") pod \"iptables-alerter-b68cj\" (UID: \"d9d71a7a-a751-4de4-9c76-9bac85fe0177\") " pod="openshift-network-operator/iptables-alerter-b68cj" Feb 16 21:23:14.950885 master-0 kubenswrapper[38936]: I0216 21:23:14.950825 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sd27\" (UniqueName: \"kubernetes.io/projected/a4c9b781-14c0-469c-bb9e-0c3982a04520-kube-api-access-8sd27\") pod \"olm-operator-6b56bd877c-vlhvq\" (UID: \"a4c9b781-14c0-469c-bb9e-0c3982a04520\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq" Feb 16 21:23:14.959762 master-0 kubenswrapper[38936]: I0216 21:23:14.959717 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx2kd\" (UniqueName: \"kubernetes.io/projected/c7333319-3fe6-4b3f-b600-6b6df49fcaff-kube-api-access-qx2kd\") pod 
\"kube-storage-version-migrator-operator-cd5474998-56v4p\" (UID: \"c7333319-3fe6-4b3f-b600-6b6df49fcaff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p" Feb 16 21:23:14.975220 master-0 kubenswrapper[38936]: I0216 21:23:14.975168 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx4tz\" (UniqueName: \"kubernetes.io/projected/b27de289-c0f9-47ff-aac6-15b7bc1b178a-kube-api-access-fx4tz\") pod \"multus-admission-controller-7c64d55f8-z46jt\" (UID: \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" Feb 16 21:23:14.995666 master-0 kubenswrapper[38936]: I0216 21:23:14.995577 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vklwz\" (UniqueName: \"kubernetes.io/projected/59237aa6-6250-4619-8ee5-abae59f04b57-kube-api-access-vklwz\") pod \"openshift-config-operator-7c6bdb986f-xbd96\" (UID: \"59237aa6-6250-4619-8ee5-abae59f04b57\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96" Feb 16 21:23:15.015926 master-0 kubenswrapper[38936]: I0216 21:23:15.015852 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9w8k\" (UniqueName: \"kubernetes.io/projected/684a8167-6c5b-430f-979e-307e58487611-kube-api-access-s9w8k\") pod \"migrator-5bd989df77-kdb9d\" (UID: \"684a8167-6c5b-430f-979e-307e58487611\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-kdb9d" Feb 16 21:23:15.030216 master-0 kubenswrapper[38936]: I0216 21:23:15.030168 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx4tz\" (UniqueName: \"kubernetes.io/projected/b27de289-c0f9-47ff-aac6-15b7bc1b178a-kube-api-access-fx4tz\") pod \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\" (UID: \"b27de289-c0f9-47ff-aac6-15b7bc1b178a\") " Feb 16 21:23:15.030505 master-0 kubenswrapper[38936]: I0216 
21:23:15.030474 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-client-ca-bundle\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:23:15.030835 master-0 kubenswrapper[38936]: I0216 21:23:15.030773 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:23:15.031239 master-0 kubenswrapper[38936]: I0216 21:23:15.031196 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:23:15.031378 master-0 kubenswrapper[38936]: I0216 21:23:15.031203 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-client-ca-bundle\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:23:15.031562 master-0 kubenswrapper[38936]: I0216 21:23:15.031531 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-tls\") pod \"node-exporter-ctvb2\" (UID: 
\"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:23:15.031760 master-0 kubenswrapper[38936]: I0216 21:23:15.031727 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-metrics-server-audit-profiles\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:23:15.031817 master-0 kubenswrapper[38936]: I0216 21:23:15.031797 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8d56b871-a53a-4928-8967-a33ea9dcec2a-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-shtrw\" (UID: \"8d56b871-a53a-4928-8967-a33ea9dcec2a\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-shtrw" Feb 16 21:23:15.031931 master-0 kubenswrapper[38936]: I0216 21:23:15.031900 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-tls\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:23:15.032112 master-0 kubenswrapper[38936]: I0216 21:23:15.032069 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 21:23:15.032228 master-0 kubenswrapper[38936]: I0216 21:23:15.032142 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" 
(UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-server-tls\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:23:15.032228 master-0 kubenswrapper[38936]: I0216 21:23:15.032174 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-client-certs\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:23:15.032507 master-0 kubenswrapper[38936]: I0216 21:23:15.032454 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-metrics-server-audit-profiles\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:23:15.032545 master-0 kubenswrapper[38936]: I0216 21:23:15.032519 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-client-certs\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:23:15.032545 master-0 kubenswrapper[38936]: I0216 21:23:15.032529 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 
21:23:15.032601 master-0 kubenswrapper[38936]: I0216 21:23:15.032559 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8d56b871-a53a-4928-8967-a33ea9dcec2a-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-shtrw\" (UID: \"8d56b871-a53a-4928-8967-a33ea9dcec2a\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-shtrw" Feb 16 21:23:15.033290 master-0 kubenswrapper[38936]: I0216 21:23:15.033204 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-server-tls\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:23:15.038734 master-0 kubenswrapper[38936]: I0216 21:23:15.038389 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b27de289-c0f9-47ff-aac6-15b7bc1b178a-kube-api-access-fx4tz" (OuterVolumeSpecName: "kube-api-access-fx4tz") pod "b27de289-c0f9-47ff-aac6-15b7bc1b178a" (UID: "b27de289-c0f9-47ff-aac6-15b7bc1b178a"). InnerVolumeSpecName "kube-api-access-fx4tz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:23:15.046316 master-0 kubenswrapper[38936]: I0216 21:23:15.046252 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcp5t\" (UniqueName: \"kubernetes.io/projected/0d903d23-8e0b-424b-bcd0-e0a00f306e49-kube-api-access-kcp5t\") pod \"network-check-target-68c25\" (UID: \"0d903d23-8e0b-424b-bcd0-e0a00f306e49\") " pod="openshift-network-diagnostics/network-check-target-68c25" Feb 16 21:23:15.062012 master-0 kubenswrapper[38936]: I0216 21:23:15.061922 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b02b740-5698-4e9a-90fe-2873bd0b0958-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-cl5ld\" (UID: \"0b02b740-5698-4e9a-90fe-2873bd0b0958\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld" Feb 16 21:23:15.079278 master-0 kubenswrapper[38936]: I0216 21:23:15.079223 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sq4t\" (UniqueName: \"kubernetes.io/projected/62935559-041f-4694-9d36-adc809d079b4-kube-api-access-6sq4t\") pod \"multus-additional-cni-plugins-8zsx4\" (UID: \"62935559-041f-4694-9d36-adc809d079b4\") " pod="openshift-multus/multus-additional-cni-plugins-8zsx4" Feb 16 21:23:15.096638 master-0 kubenswrapper[38936]: I0216 21:23:15.096555 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jt7h\" (UniqueName: \"kubernetes.io/projected/ec7dd4ea-a139-45d4-96a4-506da1567292-kube-api-access-9jt7h\") pod \"cluster-monitoring-operator-756d64c8c4-w57zn\" (UID: \"ec7dd4ea-a139-45d4-96a4-506da1567292\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn" Feb 16 21:23:15.113950 master-0 kubenswrapper[38936]: I0216 21:23:15.113863 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4djt\" (UniqueName: 
\"kubernetes.io/projected/d2501eec-47c8-47bc-b0c9-28d94c06075b-kube-api-access-x4djt\") pod \"apiserver-6bdb76b9b7-z46x6\" (UID: \"d2501eec-47c8-47bc-b0c9-28d94c06075b\") " pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 21:23:15.133602 master-0 kubenswrapper[38936]: I0216 21:23:15.133513 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/277c6354-bff9-407b-ad97-5fdfc7f43116-monitoring-plugin-cert\") pod \"monitoring-plugin-749f8d8bbd-z9ndp\" (UID: \"277c6354-bff9-407b-ad97-5fdfc7f43116\") " pod="openshift-monitoring/monitoring-plugin-749f8d8bbd-z9ndp" Feb 16 21:23:15.133887 master-0 kubenswrapper[38936]: I0216 21:23:15.133847 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0186fdbf-d367-4bc6-816a-bda2816b599e-trusted-ca\") pod \"console-operator-7777d5cc66-fgr2n\" (UID: \"0186fdbf-d367-4bc6-816a-bda2816b599e\") " pod="openshift-console-operator/console-operator-7777d5cc66-fgr2n" Feb 16 21:23:15.134152 master-0 kubenswrapper[38936]: I0216 21:23:15.134109 38936 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 16 21:23:15.134313 master-0 kubenswrapper[38936]: I0216 21:23:15.134270 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0186fdbf-d367-4bc6-816a-bda2816b599e-serving-cert\") pod \"console-operator-7777d5cc66-fgr2n\" (UID: \"0186fdbf-d367-4bc6-816a-bda2816b599e\") " pod="openshift-console-operator/console-operator-7777d5cc66-fgr2n" Feb 16 21:23:15.134475 master-0 kubenswrapper[38936]: I0216 21:23:15.134439 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0186fdbf-d367-4bc6-816a-bda2816b599e-config\") pod \"console-operator-7777d5cc66-fgr2n\" (UID: \"0186fdbf-d367-4bc6-816a-bda2816b599e\") " pod="openshift-console-operator/console-operator-7777d5cc66-fgr2n" Feb 16 21:23:15.135410 master-0 kubenswrapper[38936]: I0216 21:23:15.135330 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0186fdbf-d367-4bc6-816a-bda2816b599e-config\") pod \"console-operator-7777d5cc66-fgr2n\" (UID: \"0186fdbf-d367-4bc6-816a-bda2816b599e\") " pod="openshift-console-operator/console-operator-7777d5cc66-fgr2n" Feb 16 21:23:15.135516 master-0 kubenswrapper[38936]: I0216 21:23:15.135413 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0186fdbf-d367-4bc6-816a-bda2816b599e-trusted-ca\") pod \"console-operator-7777d5cc66-fgr2n\" (UID: \"0186fdbf-d367-4bc6-816a-bda2816b599e\") " pod="openshift-console-operator/console-operator-7777d5cc66-fgr2n" Feb 16 21:23:15.135669 master-0 kubenswrapper[38936]: I0216 21:23:15.135602 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx4tz\" (UniqueName: 
\"kubernetes.io/projected/b27de289-c0f9-47ff-aac6-15b7bc1b178a-kube-api-access-fx4tz\") on node \"master-0\" DevicePath \"\"" Feb 16 21:23:15.135826 master-0 kubenswrapper[38936]: I0216 21:23:15.135789 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cef33294-81fb-41a2-811d-2565f94514d1-bound-sa-token\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 21:23:15.136987 master-0 kubenswrapper[38936]: I0216 21:23:15.136953 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/277c6354-bff9-407b-ad97-5fdfc7f43116-monitoring-plugin-cert\") pod \"monitoring-plugin-749f8d8bbd-z9ndp\" (UID: \"277c6354-bff9-407b-ad97-5fdfc7f43116\") " pod="openshift-monitoring/monitoring-plugin-749f8d8bbd-z9ndp" Feb 16 21:23:15.139435 master-0 kubenswrapper[38936]: I0216 21:23:15.139396 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0186fdbf-d367-4bc6-816a-bda2816b599e-serving-cert\") pod \"console-operator-7777d5cc66-fgr2n\" (UID: \"0186fdbf-d367-4bc6-816a-bda2816b599e\") " pod="openshift-console-operator/console-operator-7777d5cc66-fgr2n" Feb 16 21:23:15.154400 master-0 kubenswrapper[38936]: I0216 21:23:15.154351 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tklr\" (UniqueName: \"kubernetes.io/projected/cef33294-81fb-41a2-811d-2565f94514d1-kube-api-access-5tklr\") pod \"ingress-operator-c588d8cb4-6ps2d\" (UID: \"cef33294-81fb-41a2-811d-2565f94514d1\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" Feb 16 21:23:15.185018 master-0 kubenswrapper[38936]: I0216 21:23:15.184961 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vzx4s\" (UniqueName: \"kubernetes.io/projected/b1ac9776-54c4-46ce-b898-01c8cf35e593-kube-api-access-vzx4s\") pod \"csi-snapshot-controller-74b6595c6d-pc6x9\" (UID: \"b1ac9776-54c4-46ce-b898-01c8cf35e593\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9" Feb 16 21:23:15.204841 master-0 kubenswrapper[38936]: I0216 21:23:15.204782 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xgcn\" (UniqueName: \"kubernetes.io/projected/c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee-kube-api-access-7xgcn\") pod \"router-default-864ddd5f56-z4bnk\" (UID: \"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee\") " pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:23:15.227914 master-0 kubenswrapper[38936]: I0216 21:23:15.227837 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6wng\" (UniqueName: \"kubernetes.io/projected/484154d0-66c8-4d0e-bf1b-f48d0abfe628-kube-api-access-b6wng\") pod \"ovnkube-control-plane-bb7ffbb8d-xlkvd\" (UID: \"484154d0-66c8-4d0e-bf1b-f48d0abfe628\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd" Feb 16 21:23:15.239757 master-0 kubenswrapper[38936]: I0216 21:23:15.239695 38936 request.go:700] Waited for 3.940746176s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/serviceaccounts/cluster-node-tuning-operator/token Feb 16 21:23:15.246551 master-0 kubenswrapper[38936]: I0216 21:23:15.246495 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfmv6\" (UniqueName: \"kubernetes.io/projected/5e062e07-8076-444c-b476-4eb2848e9613-kube-api-access-dfmv6\") pod \"cluster-olm-operator-55b69c6c48-pdjn4\" (UID: \"5e062e07-8076-444c-b476-4eb2848e9613\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4" Feb 16 21:23:15.264825 master-0 
kubenswrapper[38936]: I0216 21:23:15.264740 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qd6r\" (UniqueName: \"kubernetes.io/projected/2506c282-0b37-4ece-8a0c-885d0b7f7901-kube-api-access-6qd6r\") pod \"cluster-node-tuning-operator-ff6c9b66-kh4d4\" (UID: \"2506c282-0b37-4ece-8a0c-885d0b7f7901\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4" Feb 16 21:23:15.270480 master-0 kubenswrapper[38936]: I0216 21:23:15.270435 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-749f8d8bbd-z9ndp" Feb 16 21:23:15.303319 master-0 kubenswrapper[38936]: I0216 21:23:15.303174 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldzxc\" (UniqueName: \"kubernetes.io/projected/03a5021d-8a5c-4011-a9f9-c5eb38d5f236-kube-api-access-ldzxc\") pod \"cloud-credential-operator-595c8f9ff-7mpsf\" (UID: \"03a5021d-8a5c-4011-a9f9-c5eb38d5f236\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf" Feb 16 21:23:15.321226 master-0 kubenswrapper[38936]: I0216 21:23:15.321167 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx8bf\" (UniqueName: \"kubernetes.io/projected/aa2e9bbc-3962-45f5-a7cc-2dc059409e70-kube-api-access-wx8bf\") pod \"cluster-storage-operator-75b869db96-g4w5m\" (UID: \"aa2e9bbc-3962-45f5-a7cc-2dc059409e70\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m" Feb 16 21:23:15.343697 master-0 kubenswrapper[38936]: I0216 21:23:15.343615 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22pl9\" (UniqueName: \"kubernetes.io/projected/8d56b871-a53a-4928-8967-a33ea9dcec2a-kube-api-access-22pl9\") pod \"multus-admission-controller-6d678b8d67-shtrw\" (UID: \"8d56b871-a53a-4928-8967-a33ea9dcec2a\") " 
pod="openshift-multus/multus-admission-controller-6d678b8d67-shtrw" Feb 16 21:23:15.351837 master-0 kubenswrapper[38936]: I0216 21:23:15.351785 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqkvs\" (UniqueName: \"kubernetes.io/projected/230d9624-2d9d-4036-967b-b530347f05d5-kube-api-access-vqkvs\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn\" (UID: \"230d9624-2d9d-4036-967b-b530347f05d5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn" Feb 16 21:23:15.378960 master-0 kubenswrapper[38936]: I0216 21:23:15.378818 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgvx2\" (UniqueName: \"kubernetes.io/projected/913951bb-1702-4b71-862c-a166bc7a62e0-kube-api-access-pgvx2\") pod \"machine-config-server-qvctv\" (UID: \"913951bb-1702-4b71-862c-a166bc7a62e0\") " pod="openshift-machine-config-operator/machine-config-server-qvctv" Feb 16 21:23:15.390933 master-0 kubenswrapper[38936]: I0216 21:23:15.390881 38936 scope.go:117] "RemoveContainer" containerID="cddc9c1d447dc5a0250ef24bddae48c93c58b480b6bca11a2ff7438d4148bf8f" Feb 16 21:23:15.395173 master-0 kubenswrapper[38936]: I0216 21:23:15.395135 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq9c5\" (UniqueName: \"kubernetes.io/projected/d8d648c7-b84b-4f43-84c9-903aead0891a-kube-api-access-nq9c5\") pod \"redhat-operators-69wj8\" (UID: \"d8d648c7-b84b-4f43-84c9-903aead0891a\") " pod="openshift-marketplace/redhat-operators-69wj8" Feb 16 21:23:15.411641 master-0 kubenswrapper[38936]: I0216 21:23:15.411592 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trcfg\" (UniqueName: \"kubernetes.io/projected/065fcd43-1572-4152-b77b-a6b7ab52a081-kube-api-access-trcfg\") pod \"machine-approver-8569dd85ff-kvhs4\" (UID: \"065fcd43-1572-4152-b77b-a6b7ab52a081\") " 
pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4" Feb 16 21:23:15.419592 master-0 kubenswrapper[38936]: I0216 21:23:15.419553 38936 scope.go:117] "RemoveContainer" containerID="7eb9d606c0ba4432a3c104c5bb2952f3efa3dee4e29f1c0d81a5b0db607ceac8" Feb 16 21:23:15.440106 master-0 kubenswrapper[38936]: I0216 21:23:15.440051 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgjlj\" (UniqueName: \"kubernetes.io/projected/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-kube-api-access-dgjlj\") pod \"cni-sysctl-allowlist-ds-k8h7h\" (UID: \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\") " pod="openshift-multus/cni-sysctl-allowlist-ds-k8h7h" Feb 16 21:23:15.442690 master-0 kubenswrapper[38936]: I0216 21:23:15.442635 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgjlj\" (UniqueName: \"kubernetes.io/projected/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-kube-api-access-dgjlj\") pod \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\" (UID: \"3e3ccb9a-4a5d-4a04-8334-b1e303b215a5\") " Feb 16 21:23:15.452848 master-0 kubenswrapper[38936]: I0216 21:23:15.452807 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qcq9\" (UniqueName: \"kubernetes.io/projected/88c9d2fb-763f-4405-8d1a-c39039b41d3b-kube-api-access-8qcq9\") pod \"machine-config-daemon-jb6tl\" (UID: \"88c9d2fb-763f-4405-8d1a-c39039b41d3b\") " pod="openshift-machine-config-operator/machine-config-daemon-jb6tl" Feb 16 21:23:15.463973 master-0 kubenswrapper[38936]: I0216 21:23:15.462596 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-kube-api-access-dgjlj" (OuterVolumeSpecName: "kube-api-access-dgjlj") pod "3e3ccb9a-4a5d-4a04-8334-b1e303b215a5" (UID: "3e3ccb9a-4a5d-4a04-8334-b1e303b215a5"). InnerVolumeSpecName "kube-api-access-dgjlj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:23:15.478802 master-0 kubenswrapper[38936]: I0216 21:23:15.478612 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vddxb\" (UniqueName: \"kubernetes.io/projected/0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b-kube-api-access-vddxb\") pod \"ingress-canary-l44qd\" (UID: \"0efeb0f8-6bb8-47ee-a8ac-c2380df5f55b\") " pod="openshift-ingress-canary/ingress-canary-l44qd" Feb 16 21:23:15.500206 master-0 kubenswrapper[38936]: I0216 21:23:15.492584 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vcsp\" (UniqueName: \"kubernetes.io/projected/fb1eac23-18a5-4706-adcd-81d83e04cd12-kube-api-access-8vcsp\") pod \"machine-config-controller-686c884b4d-6j2l4\" (UID: \"fb1eac23-18a5-4706-adcd-81d83e04cd12\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4" Feb 16 21:23:15.519238 master-0 kubenswrapper[38936]: I0216 21:23:15.519200 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wckst\" (UniqueName: \"kubernetes.io/projected/e9bd1f48-6d45-4045-b18e-46ce3005d51d-kube-api-access-wckst\") pod \"kube-state-metrics-7cc9598d54-n467n\" (UID: \"e9bd1f48-6d45-4045-b18e-46ce3005d51d\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-n467n" Feb 16 21:23:15.538385 master-0 kubenswrapper[38936]: I0216 21:23:15.538341 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqkgp\" (UniqueName: \"kubernetes.io/projected/853452fb-1035-4f22-8aeb-9043d150e8ca-kube-api-access-zqkgp\") pod \"certified-operators-blw8x\" (UID: \"853452fb-1035-4f22-8aeb-9043d150e8ca\") " pod="openshift-marketplace/certified-operators-blw8x" Feb 16 21:23:15.553386 master-0 kubenswrapper[38936]: I0216 21:23:15.553301 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgjlj\" (UniqueName: 
\"kubernetes.io/projected/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5-kube-api-access-dgjlj\") on node \"master-0\" DevicePath \"\"" Feb 16 21:23:15.557368 master-0 kubenswrapper[38936]: I0216 21:23:15.557325 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgj2q\" (UniqueName: \"kubernetes.io/projected/8b648d9e-a892-4951-b0e2-fed6b16273d4-kube-api-access-sgj2q\") pod \"cluster-baremetal-operator-7bc947fc7d-xwptz\" (UID: \"8b648d9e-a892-4951-b0e2-fed6b16273d4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz" Feb 16 21:23:15.579546 master-0 kubenswrapper[38936]: I0216 21:23:15.579502 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbfdg\" (UniqueName: \"kubernetes.io/projected/f7b30888-5994-4968-9db6-9533ac60c92e-kube-api-access-fbfdg\") pod \"openshift-state-metrics-546cc7d765-s4j9z\" (UID: \"f7b30888-5994-4968-9db6-9533ac60c92e\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z" Feb 16 21:23:15.596351 master-0 kubenswrapper[38936]: I0216 21:23:15.596293 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcwzq\" (UniqueName: \"kubernetes.io/projected/ce229d27-837d-4a98-80fc-d56877ae39b8-kube-api-access-dcwzq\") pod \"community-operators-j5kwc\" (UID: \"ce229d27-837d-4a98-80fc-d56877ae39b8\") " pod="openshift-marketplace/community-operators-j5kwc" Feb 16 21:23:15.616857 master-0 kubenswrapper[38936]: I0216 21:23:15.616797 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtrzq\" (UniqueName: \"kubernetes.io/projected/319dc882-e1f5-40f9-99f4-2bae028337e5-kube-api-access-mtrzq\") pod \"packageserver-78d4b6b677-npmx4\" (UID: \"319dc882-e1f5-40f9-99f4-2bae028337e5\") " pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4" Feb 16 21:23:15.633488 master-0 kubenswrapper[38936]: I0216 21:23:15.633427 38936 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-98n4h\" (UniqueName: \"kubernetes.io/projected/a0b7a368-1408-4fc3-ae25-4613b74e7fca-kube-api-access-98n4h\") pod \"prometheus-operator-7485d645b8-9xc4n\" (UID: \"a0b7a368-1408-4fc3-ae25-4613b74e7fca\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-9xc4n" Feb 16 21:23:15.666670 master-0 kubenswrapper[38936]: I0216 21:23:15.666599 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrc4z\" (UniqueName: \"kubernetes.io/projected/4a9f4f96-ca31-4959-93fe-c094caf8e077-kube-api-access-xrc4z\") pod \"metrics-server-76c9c896c-pz2bk\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:23:15.678560 master-0 kubenswrapper[38936]: I0216 21:23:15.678522 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmn29\" (UniqueName: \"kubernetes.io/projected/f275e79f-923c-4d3a-8ed4-084a122ddcf4-kube-api-access-cmn29\") pod \"redhat-marketplace-sn2nh\" (UID: \"f275e79f-923c-4d3a-8ed4-084a122ddcf4\") " pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 21:23:15.692191 master-0 kubenswrapper[38936]: I0216 21:23:15.692143 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmhq9\" (UniqueName: \"kubernetes.io/projected/ff193060-a272-4e4e-990a-83ac410f523d-kube-api-access-wmhq9\") pod \"machine-config-operator-84976bb859-jwh5s\" (UID: \"ff193060-a272-4e4e-990a-83ac410f523d\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s" Feb 16 21:23:15.717056 master-0 kubenswrapper[38936]: I0216 21:23:15.716989 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25mkq\" (UniqueName: \"kubernetes.io/projected/1d7d0416-5f50-42bd-826b-92eecf9adcec-kube-api-access-25mkq\") pod \"cluster-autoscaler-operator-67fd9768b5-557vd\" (UID: \"1d7d0416-5f50-42bd-826b-92eecf9adcec\") " 
pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd" Feb 16 21:23:15.736438 master-0 kubenswrapper[38936]: I0216 21:23:15.736388 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf2w4\" (UniqueName: \"kubernetes.io/projected/ba294358-051a-4f09-b182-710d3d6778c5-kube-api-access-qf2w4\") pod \"machine-api-operator-bd7dd5c46-27jwb\" (UID: \"ba294358-051a-4f09-b182-710d3d6778c5\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb" Feb 16 21:23:15.761773 master-0 kubenswrapper[38936]: I0216 21:23:15.756465 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npfk7\" (UniqueName: \"kubernetes.io/projected/e9615af2-cad5-4705-9c2f-6f3c97026100-kube-api-access-npfk7\") pod \"insights-operator-cb4f7b4cf-h8f7q\" (UID: \"e9615af2-cad5-4705-9c2f-6f3c97026100\") " pod="openshift-insights/insights-operator-cb4f7b4cf-h8f7q" Feb 16 21:23:15.784148 master-0 kubenswrapper[38936]: I0216 21:23:15.780795 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc47v\" (UniqueName: \"kubernetes.io/projected/1489d1b6-d8a1-453a-bff3-8adfd4335903-kube-api-access-xc47v\") pod \"route-controller-manager-85d99cfd66-kjw24\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") " pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" Feb 16 21:23:15.787098 master-0 kubenswrapper[38936]: I0216 21:23:15.786962 38936 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:23:15.794273 master-0 kubenswrapper[38936]: I0216 21:23:15.794215 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jh6l\" (UniqueName: \"kubernetes.io/projected/7d6eb694-9a3d-49d1-bbc1-74ba4450d673-kube-api-access-6jh6l\") pod \"node-exporter-ctvb2\" (UID: \"7d6eb694-9a3d-49d1-bbc1-74ba4450d673\") " pod="openshift-monitoring/node-exporter-ctvb2" Feb 16 
21:23:15.820016 master-0 kubenswrapper[38936]: I0216 21:23:15.819886 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvw2m\" (UniqueName: \"kubernetes.io/projected/408a9364-3730-4017-b1e4-c85d6a504168-kube-api-access-lvw2m\") pod \"controller-manager-6998cd96fb-bgcb2\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") " pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" Feb 16 21:23:15.831130 master-0 kubenswrapper[38936]: I0216 21:23:15.831088 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hnc6\" (UniqueName: \"kubernetes.io/projected/55095f4f-cac0-456c-9ccc-45869392408c-kube-api-access-7hnc6\") pod \"cluster-samples-operator-f8cbff74c-d7lfl\" (UID: \"55095f4f-cac0-456c-9ccc-45869392408c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl" Feb 16 21:23:15.856965 master-0 kubenswrapper[38936]: I0216 21:23:15.856922 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbb86\" (UniqueName: \"kubernetes.io/projected/0186fdbf-d367-4bc6-816a-bda2816b599e-kube-api-access-nbb86\") pod \"console-operator-7777d5cc66-fgr2n\" (UID: \"0186fdbf-d367-4bc6-816a-bda2816b599e\") " pod="openshift-console-operator/console-operator-7777d5cc66-fgr2n" Feb 16 21:23:15.873336 master-0 kubenswrapper[38936]: E0216 21:23:15.873292 38936 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 16 21:23:15.873336 master-0 kubenswrapper[38936]: E0216 21:23:15.873335 38936 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-retry-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 16 21:23:15.873499 master-0 kubenswrapper[38936]: E0216 21:23:15.873402 38936 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access podName:1f8a26db-5a90-4da9-9074-33256ef17100 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:16.373382609 +0000 UTC m=+26.725385961 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access") pod "installer-1-retry-1-master-0" (UID: "1f8a26db-5a90-4da9-9074-33256ef17100") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 16 21:23:15.908819 master-0 kubenswrapper[38936]: E0216 21:23:15.908772 38936 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.035s" Feb 16 21:23:15.918390 master-0 kubenswrapper[38936]: I0216 21:23:15.918345 38936 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 21:23:15.941613 master-0 kubenswrapper[38936]: I0216 21:23:15.941545 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-z46jt" event={"ID":"b27de289-c0f9-47ff-aac6-15b7bc1b178a","Type":"ContainerDied","Data":"7836160a631ad4fabd13fade7e117d0a195ed40a8c1f33bde283fef44ab0f21f"} Feb 16 21:23:15.941613 master-0 kubenswrapper[38936]: I0216 21:23:15.941611 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"e300ec3a145c1339a627607b3c84b99d","Type":"ContainerStarted","Data":"cec55103f622a77ab12fa57f750df0c27ed12429c768750f0232ad3fcd0b846d"} Feb 16 21:23:15.941854 master-0 kubenswrapper[38936]: I0216 21:23:15.941635 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-749f8d8bbd-z9ndp"] Feb 16 21:23:15.941854 master-0 kubenswrapper[38936]: I0216 21:23:15.941674 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Feb 16 21:23:15.941854 master-0 kubenswrapper[38936]: I0216 21:23:15.941689 38936 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="48e014e9-b22b-4fb1-a1eb-c3f7420740ad" Feb 16 21:23:15.941854 master-0 kubenswrapper[38936]: I0216 21:23:15.941739 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-hsz6m" Feb 16 21:23:15.941854 master-0 kubenswrapper[38936]: I0216 21:23:15.941733 38936 scope.go:117] "RemoveContainer" containerID="7e2db6d71a3ac7629c39a027759be84deb42e9801284908e0ecc941bc1381254" Feb 16 21:23:15.941991 master-0 kubenswrapper[38936]: I0216 21:23:15.941759 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 21:23:15.942024 master-0 kubenswrapper[38936]: I0216 21:23:15.941981 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Feb 16 21:23:15.942024 master-0 kubenswrapper[38936]: I0216 21:23:15.942003 38936 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="48e014e9-b22b-4fb1-a1eb-c3f7420740ad" Feb 16 21:23:15.942256 master-0 kubenswrapper[38936]: I0216 21:23:15.942220 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6" Feb 16 21:23:15.942324 master-0 kubenswrapper[38936]: I0216 21:23:15.942277 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:23:15.942324 master-0 kubenswrapper[38936]: I0216 21:23:15.942294 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-hsz6m" Feb 16 21:23:15.942324 master-0 kubenswrapper[38936]: I0216 21:23:15.942308 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:23:15.942429 master-0 kubenswrapper[38936]: I0216 21:23:15.942376 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:15.942429 master-0 kubenswrapper[38936]: I0216 21:23:15.942410 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:15.942429 master-0 kubenswrapper[38936]: I0216 21:23:15.942423 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:23:15.942535 master-0 kubenswrapper[38936]: I0216 21:23:15.942436 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 21:23:15.942535 master-0 kubenswrapper[38936]: I0216 21:23:15.942522 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-69wj8" Feb 16 21:23:15.942613 master-0 kubenswrapper[38936]: I0216 21:23:15.942594 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 21:23:15.942721 master-0 kubenswrapper[38936]: I0216 21:23:15.942693 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-blw8x" Feb 16 21:23:15.942721 master-0 kubenswrapper[38936]: I0216 21:23:15.942714 38936 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" Feb 16 21:23:15.942796 master-0 kubenswrapper[38936]: I0216 
21:23:15.942743 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6" Feb 16 21:23:15.942836 master-0 kubenswrapper[38936]: I0216 21:23:15.942817 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j5kwc" Feb 16 21:23:15.942887 master-0 kubenswrapper[38936]: I0216 21:23:15.942858 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:15.942928 master-0 kubenswrapper[38936]: I0216 21:23:15.942897 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:23:15.942971 master-0 kubenswrapper[38936]: I0216 21:23:15.942937 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:15.942971 master-0 kubenswrapper[38936]: I0216 21:23:15.942966 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 21:23:15.943047 master-0 kubenswrapper[38936]: I0216 21:23:15.942988 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq" Feb 16 21:23:15.943047 master-0 kubenswrapper[38936]: I0216 21:23:15.943008 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-7bbrn" Feb 16 21:23:15.943047 master-0 kubenswrapper[38936]: I0216 21:23:15.943039 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg" Feb 16 21:23:15.943157 master-0 kubenswrapper[38936]: I0216 21:23:15.943057 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-7bbrn" Feb 16 
21:23:15.943157 master-0 kubenswrapper[38936]: I0216 21:23:15.943076 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg"
Feb 16 21:23:15.943157 master-0 kubenswrapper[38936]: I0216 21:23:15.943101 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g"
Feb 16 21:23:15.943157 master-0 kubenswrapper[38936]: I0216 21:23:15.943120 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g"
Feb 16 21:23:15.943157 master-0 kubenswrapper[38936]: I0216 21:23:15.943137 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq"
Feb 16 21:23:15.943157 master-0 kubenswrapper[38936]: I0216 21:23:15.943156 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6"
Feb 16 21:23:15.943379 master-0 kubenswrapper[38936]: I0216 21:23:15.943177 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g"
Feb 16 21:23:15.943379 master-0 kubenswrapper[38936]: I0216 21:23:15.943192 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq"
Feb 16 21:23:15.943379 master-0 kubenswrapper[38936]: I0216 21:23:15.943203 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96"
Feb 16 21:23:15.943379 master-0 kubenswrapper[38936]: I0216 21:23:15.943219 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6"
Feb 16 21:23:15.943379 master-0 kubenswrapper[38936]: I0216 21:23:15.943236 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g"
Feb 16 21:23:15.943379 master-0 kubenswrapper[38936]: I0216 21:23:15.943245 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-68c25"
Feb 16 21:23:15.943379 master-0 kubenswrapper[38936]: I0216 21:23:15.943264 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6"
Feb 16 21:23:15.943379 master-0 kubenswrapper[38936]: I0216 21:23:15.943272 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-864ddd5f56-z4bnk"
Feb 16 21:23:15.943379 master-0 kubenswrapper[38936]: I0216 21:23:15.943293 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-69wj8"
Feb 16 21:23:15.943379 master-0 kubenswrapper[38936]: I0216 21:23:15.943311 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sn2nh"
Feb 16 21:23:15.943379 master-0 kubenswrapper[38936]: I0216 21:23:15.943329 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4"
Feb 16 21:23:15.943379 master-0 kubenswrapper[38936]: I0216 21:23:15.943351 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-blw8x"
Feb 16 21:23:15.943379 master-0 kubenswrapper[38936]: I0216 21:23:15.943369 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j5kwc"
Feb 16 21:23:15.943379 master-0 kubenswrapper[38936]: I0216 21:23:15.943386 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk"
Feb 16 21:23:15.946968 master-0 kubenswrapper[38936]: I0216 21:23:15.946633 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-68c25"
Feb 16 21:23:15.947357 master-0 kubenswrapper[38936]: I0216 21:23:15.947337 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk"
Feb 16 21:23:15.947600 master-0 kubenswrapper[38936]: I0216 21:23:15.947575 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6"
Feb 16 21:23:15.947782 master-0 kubenswrapper[38936]: I0216 21:23:15.947751 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96"
Feb 16 21:23:15.948107 master-0 kubenswrapper[38936]: I0216 21:23:15.948074 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4"
Feb 16 21:23:15.958165 master-0 kubenswrapper[38936]: I0216 21:23:15.957600 38936 scope.go:117] "RemoveContainer" containerID="b6f9bd149e55332060a93dd1c773c869219679c9d52274540dd91f495e731934"
Feb 16 21:23:15.989360 master-0 kubenswrapper[38936]: I0216 21:23:15.986824 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sn2nh"
Feb 16 21:23:16.003999 master-0 kubenswrapper[38936]: I0216 21:23:16.003963 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-blw8x"
Feb 16 21:23:16.005604 master-0 kubenswrapper[38936]: I0216 21:23:16.005571 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j5kwc"
Feb 16 21:23:16.007234 master-0 kubenswrapper[38936]: I0216 21:23:16.007203 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-69wj8"
Feb 16 21:23:16.050060 master-0 kubenswrapper[38936]: I0216 21:23:16.049871 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2"
Feb 16 21:23:16.050259 master-0 kubenswrapper[38936]: I0216 21:23:16.050086 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24"
Feb 16 21:23:16.054908 master-0 kubenswrapper[38936]: I0216 21:23:16.054842 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2"
Feb 16 21:23:16.054908 master-0 kubenswrapper[38936]: I0216 21:23:16.054890 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24"
Feb 16 21:23:16.139911 master-0 kubenswrapper[38936]: I0216 21:23:16.139734 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-fgr2n"
Feb 16 21:23:16.394130 master-0 kubenswrapper[38936]: I0216 21:23:16.393964 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-z4bnk" event={"ID":"c7eddb51-cb37-4dd5-9c24-64cc4ae2e6ee","Type":"ContainerStarted","Data":"bb13197824102e4a72a770828c8c5c2808e598fcadb9ee7d085628446edcd1a5"}
Feb 16 21:23:16.403475 master-0 kubenswrapper[38936]: I0216 21:23:16.403442 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-6ps2d_cef33294-81fb-41a2-811d-2565f94514d1/ingress-operator/6.log"
Feb 16 21:23:16.404487 master-0 kubenswrapper[38936]: I0216 21:23:16.404042 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d" event={"ID":"cef33294-81fb-41a2-811d-2565f94514d1","Type":"ContainerStarted","Data":"cab80f73b549b6de523ba3cfcce2b418d3b6208302937bc95cbb463965b4bfd2"}
Feb 16 21:23:16.410182 master-0 kubenswrapper[38936]: I0216 21:23:16.409220 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-749f8d8bbd-z9ndp" event={"ID":"277c6354-bff9-407b-ad97-5fdfc7f43116","Type":"ContainerStarted","Data":"669819808d05d1720b5e57e0551a256429174552c6a5ca4ec3f8557fb06da794"}
Feb 16 21:23:16.410182 master-0 kubenswrapper[38936]: I0216 21:23:16.409925 38936 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 21:23:16.422450 master-0 kubenswrapper[38936]: I0216 21:23:16.421466 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-z4bnk"
Feb 16 21:23:16.425472 master-0 kubenswrapper[38936]: I0216 21:23:16.425407 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-864ddd5f56-z4bnk"
Feb 16 21:23:16.464439 master-0 kubenswrapper[38936]: I0216 21:23:16.464021 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 16 21:23:16.464439 master-0 kubenswrapper[38936]: E0216 21:23:16.464401 38936 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 16 21:23:16.464439 master-0 kubenswrapper[38936]: E0216 21:23:16.464443 38936 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-retry-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 16 21:23:16.464820 master-0 kubenswrapper[38936]: E0216 21:23:16.464493 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access podName:1f8a26db-5a90-4da9-9074-33256ef17100 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:17.46447366 +0000 UTC m=+27.816477072 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access") pod "installer-1-retry-1-master-0" (UID: "1f8a26db-5a90-4da9-9074-33256ef17100") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 16 21:23:16.584457 master-0 kubenswrapper[38936]: I0216 21:23:16.584208 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 21:23:16.634868 master-0 kubenswrapper[38936]: I0216 21:23:16.634823 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Feb 16 21:23:16.635117 master-0 kubenswrapper[38936]: E0216 21:23:16.635099 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27de289-c0f9-47ff-aac6-15b7bc1b178a" containerName="multus-admission-controller"
Feb 16 21:23:16.635246 master-0 kubenswrapper[38936]: I0216 21:23:16.635118 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27de289-c0f9-47ff-aac6-15b7bc1b178a" containerName="multus-admission-controller"
Feb 16 21:23:16.635246 master-0 kubenswrapper[38936]: E0216 21:23:16.635127 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8a26db-5a90-4da9-9074-33256ef17100" containerName="installer"
Feb 16 21:23:16.635246 master-0 kubenswrapper[38936]: I0216 21:23:16.635134 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8a26db-5a90-4da9-9074-33256ef17100" containerName="installer"
Feb 16 21:23:16.635246 master-0 kubenswrapper[38936]: E0216 21:23:16.635155 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27de289-c0f9-47ff-aac6-15b7bc1b178a" containerName="kube-rbac-proxy"
Feb 16 21:23:16.635246 master-0 kubenswrapper[38936]: I0216 21:23:16.635162 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27de289-c0f9-47ff-aac6-15b7bc1b178a" containerName="kube-rbac-proxy"
Feb 16 21:23:16.635246 master-0 kubenswrapper[38936]: E0216 21:23:16.635172 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e3ccb9a-4a5d-4a04-8334-b1e303b215a5" containerName="kube-multus-additional-cni-plugins"
Feb 16 21:23:16.635246 master-0 kubenswrapper[38936]: I0216 21:23:16.635178 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e3ccb9a-4a5d-4a04-8334-b1e303b215a5" containerName="kube-multus-additional-cni-plugins"
Feb 16 21:23:16.635472 master-0 kubenswrapper[38936]: I0216 21:23:16.635308 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="b27de289-c0f9-47ff-aac6-15b7bc1b178a" containerName="kube-rbac-proxy"
Feb 16 21:23:16.635472 master-0 kubenswrapper[38936]: I0216 21:23:16.635351 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e3ccb9a-4a5d-4a04-8334-b1e303b215a5" containerName="kube-multus-additional-cni-plugins"
Feb 16 21:23:16.635472 master-0 kubenswrapper[38936]: I0216 21:23:16.635367 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="b27de289-c0f9-47ff-aac6-15b7bc1b178a" containerName="multus-admission-controller"
Feb 16 21:23:16.635472 master-0 kubenswrapper[38936]: I0216 21:23:16.635379 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8a26db-5a90-4da9-9074-33256ef17100" containerName="installer"
Feb 16 21:23:16.635810 master-0 kubenswrapper[38936]: I0216 21:23:16.635779 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Feb 16 21:23:16.658561 master-0 kubenswrapper[38936]: I0216 21:23:16.657551 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Feb 16 21:23:16.664144 master-0 kubenswrapper[38936]: I0216 21:23:16.663167 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-sdjl5"
Feb 16 21:23:16.664734 master-0 kubenswrapper[38936]: I0216 21:23:16.664598 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-7777d5cc66-fgr2n"]
Feb 16 21:23:16.683243 master-0 kubenswrapper[38936]: I0216 21:23:16.683191 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Feb 16 21:23:16.686843 master-0 kubenswrapper[38936]: W0216 21:23:16.686799 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0186fdbf_d367_4bc6_816a_bda2816b599e.slice/crio-e01bbdcf57ae3f62623de5153c18877d9766c3648efcdff22c4f6ffce02f0b37 WatchSource:0}: Error finding container e01bbdcf57ae3f62623de5153c18877d9766c3648efcdff22c4f6ffce02f0b37: Status 404 returned error can't find the container with id e01bbdcf57ae3f62623de5153c18877d9766c3648efcdff22c4f6ffce02f0b37
Feb 16 21:23:16.743700 master-0 kubenswrapper[38936]: I0216 21:23:16.742269 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=5.742251096 podStartE2EDuration="5.742251096s" podCreationTimestamp="2026-02-16 21:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:23:16.740403917 +0000 UTC m=+27.092407279" watchObservedRunningTime="2026-02-16 21:23:16.742251096 +0000 UTC m=+27.094254448"
Feb 16 21:23:16.822702 master-0 kubenswrapper[38936]: I0216 21:23:16.804770 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b69c32d-3b8d-44d6-8547-9e682d069266-var-lock\") pod \"installer-5-master-0\" (UID: \"5b69c32d-3b8d-44d6-8547-9e682d069266\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 16 21:23:16.822702 master-0 kubenswrapper[38936]: I0216 21:23:16.804810 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b69c32d-3b8d-44d6-8547-9e682d069266-kube-api-access\") pod \"installer-5-master-0\" (UID: \"5b69c32d-3b8d-44d6-8547-9e682d069266\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 16 21:23:16.822702 master-0 kubenswrapper[38936]: I0216 21:23:16.804864 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b69c32d-3b8d-44d6-8547-9e682d069266-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"5b69c32d-3b8d-44d6-8547-9e682d069266\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 16 21:23:16.850377 master-0 kubenswrapper[38936]: I0216 21:23:16.849222 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-7c64d55f8-z46jt"]
Feb 16 21:23:16.853070 master-0 kubenswrapper[38936]: I0216 21:23:16.852799 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-7c64d55f8-z46jt"]
Feb 16 21:23:16.906244 master-0 kubenswrapper[38936]: I0216 21:23:16.906115 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b69c32d-3b8d-44d6-8547-9e682d069266-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"5b69c32d-3b8d-44d6-8547-9e682d069266\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 16 21:23:16.906244 master-0 kubenswrapper[38936]: I0216 21:23:16.906213 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b69c32d-3b8d-44d6-8547-9e682d069266-var-lock\") pod \"installer-5-master-0\" (UID: \"5b69c32d-3b8d-44d6-8547-9e682d069266\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 16 21:23:16.906474 master-0 kubenswrapper[38936]: I0216 21:23:16.906285 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b69c32d-3b8d-44d6-8547-9e682d069266-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"5b69c32d-3b8d-44d6-8547-9e682d069266\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 16 21:23:16.906474 master-0 kubenswrapper[38936]: I0216 21:23:16.906346 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b69c32d-3b8d-44d6-8547-9e682d069266-kube-api-access\") pod \"installer-5-master-0\" (UID: \"5b69c32d-3b8d-44d6-8547-9e682d069266\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 16 21:23:16.906474 master-0 kubenswrapper[38936]: I0216 21:23:16.906456 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b69c32d-3b8d-44d6-8547-9e682d069266-var-lock\") pod \"installer-5-master-0\" (UID: \"5b69c32d-3b8d-44d6-8547-9e682d069266\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 16 21:23:16.945736 master-0 kubenswrapper[38936]: I0216 21:23:16.945156 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b69c32d-3b8d-44d6-8547-9e682d069266-kube-api-access\") pod \"installer-5-master-0\" (UID: \"5b69c32d-3b8d-44d6-8547-9e682d069266\") " pod="openshift-kube-scheduler/installer-5-master-0"
Feb 16 21:23:16.968519 master-0 kubenswrapper[38936]: I0216 21:23:16.968467 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Feb 16 21:23:17.418940 master-0 kubenswrapper[38936]: I0216 21:23:17.418882 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-7777d5cc66-fgr2n" event={"ID":"0186fdbf-d367-4bc6-816a-bda2816b599e","Type":"ContainerStarted","Data":"e01bbdcf57ae3f62623de5153c18877d9766c3648efcdff22c4f6ffce02f0b37"}
Feb 16 21:23:17.419952 master-0 kubenswrapper[38936]: I0216 21:23:17.419912 38936 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 21:23:17.422356 master-0 kubenswrapper[38936]: I0216 21:23:17.422148 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-864ddd5f56-z4bnk"
Feb 16 21:23:17.423134 master-0 kubenswrapper[38936]: I0216 21:23:17.423107 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 21:23:17.428402 master-0 kubenswrapper[38936]: I0216 21:23:17.428105 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-864ddd5f56-z4bnk"
Feb 16 21:23:17.523129 master-0 kubenswrapper[38936]: I0216 21:23:17.522084 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 16 21:23:17.523129 master-0 kubenswrapper[38936]: E0216 21:23:17.522934 38936 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 16 21:23:17.523129 master-0 kubenswrapper[38936]: E0216 21:23:17.522955 38936 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-retry-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 16 21:23:17.523129 master-0 kubenswrapper[38936]: E0216 21:23:17.523000 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access podName:1f8a26db-5a90-4da9-9074-33256ef17100 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:19.522984467 +0000 UTC m=+29.874987899 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access") pod "installer-1-retry-1-master-0" (UID: "1f8a26db-5a90-4da9-9074-33256ef17100") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 16 21:23:17.708286 master-0 kubenswrapper[38936]: I0216 21:23:17.708187 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-k8h7h"]
Feb 16 21:23:17.709419 master-0 kubenswrapper[38936]: I0216 21:23:17.709376 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-k8h7h"]
Feb 16 21:23:17.885446 master-0 kubenswrapper[38936]: I0216 21:23:17.885393 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e3ccb9a-4a5d-4a04-8334-b1e303b215a5" path="/var/lib/kubelet/pods/3e3ccb9a-4a5d-4a04-8334-b1e303b215a5/volumes"
Feb 16 21:23:17.886410 master-0 kubenswrapper[38936]: I0216 21:23:17.886364 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b27de289-c0f9-47ff-aac6-15b7bc1b178a" path="/var/lib/kubelet/pods/b27de289-c0f9-47ff-aac6-15b7bc1b178a/volumes"
Feb 16 21:23:17.913134 master-0 kubenswrapper[38936]: I0216 21:23:17.912756 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=36.912716096 podStartE2EDuration="36.912716096s" podCreationTimestamp="2026-02-16 21:22:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:23:17.909207903 +0000 UTC m=+28.261211265" watchObservedRunningTime="2026-02-16 21:23:17.912716096 +0000 UTC m=+28.264719468"
Feb 16 21:23:17.944027 master-0 kubenswrapper[38936]: I0216 21:23:17.943881 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Feb 16 21:23:17.953053 master-0 kubenswrapper[38936]: W0216 21:23:17.953020 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5b69c32d_3b8d_44d6_8547_9e682d069266.slice/crio-411a960f6aeb9c455e10a191147d5299c945d4a1bf89b19258cad3d8ada4d280 WatchSource:0}: Error finding container 411a960f6aeb9c455e10a191147d5299c945d4a1bf89b19258cad3d8ada4d280: Status 404 returned error can't find the container with id 411a960f6aeb9c455e10a191147d5299c945d4a1bf89b19258cad3d8ada4d280
Feb 16 21:23:18.440705 master-0 kubenswrapper[38936]: I0216 21:23:18.440438 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-749f8d8bbd-z9ndp" event={"ID":"277c6354-bff9-407b-ad97-5fdfc7f43116","Type":"ContainerStarted","Data":"a85dfa1e72b24b39560aad91c1465eb5e2ff138116e117ad454a9bb26ca0c043"}
Feb 16 21:23:18.442865 master-0 kubenswrapper[38936]: I0216 21:23:18.442760 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-749f8d8bbd-z9ndp"
Feb 16 21:23:18.445681 master-0 kubenswrapper[38936]: I0216 21:23:18.445627 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"5b69c32d-3b8d-44d6-8547-9e682d069266","Type":"ContainerStarted","Data":"78d192bb958fe17b5046a85c27f8d4b6856a2f491dd66e1f66156a74c8c8a8c3"}
Feb 16 21:23:18.445681 master-0 kubenswrapper[38936]: I0216 21:23:18.445682 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"5b69c32d-3b8d-44d6-8547-9e682d069266","Type":"ContainerStarted","Data":"411a960f6aeb9c455e10a191147d5299c945d4a1bf89b19258cad3d8ada4d280"}
Feb 16 21:23:18.453236 master-0 kubenswrapper[38936]: I0216 21:23:18.453084 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-749f8d8bbd-z9ndp"
Feb 16 21:23:19.010318 master-0 kubenswrapper[38936]: I0216 21:23:19.009629 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-749f8d8bbd-z9ndp" podStartSLOduration=15.21747189 podStartE2EDuration="17.009604599s" podCreationTimestamp="2026-02-16 21:23:02 +0000 UTC" firstStartedPulling="2026-02-16 21:23:15.786841595 +0000 UTC m=+26.138844967" lastFinishedPulling="2026-02-16 21:23:17.578974314 +0000 UTC m=+27.930977676" observedRunningTime="2026-02-16 21:23:19.00662255 +0000 UTC m=+29.358625932" watchObservedRunningTime="2026-02-16 21:23:19.009604599 +0000 UTC m=+29.361607961"
Feb 16 21:23:19.075026 master-0 kubenswrapper[38936]: I0216 21:23:19.074948 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-master-0" podStartSLOduration=3.074930755 podStartE2EDuration="3.074930755s" podCreationTimestamp="2026-02-16 21:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:23:19.030895328 +0000 UTC m=+29.382898690" watchObservedRunningTime="2026-02-16 21:23:19.074930755 +0000 UTC m=+29.426934117"
Feb 16 21:23:19.531695 master-0 kubenswrapper[38936]: I0216 21:23:19.531630 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6"
Feb 16 21:23:19.560865 master-0 kubenswrapper[38936]: I0216 21:23:19.560756 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 16 21:23:19.561130 master-0 kubenswrapper[38936]: E0216 21:23:19.561082 38936 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 16 21:23:19.561287 master-0 kubenswrapper[38936]: E0216 21:23:19.561256 38936 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-retry-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 16 21:23:19.561344 master-0 kubenswrapper[38936]: E0216 21:23:19.561319 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access podName:1f8a26db-5a90-4da9-9074-33256ef17100 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:23.561301278 +0000 UTC m=+33.913304640 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access") pod "installer-1-retry-1-master-0" (UID: "1f8a26db-5a90-4da9-9074-33256ef17100") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 16 21:23:20.127222 master-0 kubenswrapper[38936]: I0216 21:23:20.127167 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-6bdb76b9b7-z46x6"
Feb 16 21:23:21.469161 master-0 kubenswrapper[38936]: I0216 21:23:21.469085 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-7777d5cc66-fgr2n" event={"ID":"0186fdbf-d367-4bc6-816a-bda2816b599e","Type":"ContainerStarted","Data":"e130ab6b852569cae3636d84dab1124769d352b4c8d5ba0fcaaf999186746785"}
Feb 16 21:23:21.469705 master-0 kubenswrapper[38936]: I0216 21:23:21.469295 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-7777d5cc66-fgr2n"
Feb 16 21:23:21.488391 master-0 kubenswrapper[38936]: I0216 21:23:21.488318 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-7777d5cc66-fgr2n" podStartSLOduration=18.464271337 podStartE2EDuration="22.488301752s" podCreationTimestamp="2026-02-16 21:22:59 +0000 UTC" firstStartedPulling="2026-02-16 21:23:16.69042315 +0000 UTC m=+27.042426512" lastFinishedPulling="2026-02-16 21:23:20.714453565 +0000 UTC m=+31.066456927" observedRunningTime="2026-02-16 21:23:21.487307555 +0000 UTC m=+31.839310917" watchObservedRunningTime="2026-02-16 21:23:21.488301752 +0000 UTC m=+31.840305114"
Feb 16 21:23:21.564756 master-0 kubenswrapper[38936]: I0216 21:23:21.564709 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Feb 16 21:23:21.567563 master-0 kubenswrapper[38936]: I0216 21:23:21.567523 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-7777d5cc66-fgr2n"
Feb 16 21:23:21.570471 master-0 kubenswrapper[38936]: I0216 21:23:21.570416 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 21:23:21.576641 master-0 kubenswrapper[38936]: I0216 21:23:21.576606 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Feb 16 21:23:21.602250 master-0 kubenswrapper[38936]: I0216 21:23:21.602177 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 21:23:21.603519 master-0 kubenswrapper[38936]: I0216 21:23:21.603486 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 21:23:21.621199 master-0 kubenswrapper[38936]: I0216 21:23:21.621146 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 21:23:21.796752 master-0 kubenswrapper[38936]: I0216 21:23:21.796709 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-dcd7b7d95-xzx78"]
Feb 16 21:23:21.797799 master-0 kubenswrapper[38936]: I0216 21:23:21.797782 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-xzx78"
Feb 16 21:23:21.802395 master-0 kubenswrapper[38936]: I0216 21:23:21.802348 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 16 21:23:21.802642 master-0 kubenswrapper[38936]: I0216 21:23:21.802617 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 16 21:23:21.802851 master-0 kubenswrapper[38936]: I0216 21:23:21.802827 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-zhm6n"
Feb 16 21:23:21.818301 master-0 kubenswrapper[38936]: I0216 21:23:21.818246 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-dcd7b7d95-xzx78"]
Feb 16 21:23:21.820551 master-0 kubenswrapper[38936]: I0216 21:23:21.820482 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjw5r\" (UniqueName: \"kubernetes.io/projected/42c29d0d-12cf-4737-83ed-1dcfe74b2b26-kube-api-access-cjw5r\") pod \"downloads-dcd7b7d95-xzx78\" (UID: \"42c29d0d-12cf-4737-83ed-1dcfe74b2b26\") " pod="openshift-console/downloads-dcd7b7d95-xzx78"
Feb 16 21:23:21.921599 master-0 kubenswrapper[38936]: I0216 21:23:21.921519 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjw5r\" (UniqueName: \"kubernetes.io/projected/42c29d0d-12cf-4737-83ed-1dcfe74b2b26-kube-api-access-cjw5r\") pod \"downloads-dcd7b7d95-xzx78\" (UID: \"42c29d0d-12cf-4737-83ed-1dcfe74b2b26\") " pod="openshift-console/downloads-dcd7b7d95-xzx78"
Feb 16 21:23:21.937357 master-0 kubenswrapper[38936]: I0216 21:23:21.937317 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjw5r\" (UniqueName: \"kubernetes.io/projected/42c29d0d-12cf-4737-83ed-1dcfe74b2b26-kube-api-access-cjw5r\") pod \"downloads-dcd7b7d95-xzx78\" (UID: \"42c29d0d-12cf-4737-83ed-1dcfe74b2b26\") " pod="openshift-console/downloads-dcd7b7d95-xzx78"
Feb 16 21:23:22.115896 master-0 kubenswrapper[38936]: I0216 21:23:22.115776 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-xzx78"
Feb 16 21:23:22.488792 master-0 kubenswrapper[38936]: I0216 21:23:22.487005 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 21:23:22.611619 master-0 kubenswrapper[38936]: I0216 21:23:22.611557 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-dcd7b7d95-xzx78"]
Feb 16 21:23:22.620197 master-0 kubenswrapper[38936]: W0216 21:23:22.620150 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42c29d0d_12cf_4737_83ed_1dcfe74b2b26.slice/crio-15efd7b5e6da5b7d13fd7550028d463cd6c37f84e3ffdc6c072aa5be8c627570 WatchSource:0}: Error finding container 15efd7b5e6da5b7d13fd7550028d463cd6c37f84e3ffdc6c072aa5be8c627570: Status 404 returned error can't find the container with id 15efd7b5e6da5b7d13fd7550028d463cd6c37f84e3ffdc6c072aa5be8c627570
Feb 16 21:23:23.483580 master-0 kubenswrapper[38936]: I0216 21:23:23.483514 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-dcd7b7d95-xzx78" event={"ID":"42c29d0d-12cf-4737-83ed-1dcfe74b2b26","Type":"ContainerStarted","Data":"15efd7b5e6da5b7d13fd7550028d463cd6c37f84e3ffdc6c072aa5be8c627570"}
Feb 16 21:23:23.648025 master-0 kubenswrapper[38936]: I0216 21:23:23.647881 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 16 21:23:23.648537 master-0 kubenswrapper[38936]: E0216 21:23:23.648118 38936 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 16 21:23:23.648537 master-0 kubenswrapper[38936]: E0216 21:23:23.648155 38936 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-retry-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 16 21:23:23.648537 master-0 kubenswrapper[38936]: E0216 21:23:23.648211 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access podName:1f8a26db-5a90-4da9-9074-33256ef17100 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:31.648189873 +0000 UTC m=+42.000193235 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access") pod "installer-1-retry-1-master-0" (UID: "1f8a26db-5a90-4da9-9074-33256ef17100") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 16 21:23:25.147736 master-0 kubenswrapper[38936]: I0216 21:23:25.147658 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"]
Feb 16 21:23:25.148750 master-0 kubenswrapper[38936]: I0216 21:23:25.148715 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Feb 16 21:23:25.158785 master-0 kubenswrapper[38936]: I0216 21:23:25.153876 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-czn7h"
Feb 16 21:23:25.158785 master-0 kubenswrapper[38936]: I0216 21:23:25.155320 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 16 21:23:25.158785 master-0 kubenswrapper[38936]: I0216 21:23:25.156776 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"]
Feb 16 21:23:25.270010 master-0 kubenswrapper[38936]: I0216 21:23:25.269947 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff084640-8e23-45e8-9d0b-6aa3b030c51f-var-lock\") pod \"installer-3-master-0\" (UID: \"ff084640-8e23-45e8-9d0b-6aa3b030c51f\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Feb 16 21:23:25.270215 master-0 kubenswrapper[38936]: I0216 21:23:25.270043 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff084640-8e23-45e8-9d0b-6aa3b030c51f-kube-api-access\") pod \"installer-3-master-0\" (UID: \"ff084640-8e23-45e8-9d0b-6aa3b030c51f\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Feb 16 21:23:25.270215 master-0 kubenswrapper[38936]: I0216 21:23:25.270122 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff084640-8e23-45e8-9d0b-6aa3b030c51f-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"ff084640-8e23-45e8-9d0b-6aa3b030c51f\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Feb 16 21:23:25.390318 master-0
kubenswrapper[38936]: I0216 21:23:25.390256 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff084640-8e23-45e8-9d0b-6aa3b030c51f-kube-api-access\") pod \"installer-3-master-0\" (UID: \"ff084640-8e23-45e8-9d0b-6aa3b030c51f\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 21:23:25.390318 master-0 kubenswrapper[38936]: I0216 21:23:25.390328 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff084640-8e23-45e8-9d0b-6aa3b030c51f-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"ff084640-8e23-45e8-9d0b-6aa3b030c51f\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 21:23:25.390556 master-0 kubenswrapper[38936]: I0216 21:23:25.390389 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff084640-8e23-45e8-9d0b-6aa3b030c51f-var-lock\") pod \"installer-3-master-0\" (UID: \"ff084640-8e23-45e8-9d0b-6aa3b030c51f\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 21:23:25.390556 master-0 kubenswrapper[38936]: I0216 21:23:25.390457 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff084640-8e23-45e8-9d0b-6aa3b030c51f-var-lock\") pod \"installer-3-master-0\" (UID: \"ff084640-8e23-45e8-9d0b-6aa3b030c51f\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 21:23:25.390617 master-0 kubenswrapper[38936]: I0216 21:23:25.390536 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff084640-8e23-45e8-9d0b-6aa3b030c51f-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"ff084640-8e23-45e8-9d0b-6aa3b030c51f\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 21:23:25.408624 
master-0 kubenswrapper[38936]: I0216 21:23:25.408403 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff084640-8e23-45e8-9d0b-6aa3b030c51f-kube-api-access\") pod \"installer-3-master-0\" (UID: \"ff084640-8e23-45e8-9d0b-6aa3b030c51f\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 21:23:25.474753 master-0 kubenswrapper[38936]: I0216 21:23:25.474676 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 21:23:25.476745 master-0 kubenswrapper[38936]: I0216 21:23:25.476680 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-69wj8" Feb 16 21:23:25.796692 master-0 kubenswrapper[38936]: I0216 21:23:25.796627 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j5kwc" Feb 16 21:23:25.798255 master-0 kubenswrapper[38936]: I0216 21:23:25.797885 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sn2nh" Feb 16 21:23:25.805628 master-0 kubenswrapper[38936]: I0216 21:23:25.802767 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-blw8x" Feb 16 21:23:25.968192 master-0 kubenswrapper[38936]: I0216 21:23:25.968140 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 16 21:23:25.974789 master-0 kubenswrapper[38936]: W0216 21:23:25.974756 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podff084640_8e23_45e8_9d0b_6aa3b030c51f.slice/crio-03eb6bd070b6a14b651c7be05b265195ef87827bc61bc2b2e1baa12783467bea WatchSource:0}: Error finding container 03eb6bd070b6a14b651c7be05b265195ef87827bc61bc2b2e1baa12783467bea: 
Status 404 returned error can't find the container with id 03eb6bd070b6a14b651c7be05b265195ef87827bc61bc2b2e1baa12783467bea Feb 16 21:23:26.509475 master-0 kubenswrapper[38936]: I0216 21:23:26.509417 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"ff084640-8e23-45e8-9d0b-6aa3b030c51f","Type":"ContainerStarted","Data":"df15826c9f58eefd45a84d42353553de2b39c005890ff3061c9c0aea9f1e2f96"} Feb 16 21:23:26.509475 master-0 kubenswrapper[38936]: I0216 21:23:26.509465 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"ff084640-8e23-45e8-9d0b-6aa3b030c51f","Type":"ContainerStarted","Data":"03eb6bd070b6a14b651c7be05b265195ef87827bc61bc2b2e1baa12783467bea"} Feb 16 21:23:28.699048 master-0 kubenswrapper[38936]: I0216 21:23:28.698774 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=3.69875761 podStartE2EDuration="3.69875761s" podCreationTimestamp="2026-02-16 21:23:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:23:26.53441653 +0000 UTC m=+36.886419892" watchObservedRunningTime="2026-02-16 21:23:28.69875761 +0000 UTC m=+39.050760972" Feb 16 21:23:28.700739 master-0 kubenswrapper[38936]: I0216 21:23:28.700639 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-84f5b46974-6pcrm"] Feb 16 21:23:28.701720 master-0 kubenswrapper[38936]: I0216 21:23:28.701693 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84f5b46974-6pcrm" Feb 16 21:23:28.705912 master-0 kubenswrapper[38936]: I0216 21:23:28.704004 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-qjg6f" Feb 16 21:23:28.705912 master-0 kubenswrapper[38936]: I0216 21:23:28.704210 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 21:23:28.705912 master-0 kubenswrapper[38936]: I0216 21:23:28.704223 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 21:23:28.705912 master-0 kubenswrapper[38936]: I0216 21:23:28.704325 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 21:23:28.708150 master-0 kubenswrapper[38936]: I0216 21:23:28.708110 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 21:23:28.708341 master-0 kubenswrapper[38936]: I0216 21:23:28.708314 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 21:23:28.727874 master-0 kubenswrapper[38936]: I0216 21:23:28.727645 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-84f5b46974-6pcrm"] Feb 16 21:23:28.743130 master-0 kubenswrapper[38936]: I0216 21:23:28.743071 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6f9ac325-9b82-4250-922d-40265fff9322-console-config\") pod \"console-84f5b46974-6pcrm\" (UID: \"6f9ac325-9b82-4250-922d-40265fff9322\") " pod="openshift-console/console-84f5b46974-6pcrm" Feb 16 21:23:28.743336 master-0 kubenswrapper[38936]: I0216 21:23:28.743245 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/6f9ac325-9b82-4250-922d-40265fff9322-service-ca\") pod \"console-84f5b46974-6pcrm\" (UID: \"6f9ac325-9b82-4250-922d-40265fff9322\") " pod="openshift-console/console-84f5b46974-6pcrm" Feb 16 21:23:28.743389 master-0 kubenswrapper[38936]: I0216 21:23:28.743331 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6f9ac325-9b82-4250-922d-40265fff9322-oauth-serving-cert\") pod \"console-84f5b46974-6pcrm\" (UID: \"6f9ac325-9b82-4250-922d-40265fff9322\") " pod="openshift-console/console-84f5b46974-6pcrm" Feb 16 21:23:28.743519 master-0 kubenswrapper[38936]: I0216 21:23:28.743444 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6f9ac325-9b82-4250-922d-40265fff9322-console-serving-cert\") pod \"console-84f5b46974-6pcrm\" (UID: \"6f9ac325-9b82-4250-922d-40265fff9322\") " pod="openshift-console/console-84f5b46974-6pcrm" Feb 16 21:23:28.743519 master-0 kubenswrapper[38936]: I0216 21:23:28.743503 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqjlq\" (UniqueName: \"kubernetes.io/projected/6f9ac325-9b82-4250-922d-40265fff9322-kube-api-access-gqjlq\") pod \"console-84f5b46974-6pcrm\" (UID: \"6f9ac325-9b82-4250-922d-40265fff9322\") " pod="openshift-console/console-84f5b46974-6pcrm" Feb 16 21:23:28.743624 master-0 kubenswrapper[38936]: I0216 21:23:28.743593 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6f9ac325-9b82-4250-922d-40265fff9322-console-oauth-config\") pod \"console-84f5b46974-6pcrm\" (UID: \"6f9ac325-9b82-4250-922d-40265fff9322\") " pod="openshift-console/console-84f5b46974-6pcrm" Feb 16 21:23:28.845125 master-0 kubenswrapper[38936]: I0216 
21:23:28.845065 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6f9ac325-9b82-4250-922d-40265fff9322-console-serving-cert\") pod \"console-84f5b46974-6pcrm\" (UID: \"6f9ac325-9b82-4250-922d-40265fff9322\") " pod="openshift-console/console-84f5b46974-6pcrm" Feb 16 21:23:28.845408 master-0 kubenswrapper[38936]: I0216 21:23:28.845140 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqjlq\" (UniqueName: \"kubernetes.io/projected/6f9ac325-9b82-4250-922d-40265fff9322-kube-api-access-gqjlq\") pod \"console-84f5b46974-6pcrm\" (UID: \"6f9ac325-9b82-4250-922d-40265fff9322\") " pod="openshift-console/console-84f5b46974-6pcrm" Feb 16 21:23:28.845408 master-0 kubenswrapper[38936]: I0216 21:23:28.845172 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6f9ac325-9b82-4250-922d-40265fff9322-console-oauth-config\") pod \"console-84f5b46974-6pcrm\" (UID: \"6f9ac325-9b82-4250-922d-40265fff9322\") " pod="openshift-console/console-84f5b46974-6pcrm" Feb 16 21:23:28.845408 master-0 kubenswrapper[38936]: I0216 21:23:28.845222 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6f9ac325-9b82-4250-922d-40265fff9322-console-config\") pod \"console-84f5b46974-6pcrm\" (UID: \"6f9ac325-9b82-4250-922d-40265fff9322\") " pod="openshift-console/console-84f5b46974-6pcrm" Feb 16 21:23:28.845408 master-0 kubenswrapper[38936]: I0216 21:23:28.845246 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6f9ac325-9b82-4250-922d-40265fff9322-service-ca\") pod \"console-84f5b46974-6pcrm\" (UID: \"6f9ac325-9b82-4250-922d-40265fff9322\") " pod="openshift-console/console-84f5b46974-6pcrm" Feb 16 
21:23:28.845408 master-0 kubenswrapper[38936]: I0216 21:23:28.845270 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6f9ac325-9b82-4250-922d-40265fff9322-oauth-serving-cert\") pod \"console-84f5b46974-6pcrm\" (UID: \"6f9ac325-9b82-4250-922d-40265fff9322\") " pod="openshift-console/console-84f5b46974-6pcrm" Feb 16 21:23:28.846234 master-0 kubenswrapper[38936]: I0216 21:23:28.846132 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6f9ac325-9b82-4250-922d-40265fff9322-oauth-serving-cert\") pod \"console-84f5b46974-6pcrm\" (UID: \"6f9ac325-9b82-4250-922d-40265fff9322\") " pod="openshift-console/console-84f5b46974-6pcrm" Feb 16 21:23:28.847055 master-0 kubenswrapper[38936]: I0216 21:23:28.846985 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6f9ac325-9b82-4250-922d-40265fff9322-service-ca\") pod \"console-84f5b46974-6pcrm\" (UID: \"6f9ac325-9b82-4250-922d-40265fff9322\") " pod="openshift-console/console-84f5b46974-6pcrm" Feb 16 21:23:28.847193 master-0 kubenswrapper[38936]: I0216 21:23:28.847159 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6f9ac325-9b82-4250-922d-40265fff9322-console-config\") pod \"console-84f5b46974-6pcrm\" (UID: \"6f9ac325-9b82-4250-922d-40265fff9322\") " pod="openshift-console/console-84f5b46974-6pcrm" Feb 16 21:23:28.850434 master-0 kubenswrapper[38936]: I0216 21:23:28.849313 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6f9ac325-9b82-4250-922d-40265fff9322-console-oauth-config\") pod \"console-84f5b46974-6pcrm\" (UID: \"6f9ac325-9b82-4250-922d-40265fff9322\") " pod="openshift-console/console-84f5b46974-6pcrm" Feb 16 
21:23:28.850434 master-0 kubenswrapper[38936]: I0216 21:23:28.849482 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6f9ac325-9b82-4250-922d-40265fff9322-console-serving-cert\") pod \"console-84f5b46974-6pcrm\" (UID: \"6f9ac325-9b82-4250-922d-40265fff9322\") " pod="openshift-console/console-84f5b46974-6pcrm" Feb 16 21:23:28.864251 master-0 kubenswrapper[38936]: I0216 21:23:28.864154 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqjlq\" (UniqueName: \"kubernetes.io/projected/6f9ac325-9b82-4250-922d-40265fff9322-kube-api-access-gqjlq\") pod \"console-84f5b46974-6pcrm\" (UID: \"6f9ac325-9b82-4250-922d-40265fff9322\") " pod="openshift-console/console-84f5b46974-6pcrm" Feb 16 21:23:29.030295 master-0 kubenswrapper[38936]: I0216 21:23:29.030206 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84f5b46974-6pcrm" Feb 16 21:23:29.508906 master-0 kubenswrapper[38936]: I0216 21:23:29.508850 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-84f5b46974-6pcrm"] Feb 16 21:23:29.514738 master-0 kubenswrapper[38936]: W0216 21:23:29.514685 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f9ac325_9b82_4250_922d_40265fff9322.slice/crio-cbe6879bbdea1991c514284b01ce06f4c9ad1b0668ed16f4d2ef44c0793724d2 WatchSource:0}: Error finding container cbe6879bbdea1991c514284b01ce06f4c9ad1b0668ed16f4d2ef44c0793724d2: Status 404 returned error can't find the container with id cbe6879bbdea1991c514284b01ce06f4c9ad1b0668ed16f4d2ef44c0793724d2 Feb 16 21:23:29.538920 master-0 kubenswrapper[38936]: I0216 21:23:29.538860 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84f5b46974-6pcrm" 
event={"ID":"6f9ac325-9b82-4250-922d-40265fff9322","Type":"ContainerStarted","Data":"cbe6879bbdea1991c514284b01ce06f4c9ad1b0668ed16f4d2ef44c0793724d2"} Feb 16 21:23:29.869261 master-0 kubenswrapper[38936]: I0216 21:23:29.869139 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:29.869766 master-0 kubenswrapper[38936]: I0216 21:23:29.869335 38936 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 21:23:29.898502 master-0 kubenswrapper[38936]: I0216 21:23:29.898446 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z8h4n" Feb 16 21:23:31.692126 master-0 kubenswrapper[38936]: I0216 21:23:31.692063 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 21:23:31.692627 master-0 kubenswrapper[38936]: E0216 21:23:31.692243 38936 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 16 21:23:31.692627 master-0 kubenswrapper[38936]: E0216 21:23:31.692266 38936 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-retry-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 16 21:23:31.692627 master-0 kubenswrapper[38936]: E0216 21:23:31.692320 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access podName:1f8a26db-5a90-4da9-9074-33256ef17100 nodeName:}" failed. 
No retries permitted until 2026-02-16 21:23:47.692302616 +0000 UTC m=+58.044305978 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access") pod "installer-1-retry-1-master-0" (UID: "1f8a26db-5a90-4da9-9074-33256ef17100") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 16 21:23:31.991638 master-0 kubenswrapper[38936]: I0216 21:23:31.991478 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7dcddfd95-nldpw"] Feb 16 21:23:31.992769 master-0 kubenswrapper[38936]: I0216 21:23:31.992712 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:23:32.000817 master-0 kubenswrapper[38936]: I0216 21:23:32.000771 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 21:23:32.003572 master-0 kubenswrapper[38936]: I0216 21:23:32.003486 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/503aa866-c355-434a-a39c-fa6072733ea8-console-config\") pod \"console-7dcddfd95-nldpw\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:23:32.003750 master-0 kubenswrapper[38936]: I0216 21:23:32.003691 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/503aa866-c355-434a-a39c-fa6072733ea8-oauth-serving-cert\") pod \"console-7dcddfd95-nldpw\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:23:32.003891 master-0 kubenswrapper[38936]: I0216 21:23:32.003838 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/503aa866-c355-434a-a39c-fa6072733ea8-service-ca\") pod \"console-7dcddfd95-nldpw\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:23:32.003932 master-0 kubenswrapper[38936]: I0216 21:23:32.003899 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/503aa866-c355-434a-a39c-fa6072733ea8-console-oauth-config\") pod \"console-7dcddfd95-nldpw\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:23:32.004096 master-0 kubenswrapper[38936]: I0216 21:23:32.003989 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwx2q\" (UniqueName: \"kubernetes.io/projected/503aa866-c355-434a-a39c-fa6072733ea8-kube-api-access-dwx2q\") pod \"console-7dcddfd95-nldpw\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:23:32.004170 master-0 kubenswrapper[38936]: I0216 21:23:32.004145 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/503aa866-c355-434a-a39c-fa6072733ea8-trusted-ca-bundle\") pod \"console-7dcddfd95-nldpw\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:23:32.004249 master-0 kubenswrapper[38936]: I0216 21:23:32.004231 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/503aa866-c355-434a-a39c-fa6072733ea8-console-serving-cert\") pod \"console-7dcddfd95-nldpw\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:23:32.113945 master-0 
kubenswrapper[38936]: I0216 21:23:32.113759 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/503aa866-c355-434a-a39c-fa6072733ea8-oauth-serving-cert\") pod \"console-7dcddfd95-nldpw\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:23:32.113945 master-0 kubenswrapper[38936]: I0216 21:23:32.113837 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/503aa866-c355-434a-a39c-fa6072733ea8-service-ca\") pod \"console-7dcddfd95-nldpw\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:23:32.113945 master-0 kubenswrapper[38936]: I0216 21:23:32.113857 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/503aa866-c355-434a-a39c-fa6072733ea8-console-oauth-config\") pod \"console-7dcddfd95-nldpw\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:23:32.113945 master-0 kubenswrapper[38936]: I0216 21:23:32.113874 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwx2q\" (UniqueName: \"kubernetes.io/projected/503aa866-c355-434a-a39c-fa6072733ea8-kube-api-access-dwx2q\") pod \"console-7dcddfd95-nldpw\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:23:32.113945 master-0 kubenswrapper[38936]: I0216 21:23:32.113902 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/503aa866-c355-434a-a39c-fa6072733ea8-trusted-ca-bundle\") pod \"console-7dcddfd95-nldpw\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " 
pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:23:32.113945 master-0 kubenswrapper[38936]: I0216 21:23:32.113943 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/503aa866-c355-434a-a39c-fa6072733ea8-console-serving-cert\") pod \"console-7dcddfd95-nldpw\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:23:32.114400 master-0 kubenswrapper[38936]: I0216 21:23:32.114008 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/503aa866-c355-434a-a39c-fa6072733ea8-console-config\") pod \"console-7dcddfd95-nldpw\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:23:32.115037 master-0 kubenswrapper[38936]: I0216 21:23:32.115016 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/503aa866-c355-434a-a39c-fa6072733ea8-console-config\") pod \"console-7dcddfd95-nldpw\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:23:32.115714 master-0 kubenswrapper[38936]: I0216 21:23:32.115671 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7dcddfd95-nldpw"] Feb 16 21:23:32.116273 master-0 kubenswrapper[38936]: I0216 21:23:32.116231 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/503aa866-c355-434a-a39c-fa6072733ea8-oauth-serving-cert\") pod \"console-7dcddfd95-nldpw\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:23:32.116760 master-0 kubenswrapper[38936]: I0216 21:23:32.116739 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/503aa866-c355-434a-a39c-fa6072733ea8-service-ca\") pod \"console-7dcddfd95-nldpw\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:23:32.128147 master-0 kubenswrapper[38936]: I0216 21:23:32.127879 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/503aa866-c355-434a-a39c-fa6072733ea8-console-serving-cert\") pod \"console-7dcddfd95-nldpw\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:23:32.131085 master-0 kubenswrapper[38936]: I0216 21:23:32.131051 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/503aa866-c355-434a-a39c-fa6072733ea8-console-oauth-config\") pod \"console-7dcddfd95-nldpw\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:23:32.138813 master-0 kubenswrapper[38936]: I0216 21:23:32.138624 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/503aa866-c355-434a-a39c-fa6072733ea8-trusted-ca-bundle\") pod \"console-7dcddfd95-nldpw\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:23:32.169246 master-0 kubenswrapper[38936]: I0216 21:23:32.168959 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwx2q\" (UniqueName: \"kubernetes.io/projected/503aa866-c355-434a-a39c-fa6072733ea8-kube-api-access-dwx2q\") pod \"console-7dcddfd95-nldpw\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:23:32.319700 master-0 kubenswrapper[38936]: I0216 21:23:32.319476 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:23:32.910962 master-0 kubenswrapper[38936]: I0216 21:23:32.910096 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7dcddfd95-nldpw"] Feb 16 21:23:33.010434 master-0 kubenswrapper[38936]: I0216 21:23:33.010371 38936 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 16 21:23:33.010774 master-0 kubenswrapper[38936]: I0216 21:23:33.010718 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="5b26dae9694224e04f0cdc3841408c63" containerName="startup-monitor" containerID="cri-o://1a635028f55042697d014855fe31fff8d153cd9f1c72d44b806de44a3d1bef89" gracePeriod=5 Feb 16 21:23:35.780528 master-0 kubenswrapper[38936]: I0216 21:23:35.780465 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:23:36.150831 master-0 kubenswrapper[38936]: W0216 21:23:36.150520 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod503aa866_c355_434a_a39c_fa6072733ea8.slice/crio-af571952b96712d423a80c1e7802ea002bb54b6b874b81c738e367b6a15e642a WatchSource:0}: Error finding container af571952b96712d423a80c1e7802ea002bb54b6b874b81c738e367b6a15e642a: Status 404 returned error can't find the container with id af571952b96712d423a80c1e7802ea002bb54b6b874b81c738e367b6a15e642a Feb 16 21:23:36.618319 master-0 kubenswrapper[38936]: I0216 21:23:36.618238 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7dcddfd95-nldpw" event={"ID":"503aa866-c355-434a-a39c-fa6072733ea8","Type":"ContainerStarted","Data":"af571952b96712d423a80c1e7802ea002bb54b6b874b81c738e367b6a15e642a"} Feb 16 21:23:36.620312 master-0 
kubenswrapper[38936]: I0216 21:23:36.620274 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84f5b46974-6pcrm" event={"ID":"6f9ac325-9b82-4250-922d-40265fff9322","Type":"ContainerStarted","Data":"c3706c2c027ec630a5b3a0e913cc73b74286e77fe9eaa2bd99b0f9ba98dd9a19"} Feb 16 21:23:36.641268 master-0 kubenswrapper[38936]: I0216 21:23:36.640422 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-84f5b46974-6pcrm" podStartSLOduration=1.961160257 podStartE2EDuration="8.640396853s" podCreationTimestamp="2026-02-16 21:23:28 +0000 UTC" firstStartedPulling="2026-02-16 21:23:29.516801838 +0000 UTC m=+39.868805200" lastFinishedPulling="2026-02-16 21:23:36.196038424 +0000 UTC m=+46.548041796" observedRunningTime="2026-02-16 21:23:36.639572701 +0000 UTC m=+46.991576063" watchObservedRunningTime="2026-02-16 21:23:36.640396853 +0000 UTC m=+46.992400215" Feb 16 21:23:37.630262 master-0 kubenswrapper[38936]: I0216 21:23:37.630160 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7dcddfd95-nldpw" event={"ID":"503aa866-c355-434a-a39c-fa6072733ea8","Type":"ContainerStarted","Data":"af18d2993ae3387589e2da61f5c3ac7d0eac8cab034fa7f17941a3d802dd5feb"} Feb 16 21:23:37.656030 master-0 kubenswrapper[38936]: I0216 21:23:37.654568 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7dcddfd95-nldpw" podStartSLOduration=6.267363693 podStartE2EDuration="6.654548954s" podCreationTimestamp="2026-02-16 21:23:31 +0000 UTC" firstStartedPulling="2026-02-16 21:23:36.153475646 +0000 UTC m=+46.505479038" lastFinishedPulling="2026-02-16 21:23:36.540660937 +0000 UTC m=+46.892664299" observedRunningTime="2026-02-16 21:23:37.651158904 +0000 UTC m=+48.003162286" watchObservedRunningTime="2026-02-16 21:23:37.654548954 +0000 UTC m=+48.006552316" Feb 16 21:23:38.590332 master-0 kubenswrapper[38936]: I0216 21:23:38.590283 38936 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_5b26dae9694224e04f0cdc3841408c63/startup-monitor/0.log" Feb 16 21:23:38.590558 master-0 kubenswrapper[38936]: I0216 21:23:38.590360 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:23:38.633056 master-0 kubenswrapper[38936]: I0216 21:23:38.632949 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-pod-resource-dir\") pod \"5b26dae9694224e04f0cdc3841408c63\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " Feb 16 21:23:38.633592 master-0 kubenswrapper[38936]: I0216 21:23:38.633076 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-lock\") pod \"5b26dae9694224e04f0cdc3841408c63\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " Feb 16 21:23:38.633592 master-0 kubenswrapper[38936]: I0216 21:23:38.633409 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-log\") pod \"5b26dae9694224e04f0cdc3841408c63\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " Feb 16 21:23:38.633592 master-0 kubenswrapper[38936]: I0216 21:23:38.633463 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-manifests\") pod \"5b26dae9694224e04f0cdc3841408c63\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " Feb 16 21:23:38.635064 master-0 kubenswrapper[38936]: I0216 21:23:38.634337 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-lock" (OuterVolumeSpecName: "var-lock") pod "5b26dae9694224e04f0cdc3841408c63" (UID: "5b26dae9694224e04f0cdc3841408c63"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:23:38.635064 master-0 kubenswrapper[38936]: I0216 21:23:38.634422 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-log" (OuterVolumeSpecName: "var-log") pod "5b26dae9694224e04f0cdc3841408c63" (UID: "5b26dae9694224e04f0cdc3841408c63"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:23:38.635064 master-0 kubenswrapper[38936]: I0216 21:23:38.634500 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-manifests" (OuterVolumeSpecName: "manifests") pod "5b26dae9694224e04f0cdc3841408c63" (UID: "5b26dae9694224e04f0cdc3841408c63"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:23:38.639071 master-0 kubenswrapper[38936]: I0216 21:23:38.638947 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "5b26dae9694224e04f0cdc3841408c63" (UID: "5b26dae9694224e04f0cdc3841408c63"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:23:38.641123 master-0 kubenswrapper[38936]: I0216 21:23:38.641048 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_5b26dae9694224e04f0cdc3841408c63/startup-monitor/0.log" Feb 16 21:23:38.641240 master-0 kubenswrapper[38936]: I0216 21:23:38.641165 38936 generic.go:334] "Generic (PLEG): container finished" podID="5b26dae9694224e04f0cdc3841408c63" containerID="1a635028f55042697d014855fe31fff8d153cd9f1c72d44b806de44a3d1bef89" exitCode=137 Feb 16 21:23:38.641360 master-0 kubenswrapper[38936]: I0216 21:23:38.641314 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:23:38.641611 master-0 kubenswrapper[38936]: I0216 21:23:38.641463 38936 scope.go:117] "RemoveContainer" containerID="1a635028f55042697d014855fe31fff8d153cd9f1c72d44b806de44a3d1bef89" Feb 16 21:23:38.688349 master-0 kubenswrapper[38936]: I0216 21:23:38.688287 38936 scope.go:117] "RemoveContainer" containerID="1a635028f55042697d014855fe31fff8d153cd9f1c72d44b806de44a3d1bef89" Feb 16 21:23:38.689232 master-0 kubenswrapper[38936]: E0216 21:23:38.689071 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a635028f55042697d014855fe31fff8d153cd9f1c72d44b806de44a3d1bef89\": container with ID starting with 1a635028f55042697d014855fe31fff8d153cd9f1c72d44b806de44a3d1bef89 not found: ID does not exist" containerID="1a635028f55042697d014855fe31fff8d153cd9f1c72d44b806de44a3d1bef89" Feb 16 21:23:38.689232 master-0 kubenswrapper[38936]: I0216 21:23:38.689166 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a635028f55042697d014855fe31fff8d153cd9f1c72d44b806de44a3d1bef89"} err="failed to get container status 
\"1a635028f55042697d014855fe31fff8d153cd9f1c72d44b806de44a3d1bef89\": rpc error: code = NotFound desc = could not find container \"1a635028f55042697d014855fe31fff8d153cd9f1c72d44b806de44a3d1bef89\": container with ID starting with 1a635028f55042697d014855fe31fff8d153cd9f1c72d44b806de44a3d1bef89 not found: ID does not exist" Feb 16 21:23:38.736566 master-0 kubenswrapper[38936]: I0216 21:23:38.736307 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-resource-dir\") pod \"5b26dae9694224e04f0cdc3841408c63\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " Feb 16 21:23:38.737682 master-0 kubenswrapper[38936]: I0216 21:23:38.736447 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "5b26dae9694224e04f0cdc3841408c63" (UID: "5b26dae9694224e04f0cdc3841408c63"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:23:38.737682 master-0 kubenswrapper[38936]: I0216 21:23:38.737058 38936 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-log\") on node \"master-0\" DevicePath \"\"" Feb 16 21:23:38.737682 master-0 kubenswrapper[38936]: I0216 21:23:38.737100 38936 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-manifests\") on node \"master-0\" DevicePath \"\"" Feb 16 21:23:38.737682 master-0 kubenswrapper[38936]: I0216 21:23:38.737117 38936 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:23:38.737682 master-0 kubenswrapper[38936]: I0216 21:23:38.737133 38936 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:23:38.737682 master-0 kubenswrapper[38936]: I0216 21:23:38.737152 38936 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 21:23:38.981818 master-0 kubenswrapper[38936]: I0216 21:23:38.981746 38936 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="becfdef2-d782-47cd-895b-184205d8a5cf" Feb 16 21:23:39.030733 master-0 kubenswrapper[38936]: I0216 21:23:39.030676 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-84f5b46974-6pcrm" Feb 16 21:23:39.030733 master-0 kubenswrapper[38936]: I0216 21:23:39.030732 38936 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-84f5b46974-6pcrm" Feb 16 21:23:39.032184 master-0 kubenswrapper[38936]: I0216 21:23:39.032129 38936 patch_prober.go:28] interesting pod/console-84f5b46974-6pcrm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body= Feb 16 21:23:39.032184 master-0 kubenswrapper[38936]: I0216 21:23:39.032174 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-84f5b46974-6pcrm" podUID="6f9ac325-9b82-4250-922d-40265fff9322" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" Feb 16 21:23:39.343040 master-0 kubenswrapper[38936]: I0216 21:23:39.342967 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr"] Feb 16 21:23:39.343309 master-0 kubenswrapper[38936]: E0216 21:23:39.343283 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b26dae9694224e04f0cdc3841408c63" containerName="startup-monitor" Feb 16 21:23:39.343309 master-0 kubenswrapper[38936]: I0216 21:23:39.343303 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b26dae9694224e04f0cdc3841408c63" containerName="startup-monitor" Feb 16 21:23:39.343506 master-0 kubenswrapper[38936]: I0216 21:23:39.343480 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b26dae9694224e04f0cdc3841408c63" containerName="startup-monitor" Feb 16 21:23:39.351042 master-0 kubenswrapper[38936]: I0216 21:23:39.344098 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.351042 master-0 kubenswrapper[38936]: I0216 21:23:39.350763 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 21:23:39.351042 master-0 kubenswrapper[38936]: I0216 21:23:39.351015 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 21:23:39.351364 master-0 kubenswrapper[38936]: I0216 21:23:39.351163 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 21:23:39.351364 master-0 kubenswrapper[38936]: I0216 21:23:39.351345 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 21:23:39.354113 master-0 kubenswrapper[38936]: I0216 21:23:39.351483 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 21:23:39.354113 master-0 kubenswrapper[38936]: I0216 21:23:39.351987 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 21:23:39.354113 master-0 kubenswrapper[38936]: I0216 21:23:39.352316 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 21:23:39.354113 master-0 kubenswrapper[38936]: I0216 21:23:39.352906 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 21:23:39.354113 master-0 kubenswrapper[38936]: I0216 21:23:39.353054 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-2t2pz" Feb 16 21:23:39.354113 master-0 kubenswrapper[38936]: I0216 21:23:39.353181 38936 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 21:23:39.354113 master-0 kubenswrapper[38936]: I0216 21:23:39.353329 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 21:23:39.354113 master-0 kubenswrapper[38936]: I0216 21:23:39.353449 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 21:23:39.361325 master-0 kubenswrapper[38936]: I0216 21:23:39.361282 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 21:23:39.368630 master-0 kubenswrapper[38936]: I0216 21:23:39.368345 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr"] Feb 16 21:23:39.382832 master-0 kubenswrapper[38936]: I0216 21:23:39.382791 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 21:23:39.450843 master-0 kubenswrapper[38936]: I0216 21:23:39.450749 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.450843 master-0 kubenswrapper[38936]: I0216 21:23:39.450827 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-user-template-login\") pod 
\"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.451108 master-0 kubenswrapper[38936]: I0216 21:23:39.450938 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.451108 master-0 kubenswrapper[38936]: I0216 21:23:39.451057 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-cliconfig\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.451108 master-0 kubenswrapper[38936]: I0216 21:23:39.451095 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-user-template-error\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.451224 master-0 kubenswrapper[38936]: I0216 21:23:39.451112 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: 
\"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.451224 master-0 kubenswrapper[38936]: I0216 21:23:39.451155 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-session\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.451224 master-0 kubenswrapper[38936]: I0216 21:23:39.451189 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-service-ca\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.451499 master-0 kubenswrapper[38936]: I0216 21:23:39.451411 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-router-certs\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.451960 master-0 kubenswrapper[38936]: I0216 21:23:39.451922 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5faa07d8-7bad-46db-bd0e-6971fad3fd91-audit-dir\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.453040 master-0 
kubenswrapper[38936]: I0216 21:23:39.452999 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-audit-policies\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.453097 master-0 kubenswrapper[38936]: I0216 21:23:39.453074 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-serving-cert\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.453400 master-0 kubenswrapper[38936]: I0216 21:23:39.453368 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnn7h\" (UniqueName: \"kubernetes.io/projected/5faa07d8-7bad-46db-bd0e-6971fad3fd91-kube-api-access-jnn7h\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.555467 master-0 kubenswrapper[38936]: I0216 21:23:39.555234 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-cliconfig\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.555467 master-0 kubenswrapper[38936]: I0216 21:23:39.555301 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" 
(UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-user-template-error\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.555467 master-0 kubenswrapper[38936]: E0216 21:23:39.555376 38936 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Feb 16 21:23:39.555467 master-0 kubenswrapper[38936]: E0216 21:23:39.555444 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-cliconfig podName:5faa07d8-7bad-46db-bd0e-6971fad3fd91 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:40.055426591 +0000 UTC m=+50.407429953 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-cliconfig") pod "oauth-openshift-665f6ddd7f-ptvqr" (UID: "5faa07d8-7bad-46db-bd0e-6971fad3fd91") : configmap "v4-0-config-system-cliconfig" not found Feb 16 21:23:39.555467 master-0 kubenswrapper[38936]: I0216 21:23:39.555462 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.556057 master-0 kubenswrapper[38936]: I0216 21:23:39.555497 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-session\") pod 
\"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.556057 master-0 kubenswrapper[38936]: I0216 21:23:39.555522 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-service-ca\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.556057 master-0 kubenswrapper[38936]: I0216 21:23:39.555551 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-router-certs\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.556057 master-0 kubenswrapper[38936]: I0216 21:23:39.555739 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5faa07d8-7bad-46db-bd0e-6971fad3fd91-audit-dir\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.556057 master-0 kubenswrapper[38936]: I0216 21:23:39.555810 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-audit-policies\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.556057 master-0 kubenswrapper[38936]: I0216 21:23:39.555837 38936 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-serving-cert\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.556057 master-0 kubenswrapper[38936]: E0216 21:23:39.555859 38936 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: secret "v4-0-config-system-session" not found Feb 16 21:23:39.556057 master-0 kubenswrapper[38936]: I0216 21:23:39.555869 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnn7h\" (UniqueName: \"kubernetes.io/projected/5faa07d8-7bad-46db-bd0e-6971fad3fd91-kube-api-access-jnn7h\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.556057 master-0 kubenswrapper[38936]: I0216 21:23:39.555893 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.556057 master-0 kubenswrapper[38936]: E0216 21:23:39.555906 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-session podName:5faa07d8-7bad-46db-bd0e-6971fad3fd91 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:40.055889483 +0000 UTC m=+50.407892865 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-session") pod "oauth-openshift-665f6ddd7f-ptvqr" (UID: "5faa07d8-7bad-46db-bd0e-6971fad3fd91") : secret "v4-0-config-system-session" not found Feb 16 21:23:39.556057 master-0 kubenswrapper[38936]: I0216 21:23:39.555931 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-user-template-login\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.557323 master-0 kubenswrapper[38936]: I0216 21:23:39.557275 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.557838 master-0 kubenswrapper[38936]: I0216 21:23:39.557797 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.557930 master-0 kubenswrapper[38936]: I0216 21:23:39.557910 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5faa07d8-7bad-46db-bd0e-6971fad3fd91-audit-dir\") pod 
\"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.558178 master-0 kubenswrapper[38936]: I0216 21:23:39.558122 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-service-ca\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.558400 master-0 kubenswrapper[38936]: I0216 21:23:39.558365 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-audit-policies\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.559664 master-0 kubenswrapper[38936]: I0216 21:23:39.559620 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-serving-cert\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.559851 master-0 kubenswrapper[38936]: I0216 21:23:39.559819 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-user-template-login\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.560472 master-0 kubenswrapper[38936]: I0216 21:23:39.560446 38936 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-router-certs\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.562214 master-0 kubenswrapper[38936]: I0216 21:23:39.562173 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.562625 master-0 kubenswrapper[38936]: I0216 21:23:39.562465 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.565200 master-0 kubenswrapper[38936]: I0216 21:23:39.563666 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-user-template-error\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.575545 master-0 kubenswrapper[38936]: I0216 21:23:39.575498 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnn7h\" (UniqueName: 
\"kubernetes.io/projected/5faa07d8-7bad-46db-bd0e-6971fad3fd91-kube-api-access-jnn7h\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:39.887067 master-0 kubenswrapper[38936]: I0216 21:23:39.887011 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b26dae9694224e04f0cdc3841408c63" path="/var/lib/kubelet/pods/5b26dae9694224e04f0cdc3841408c63/volumes" Feb 16 21:23:39.887602 master-0 kubenswrapper[38936]: I0216 21:23:39.887377 38936 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Feb 16 21:23:39.906931 master-0 kubenswrapper[38936]: I0216 21:23:39.906877 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 16 21:23:39.907123 master-0 kubenswrapper[38936]: I0216 21:23:39.906941 38936 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="becfdef2-d782-47cd-895b-184205d8a5cf" Feb 16 21:23:39.909881 master-0 kubenswrapper[38936]: I0216 21:23:39.909832 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 16 21:23:39.909949 master-0 kubenswrapper[38936]: I0216 21:23:39.909876 38936 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="becfdef2-d782-47cd-895b-184205d8a5cf" Feb 16 21:23:40.065964 master-0 kubenswrapper[38936]: I0216 21:23:40.064819 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:40.065964 master-0 kubenswrapper[38936]: I0216 21:23:40.064933 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-session\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:40.065964 master-0 kubenswrapper[38936]: E0216 21:23:40.064955 38936 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Feb 16 21:23:40.065964 master-0 kubenswrapper[38936]: E0216 21:23:40.065143 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-cliconfig podName:5faa07d8-7bad-46db-bd0e-6971fad3fd91 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:41.065120316 +0000 UTC m=+51.417123688 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-cliconfig") pod "oauth-openshift-665f6ddd7f-ptvqr" (UID: "5faa07d8-7bad-46db-bd0e-6971fad3fd91") : configmap "v4-0-config-system-cliconfig" not found Feb 16 21:23:40.065964 master-0 kubenswrapper[38936]: E0216 21:23:40.065170 38936 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: secret "v4-0-config-system-session" not found Feb 16 21:23:40.065964 master-0 kubenswrapper[38936]: E0216 21:23:40.065283 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-session podName:5faa07d8-7bad-46db-bd0e-6971fad3fd91 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:41.06525812 +0000 UTC m=+51.417261482 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-session") pod "oauth-openshift-665f6ddd7f-ptvqr" (UID: "5faa07d8-7bad-46db-bd0e-6971fad3fd91") : secret "v4-0-config-system-session" not found Feb 16 21:23:41.079052 master-0 kubenswrapper[38936]: I0216 21:23:41.078216 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-session\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:41.079052 master-0 kubenswrapper[38936]: I0216 21:23:41.078384 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:41.079052 master-0 kubenswrapper[38936]: E0216 21:23:41.078237 38936 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: secret "v4-0-config-system-session" not found Feb 16 21:23:41.079052 master-0 kubenswrapper[38936]: E0216 21:23:41.078506 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-session podName:5faa07d8-7bad-46db-bd0e-6971fad3fd91 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:43.078491897 +0000 UTC m=+53.430495259 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-session") pod "oauth-openshift-665f6ddd7f-ptvqr" (UID: "5faa07d8-7bad-46db-bd0e-6971fad3fd91") : secret "v4-0-config-system-session" not found Feb 16 21:23:41.079052 master-0 kubenswrapper[38936]: E0216 21:23:41.078457 38936 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Feb 16 21:23:41.081064 master-0 kubenswrapper[38936]: E0216 21:23:41.079584 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-cliconfig podName:5faa07d8-7bad-46db-bd0e-6971fad3fd91 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:43.079550645 +0000 UTC m=+53.431554007 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-cliconfig") pod "oauth-openshift-665f6ddd7f-ptvqr" (UID: "5faa07d8-7bad-46db-bd0e-6971fad3fd91") : configmap "v4-0-config-system-cliconfig" not found Feb 16 21:23:42.320343 master-0 kubenswrapper[38936]: I0216 21:23:42.320272 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:23:42.320343 master-0 kubenswrapper[38936]: I0216 21:23:42.320327 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:23:42.321424 master-0 kubenswrapper[38936]: I0216 21:23:42.321368 38936 patch_prober.go:28] interesting pod/console-7dcddfd95-nldpw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Feb 16 21:23:42.321483 master-0 kubenswrapper[38936]: I0216 21:23:42.321431 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7dcddfd95-nldpw" podUID="503aa866-c355-434a-a39c-fa6072733ea8" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Feb 16 21:23:42.402572 master-0 kubenswrapper[38936]: I0216 21:23:42.401842 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-bd6d6f87f-bk22k"] Feb 16 21:23:42.403008 master-0 kubenswrapper[38936]: I0216 21:23:42.402899 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-bk22k" Feb 16 21:23:42.408488 master-0 kubenswrapper[38936]: I0216 21:23:42.408140 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 21:23:42.408488 master-0 kubenswrapper[38936]: I0216 21:23:42.408443 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-q7pxf" Feb 16 21:23:42.408777 master-0 kubenswrapper[38936]: I0216 21:23:42.408557 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 21:23:42.415233 master-0 kubenswrapper[38936]: I0216 21:23:42.415169 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-bd6d6f87f-bk22k"] Feb 16 21:23:42.531762 master-0 kubenswrapper[38936]: I0216 21:23:42.531300 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/699f1399-dfd3-4633-9531-733cbba61307-networking-console-plugin-cert\") pod \"networking-console-plugin-bd6d6f87f-bk22k\" (UID: \"699f1399-dfd3-4633-9531-733cbba61307\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-bk22k" Feb 16 21:23:42.531762 master-0 kubenswrapper[38936]: I0216 21:23:42.531427 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/699f1399-dfd3-4633-9531-733cbba61307-nginx-conf\") pod \"networking-console-plugin-bd6d6f87f-bk22k\" (UID: \"699f1399-dfd3-4633-9531-733cbba61307\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-bk22k" Feb 16 21:23:42.633416 master-0 kubenswrapper[38936]: I0216 21:23:42.633295 38936 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/699f1399-dfd3-4633-9531-733cbba61307-networking-console-plugin-cert\") pod \"networking-console-plugin-bd6d6f87f-bk22k\" (UID: \"699f1399-dfd3-4633-9531-733cbba61307\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-bk22k" Feb 16 21:23:42.633416 master-0 kubenswrapper[38936]: I0216 21:23:42.633342 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/699f1399-dfd3-4633-9531-733cbba61307-nginx-conf\") pod \"networking-console-plugin-bd6d6f87f-bk22k\" (UID: \"699f1399-dfd3-4633-9531-733cbba61307\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-bk22k" Feb 16 21:23:42.634564 master-0 kubenswrapper[38936]: I0216 21:23:42.634538 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/699f1399-dfd3-4633-9531-733cbba61307-nginx-conf\") pod \"networking-console-plugin-bd6d6f87f-bk22k\" (UID: \"699f1399-dfd3-4633-9531-733cbba61307\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-bk22k" Feb 16 21:23:42.647037 master-0 kubenswrapper[38936]: I0216 21:23:42.646994 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/699f1399-dfd3-4633-9531-733cbba61307-networking-console-plugin-cert\") pod \"networking-console-plugin-bd6d6f87f-bk22k\" (UID: \"699f1399-dfd3-4633-9531-733cbba61307\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-bk22k" Feb 16 21:23:42.738161 master-0 kubenswrapper[38936]: I0216 21:23:42.737625 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-bk22k" Feb 16 21:23:43.142071 master-0 kubenswrapper[38936]: I0216 21:23:43.142018 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-cliconfig\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:43.142445 master-0 kubenswrapper[38936]: I0216 21:23:43.142084 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-session\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:43.142445 master-0 kubenswrapper[38936]: E0216 21:23:43.142256 38936 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Feb 16 21:23:43.142445 master-0 kubenswrapper[38936]: E0216 21:23:43.142388 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-cliconfig podName:5faa07d8-7bad-46db-bd0e-6971fad3fd91 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:47.14235863 +0000 UTC m=+57.494362202 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-cliconfig") pod "oauth-openshift-665f6ddd7f-ptvqr" (UID: "5faa07d8-7bad-46db-bd0e-6971fad3fd91") : configmap "v4-0-config-system-cliconfig" not found Feb 16 21:23:43.146091 master-0 kubenswrapper[38936]: I0216 21:23:43.146034 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-session\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:43.361710 master-0 kubenswrapper[38936]: I0216 21:23:43.361553 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-bd6d6f87f-bk22k"] Feb 16 21:23:43.681148 master-0 kubenswrapper[38936]: I0216 21:23:43.681032 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-bk22k" event={"ID":"699f1399-dfd3-4633-9531-733cbba61307","Type":"ContainerStarted","Data":"b85b8831a20d671b5f02c3fad66965b1fab05484534fc3abf37cb2351a7cd447"} Feb 16 21:23:45.709311 master-0 kubenswrapper[38936]: I0216 21:23:45.709183 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-bk22k" event={"ID":"699f1399-dfd3-4633-9531-733cbba61307","Type":"ContainerStarted","Data":"3728a087d848cc9ff4299e4def35ca8fac734fafa4953ffaf24815a477ad10cd"} Feb 16 21:23:45.731993 master-0 kubenswrapper[38936]: I0216 21:23:45.731898 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-bk22k" podStartSLOduration=1.765101467 podStartE2EDuration="3.731876145s" 
podCreationTimestamp="2026-02-16 21:23:42 +0000 UTC" firstStartedPulling="2026-02-16 21:23:43.373658884 +0000 UTC m=+53.725662246" lastFinishedPulling="2026-02-16 21:23:45.340433562 +0000 UTC m=+55.692436924" observedRunningTime="2026-02-16 21:23:45.729478211 +0000 UTC m=+56.081481583" watchObservedRunningTime="2026-02-16 21:23:45.731876145 +0000 UTC m=+56.083879537" Feb 16 21:23:46.352341 master-0 kubenswrapper[38936]: I0216 21:23:46.352258 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr"] Feb 16 21:23:46.355000 master-0 kubenswrapper[38936]: E0216 21:23:46.353176 38936 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[v4-0-config-system-cliconfig], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" podUID="5faa07d8-7bad-46db-bd0e-6971fad3fd91" Feb 16 21:23:46.717126 master-0 kubenswrapper[38936]: I0216 21:23:46.716996 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:46.724990 master-0 kubenswrapper[38936]: I0216 21:23:46.724947 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr" Feb 16 21:23:46.913772 master-0 kubenswrapper[38936]: I0216 21:23:46.913720 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-user-template-login\") pod \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " Feb 16 21:23:46.913963 master-0 kubenswrapper[38936]: I0216 21:23:46.913798 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-serving-cert\") pod \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " Feb 16 21:23:46.913963 master-0 kubenswrapper[38936]: I0216 21:23:46.913825 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-service-ca\") pod \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " Feb 16 21:23:46.914046 master-0 kubenswrapper[38936]: I0216 21:23:46.913955 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-user-template-provider-selection\") pod \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " Feb 16 21:23:46.914046 master-0 kubenswrapper[38936]: I0216 21:23:46.914005 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5faa07d8-7bad-46db-bd0e-6971fad3fd91-audit-dir\") pod 
\"5faa07d8-7bad-46db-bd0e-6971fad3fd91\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " Feb 16 21:23:46.914106 master-0 kubenswrapper[38936]: I0216 21:23:46.914080 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5faa07d8-7bad-46db-bd0e-6971fad3fd91-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5faa07d8-7bad-46db-bd0e-6971fad3fd91" (UID: "5faa07d8-7bad-46db-bd0e-6971fad3fd91"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:23:46.914138 master-0 kubenswrapper[38936]: I0216 21:23:46.914114 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-audit-policies\") pod \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " Feb 16 21:23:46.914174 master-0 kubenswrapper[38936]: I0216 21:23:46.914142 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-user-template-error\") pod \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " Feb 16 21:23:46.914203 master-0 kubenswrapper[38936]: I0216 21:23:46.914190 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnn7h\" (UniqueName: \"kubernetes.io/projected/5faa07d8-7bad-46db-bd0e-6971fad3fd91-kube-api-access-jnn7h\") pod \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " Feb 16 21:23:46.914846 master-0 kubenswrapper[38936]: I0216 21:23:46.914241 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-trusted-ca-bundle\") 
pod \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " Feb 16 21:23:46.914846 master-0 kubenswrapper[38936]: I0216 21:23:46.914269 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-ocp-branding-template\") pod \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " Feb 16 21:23:46.914846 master-0 kubenswrapper[38936]: I0216 21:23:46.914303 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-session\") pod \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " Feb 16 21:23:46.914846 master-0 kubenswrapper[38936]: I0216 21:23:46.914335 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-router-certs\") pod \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " Feb 16 21:23:46.914846 master-0 kubenswrapper[38936]: I0216 21:23:46.914261 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "5faa07d8-7bad-46db-bd0e-6971fad3fd91" (UID: "5faa07d8-7bad-46db-bd0e-6971fad3fd91"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:23:46.914846 master-0 kubenswrapper[38936]: I0216 21:23:46.914781 38936 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 21:23:46.914846 master-0 kubenswrapper[38936]: I0216 21:23:46.914781 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "5faa07d8-7bad-46db-bd0e-6971fad3fd91" (UID: "5faa07d8-7bad-46db-bd0e-6971fad3fd91"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:23:46.914846 master-0 kubenswrapper[38936]: I0216 21:23:46.914797 38936 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5faa07d8-7bad-46db-bd0e-6971fad3fd91-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:23:46.915741 master-0 kubenswrapper[38936]: I0216 21:23:46.915708 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "5faa07d8-7bad-46db-bd0e-6971fad3fd91" (UID: "5faa07d8-7bad-46db-bd0e-6971fad3fd91"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:23:46.917202 master-0 kubenswrapper[38936]: I0216 21:23:46.917174 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "5faa07d8-7bad-46db-bd0e-6971fad3fd91" (UID: "5faa07d8-7bad-46db-bd0e-6971fad3fd91"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:23:46.917292 master-0 kubenswrapper[38936]: I0216 21:23:46.917234 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "5faa07d8-7bad-46db-bd0e-6971fad3fd91" (UID: "5faa07d8-7bad-46db-bd0e-6971fad3fd91"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:23:46.917335 master-0 kubenswrapper[38936]: I0216 21:23:46.917314 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5faa07d8-7bad-46db-bd0e-6971fad3fd91-kube-api-access-jnn7h" (OuterVolumeSpecName: "kube-api-access-jnn7h") pod "5faa07d8-7bad-46db-bd0e-6971fad3fd91" (UID: "5faa07d8-7bad-46db-bd0e-6971fad3fd91"). InnerVolumeSpecName "kube-api-access-jnn7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:23:46.917688 master-0 kubenswrapper[38936]: I0216 21:23:46.917612 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "5faa07d8-7bad-46db-bd0e-6971fad3fd91" (UID: "5faa07d8-7bad-46db-bd0e-6971fad3fd91"). 
InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:23:46.917823 master-0 kubenswrapper[38936]: I0216 21:23:46.917792 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "5faa07d8-7bad-46db-bd0e-6971fad3fd91" (UID: "5faa07d8-7bad-46db-bd0e-6971fad3fd91"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:23:46.918295 master-0 kubenswrapper[38936]: I0216 21:23:46.918255 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "5faa07d8-7bad-46db-bd0e-6971fad3fd91" (UID: "5faa07d8-7bad-46db-bd0e-6971fad3fd91"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:23:46.918779 master-0 kubenswrapper[38936]: I0216 21:23:46.918726 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "5faa07d8-7bad-46db-bd0e-6971fad3fd91" (UID: "5faa07d8-7bad-46db-bd0e-6971fad3fd91"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:23:46.919844 master-0 kubenswrapper[38936]: I0216 21:23:46.919787 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "5faa07d8-7bad-46db-bd0e-6971fad3fd91" (UID: "5faa07d8-7bad-46db-bd0e-6971fad3fd91"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:23:47.015916 master-0 kubenswrapper[38936]: I0216 21:23:47.015847 38936 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 21:23:47.015916 master-0 kubenswrapper[38936]: I0216 21:23:47.015888 38936 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\""
Feb 16 21:23:47.015916 master-0 kubenswrapper[38936]: I0216 21:23:47.015901 38936 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\""
Feb 16 21:23:47.015916 master-0 kubenswrapper[38936]: I0216 21:23:47.015911 38936 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\""
Feb 16 21:23:47.015916 master-0 kubenswrapper[38936]: I0216 21:23:47.015922 38936 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\""
Feb 16 21:23:47.015916 master-0 kubenswrapper[38936]: I0216 21:23:47.015931 38936 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 16 21:23:47.016292 master-0 kubenswrapper[38936]: I0216 21:23:47.015940 38936 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\""
Feb 16 21:23:47.016292 master-0 kubenswrapper[38936]: I0216 21:23:47.015949 38936 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-audit-policies\") on node \"master-0\" DevicePath \"\""
Feb 16 21:23:47.016292 master-0 kubenswrapper[38936]: I0216 21:23:47.015960 38936 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\""
Feb 16 21:23:47.016292 master-0 kubenswrapper[38936]: I0216 21:23:47.015968 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnn7h\" (UniqueName: \"kubernetes.io/projected/5faa07d8-7bad-46db-bd0e-6971fad3fd91-kube-api-access-jnn7h\") on node \"master-0\" DevicePath \"\""
Feb 16 21:23:47.108707 master-0 kubenswrapper[38936]: I0216 21:23:47.108179 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-84f5b46974-6pcrm"]
Feb 16 21:23:47.156081 master-0 kubenswrapper[38936]: I0216 21:23:47.156012 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5dbf689d64-pgglg"]
Feb 16 21:23:47.157046 master-0 kubenswrapper[38936]: I0216 21:23:47.157002 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5dbf689d64-pgglg"
Feb 16 21:23:47.164754 master-0 kubenswrapper[38936]: I0216 21:23:47.164681 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5dbf689d64-pgglg"]
Feb 16 21:23:47.219092 master-0 kubenswrapper[38936]: I0216 21:23:47.219038 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-cliconfig\") pod \"oauth-openshift-665f6ddd7f-ptvqr\" (UID: \"5faa07d8-7bad-46db-bd0e-6971fad3fd91\") " pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr"
Feb 16 21:23:47.219295 master-0 kubenswrapper[38936]: E0216 21:23:47.219154 38936 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found
Feb 16 21:23:47.219295 master-0 kubenswrapper[38936]: E0216 21:23:47.219252 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-cliconfig podName:5faa07d8-7bad-46db-bd0e-6971fad3fd91 nodeName:}" failed. No retries permitted until 2026-02-16 21:23:55.219229806 +0000 UTC m=+65.571233168 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-cliconfig") pod "oauth-openshift-665f6ddd7f-ptvqr" (UID: "5faa07d8-7bad-46db-bd0e-6971fad3fd91") : configmap "v4-0-config-system-cliconfig" not found
Feb 16 21:23:47.320128 master-0 kubenswrapper[38936]: I0216 21:23:47.319954 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/55ec365e-5ef8-4291-9c01-7713bdd6f294-oauth-serving-cert\") pod \"console-5dbf689d64-pgglg\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " pod="openshift-console/console-5dbf689d64-pgglg"
Feb 16 21:23:47.320128 master-0 kubenswrapper[38936]: I0216 21:23:47.320039 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/55ec365e-5ef8-4291-9c01-7713bdd6f294-console-serving-cert\") pod \"console-5dbf689d64-pgglg\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " pod="openshift-console/console-5dbf689d64-pgglg"
Feb 16 21:23:47.320481 master-0 kubenswrapper[38936]: I0216 21:23:47.320189 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5llvz\" (UniqueName: \"kubernetes.io/projected/55ec365e-5ef8-4291-9c01-7713bdd6f294-kube-api-access-5llvz\") pod \"console-5dbf689d64-pgglg\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " pod="openshift-console/console-5dbf689d64-pgglg"
Feb 16 21:23:47.320481 master-0 kubenswrapper[38936]: I0216 21:23:47.320266 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/55ec365e-5ef8-4291-9c01-7713bdd6f294-console-oauth-config\") pod \"console-5dbf689d64-pgglg\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " pod="openshift-console/console-5dbf689d64-pgglg"
Feb 16 21:23:47.320481 master-0 kubenswrapper[38936]: I0216 21:23:47.320312 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/55ec365e-5ef8-4291-9c01-7713bdd6f294-console-config\") pod \"console-5dbf689d64-pgglg\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " pod="openshift-console/console-5dbf689d64-pgglg"
Feb 16 21:23:47.320481 master-0 kubenswrapper[38936]: I0216 21:23:47.320347 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55ec365e-5ef8-4291-9c01-7713bdd6f294-trusted-ca-bundle\") pod \"console-5dbf689d64-pgglg\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " pod="openshift-console/console-5dbf689d64-pgglg"
Feb 16 21:23:47.320481 master-0 kubenswrapper[38936]: I0216 21:23:47.320415 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/55ec365e-5ef8-4291-9c01-7713bdd6f294-service-ca\") pod \"console-5dbf689d64-pgglg\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " pod="openshift-console/console-5dbf689d64-pgglg"
Feb 16 21:23:47.421329 master-0 kubenswrapper[38936]: I0216 21:23:47.421206 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5llvz\" (UniqueName: \"kubernetes.io/projected/55ec365e-5ef8-4291-9c01-7713bdd6f294-kube-api-access-5llvz\") pod \"console-5dbf689d64-pgglg\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " pod="openshift-console/console-5dbf689d64-pgglg"
Feb 16 21:23:47.421329 master-0 kubenswrapper[38936]: I0216 21:23:47.421316 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/55ec365e-5ef8-4291-9c01-7713bdd6f294-console-oauth-config\") pod \"console-5dbf689d64-pgglg\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " pod="openshift-console/console-5dbf689d64-pgglg"
Feb 16 21:23:47.421329 master-0 kubenswrapper[38936]: I0216 21:23:47.421348 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/55ec365e-5ef8-4291-9c01-7713bdd6f294-console-config\") pod \"console-5dbf689d64-pgglg\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " pod="openshift-console/console-5dbf689d64-pgglg"
Feb 16 21:23:47.421329 master-0 kubenswrapper[38936]: I0216 21:23:47.421372 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55ec365e-5ef8-4291-9c01-7713bdd6f294-trusted-ca-bundle\") pod \"console-5dbf689d64-pgglg\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " pod="openshift-console/console-5dbf689d64-pgglg"
Feb 16 21:23:47.422142 master-0 kubenswrapper[38936]: I0216 21:23:47.421422 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/55ec365e-5ef8-4291-9c01-7713bdd6f294-service-ca\") pod \"console-5dbf689d64-pgglg\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " pod="openshift-console/console-5dbf689d64-pgglg"
Feb 16 21:23:47.422142 master-0 kubenswrapper[38936]: I0216 21:23:47.421672 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/55ec365e-5ef8-4291-9c01-7713bdd6f294-oauth-serving-cert\") pod \"console-5dbf689d64-pgglg\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " pod="openshift-console/console-5dbf689d64-pgglg"
Feb 16 21:23:47.422142 master-0 kubenswrapper[38936]: I0216 21:23:47.421830 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/55ec365e-5ef8-4291-9c01-7713bdd6f294-console-serving-cert\") pod \"console-5dbf689d64-pgglg\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " pod="openshift-console/console-5dbf689d64-pgglg"
Feb 16 21:23:47.422777 master-0 kubenswrapper[38936]: I0216 21:23:47.422710 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/55ec365e-5ef8-4291-9c01-7713bdd6f294-console-config\") pod \"console-5dbf689d64-pgglg\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " pod="openshift-console/console-5dbf689d64-pgglg"
Feb 16 21:23:47.422910 master-0 kubenswrapper[38936]: I0216 21:23:47.422815 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/55ec365e-5ef8-4291-9c01-7713bdd6f294-oauth-serving-cert\") pod \"console-5dbf689d64-pgglg\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " pod="openshift-console/console-5dbf689d64-pgglg"
Feb 16 21:23:47.423309 master-0 kubenswrapper[38936]: I0216 21:23:47.423207 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55ec365e-5ef8-4291-9c01-7713bdd6f294-trusted-ca-bundle\") pod \"console-5dbf689d64-pgglg\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " pod="openshift-console/console-5dbf689d64-pgglg"
Feb 16 21:23:47.424210 master-0 kubenswrapper[38936]: I0216 21:23:47.424138 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/55ec365e-5ef8-4291-9c01-7713bdd6f294-service-ca\") pod \"console-5dbf689d64-pgglg\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " pod="openshift-console/console-5dbf689d64-pgglg"
Feb 16 21:23:47.431498 master-0 kubenswrapper[38936]: I0216 21:23:47.431439 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/55ec365e-5ef8-4291-9c01-7713bdd6f294-console-oauth-config\") pod \"console-5dbf689d64-pgglg\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " pod="openshift-console/console-5dbf689d64-pgglg"
Feb 16 21:23:47.435983 master-0 kubenswrapper[38936]: I0216 21:23:47.435911 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/55ec365e-5ef8-4291-9c01-7713bdd6f294-console-serving-cert\") pod \"console-5dbf689d64-pgglg\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " pod="openshift-console/console-5dbf689d64-pgglg"
Feb 16 21:23:47.439775 master-0 kubenswrapper[38936]: I0216 21:23:47.439722 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5llvz\" (UniqueName: \"kubernetes.io/projected/55ec365e-5ef8-4291-9c01-7713bdd6f294-kube-api-access-5llvz\") pod \"console-5dbf689d64-pgglg\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " pod="openshift-console/console-5dbf689d64-pgglg"
Feb 16 21:23:47.492293 master-0 kubenswrapper[38936]: I0216 21:23:47.492204 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5dbf689d64-pgglg"
Feb 16 21:23:47.727316 master-0 kubenswrapper[38936]: I0216 21:23:47.727214 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Feb 16 21:23:47.728619 master-0 kubenswrapper[38936]: E0216 21:23:47.727421 38936 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 16 21:23:47.728619 master-0 kubenswrapper[38936]: E0216 21:23:47.727462 38936 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-retry-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 16 21:23:47.728619 master-0 kubenswrapper[38936]: E0216 21:23:47.727525 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access podName:1f8a26db-5a90-4da9-9074-33256ef17100 nodeName:}" failed. No retries permitted until 2026-02-16 21:24:19.727505954 +0000 UTC m=+90.079509316 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access") pod "installer-1-retry-1-master-0" (UID: "1f8a26db-5a90-4da9-9074-33256ef17100") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 16 21:23:47.730702 master-0 kubenswrapper[38936]: I0216 21:23:47.730644 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr"
Feb 16 21:23:47.784953 master-0 kubenswrapper[38936]: I0216 21:23:47.784894 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"]
Feb 16 21:23:47.786209 master-0 kubenswrapper[38936]: I0216 21:23:47.786178 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:47.802348 master-0 kubenswrapper[38936]: I0216 21:23:47.802263 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 16 21:23:47.802348 master-0 kubenswrapper[38936]: I0216 21:23:47.802307 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 16 21:23:47.802348 master-0 kubenswrapper[38936]: I0216 21:23:47.802289 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 16 21:23:47.802584 master-0 kubenswrapper[38936]: I0216 21:23:47.802442 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 16 21:23:47.802584 master-0 kubenswrapper[38936]: I0216 21:23:47.802469 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-2t2pz"
Feb 16 21:23:47.802824 master-0 kubenswrapper[38936]: I0216 21:23:47.802706 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 16 21:23:47.803875 master-0 kubenswrapper[38936]: I0216 21:23:47.803308 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 16 21:23:47.803875 master-0 kubenswrapper[38936]: I0216 21:23:47.803348 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 16 21:23:47.803875 master-0 kubenswrapper[38936]: I0216 21:23:47.803415 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 16 21:23:47.803875 master-0 kubenswrapper[38936]: I0216 21:23:47.803695 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 16 21:23:47.803875 master-0 kubenswrapper[38936]: I0216 21:23:47.803810 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 16 21:23:47.804753 master-0 kubenswrapper[38936]: I0216 21:23:47.804165 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 16 21:23:47.811576 master-0 kubenswrapper[38936]: I0216 21:23:47.808631 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr"]
Feb 16 21:23:47.814215 master-0 kubenswrapper[38936]: I0216 21:23:47.813945 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 16 21:23:47.825908 master-0 kubenswrapper[38936]: I0216 21:23:47.825809 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"]
Feb 16 21:23:47.833506 master-0 kubenswrapper[38936]: I0216 21:23:47.833374 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr"]
Feb 16 21:23:47.851374 master-0 kubenswrapper[38936]: I0216 21:23:47.851225 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 16 21:23:47.885553 master-0 kubenswrapper[38936]: I0216 21:23:47.885486 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5faa07d8-7bad-46db-bd0e-6971fad3fd91" path="/var/lib/kubelet/pods/5faa07d8-7bad-46db-bd0e-6971fad3fd91/volumes"
Feb 16 21:23:47.937453 master-0 kubenswrapper[38936]: I0216 21:23:47.937328 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:47.937453 master-0 kubenswrapper[38936]: I0216 21:23:47.937381 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-session\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:47.937453 master-0 kubenswrapper[38936]: I0216 21:23:47.937413 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-service-ca\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:47.937705 master-0 kubenswrapper[38936]: I0216 21:23:47.937452 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/46f9f317-a78e-4d18-b1c1-882631cfc6eb-audit-dir\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:47.937705 master-0 kubenswrapper[38936]: I0216 21:23:47.937490 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-router-certs\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:47.937705 master-0 kubenswrapper[38936]: I0216 21:23:47.937578 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-user-template-error\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:47.937705 master-0 kubenswrapper[38936]: I0216 21:23:47.937616 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-audit-policies\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:47.937705 master-0 kubenswrapper[38936]: I0216 21:23:47.937641 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:47.937705 master-0 kubenswrapper[38936]: I0216 21:23:47.937670 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-user-template-login\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:47.937705 master-0 kubenswrapper[38936]: I0216 21:23:47.937707 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:47.937921 master-0 kubenswrapper[38936]: I0216 21:23:47.937745 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:47.937921 master-0 kubenswrapper[38936]: I0216 21:23:47.937763 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtkxj\" (UniqueName: \"kubernetes.io/projected/46f9f317-a78e-4d18-b1c1-882631cfc6eb-kube-api-access-jtkxj\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:47.937921 master-0 kubenswrapper[38936]: I0216 21:23:47.937784 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:47.937921 master-0 kubenswrapper[38936]: I0216 21:23:47.937831 38936 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5faa07d8-7bad-46db-bd0e-6971fad3fd91-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\""
Feb 16 21:23:48.038754 master-0 kubenswrapper[38936]: I0216 21:23:48.038677 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtkxj\" (UniqueName: \"kubernetes.io/projected/46f9f317-a78e-4d18-b1c1-882631cfc6eb-kube-api-access-jtkxj\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:48.038754 master-0 kubenswrapper[38936]: I0216 21:23:48.038728 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:48.038754 master-0 kubenswrapper[38936]: I0216 21:23:48.038751 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:48.038754 master-0 kubenswrapper[38936]: I0216 21:23:48.038771 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-session\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:48.039176 master-0 kubenswrapper[38936]: I0216 21:23:48.038982 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-service-ca\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:48.039176 master-0 kubenswrapper[38936]: I0216 21:23:48.039045 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/46f9f317-a78e-4d18-b1c1-882631cfc6eb-audit-dir\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:48.039176 master-0 kubenswrapper[38936]: I0216 21:23:48.039075 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-router-certs\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:48.039276 master-0 kubenswrapper[38936]: I0216 21:23:48.039194 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-user-template-error\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:48.039276 master-0 kubenswrapper[38936]: I0216 21:23:48.039251 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-audit-policies\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:48.039364 master-0 kubenswrapper[38936]: I0216 21:23:48.039305 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-user-template-login\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:48.039396 master-0 kubenswrapper[38936]: I0216 21:23:48.039383 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/46f9f317-a78e-4d18-b1c1-882631cfc6eb-audit-dir\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:48.039614 master-0 kubenswrapper[38936]: I0216 21:23:48.039582 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:48.039750 master-0 kubenswrapper[38936]: I0216 21:23:48.039721 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:48.039799 master-0 kubenswrapper[38936]: I0216 21:23:48.039762 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:48.039893 master-0 kubenswrapper[38936]: E0216 21:23:48.039848 38936 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found
Feb 16 21:23:48.039965 master-0 kubenswrapper[38936]: E0216 21:23:48.039941 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-cliconfig podName:46f9f317-a78e-4d18-b1c1-882631cfc6eb nodeName:}" failed. No retries permitted until 2026-02-16 21:23:48.539921247 +0000 UTC m=+58.891924609 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-cliconfig") pod "oauth-openshift-5c88849d7d-xfnmp" (UID: "46f9f317-a78e-4d18-b1c1-882631cfc6eb") : configmap "v4-0-config-system-cliconfig" not found
Feb 16 21:23:48.040311 master-0 kubenswrapper[38936]: I0216 21:23:48.040280 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-service-ca\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:48.040591 master-0 kubenswrapper[38936]: I0216 21:23:48.040554 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-audit-policies\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:48.044667 master-0 kubenswrapper[38936]: I0216 21:23:48.042153 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-router-certs\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:48.044667 master-0 kubenswrapper[38936]: I0216 21:23:48.043628 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-session\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:48.045059 master-0 kubenswrapper[38936]: I0216 21:23:48.044998 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:48.046195 master-0 kubenswrapper[38936]: I0216 21:23:48.046146 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:48.048715 master-0 kubenswrapper[38936]: I0216 21:23:48.047149 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-user-template-login\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:48.050197 master-0 kubenswrapper[38936]: I0216 21:23:48.050170 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-user-template-error\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"
Feb 16 21:23:48.050818 master-0 kubenswrapper[38936]:
I0216 21:23:48.050793 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp" Feb 16 21:23:48.053132 master-0 kubenswrapper[38936]: I0216 21:23:48.053098 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp" Feb 16 21:23:48.064451 master-0 kubenswrapper[38936]: I0216 21:23:48.064396 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtkxj\" (UniqueName: \"kubernetes.io/projected/46f9f317-a78e-4d18-b1c1-882631cfc6eb-kube-api-access-jtkxj\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp" Feb 16 21:23:48.075977 master-0 kubenswrapper[38936]: I0216 21:23:48.075929 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5dbf689d64-pgglg"] Feb 16 21:23:48.080518 master-0 kubenswrapper[38936]: W0216 21:23:48.080427 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55ec365e_5ef8_4291_9c01_7713bdd6f294.slice/crio-23ea2654dab91558abab5c98e19baa003e5825635fff827296e08581fedb6094 WatchSource:0}: Error finding container 23ea2654dab91558abab5c98e19baa003e5825635fff827296e08581fedb6094: Status 404 returned error can't find the container with id 
23ea2654dab91558abab5c98e19baa003e5825635fff827296e08581fedb6094 Feb 16 21:23:48.548496 master-0 kubenswrapper[38936]: I0216 21:23:48.548410 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp" Feb 16 21:23:48.549458 master-0 kubenswrapper[38936]: I0216 21:23:48.549419 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5c88849d7d-xfnmp\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp" Feb 16 21:23:48.733172 master-0 kubenswrapper[38936]: I0216 21:23:48.733100 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp" Feb 16 21:23:48.741066 master-0 kubenswrapper[38936]: I0216 21:23:48.740961 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5dbf689d64-pgglg" event={"ID":"55ec365e-5ef8-4291-9c01-7713bdd6f294","Type":"ContainerStarted","Data":"c333efe4a65c92970928725775af66d9a74ddd1c665aea5d73198a7cfae1a56f"} Feb 16 21:23:48.741066 master-0 kubenswrapper[38936]: I0216 21:23:48.741054 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5dbf689d64-pgglg" event={"ID":"55ec365e-5ef8-4291-9c01-7713bdd6f294","Type":"ContainerStarted","Data":"23ea2654dab91558abab5c98e19baa003e5825635fff827296e08581fedb6094"} Feb 16 21:23:48.770566 master-0 kubenswrapper[38936]: I0216 21:23:48.770470 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5dbf689d64-pgglg" podStartSLOduration=1.770447865 podStartE2EDuration="1.770447865s" podCreationTimestamp="2026-02-16 21:23:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:23:48.766975573 +0000 UTC m=+59.118978935" watchObservedRunningTime="2026-02-16 21:23:48.770447865 +0000 UTC m=+59.122451247" Feb 16 21:23:49.307978 master-0 kubenswrapper[38936]: I0216 21:23:49.307910 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"] Feb 16 21:23:49.314818 master-0 kubenswrapper[38936]: W0216 21:23:49.314702 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46f9f317_a78e_4d18_b1c1_882631cfc6eb.slice/crio-45b3be3fe305d9669dd5870c24af6ff6fd509c22355554d98f17b1456597bad2 WatchSource:0}: Error finding container 45b3be3fe305d9669dd5870c24af6ff6fd509c22355554d98f17b1456597bad2: Status 404 returned error can't 
find the container with id 45b3be3fe305d9669dd5870c24af6ff6fd509c22355554d98f17b1456597bad2 Feb 16 21:23:49.750980 master-0 kubenswrapper[38936]: I0216 21:23:49.750934 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp" event={"ID":"46f9f317-a78e-4d18-b1c1-882631cfc6eb","Type":"ContainerStarted","Data":"45b3be3fe305d9669dd5870c24af6ff6fd509c22355554d98f17b1456597bad2"} Feb 16 21:23:49.815263 master-0 kubenswrapper[38936]: I0216 21:23:49.815173 38936 scope.go:117] "RemoveContainer" containerID="6dfa6b8d2b84acd49a7559619cbb2034fe2294937bd8d4e0f86679d02bd2078a" Feb 16 21:23:50.498594 master-0 kubenswrapper[38936]: I0216 21:23:50.498508 38936 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 16 21:23:50.498935 master-0 kubenswrapper[38936]: I0216 21:23:50.498871 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler" containerID="cri-o://6435ebb5f02081a9ce4ce936a293eb7bb3bd2de40c50e78a8a1e337141307f75" gracePeriod=30 Feb 16 21:23:50.499020 master-0 kubenswrapper[38936]: I0216 21:23:50.498918 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler-cert-syncer" containerID="cri-o://d608a5d9652a3c6ba32e1dcd56710fee04c37ee22144db45ecd5fe5c524c9a31" gracePeriod=30 Feb 16 21:23:50.499020 master-0 kubenswrapper[38936]: I0216 21:23:50.498915 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler-recovery-controller" 
containerID="cri-o://3ebe9b7d8ce03b2c6ab5c8d3215470f47595c89ae74952d5865ce15e1874a8ee" gracePeriod=30 Feb 16 21:23:50.501855 master-0 kubenswrapper[38936]: I0216 21:23:50.501606 38936 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 16 21:23:50.502675 master-0 kubenswrapper[38936]: E0216 21:23:50.502279 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler-cert-syncer" Feb 16 21:23:50.502675 master-0 kubenswrapper[38936]: I0216 21:23:50.502304 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler-cert-syncer" Feb 16 21:23:50.502675 master-0 kubenswrapper[38936]: E0216 21:23:50.502327 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler-recovery-controller" Feb 16 21:23:50.502675 master-0 kubenswrapper[38936]: I0216 21:23:50.502360 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler-recovery-controller" Feb 16 21:23:50.502675 master-0 kubenswrapper[38936]: E0216 21:23:50.502377 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler" Feb 16 21:23:50.502675 master-0 kubenswrapper[38936]: I0216 21:23:50.502388 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler" Feb 16 21:23:50.502675 master-0 kubenswrapper[38936]: E0216 21:23:50.502400 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="wait-for-host-port" Feb 16 21:23:50.502675 master-0 kubenswrapper[38936]: I0216 21:23:50.502416 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8fa563c7331931f00ce0006e522f0f1" 
containerName="wait-for-host-port" Feb 16 21:23:50.504803 master-0 kubenswrapper[38936]: I0216 21:23:50.502717 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="wait-for-host-port" Feb 16 21:23:50.504803 master-0 kubenswrapper[38936]: I0216 21:23:50.502779 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler-recovery-controller" Feb 16 21:23:50.504803 master-0 kubenswrapper[38936]: I0216 21:23:50.502796 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler" Feb 16 21:23:50.504803 master-0 kubenswrapper[38936]: I0216 21:23:50.502851 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler-cert-syncer" Feb 16 21:23:50.579758 master-0 kubenswrapper[38936]: I0216 21:23:50.579691 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:23:50.580129 master-0 kubenswrapper[38936]: I0216 21:23:50.579920 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:23:50.681463 master-0 kubenswrapper[38936]: I0216 21:23:50.681299 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:23:50.681964 master-0 kubenswrapper[38936]: I0216 21:23:50.681485 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:23:50.681964 master-0 kubenswrapper[38936]: I0216 21:23:50.681540 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:23:50.681964 master-0 kubenswrapper[38936]: I0216 21:23:50.681630 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:23:50.763418 master-0 kubenswrapper[38936]: I0216 21:23:50.763277 38936 generic.go:334] "Generic (PLEG): container finished" podID="5b69c32d-3b8d-44d6-8547-9e682d069266" containerID="78d192bb958fe17b5046a85c27f8d4b6856a2f491dd66e1f66156a74c8c8a8c3" exitCode=0 Feb 16 21:23:50.763418 master-0 kubenswrapper[38936]: I0216 21:23:50.763369 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" 
event={"ID":"5b69c32d-3b8d-44d6-8547-9e682d069266","Type":"ContainerDied","Data":"78d192bb958fe17b5046a85c27f8d4b6856a2f491dd66e1f66156a74c8c8a8c3"} Feb 16 21:23:50.768825 master-0 kubenswrapper[38936]: I0216 21:23:50.768770 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_b8fa563c7331931f00ce0006e522f0f1/kube-scheduler-cert-syncer/0.log" Feb 16 21:23:50.769774 master-0 kubenswrapper[38936]: I0216 21:23:50.769722 38936 generic.go:334] "Generic (PLEG): container finished" podID="b8fa563c7331931f00ce0006e522f0f1" containerID="3ebe9b7d8ce03b2c6ab5c8d3215470f47595c89ae74952d5865ce15e1874a8ee" exitCode=0 Feb 16 21:23:50.769774 master-0 kubenswrapper[38936]: I0216 21:23:50.769769 38936 generic.go:334] "Generic (PLEG): container finished" podID="b8fa563c7331931f00ce0006e522f0f1" containerID="d608a5d9652a3c6ba32e1dcd56710fee04c37ee22144db45ecd5fe5c524c9a31" exitCode=2 Feb 16 21:23:50.769914 master-0 kubenswrapper[38936]: I0216 21:23:50.769785 38936 generic.go:334] "Generic (PLEG): container finished" podID="b8fa563c7331931f00ce0006e522f0f1" containerID="6435ebb5f02081a9ce4ce936a293eb7bb3bd2de40c50e78a8a1e337141307f75" exitCode=0 Feb 16 21:23:50.784182 master-0 kubenswrapper[38936]: I0216 21:23:50.784058 38936 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="b8fa563c7331931f00ce0006e522f0f1" podUID="952766c3a88fd12345a552f1277199f9" Feb 16 21:23:52.320931 master-0 kubenswrapper[38936]: I0216 21:23:52.320855 38936 patch_prober.go:28] interesting pod/console-7dcddfd95-nldpw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Feb 16 21:23:52.321530 master-0 kubenswrapper[38936]: I0216 21:23:52.320935 38936 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-console/console-7dcddfd95-nldpw" podUID="503aa866-c355-434a-a39c-fa6072733ea8" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Feb 16 21:23:55.401470 master-0 kubenswrapper[38936]: I0216 21:23:55.401200 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 16 21:23:55.403200 master-0 kubenswrapper[38936]: I0216 21:23:55.402111 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 21:23:55.406720 master-0 kubenswrapper[38936]: I0216 21:23:55.406207 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-fzgzx" Feb 16 21:23:55.406720 master-0 kubenswrapper[38936]: I0216 21:23:55.406668 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 21:23:55.421136 master-0 kubenswrapper[38936]: I0216 21:23:55.421073 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 16 21:23:55.566314 master-0 kubenswrapper[38936]: I0216 21:23:55.566257 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d730a9d3-3e5b-4676-a707-8d2ed41502be-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"d730a9d3-3e5b-4676-a707-8d2ed41502be\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 21:23:55.566314 master-0 kubenswrapper[38936]: I0216 21:23:55.566305 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d730a9d3-3e5b-4676-a707-8d2ed41502be-kube-api-access\") pod \"installer-2-master-0\" (UID: 
\"d730a9d3-3e5b-4676-a707-8d2ed41502be\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 21:23:55.566314 master-0 kubenswrapper[38936]: I0216 21:23:55.566327 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d730a9d3-3e5b-4676-a707-8d2ed41502be-var-lock\") pod \"installer-2-master-0\" (UID: \"d730a9d3-3e5b-4676-a707-8d2ed41502be\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 21:23:55.668001 master-0 kubenswrapper[38936]: I0216 21:23:55.667871 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d730a9d3-3e5b-4676-a707-8d2ed41502be-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"d730a9d3-3e5b-4676-a707-8d2ed41502be\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 21:23:55.668001 master-0 kubenswrapper[38936]: I0216 21:23:55.667938 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d730a9d3-3e5b-4676-a707-8d2ed41502be-kube-api-access\") pod \"installer-2-master-0\" (UID: \"d730a9d3-3e5b-4676-a707-8d2ed41502be\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 21:23:55.668001 master-0 kubenswrapper[38936]: I0216 21:23:55.667969 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d730a9d3-3e5b-4676-a707-8d2ed41502be-var-lock\") pod \"installer-2-master-0\" (UID: \"d730a9d3-3e5b-4676-a707-8d2ed41502be\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 21:23:55.668348 master-0 kubenswrapper[38936]: I0216 21:23:55.668036 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d730a9d3-3e5b-4676-a707-8d2ed41502be-kubelet-dir\") pod \"installer-2-master-0\" (UID: 
\"d730a9d3-3e5b-4676-a707-8d2ed41502be\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 21:23:55.668348 master-0 kubenswrapper[38936]: I0216 21:23:55.668069 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d730a9d3-3e5b-4676-a707-8d2ed41502be-var-lock\") pod \"installer-2-master-0\" (UID: \"d730a9d3-3e5b-4676-a707-8d2ed41502be\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 21:23:55.687434 master-0 kubenswrapper[38936]: I0216 21:23:55.687379 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d730a9d3-3e5b-4676-a707-8d2ed41502be-kube-api-access\") pod \"installer-2-master-0\" (UID: \"d730a9d3-3e5b-4676-a707-8d2ed41502be\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 21:23:55.729252 master-0 kubenswrapper[38936]: I0216 21:23:55.729196 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 21:23:57.493059 master-0 kubenswrapper[38936]: I0216 21:23:57.492946 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5dbf689d64-pgglg" Feb 16 21:23:57.494853 master-0 kubenswrapper[38936]: I0216 21:23:57.494823 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5dbf689d64-pgglg" Feb 16 21:23:57.496815 master-0 kubenswrapper[38936]: I0216 21:23:57.496740 38936 patch_prober.go:28] interesting pod/console-5dbf689d64-pgglg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Feb 16 21:23:57.496927 master-0 kubenswrapper[38936]: I0216 21:23:57.496887 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5dbf689d64-pgglg" podUID="55ec365e-5ef8-4291-9c01-7713bdd6f294" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Feb 16 21:23:59.425818 master-0 kubenswrapper[38936]: I0216 21:23:59.425755 38936 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 16 21:23:59.426979 master-0 kubenswrapper[38936]: I0216 21:23:59.426099 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="72ee9e35c766aea904898f2e9f2ffaca" containerName="kube-controller-manager" containerID="cri-o://bdfde90f893f521a930ff809d7a19e8600359a70b3e19bbbef0735c23b65d26d" gracePeriod=30 Feb 16 21:23:59.426979 master-0 kubenswrapper[38936]: I0216 21:23:59.426118 38936 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="72ee9e35c766aea904898f2e9f2ffaca" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://bd383c7f3493b77aa39a71f0c59c6ca2af1cb84a3dcd17da7deffd0c9f13279e" gracePeriod=30 Feb 16 21:23:59.426979 master-0 kubenswrapper[38936]: I0216 21:23:59.426118 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="72ee9e35c766aea904898f2e9f2ffaca" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://0a662b88d01e2a6c7840550eedccdbaad4f0955066a41fc813a25bc7970213e5" gracePeriod=30 Feb 16 21:23:59.426979 master-0 kubenswrapper[38936]: I0216 21:23:59.426136 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="72ee9e35c766aea904898f2e9f2ffaca" containerName="cluster-policy-controller" containerID="cri-o://93e4248b433133e3c151d7b3b51df468e545cf503f72fd69fa418801f9123776" gracePeriod=30 Feb 16 21:23:59.427803 master-0 kubenswrapper[38936]: I0216 21:23:59.427144 38936 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 16 21:23:59.427803 master-0 kubenswrapper[38936]: E0216 21:23:59.427686 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72ee9e35c766aea904898f2e9f2ffaca" containerName="kube-controller-manager" Feb 16 21:23:59.427803 master-0 kubenswrapper[38936]: I0216 21:23:59.427712 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ee9e35c766aea904898f2e9f2ffaca" containerName="kube-controller-manager" Feb 16 21:23:59.427803 master-0 kubenswrapper[38936]: E0216 21:23:59.427733 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72ee9e35c766aea904898f2e9f2ffaca" containerName="kube-controller-manager-recovery-controller" Feb 16 
21:23:59.427803 master-0 kubenswrapper[38936]: I0216 21:23:59.427744 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ee9e35c766aea904898f2e9f2ffaca" containerName="kube-controller-manager-recovery-controller" Feb 16 21:23:59.427803 master-0 kubenswrapper[38936]: E0216 21:23:59.427763 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72ee9e35c766aea904898f2e9f2ffaca" containerName="kube-controller-manager-cert-syncer" Feb 16 21:23:59.427803 master-0 kubenswrapper[38936]: I0216 21:23:59.427775 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ee9e35c766aea904898f2e9f2ffaca" containerName="kube-controller-manager-cert-syncer" Feb 16 21:23:59.427803 master-0 kubenswrapper[38936]: E0216 21:23:59.427810 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72ee9e35c766aea904898f2e9f2ffaca" containerName="cluster-policy-controller" Feb 16 21:23:59.428200 master-0 kubenswrapper[38936]: I0216 21:23:59.427821 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ee9e35c766aea904898f2e9f2ffaca" containerName="cluster-policy-controller" Feb 16 21:23:59.428200 master-0 kubenswrapper[38936]: I0216 21:23:59.428043 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="72ee9e35c766aea904898f2e9f2ffaca" containerName="kube-controller-manager-recovery-controller" Feb 16 21:23:59.428200 master-0 kubenswrapper[38936]: I0216 21:23:59.428063 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="72ee9e35c766aea904898f2e9f2ffaca" containerName="cluster-policy-controller" Feb 16 21:23:59.428200 master-0 kubenswrapper[38936]: I0216 21:23:59.428084 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="72ee9e35c766aea904898f2e9f2ffaca" containerName="kube-controller-manager-cert-syncer" Feb 16 21:23:59.428200 master-0 kubenswrapper[38936]: I0216 21:23:59.428100 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="72ee9e35c766aea904898f2e9f2ffaca" 
containerName="kube-controller-manager" Feb 16 21:23:59.530076 master-0 kubenswrapper[38936]: I0216 21:23:59.530011 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/fc19ea17c4f595b135412c661d90b9a7-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"fc19ea17c4f595b135412c661d90b9a7\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:23:59.530220 master-0 kubenswrapper[38936]: I0216 21:23:59.530103 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/fc19ea17c4f595b135412c661d90b9a7-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"fc19ea17c4f595b135412c661d90b9a7\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:23:59.631770 master-0 kubenswrapper[38936]: I0216 21:23:59.631724 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/fc19ea17c4f595b135412c661d90b9a7-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"fc19ea17c4f595b135412c661d90b9a7\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:23:59.631878 master-0 kubenswrapper[38936]: I0216 21:23:59.631805 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/fc19ea17c4f595b135412c661d90b9a7-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"fc19ea17c4f595b135412c661d90b9a7\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:23:59.631920 master-0 kubenswrapper[38936]: I0216 21:23:59.631862 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/fc19ea17c4f595b135412c661d90b9a7-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"fc19ea17c4f595b135412c661d90b9a7\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:23:59.632012 master-0 kubenswrapper[38936]: I0216 21:23:59.631965 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/fc19ea17c4f595b135412c661d90b9a7-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"fc19ea17c4f595b135412c661d90b9a7\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:23:59.848559 master-0 kubenswrapper[38936]: I0216 21:23:59.848459 38936 generic.go:334] "Generic (PLEG): container finished" podID="ff084640-8e23-45e8-9d0b-6aa3b030c51f" containerID="df15826c9f58eefd45a84d42353553de2b39c005890ff3061c9c0aea9f1e2f96" exitCode=0 Feb 16 21:23:59.848559 master-0 kubenswrapper[38936]: I0216 21:23:59.848530 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"ff084640-8e23-45e8-9d0b-6aa3b030c51f","Type":"ContainerDied","Data":"df15826c9f58eefd45a84d42353553de2b39c005890ff3061c9c0aea9f1e2f96"} Feb 16 21:23:59.851720 master-0 kubenswrapper[38936]: I0216 21:23:59.851633 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_72ee9e35c766aea904898f2e9f2ffaca/kube-controller-manager-cert-syncer/0.log" Feb 16 21:23:59.852205 master-0 kubenswrapper[38936]: I0216 21:23:59.852163 38936 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="72ee9e35c766aea904898f2e9f2ffaca" podUID="fc19ea17c4f595b135412c661d90b9a7" Feb 16 21:23:59.852640 master-0 kubenswrapper[38936]: I0216 21:23:59.852598 38936 generic.go:334] "Generic (PLEG): container finished" 
podID="72ee9e35c766aea904898f2e9f2ffaca" containerID="0a662b88d01e2a6c7840550eedccdbaad4f0955066a41fc813a25bc7970213e5" exitCode=0 Feb 16 21:23:59.852640 master-0 kubenswrapper[38936]: I0216 21:23:59.852628 38936 generic.go:334] "Generic (PLEG): container finished" podID="72ee9e35c766aea904898f2e9f2ffaca" containerID="bd383c7f3493b77aa39a71f0c59c6ca2af1cb84a3dcd17da7deffd0c9f13279e" exitCode=2 Feb 16 21:23:59.852771 master-0 kubenswrapper[38936]: I0216 21:23:59.852642 38936 generic.go:334] "Generic (PLEG): container finished" podID="72ee9e35c766aea904898f2e9f2ffaca" containerID="93e4248b433133e3c151d7b3b51df468e545cf503f72fd69fa418801f9123776" exitCode=0 Feb 16 21:23:59.852771 master-0 kubenswrapper[38936]: I0216 21:23:59.852672 38936 generic.go:334] "Generic (PLEG): container finished" podID="72ee9e35c766aea904898f2e9f2ffaca" containerID="bdfde90f893f521a930ff809d7a19e8600359a70b3e19bbbef0735c23b65d26d" exitCode=0 Feb 16 21:24:02.320739 master-0 kubenswrapper[38936]: I0216 21:24:02.320626 38936 patch_prober.go:28] interesting pod/console-7dcddfd95-nldpw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Feb 16 21:24:02.322020 master-0 kubenswrapper[38936]: I0216 21:24:02.321893 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7dcddfd95-nldpw" podUID="503aa866-c355-434a-a39c-fa6072733ea8" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Feb 16 21:24:05.213847 master-0 kubenswrapper[38936]: I0216 21:24:05.212564 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 16 21:24:06.912574 master-0 kubenswrapper[38936]: I0216 21:24:06.912460 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 21:24:06.916939 master-0 kubenswrapper[38936]: I0216 21:24:06.916860 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"ff084640-8e23-45e8-9d0b-6aa3b030c51f","Type":"ContainerDied","Data":"03eb6bd070b6a14b651c7be05b265195ef87827bc61bc2b2e1baa12783467bea"} Feb 16 21:24:06.917024 master-0 kubenswrapper[38936]: I0216 21:24:06.916951 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03eb6bd070b6a14b651c7be05b265195ef87827bc61bc2b2e1baa12783467bea" Feb 16 21:24:06.917024 master-0 kubenswrapper[38936]: I0216 21:24:06.916948 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 21:24:07.045886 master-0 kubenswrapper[38936]: I0216 21:24:07.045808 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff084640-8e23-45e8-9d0b-6aa3b030c51f-kubelet-dir\") pod \"ff084640-8e23-45e8-9d0b-6aa3b030c51f\" (UID: \"ff084640-8e23-45e8-9d0b-6aa3b030c51f\") " Feb 16 21:24:07.046037 master-0 kubenswrapper[38936]: I0216 21:24:07.046021 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff084640-8e23-45e8-9d0b-6aa3b030c51f-kube-api-access\") pod \"ff084640-8e23-45e8-9d0b-6aa3b030c51f\" (UID: \"ff084640-8e23-45e8-9d0b-6aa3b030c51f\") " Feb 16 21:24:07.046189 master-0 kubenswrapper[38936]: I0216 21:24:07.046108 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff084640-8e23-45e8-9d0b-6aa3b030c51f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ff084640-8e23-45e8-9d0b-6aa3b030c51f" (UID: "ff084640-8e23-45e8-9d0b-6aa3b030c51f"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:24:07.046240 master-0 kubenswrapper[38936]: I0216 21:24:07.046194 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff084640-8e23-45e8-9d0b-6aa3b030c51f-var-lock\") pod \"ff084640-8e23-45e8-9d0b-6aa3b030c51f\" (UID: \"ff084640-8e23-45e8-9d0b-6aa3b030c51f\") " Feb 16 21:24:07.046281 master-0 kubenswrapper[38936]: I0216 21:24:07.046231 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff084640-8e23-45e8-9d0b-6aa3b030c51f-var-lock" (OuterVolumeSpecName: "var-lock") pod "ff084640-8e23-45e8-9d0b-6aa3b030c51f" (UID: "ff084640-8e23-45e8-9d0b-6aa3b030c51f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:24:07.046788 master-0 kubenswrapper[38936]: I0216 21:24:07.046746 38936 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff084640-8e23-45e8-9d0b-6aa3b030c51f-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 21:24:07.046788 master-0 kubenswrapper[38936]: I0216 21:24:07.046778 38936 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff084640-8e23-45e8-9d0b-6aa3b030c51f-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:24:07.050194 master-0 kubenswrapper[38936]: I0216 21:24:07.050088 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff084640-8e23-45e8-9d0b-6aa3b030c51f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ff084640-8e23-45e8-9d0b-6aa3b030c51f" (UID: "ff084640-8e23-45e8-9d0b-6aa3b030c51f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:24:07.148412 master-0 kubenswrapper[38936]: I0216 21:24:07.148343 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff084640-8e23-45e8-9d0b-6aa3b030c51f-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 21:24:07.494689 master-0 kubenswrapper[38936]: I0216 21:24:07.494532 38936 patch_prober.go:28] interesting pod/console-5dbf689d64-pgglg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Feb 16 21:24:07.494689 master-0 kubenswrapper[38936]: I0216 21:24:07.494603 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5dbf689d64-pgglg" podUID="55ec365e-5ef8-4291-9c01-7713bdd6f294" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Feb 16 21:24:08.196484 master-0 kubenswrapper[38936]: I0216 21:24:08.196431 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 16 21:24:08.202817 master-0 kubenswrapper[38936]: I0216 21:24:08.202491 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_b8fa563c7331931f00ce0006e522f0f1/kube-scheduler-cert-syncer/0.log" Feb 16 21:24:08.203916 master-0 kubenswrapper[38936]: I0216 21:24:08.203844 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:24:08.221722 master-0 kubenswrapper[38936]: I0216 21:24:08.221636 38936 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="b8fa563c7331931f00ce0006e522f0f1" podUID="952766c3a88fd12345a552f1277199f9" Feb 16 21:24:08.270532 master-0 kubenswrapper[38936]: I0216 21:24:08.270444 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b69c32d-3b8d-44d6-8547-9e682d069266-kube-api-access\") pod \"5b69c32d-3b8d-44d6-8547-9e682d069266\" (UID: \"5b69c32d-3b8d-44d6-8547-9e682d069266\") " Feb 16 21:24:08.270532 master-0 kubenswrapper[38936]: I0216 21:24:08.270534 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b69c32d-3b8d-44d6-8547-9e682d069266-kubelet-dir\") pod \"5b69c32d-3b8d-44d6-8547-9e682d069266\" (UID: \"5b69c32d-3b8d-44d6-8547-9e682d069266\") " Feb 16 21:24:08.270947 master-0 kubenswrapper[38936]: I0216 21:24:08.270914 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"b8fa563c7331931f00ce0006e522f0f1\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " Feb 16 21:24:08.271001 master-0 kubenswrapper[38936]: I0216 21:24:08.270956 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"b8fa563c7331931f00ce0006e522f0f1\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " Feb 16 21:24:08.271037 master-0 kubenswrapper[38936]: I0216 21:24:08.271026 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b69c32d-3b8d-44d6-8547-9e682d069266-var-lock\") pod \"5b69c32d-3b8d-44d6-8547-9e682d069266\" (UID: \"5b69c32d-3b8d-44d6-8547-9e682d069266\") " Feb 16 21:24:08.271158 master-0 kubenswrapper[38936]: I0216 21:24:08.271110 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "b8fa563c7331931f00ce0006e522f0f1" (UID: "b8fa563c7331931f00ce0006e522f0f1"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:24:08.271197 master-0 kubenswrapper[38936]: I0216 21:24:08.271172 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b69c32d-3b8d-44d6-8547-9e682d069266-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5b69c32d-3b8d-44d6-8547-9e682d069266" (UID: "5b69c32d-3b8d-44d6-8547-9e682d069266"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:24:08.271230 master-0 kubenswrapper[38936]: I0216 21:24:08.271195 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "b8fa563c7331931f00ce0006e522f0f1" (UID: "b8fa563c7331931f00ce0006e522f0f1"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:24:08.271333 master-0 kubenswrapper[38936]: I0216 21:24:08.271302 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b69c32d-3b8d-44d6-8547-9e682d069266-var-lock" (OuterVolumeSpecName: "var-lock") pod "5b69c32d-3b8d-44d6-8547-9e682d069266" (UID: "5b69c32d-3b8d-44d6-8547-9e682d069266"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:24:08.271867 master-0 kubenswrapper[38936]: I0216 21:24:08.271837 38936 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b69c32d-3b8d-44d6-8547-9e682d069266-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 21:24:08.271910 master-0 kubenswrapper[38936]: I0216 21:24:08.271865 38936 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b69c32d-3b8d-44d6-8547-9e682d069266-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:24:08.271910 master-0 kubenswrapper[38936]: I0216 21:24:08.271879 38936 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:24:08.271910 master-0 kubenswrapper[38936]: I0216 21:24:08.271891 38936 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:24:08.273935 master-0 kubenswrapper[38936]: I0216 21:24:08.273897 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_72ee9e35c766aea904898f2e9f2ffaca/kube-controller-manager-cert-syncer/0.log" Feb 16 21:24:08.274268 master-0 kubenswrapper[38936]: I0216 21:24:08.274215 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b69c32d-3b8d-44d6-8547-9e682d069266-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5b69c32d-3b8d-44d6-8547-9e682d069266" (UID: "5b69c32d-3b8d-44d6-8547-9e682d069266"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:24:08.274964 master-0 kubenswrapper[38936]: I0216 21:24:08.274939 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:24:08.278842 master-0 kubenswrapper[38936]: I0216 21:24:08.278807 38936 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="72ee9e35c766aea904898f2e9f2ffaca" podUID="fc19ea17c4f595b135412c661d90b9a7" Feb 16 21:24:08.373060 master-0 kubenswrapper[38936]: I0216 21:24:08.372983 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/72ee9e35c766aea904898f2e9f2ffaca-resource-dir\") pod \"72ee9e35c766aea904898f2e9f2ffaca\" (UID: \"72ee9e35c766aea904898f2e9f2ffaca\") " Feb 16 21:24:08.373060 master-0 kubenswrapper[38936]: I0216 21:24:08.373046 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/72ee9e35c766aea904898f2e9f2ffaca-cert-dir\") pod \"72ee9e35c766aea904898f2e9f2ffaca\" (UID: \"72ee9e35c766aea904898f2e9f2ffaca\") " Feb 16 21:24:08.373397 master-0 kubenswrapper[38936]: I0216 21:24:08.373268 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72ee9e35c766aea904898f2e9f2ffaca-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "72ee9e35c766aea904898f2e9f2ffaca" (UID: "72ee9e35c766aea904898f2e9f2ffaca"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:24:08.373397 master-0 kubenswrapper[38936]: I0216 21:24:08.373333 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5b69c32d-3b8d-44d6-8547-9e682d069266-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 21:24:08.373525 master-0 kubenswrapper[38936]: I0216 21:24:08.373400 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72ee9e35c766aea904898f2e9f2ffaca-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "72ee9e35c766aea904898f2e9f2ffaca" (UID: "72ee9e35c766aea904898f2e9f2ffaca"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:24:08.479250 master-0 kubenswrapper[38936]: I0216 21:24:08.478121 38936 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/72ee9e35c766aea904898f2e9f2ffaca-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:24:08.479250 master-0 kubenswrapper[38936]: I0216 21:24:08.478160 38936 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/72ee9e35c766aea904898f2e9f2ffaca-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:24:08.622946 master-0 kubenswrapper[38936]: I0216 21:24:08.622881 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 16 21:24:08.626938 master-0 kubenswrapper[38936]: W0216 21:24:08.626888 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd730a9d3_3e5b_4676_a707_8d2ed41502be.slice/crio-4fa54033dcbe59cfe4901d6785a100b91aa2a655a919634b4b4ff49dbf42b27b WatchSource:0}: Error finding container 4fa54033dcbe59cfe4901d6785a100b91aa2a655a919634b4b4ff49dbf42b27b: Status 404 returned error can't find the container with id 
4fa54033dcbe59cfe4901d6785a100b91aa2a655a919634b4b4ff49dbf42b27b Feb 16 21:24:08.943182 master-0 kubenswrapper[38936]: I0216 21:24:08.943080 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_b8fa563c7331931f00ce0006e522f0f1/kube-scheduler-cert-syncer/0.log" Feb 16 21:24:08.944466 master-0 kubenswrapper[38936]: I0216 21:24:08.944206 38936 scope.go:117] "RemoveContainer" containerID="3ebe9b7d8ce03b2c6ab5c8d3215470f47595c89ae74952d5865ce15e1874a8ee" Feb 16 21:24:08.944466 master-0 kubenswrapper[38936]: I0216 21:24:08.944223 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:24:08.946228 master-0 kubenswrapper[38936]: I0216 21:24:08.946204 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_72ee9e35c766aea904898f2e9f2ffaca/kube-controller-manager-cert-syncer/0.log" Feb 16 21:24:08.947677 master-0 kubenswrapper[38936]: I0216 21:24:08.947468 38936 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="b8fa563c7331931f00ce0006e522f0f1" podUID="952766c3a88fd12345a552f1277199f9" Feb 16 21:24:08.947677 master-0 kubenswrapper[38936]: I0216 21:24:08.947640 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:24:08.950176 master-0 kubenswrapper[38936]: I0216 21:24:08.950125 38936 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="72ee9e35c766aea904898f2e9f2ffaca" podUID="fc19ea17c4f595b135412c661d90b9a7" Feb 16 21:24:08.951205 master-0 kubenswrapper[38936]: I0216 21:24:08.950462 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp" event={"ID":"46f9f317-a78e-4d18-b1c1-882631cfc6eb","Type":"ContainerStarted","Data":"e18816755558e6495af87791dac2fcd00a9c915b58f12fb7787b0658f8e2f642"} Feb 16 21:24:08.951205 master-0 kubenswrapper[38936]: I0216 21:24:08.950889 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp" Feb 16 21:24:08.954708 master-0 kubenswrapper[38936]: I0216 21:24:08.953287 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"5b69c32d-3b8d-44d6-8547-9e682d069266","Type":"ContainerDied","Data":"411a960f6aeb9c455e10a191147d5299c945d4a1bf89b19258cad3d8ada4d280"} Feb 16 21:24:08.954708 master-0 kubenswrapper[38936]: I0216 21:24:08.953323 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="411a960f6aeb9c455e10a191147d5299c945d4a1bf89b19258cad3d8ada4d280" Feb 16 21:24:08.954708 master-0 kubenswrapper[38936]: I0216 21:24:08.953380 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 16 21:24:08.957205 master-0 kubenswrapper[38936]: I0216 21:24:08.957106 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"d730a9d3-3e5b-4676-a707-8d2ed41502be","Type":"ContainerStarted","Data":"4fa54033dcbe59cfe4901d6785a100b91aa2a655a919634b4b4ff49dbf42b27b"} Feb 16 21:24:08.957205 master-0 kubenswrapper[38936]: I0216 21:24:08.957127 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-2-master-0" podUID="d730a9d3-3e5b-4676-a707-8d2ed41502be" containerName="installer" containerID="cri-o://9b3b1a3ef50c5958a54eb15cef494f2d653137ad8b30b4100b19790bcf34ae26" gracePeriod=30 Feb 16 21:24:08.957205 master-0 kubenswrapper[38936]: I0216 21:24:08.957178 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp" Feb 16 21:24:08.959243 master-0 kubenswrapper[38936]: I0216 21:24:08.959174 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-dcd7b7d95-xzx78" event={"ID":"42c29d0d-12cf-4737-83ed-1dcfe74b2b26","Type":"ContainerStarted","Data":"0fe6df6a86de8bc2cc983805e86bbd646bd23b7858d68860d798331258b8757e"} Feb 16 21:24:08.959512 master-0 kubenswrapper[38936]: I0216 21:24:08.959477 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-dcd7b7d95-xzx78" Feb 16 21:24:08.961630 master-0 kubenswrapper[38936]: I0216 21:24:08.961589 38936 patch_prober.go:28] interesting pod/downloads-dcd7b7d95-xzx78 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" start-of-body= Feb 16 21:24:08.961762 master-0 kubenswrapper[38936]: I0216 21:24:08.961681 38936 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console/downloads-dcd7b7d95-xzx78" podUID="42c29d0d-12cf-4737-83ed-1dcfe74b2b26" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" Feb 16 21:24:08.964769 master-0 kubenswrapper[38936]: I0216 21:24:08.964631 38936 scope.go:117] "RemoveContainer" containerID="d608a5d9652a3c6ba32e1dcd56710fee04c37ee22144db45ecd5fe5c524c9a31" Feb 16 21:24:08.980476 master-0 kubenswrapper[38936]: I0216 21:24:08.979924 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp" podStartSLOduration=4.131943882 podStartE2EDuration="22.979895277s" podCreationTimestamp="2026-02-16 21:23:46 +0000 UTC" firstStartedPulling="2026-02-16 21:23:49.316263747 +0000 UTC m=+59.668267109" lastFinishedPulling="2026-02-16 21:24:08.164215142 +0000 UTC m=+78.516218504" observedRunningTime="2026-02-16 21:24:08.978069359 +0000 UTC m=+79.330072741" watchObservedRunningTime="2026-02-16 21:24:08.979895277 +0000 UTC m=+79.331898639" Feb 16 21:24:08.989528 master-0 kubenswrapper[38936]: I0216 21:24:08.989484 38936 scope.go:117] "RemoveContainer" containerID="6435ebb5f02081a9ce4ce936a293eb7bb3bd2de40c50e78a8a1e337141307f75" Feb 16 21:24:09.001838 master-0 kubenswrapper[38936]: I0216 21:24:09.001706 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=14.00168499 podStartE2EDuration="14.00168499s" podCreationTimestamp="2026-02-16 21:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:24:08.996217724 +0000 UTC m=+79.348221096" watchObservedRunningTime="2026-02-16 21:24:09.00168499 +0000 UTC m=+79.353688352" Feb 16 21:24:09.009892 master-0 kubenswrapper[38936]: I0216 21:24:09.009839 38936 scope.go:117] 
"RemoveContainer" containerID="432794b20c117ef5563701790110e26447eca7921c053c44497fb8bd396c6901" Feb 16 21:24:09.027153 master-0 kubenswrapper[38936]: I0216 21:24:09.026861 38936 scope.go:117] "RemoveContainer" containerID="0a662b88d01e2a6c7840550eedccdbaad4f0955066a41fc813a25bc7970213e5" Feb 16 21:24:09.038361 master-0 kubenswrapper[38936]: I0216 21:24:09.038223 38936 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="b8fa563c7331931f00ce0006e522f0f1" podUID="952766c3a88fd12345a552f1277199f9" Feb 16 21:24:09.040754 master-0 kubenswrapper[38936]: I0216 21:24:09.040604 38936 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="72ee9e35c766aea904898f2e9f2ffaca" podUID="fc19ea17c4f595b135412c661d90b9a7" Feb 16 21:24:09.040907 master-0 kubenswrapper[38936]: I0216 21:24:09.040846 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-dcd7b7d95-xzx78" podStartSLOduration=2.376721556 podStartE2EDuration="48.040822157s" podCreationTimestamp="2026-02-16 21:23:21 +0000 UTC" firstStartedPulling="2026-02-16 21:23:22.626632363 +0000 UTC m=+32.978635725" lastFinishedPulling="2026-02-16 21:24:08.290732944 +0000 UTC m=+78.642736326" observedRunningTime="2026-02-16 21:24:09.036197932 +0000 UTC m=+79.388201304" watchObservedRunningTime="2026-02-16 21:24:09.040822157 +0000 UTC m=+79.392825519" Feb 16 21:24:09.045469 master-0 kubenswrapper[38936]: I0216 21:24:09.045438 38936 scope.go:117] "RemoveContainer" containerID="bd383c7f3493b77aa39a71f0c59c6ca2af1cb84a3dcd17da7deffd0c9f13279e" Feb 16 21:24:09.059170 master-0 kubenswrapper[38936]: I0216 21:24:09.059144 38936 scope.go:117] "RemoveContainer" containerID="93e4248b433133e3c151d7b3b51df468e545cf503f72fd69fa418801f9123776" Feb 16 21:24:09.072847 master-0 
kubenswrapper[38936]: I0216 21:24:09.072815 38936 scope.go:117] "RemoveContainer" containerID="bdfde90f893f521a930ff809d7a19e8600359a70b3e19bbbef0735c23b65d26d" Feb 16 21:24:09.883333 master-0 kubenswrapper[38936]: I0216 21:24:09.883239 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72ee9e35c766aea904898f2e9f2ffaca" path="/var/lib/kubelet/pods/72ee9e35c766aea904898f2e9f2ffaca/volumes" Feb 16 21:24:09.884234 master-0 kubenswrapper[38936]: I0216 21:24:09.883850 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8fa563c7331931f00ce0006e522f0f1" path="/var/lib/kubelet/pods/b8fa563c7331931f00ce0006e522f0f1/volumes" Feb 16 21:24:09.968695 master-0 kubenswrapper[38936]: I0216 21:24:09.968577 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"d730a9d3-3e5b-4676-a707-8d2ed41502be","Type":"ContainerStarted","Data":"9b3b1a3ef50c5958a54eb15cef494f2d653137ad8b30b4100b19790bcf34ae26"} Feb 16 21:24:09.973316 master-0 kubenswrapper[38936]: I0216 21:24:09.973258 38936 patch_prober.go:28] interesting pod/downloads-dcd7b7d95-xzx78 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" start-of-body= Feb 16 21:24:09.973489 master-0 kubenswrapper[38936]: I0216 21:24:09.973330 38936 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-dcd7b7d95-xzx78" podUID="42c29d0d-12cf-4737-83ed-1dcfe74b2b26" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" Feb 16 21:24:10.799975 master-0 kubenswrapper[38936]: I0216 21:24:10.799839 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 16 21:24:10.800503 master-0 kubenswrapper[38936]: E0216 21:24:10.800140 38936 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b69c32d-3b8d-44d6-8547-9e682d069266" containerName="installer" Feb 16 21:24:10.800503 master-0 kubenswrapper[38936]: I0216 21:24:10.800157 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b69c32d-3b8d-44d6-8547-9e682d069266" containerName="installer" Feb 16 21:24:10.800503 master-0 kubenswrapper[38936]: E0216 21:24:10.800209 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff084640-8e23-45e8-9d0b-6aa3b030c51f" containerName="installer" Feb 16 21:24:10.800503 master-0 kubenswrapper[38936]: I0216 21:24:10.800218 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff084640-8e23-45e8-9d0b-6aa3b030c51f" containerName="installer" Feb 16 21:24:10.800503 master-0 kubenswrapper[38936]: I0216 21:24:10.800373 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b69c32d-3b8d-44d6-8547-9e682d069266" containerName="installer" Feb 16 21:24:10.800503 master-0 kubenswrapper[38936]: I0216 21:24:10.800396 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff084640-8e23-45e8-9d0b-6aa3b030c51f" containerName="installer" Feb 16 21:24:10.801214 master-0 kubenswrapper[38936]: I0216 21:24:10.800948 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 21:24:10.817784 master-0 kubenswrapper[38936]: I0216 21:24:10.817699 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 16 21:24:10.919178 master-0 kubenswrapper[38936]: I0216 21:24:10.919071 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17667cac-8179-4942-9e5c-be45e6b56a96-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"17667cac-8179-4942-9e5c-be45e6b56a96\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 21:24:10.920053 master-0 kubenswrapper[38936]: I0216 21:24:10.919266 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17667cac-8179-4942-9e5c-be45e6b56a96-kube-api-access\") pod \"installer-3-master-0\" (UID: \"17667cac-8179-4942-9e5c-be45e6b56a96\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 21:24:10.920053 master-0 kubenswrapper[38936]: I0216 21:24:10.919674 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/17667cac-8179-4942-9e5c-be45e6b56a96-var-lock\") pod \"installer-3-master-0\" (UID: \"17667cac-8179-4942-9e5c-be45e6b56a96\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 21:24:11.021526 master-0 kubenswrapper[38936]: I0216 21:24:11.021408 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17667cac-8179-4942-9e5c-be45e6b56a96-kube-api-access\") pod \"installer-3-master-0\" (UID: \"17667cac-8179-4942-9e5c-be45e6b56a96\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 21:24:11.021866 master-0 kubenswrapper[38936]: I0216 21:24:11.021629 38936 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/17667cac-8179-4942-9e5c-be45e6b56a96-var-lock\") pod \"installer-3-master-0\" (UID: \"17667cac-8179-4942-9e5c-be45e6b56a96\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 21:24:11.021866 master-0 kubenswrapper[38936]: I0216 21:24:11.021722 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/17667cac-8179-4942-9e5c-be45e6b56a96-var-lock\") pod \"installer-3-master-0\" (UID: \"17667cac-8179-4942-9e5c-be45e6b56a96\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 21:24:11.021866 master-0 kubenswrapper[38936]: I0216 21:24:11.021741 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17667cac-8179-4942-9e5c-be45e6b56a96-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"17667cac-8179-4942-9e5c-be45e6b56a96\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 21:24:11.021866 master-0 kubenswrapper[38936]: I0216 21:24:11.021782 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17667cac-8179-4942-9e5c-be45e6b56a96-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"17667cac-8179-4942-9e5c-be45e6b56a96\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 21:24:11.049356 master-0 kubenswrapper[38936]: I0216 21:24:11.049270 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17667cac-8179-4942-9e5c-be45e6b56a96-kube-api-access\") pod \"installer-3-master-0\" (UID: \"17667cac-8179-4942-9e5c-be45e6b56a96\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 21:24:11.189700 master-0 kubenswrapper[38936]: I0216 21:24:11.189515 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 21:24:11.635178 master-0 kubenswrapper[38936]: I0216 21:24:11.635095 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 16 21:24:11.995479 master-0 kubenswrapper[38936]: I0216 21:24:11.995026 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"17667cac-8179-4942-9e5c-be45e6b56a96","Type":"ContainerStarted","Data":"29345ca9988c94326569923a2bc8c5bc226d9073c26ceb13b5d43e32794504e1"} Feb 16 21:24:12.118146 master-0 kubenswrapper[38936]: I0216 21:24:12.117528 38936 patch_prober.go:28] interesting pod/downloads-dcd7b7d95-xzx78 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" start-of-body= Feb 16 21:24:12.118146 master-0 kubenswrapper[38936]: I0216 21:24:12.117544 38936 patch_prober.go:28] interesting pod/downloads-dcd7b7d95-xzx78 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" start-of-body= Feb 16 21:24:12.118146 master-0 kubenswrapper[38936]: I0216 21:24:12.117670 38936 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-dcd7b7d95-xzx78" podUID="42c29d0d-12cf-4737-83ed-1dcfe74b2b26" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" Feb 16 21:24:12.118146 master-0 kubenswrapper[38936]: I0216 21:24:12.117719 38936 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-dcd7b7d95-xzx78" podUID="42c29d0d-12cf-4737-83ed-1dcfe74b2b26" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.90:8080/\": dial tcp 
10.128.0.90:8080: connect: connection refused" Feb 16 21:24:12.146789 master-0 kubenswrapper[38936]: I0216 21:24:12.146696 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-84f5b46974-6pcrm" podUID="6f9ac325-9b82-4250-922d-40265fff9322" containerName="console" containerID="cri-o://c3706c2c027ec630a5b3a0e913cc73b74286e77fe9eaa2bd99b0f9ba98dd9a19" gracePeriod=15 Feb 16 21:24:12.321637 master-0 kubenswrapper[38936]: I0216 21:24:12.321517 38936 patch_prober.go:28] interesting pod/console-7dcddfd95-nldpw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Feb 16 21:24:12.321864 master-0 kubenswrapper[38936]: I0216 21:24:12.321699 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7dcddfd95-nldpw" podUID="503aa866-c355-434a-a39c-fa6072733ea8" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Feb 16 21:24:12.660095 master-0 kubenswrapper[38936]: I0216 21:24:12.660012 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-84f5b46974-6pcrm_6f9ac325-9b82-4250-922d-40265fff9322/console/0.log" Feb 16 21:24:12.660381 master-0 kubenswrapper[38936]: I0216 21:24:12.660113 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84f5b46974-6pcrm" Feb 16 21:24:12.749336 master-0 kubenswrapper[38936]: I0216 21:24:12.749261 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6f9ac325-9b82-4250-922d-40265fff9322-console-config\") pod \"6f9ac325-9b82-4250-922d-40265fff9322\" (UID: \"6f9ac325-9b82-4250-922d-40265fff9322\") " Feb 16 21:24:12.749566 master-0 kubenswrapper[38936]: I0216 21:24:12.749350 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqjlq\" (UniqueName: \"kubernetes.io/projected/6f9ac325-9b82-4250-922d-40265fff9322-kube-api-access-gqjlq\") pod \"6f9ac325-9b82-4250-922d-40265fff9322\" (UID: \"6f9ac325-9b82-4250-922d-40265fff9322\") " Feb 16 21:24:12.749566 master-0 kubenswrapper[38936]: I0216 21:24:12.749401 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6f9ac325-9b82-4250-922d-40265fff9322-console-serving-cert\") pod \"6f9ac325-9b82-4250-922d-40265fff9322\" (UID: \"6f9ac325-9b82-4250-922d-40265fff9322\") " Feb 16 21:24:12.749566 master-0 kubenswrapper[38936]: I0216 21:24:12.749430 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6f9ac325-9b82-4250-922d-40265fff9322-service-ca\") pod \"6f9ac325-9b82-4250-922d-40265fff9322\" (UID: \"6f9ac325-9b82-4250-922d-40265fff9322\") " Feb 16 21:24:12.749566 master-0 kubenswrapper[38936]: I0216 21:24:12.749519 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6f9ac325-9b82-4250-922d-40265fff9322-console-oauth-config\") pod \"6f9ac325-9b82-4250-922d-40265fff9322\" (UID: \"6f9ac325-9b82-4250-922d-40265fff9322\") " Feb 16 21:24:12.749566 master-0 kubenswrapper[38936]: I0216 
21:24:12.749541 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6f9ac325-9b82-4250-922d-40265fff9322-oauth-serving-cert\") pod \"6f9ac325-9b82-4250-922d-40265fff9322\" (UID: \"6f9ac325-9b82-4250-922d-40265fff9322\") " Feb 16 21:24:12.750730 master-0 kubenswrapper[38936]: I0216 21:24:12.750682 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f9ac325-9b82-4250-922d-40265fff9322-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6f9ac325-9b82-4250-922d-40265fff9322" (UID: "6f9ac325-9b82-4250-922d-40265fff9322"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:24:12.751176 master-0 kubenswrapper[38936]: I0216 21:24:12.751132 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f9ac325-9b82-4250-922d-40265fff9322-console-config" (OuterVolumeSpecName: "console-config") pod "6f9ac325-9b82-4250-922d-40265fff9322" (UID: "6f9ac325-9b82-4250-922d-40265fff9322"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:24:12.751328 master-0 kubenswrapper[38936]: I0216 21:24:12.751286 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f9ac325-9b82-4250-922d-40265fff9322-service-ca" (OuterVolumeSpecName: "service-ca") pod "6f9ac325-9b82-4250-922d-40265fff9322" (UID: "6f9ac325-9b82-4250-922d-40265fff9322"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:24:12.754697 master-0 kubenswrapper[38936]: I0216 21:24:12.754054 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f9ac325-9b82-4250-922d-40265fff9322-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6f9ac325-9b82-4250-922d-40265fff9322" (UID: "6f9ac325-9b82-4250-922d-40265fff9322"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:24:12.755011 master-0 kubenswrapper[38936]: I0216 21:24:12.754920 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f9ac325-9b82-4250-922d-40265fff9322-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6f9ac325-9b82-4250-922d-40265fff9322" (UID: "6f9ac325-9b82-4250-922d-40265fff9322"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:24:12.756517 master-0 kubenswrapper[38936]: I0216 21:24:12.756381 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f9ac325-9b82-4250-922d-40265fff9322-kube-api-access-gqjlq" (OuterVolumeSpecName: "kube-api-access-gqjlq") pod "6f9ac325-9b82-4250-922d-40265fff9322" (UID: "6f9ac325-9b82-4250-922d-40265fff9322"). InnerVolumeSpecName "kube-api-access-gqjlq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:24:12.852082 master-0 kubenswrapper[38936]: I0216 21:24:12.851982 38936 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6f9ac325-9b82-4250-922d-40265fff9322-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:24:12.852082 master-0 kubenswrapper[38936]: I0216 21:24:12.852054 38936 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6f9ac325-9b82-4250-922d-40265fff9322-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 21:24:12.852082 master-0 kubenswrapper[38936]: I0216 21:24:12.852068 38936 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6f9ac325-9b82-4250-922d-40265fff9322-console-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:24:12.852082 master-0 kubenswrapper[38936]: I0216 21:24:12.852085 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqjlq\" (UniqueName: \"kubernetes.io/projected/6f9ac325-9b82-4250-922d-40265fff9322-kube-api-access-gqjlq\") on node \"master-0\" DevicePath \"\"" Feb 16 21:24:12.852082 master-0 kubenswrapper[38936]: I0216 21:24:12.852098 38936 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6f9ac325-9b82-4250-922d-40265fff9322-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 21:24:12.852541 master-0 kubenswrapper[38936]: I0216 21:24:12.852115 38936 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6f9ac325-9b82-4250-922d-40265fff9322-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 21:24:13.008454 master-0 kubenswrapper[38936]: I0216 21:24:13.008369 38936 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-84f5b46974-6pcrm_6f9ac325-9b82-4250-922d-40265fff9322/console/0.log" Feb 16 21:24:13.009923 master-0 kubenswrapper[38936]: I0216 21:24:13.008478 38936 generic.go:334] "Generic (PLEG): container finished" podID="6f9ac325-9b82-4250-922d-40265fff9322" containerID="c3706c2c027ec630a5b3a0e913cc73b74286e77fe9eaa2bd99b0f9ba98dd9a19" exitCode=2 Feb 16 21:24:13.009923 master-0 kubenswrapper[38936]: I0216 21:24:13.008621 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84f5b46974-6pcrm" event={"ID":"6f9ac325-9b82-4250-922d-40265fff9322","Type":"ContainerDied","Data":"c3706c2c027ec630a5b3a0e913cc73b74286e77fe9eaa2bd99b0f9ba98dd9a19"} Feb 16 21:24:13.009923 master-0 kubenswrapper[38936]: I0216 21:24:13.008634 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-84f5b46974-6pcrm" Feb 16 21:24:13.009923 master-0 kubenswrapper[38936]: I0216 21:24:13.008737 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84f5b46974-6pcrm" event={"ID":"6f9ac325-9b82-4250-922d-40265fff9322","Type":"ContainerDied","Data":"cbe6879bbdea1991c514284b01ce06f4c9ad1b0668ed16f4d2ef44c0793724d2"} Feb 16 21:24:13.009923 master-0 kubenswrapper[38936]: I0216 21:24:13.008753 38936 scope.go:117] "RemoveContainer" containerID="c3706c2c027ec630a5b3a0e913cc73b74286e77fe9eaa2bd99b0f9ba98dd9a19" Feb 16 21:24:13.011316 master-0 kubenswrapper[38936]: I0216 21:24:13.011093 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"17667cac-8179-4942-9e5c-be45e6b56a96","Type":"ContainerStarted","Data":"ee4bee291e9274b495a4f31fd39b6bc5e73b8114d618d49a14645b034afd5389"} Feb 16 21:24:13.028234 master-0 kubenswrapper[38936]: I0216 21:24:13.028034 38936 scope.go:117] "RemoveContainer" containerID="c3706c2c027ec630a5b3a0e913cc73b74286e77fe9eaa2bd99b0f9ba98dd9a19" Feb 16 21:24:13.029093 
master-0 kubenswrapper[38936]: E0216 21:24:13.029055 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3706c2c027ec630a5b3a0e913cc73b74286e77fe9eaa2bd99b0f9ba98dd9a19\": container with ID starting with c3706c2c027ec630a5b3a0e913cc73b74286e77fe9eaa2bd99b0f9ba98dd9a19 not found: ID does not exist" containerID="c3706c2c027ec630a5b3a0e913cc73b74286e77fe9eaa2bd99b0f9ba98dd9a19" Feb 16 21:24:13.029220 master-0 kubenswrapper[38936]: I0216 21:24:13.029093 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3706c2c027ec630a5b3a0e913cc73b74286e77fe9eaa2bd99b0f9ba98dd9a19"} err="failed to get container status \"c3706c2c027ec630a5b3a0e913cc73b74286e77fe9eaa2bd99b0f9ba98dd9a19\": rpc error: code = NotFound desc = could not find container \"c3706c2c027ec630a5b3a0e913cc73b74286e77fe9eaa2bd99b0f9ba98dd9a19\": container with ID starting with c3706c2c027ec630a5b3a0e913cc73b74286e77fe9eaa2bd99b0f9ba98dd9a19 not found: ID does not exist" Feb 16 21:24:13.038820 master-0 kubenswrapper[38936]: I0216 21:24:13.038753 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=3.038740812 podStartE2EDuration="3.038740812s" podCreationTimestamp="2026-02-16 21:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:24:13.038312821 +0000 UTC m=+83.390316203" watchObservedRunningTime="2026-02-16 21:24:13.038740812 +0000 UTC m=+83.390744174" Feb 16 21:24:13.057189 master-0 kubenswrapper[38936]: I0216 21:24:13.057045 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-84f5b46974-6pcrm"] Feb 16 21:24:13.061895 master-0 kubenswrapper[38936]: I0216 21:24:13.061834 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-console/console-84f5b46974-6pcrm"] Feb 16 21:24:13.873679 master-0 kubenswrapper[38936]: I0216 21:24:13.873569 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:24:13.891175 master-0 kubenswrapper[38936]: I0216 21:24:13.891102 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f9ac325-9b82-4250-922d-40265fff9322" path="/var/lib/kubelet/pods/6f9ac325-9b82-4250-922d-40265fff9322/volumes" Feb 16 21:24:13.909889 master-0 kubenswrapper[38936]: I0216 21:24:13.909834 38936 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c700271f-ec65-4cd2-b872-49eaa1ee37b0" Feb 16 21:24:13.909889 master-0 kubenswrapper[38936]: I0216 21:24:13.909875 38936 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c700271f-ec65-4cd2-b872-49eaa1ee37b0" Feb 16 21:24:14.227474 master-0 kubenswrapper[38936]: I0216 21:24:14.227337 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 16 21:24:14.347987 master-0 kubenswrapper[38936]: I0216 21:24:14.347933 38936 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:24:14.353562 master-0 kubenswrapper[38936]: I0216 21:24:14.353451 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 16 21:24:14.498697 master-0 kubenswrapper[38936]: I0216 21:24:14.498627 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:24:14.501002 master-0 kubenswrapper[38936]: I0216 21:24:14.500954 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 16 21:24:15.035124 master-0 kubenswrapper[38936]: I0216 21:24:15.035074 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"fc19ea17c4f595b135412c661d90b9a7","Type":"ContainerStarted","Data":"6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a"} Feb 16 21:24:15.035228 master-0 kubenswrapper[38936]: I0216 21:24:15.035131 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"fc19ea17c4f595b135412c661d90b9a7","Type":"ContainerStarted","Data":"aa17d688996f063e4c19389568114f0246197f31002fe2032ab43c3a7cd5fa61"} Feb 16 21:24:15.878862 master-0 kubenswrapper[38936]: I0216 21:24:15.878608 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:24:15.909191 master-0 kubenswrapper[38936]: I0216 21:24:15.909090 38936 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="2be3e87e-a4ef-41f7-be9d-22898625501e" Feb 16 21:24:15.909191 master-0 kubenswrapper[38936]: I0216 21:24:15.909178 38936 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="2be3e87e-a4ef-41f7-be9d-22898625501e" Feb 16 21:24:15.920765 master-0 kubenswrapper[38936]: I0216 21:24:15.920699 38936 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:24:15.928068 master-0 kubenswrapper[38936]: I0216 21:24:15.928004 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 16 21:24:15.931069 master-0 kubenswrapper[38936]: I0216 21:24:15.931006 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 16 21:24:15.940867 master-0 kubenswrapper[38936]: I0216 21:24:15.940833 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:24:15.944221 master-0 kubenswrapper[38936]: I0216 21:24:15.944160 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 16 21:24:15.967767 master-0 kubenswrapper[38936]: W0216 21:24:15.967684 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod952766c3a88fd12345a552f1277199f9.slice/crio-53d919d1ad8f199e0ad9b12c88a1a911a20eaa85d5a089b749ca1cfd2bc1576f WatchSource:0}: Error finding container 53d919d1ad8f199e0ad9b12c88a1a911a20eaa85d5a089b749ca1cfd2bc1576f: Status 404 returned error can't find the container with id 53d919d1ad8f199e0ad9b12c88a1a911a20eaa85d5a089b749ca1cfd2bc1576f Feb 16 21:24:16.046082 master-0 kubenswrapper[38936]: I0216 21:24:16.046020 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"53d919d1ad8f199e0ad9b12c88a1a911a20eaa85d5a089b749ca1cfd2bc1576f"} Feb 16 21:24:16.049134 master-0 kubenswrapper[38936]: I0216 21:24:16.049105 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"fc19ea17c4f595b135412c661d90b9a7","Type":"ContainerStarted","Data":"fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c"} Feb 16 21:24:16.049191 master-0 kubenswrapper[38936]: I0216 21:24:16.049132 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"fc19ea17c4f595b135412c661d90b9a7","Type":"ContainerStarted","Data":"6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241"} Feb 16 21:24:16.049191 master-0 kubenswrapper[38936]: I0216 21:24:16.049147 38936 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"fc19ea17c4f595b135412c661d90b9a7","Type":"ContainerStarted","Data":"f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05"} Feb 16 21:24:16.092618 master-0 kubenswrapper[38936]: I0216 21:24:16.092526 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.092506079 podStartE2EDuration="2.092506079s" podCreationTimestamp="2026-02-16 21:24:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:24:16.090453914 +0000 UTC m=+86.442457286" watchObservedRunningTime="2026-02-16 21:24:16.092506079 +0000 UTC m=+86.444509441" Feb 16 21:24:17.064500 master-0 kubenswrapper[38936]: I0216 21:24:17.064381 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"2c0c85d7231f1bdd5fc06dda0fe48df8f20eb06873450f9dfdba4fceeffeec29"} Feb 16 21:24:17.495091 master-0 kubenswrapper[38936]: I0216 21:24:17.494966 38936 patch_prober.go:28] interesting pod/console-5dbf689d64-pgglg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Feb 16 21:24:17.495091 master-0 kubenswrapper[38936]: I0216 21:24:17.495033 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5dbf689d64-pgglg" podUID="55ec365e-5ef8-4291-9c01-7713bdd6f294" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Feb 16 21:24:19.783331 master-0 kubenswrapper[38936]: I0216 21:24:19.783242 38936 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 21:24:19.788585 master-0 kubenswrapper[38936]: I0216 21:24:19.788527 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Feb 16 21:24:19.884247 master-0 kubenswrapper[38936]: I0216 21:24:19.884182 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access\") pod \"1f8a26db-5a90-4da9-9074-33256ef17100\" (UID: \"1f8a26db-5a90-4da9-9074-33256ef17100\") " Feb 16 21:24:19.889162 master-0 kubenswrapper[38936]: I0216 21:24:19.889095 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1f8a26db-5a90-4da9-9074-33256ef17100" (UID: "1f8a26db-5a90-4da9-9074-33256ef17100"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:24:19.987339 master-0 kubenswrapper[38936]: I0216 21:24:19.987278 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f8a26db-5a90-4da9-9074-33256ef17100-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 21:24:22.136562 master-0 kubenswrapper[38936]: I0216 21:24:22.136487 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-dcd7b7d95-xzx78" Feb 16 21:24:22.321420 master-0 kubenswrapper[38936]: I0216 21:24:22.321330 38936 patch_prober.go:28] interesting pod/console-7dcddfd95-nldpw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Feb 16 21:24:22.321784 master-0 kubenswrapper[38936]: I0216 21:24:22.321453 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7dcddfd95-nldpw" podUID="503aa866-c355-434a-a39c-fa6072733ea8" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Feb 16 21:24:24.499374 master-0 kubenswrapper[38936]: I0216 21:24:24.499272 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:24:24.499374 master-0 kubenswrapper[38936]: I0216 21:24:24.499325 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:24:24.499374 master-0 kubenswrapper[38936]: I0216 21:24:24.499338 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:24:24.499374 master-0 kubenswrapper[38936]: I0216 
21:24:24.499349 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:24:24.501015 master-0 kubenswrapper[38936]: I0216 21:24:24.499732 38936 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 16 21:24:24.501015 master-0 kubenswrapper[38936]: I0216 21:24:24.499788 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="fc19ea17c4f595b135412c661d90b9a7" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 16 21:24:24.506482 master-0 kubenswrapper[38936]: I0216 21:24:24.505964 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:24:25.133623 master-0 kubenswrapper[38936]: I0216 21:24:25.133556 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:24:25.599824 master-0 kubenswrapper[38936]: I0216 21:24:25.599747 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 16 21:24:25.600917 master-0 kubenswrapper[38936]: I0216 21:24:25.600008 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-3-master-0" podUID="17667cac-8179-4942-9e5c-be45e6b56a96" containerName="installer" containerID="cri-o://ee4bee291e9274b495a4f31fd39b6bc5e73b8114d618d49a14645b034afd5389" gracePeriod=30 Feb 16 
21:24:26.129277 master-0 kubenswrapper[38936]: I0216 21:24:26.129218 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_17667cac-8179-4942-9e5c-be45e6b56a96/installer/0.log" Feb 16 21:24:26.129511 master-0 kubenswrapper[38936]: I0216 21:24:26.129301 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 21:24:26.140949 master-0 kubenswrapper[38936]: I0216 21:24:26.140886 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_17667cac-8179-4942-9e5c-be45e6b56a96/installer/0.log" Feb 16 21:24:26.140949 master-0 kubenswrapper[38936]: I0216 21:24:26.140948 38936 generic.go:334] "Generic (PLEG): container finished" podID="17667cac-8179-4942-9e5c-be45e6b56a96" containerID="ee4bee291e9274b495a4f31fd39b6bc5e73b8114d618d49a14645b034afd5389" exitCode=1 Feb 16 21:24:26.141320 master-0 kubenswrapper[38936]: I0216 21:24:26.141096 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 21:24:26.141320 master-0 kubenswrapper[38936]: I0216 21:24:26.141173 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"17667cac-8179-4942-9e5c-be45e6b56a96","Type":"ContainerDied","Data":"ee4bee291e9274b495a4f31fd39b6bc5e73b8114d618d49a14645b034afd5389"} Feb 16 21:24:26.141320 master-0 kubenswrapper[38936]: I0216 21:24:26.141228 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"17667cac-8179-4942-9e5c-be45e6b56a96","Type":"ContainerDied","Data":"29345ca9988c94326569923a2bc8c5bc226d9073c26ceb13b5d43e32794504e1"} Feb 16 21:24:26.141320 master-0 kubenswrapper[38936]: I0216 21:24:26.141247 38936 scope.go:117] "RemoveContainer" containerID="ee4bee291e9274b495a4f31fd39b6bc5e73b8114d618d49a14645b034afd5389" Feb 16 21:24:26.174027 master-0 kubenswrapper[38936]: I0216 21:24:26.173968 38936 scope.go:117] "RemoveContainer" containerID="ee4bee291e9274b495a4f31fd39b6bc5e73b8114d618d49a14645b034afd5389" Feb 16 21:24:26.174585 master-0 kubenswrapper[38936]: E0216 21:24:26.174527 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee4bee291e9274b495a4f31fd39b6bc5e73b8114d618d49a14645b034afd5389\": container with ID starting with ee4bee291e9274b495a4f31fd39b6bc5e73b8114d618d49a14645b034afd5389 not found: ID does not exist" containerID="ee4bee291e9274b495a4f31fd39b6bc5e73b8114d618d49a14645b034afd5389" Feb 16 21:24:26.174644 master-0 kubenswrapper[38936]: I0216 21:24:26.174596 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee4bee291e9274b495a4f31fd39b6bc5e73b8114d618d49a14645b034afd5389"} err="failed to get container status \"ee4bee291e9274b495a4f31fd39b6bc5e73b8114d618d49a14645b034afd5389\": rpc error: code = NotFound desc = could 
not find container \"ee4bee291e9274b495a4f31fd39b6bc5e73b8114d618d49a14645b034afd5389\": container with ID starting with ee4bee291e9274b495a4f31fd39b6bc5e73b8114d618d49a14645b034afd5389 not found: ID does not exist" Feb 16 21:24:26.293661 master-0 kubenswrapper[38936]: I0216 21:24:26.293577 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/17667cac-8179-4942-9e5c-be45e6b56a96-var-lock\") pod \"17667cac-8179-4942-9e5c-be45e6b56a96\" (UID: \"17667cac-8179-4942-9e5c-be45e6b56a96\") " Feb 16 21:24:26.293961 master-0 kubenswrapper[38936]: I0216 21:24:26.293946 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17667cac-8179-4942-9e5c-be45e6b56a96-kube-api-access\") pod \"17667cac-8179-4942-9e5c-be45e6b56a96\" (UID: \"17667cac-8179-4942-9e5c-be45e6b56a96\") " Feb 16 21:24:26.294064 master-0 kubenswrapper[38936]: I0216 21:24:26.294052 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17667cac-8179-4942-9e5c-be45e6b56a96-kubelet-dir\") pod \"17667cac-8179-4942-9e5c-be45e6b56a96\" (UID: \"17667cac-8179-4942-9e5c-be45e6b56a96\") " Feb 16 21:24:26.294240 master-0 kubenswrapper[38936]: I0216 21:24:26.293688 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17667cac-8179-4942-9e5c-be45e6b56a96-var-lock" (OuterVolumeSpecName: "var-lock") pod "17667cac-8179-4942-9e5c-be45e6b56a96" (UID: "17667cac-8179-4942-9e5c-be45e6b56a96"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:24:26.294292 master-0 kubenswrapper[38936]: I0216 21:24:26.294119 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17667cac-8179-4942-9e5c-be45e6b56a96-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "17667cac-8179-4942-9e5c-be45e6b56a96" (UID: "17667cac-8179-4942-9e5c-be45e6b56a96"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:24:26.294499 master-0 kubenswrapper[38936]: I0216 21:24:26.294485 38936 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/17667cac-8179-4942-9e5c-be45e6b56a96-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 21:24:26.294569 master-0 kubenswrapper[38936]: I0216 21:24:26.294557 38936 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17667cac-8179-4942-9e5c-be45e6b56a96-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:24:26.296685 master-0 kubenswrapper[38936]: I0216 21:24:26.296618 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17667cac-8179-4942-9e5c-be45e6b56a96-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "17667cac-8179-4942-9e5c-be45e6b56a96" (UID: "17667cac-8179-4942-9e5c-be45e6b56a96"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:24:26.396451 master-0 kubenswrapper[38936]: I0216 21:24:26.396331 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17667cac-8179-4942-9e5c-be45e6b56a96-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 21:24:26.503806 master-0 kubenswrapper[38936]: I0216 21:24:26.502057 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 16 21:24:26.516112 master-0 kubenswrapper[38936]: I0216 21:24:26.508021 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 16 21:24:27.493602 master-0 kubenswrapper[38936]: I0216 21:24:27.493538 38936 patch_prober.go:28] interesting pod/console-5dbf689d64-pgglg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Feb 16 21:24:27.494199 master-0 kubenswrapper[38936]: I0216 21:24:27.493623 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5dbf689d64-pgglg" podUID="55ec365e-5ef8-4291-9c01-7713bdd6f294" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Feb 16 21:24:27.904441 master-0 kubenswrapper[38936]: I0216 21:24:27.904326 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17667cac-8179-4942-9e5c-be45e6b56a96" path="/var/lib/kubelet/pods/17667cac-8179-4942-9e5c-be45e6b56a96/volumes" Feb 16 21:24:29.611208 master-0 kubenswrapper[38936]: I0216 21:24:29.610976 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 16 21:24:29.612350 master-0 kubenswrapper[38936]: E0216 21:24:29.611941 38936 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="17667cac-8179-4942-9e5c-be45e6b56a96" containerName="installer" Feb 16 21:24:29.612350 master-0 kubenswrapper[38936]: I0216 21:24:29.612018 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="17667cac-8179-4942-9e5c-be45e6b56a96" containerName="installer" Feb 16 21:24:29.612350 master-0 kubenswrapper[38936]: E0216 21:24:29.612040 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f9ac325-9b82-4250-922d-40265fff9322" containerName="console" Feb 16 21:24:29.612350 master-0 kubenswrapper[38936]: I0216 21:24:29.612053 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f9ac325-9b82-4250-922d-40265fff9322" containerName="console" Feb 16 21:24:29.612840 master-0 kubenswrapper[38936]: I0216 21:24:29.612743 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f9ac325-9b82-4250-922d-40265fff9322" containerName="console" Feb 16 21:24:29.612981 master-0 kubenswrapper[38936]: I0216 21:24:29.612848 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="17667cac-8179-4942-9e5c-be45e6b56a96" containerName="installer" Feb 16 21:24:29.615258 master-0 kubenswrapper[38936]: I0216 21:24:29.614046 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 21:24:29.619997 master-0 kubenswrapper[38936]: I0216 21:24:29.619931 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 16 21:24:29.761543 master-0 kubenswrapper[38936]: I0216 21:24:29.761449 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6862f5f5-da61-4347-9a9e-cb47b7e1261f-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"6862f5f5-da61-4347-9a9e-cb47b7e1261f\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 21:24:29.761927 master-0 kubenswrapper[38936]: I0216 21:24:29.761871 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6862f5f5-da61-4347-9a9e-cb47b7e1261f-var-lock\") pod \"installer-4-master-0\" (UID: \"6862f5f5-da61-4347-9a9e-cb47b7e1261f\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 21:24:29.762004 master-0 kubenswrapper[38936]: I0216 21:24:29.761951 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6862f5f5-da61-4347-9a9e-cb47b7e1261f-kube-api-access\") pod \"installer-4-master-0\" (UID: \"6862f5f5-da61-4347-9a9e-cb47b7e1261f\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 21:24:29.864154 master-0 kubenswrapper[38936]: I0216 21:24:29.863990 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6862f5f5-da61-4347-9a9e-cb47b7e1261f-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"6862f5f5-da61-4347-9a9e-cb47b7e1261f\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 21:24:29.864369 master-0 kubenswrapper[38936]: I0216 21:24:29.864187 38936 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6862f5f5-da61-4347-9a9e-cb47b7e1261f-var-lock\") pod \"installer-4-master-0\" (UID: \"6862f5f5-da61-4347-9a9e-cb47b7e1261f\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 21:24:29.864369 master-0 kubenswrapper[38936]: I0216 21:24:29.864199 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6862f5f5-da61-4347-9a9e-cb47b7e1261f-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"6862f5f5-da61-4347-9a9e-cb47b7e1261f\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 21:24:29.864369 master-0 kubenswrapper[38936]: I0216 21:24:29.864225 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6862f5f5-da61-4347-9a9e-cb47b7e1261f-kube-api-access\") pod \"installer-4-master-0\" (UID: \"6862f5f5-da61-4347-9a9e-cb47b7e1261f\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 21:24:29.864539 master-0 kubenswrapper[38936]: I0216 21:24:29.864374 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6862f5f5-da61-4347-9a9e-cb47b7e1261f-var-lock\") pod \"installer-4-master-0\" (UID: \"6862f5f5-da61-4347-9a9e-cb47b7e1261f\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 21:24:29.886130 master-0 kubenswrapper[38936]: I0216 21:24:29.886059 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6862f5f5-da61-4347-9a9e-cb47b7e1261f-kube-api-access\") pod \"installer-4-master-0\" (UID: \"6862f5f5-da61-4347-9a9e-cb47b7e1261f\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 21:24:29.986813 master-0 kubenswrapper[38936]: I0216 21:24:29.986746 38936 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 21:24:30.614336 master-0 kubenswrapper[38936]: I0216 21:24:30.614252 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 16 21:24:30.627409 master-0 kubenswrapper[38936]: W0216 21:24:30.627320 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6862f5f5_da61_4347_9a9e_cb47b7e1261f.slice/crio-3ebe760bafe9315ab2ea1b58f42b1697bbdfd54e84a1ad8ed4872146b45fb3fa WatchSource:0}: Error finding container 3ebe760bafe9315ab2ea1b58f42b1697bbdfd54e84a1ad8ed4872146b45fb3fa: Status 404 returned error can't find the container with id 3ebe760bafe9315ab2ea1b58f42b1697bbdfd54e84a1ad8ed4872146b45fb3fa Feb 16 21:24:31.191611 master-0 kubenswrapper[38936]: I0216 21:24:31.191537 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"6862f5f5-da61-4347-9a9e-cb47b7e1261f","Type":"ContainerStarted","Data":"effb44c0b670182b0b03c2bef5b66ad309f4287e406f57386e4e7b0fc68ea709"} Feb 16 21:24:31.191611 master-0 kubenswrapper[38936]: I0216 21:24:31.191595 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"6862f5f5-da61-4347-9a9e-cb47b7e1261f","Type":"ContainerStarted","Data":"3ebe760bafe9315ab2ea1b58f42b1697bbdfd54e84a1ad8ed4872146b45fb3fa"} Feb 16 21:24:32.219007 master-0 kubenswrapper[38936]: I0216 21:24:32.218935 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=3.218915656 podStartE2EDuration="3.218915656s" podCreationTimestamp="2026-02-16 21:24:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:24:32.216963003 +0000 UTC m=+102.568966385" 
watchObservedRunningTime="2026-02-16 21:24:32.218915656 +0000 UTC m=+102.570919038" Feb 16 21:24:32.320710 master-0 kubenswrapper[38936]: I0216 21:24:32.320466 38936 patch_prober.go:28] interesting pod/console-7dcddfd95-nldpw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Feb 16 21:24:32.320710 master-0 kubenswrapper[38936]: I0216 21:24:32.320525 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7dcddfd95-nldpw" podUID="503aa866-c355-434a-a39c-fa6072733ea8" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Feb 16 21:24:34.219523 master-0 kubenswrapper[38936]: I0216 21:24:34.219372 38936 generic.go:334] "Generic (PLEG): container finished" podID="952766c3a88fd12345a552f1277199f9" containerID="2c0c85d7231f1bdd5fc06dda0fe48df8f20eb06873450f9dfdba4fceeffeec29" exitCode=0 Feb 16 21:24:34.219523 master-0 kubenswrapper[38936]: I0216 21:24:34.219437 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerDied","Data":"2c0c85d7231f1bdd5fc06dda0fe48df8f20eb06873450f9dfdba4fceeffeec29"} Feb 16 21:24:34.500436 master-0 kubenswrapper[38936]: I0216 21:24:34.500170 38936 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 16 21:24:34.500436 master-0 kubenswrapper[38936]: I0216 21:24:34.500272 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
podUID="fc19ea17c4f595b135412c661d90b9a7" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 16 21:24:35.229306 master-0 kubenswrapper[38936]: I0216 21:24:35.229247 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"ef3d52033fc88ff51efe094b46ab8314a20285e09ad3900d6eba6e6699954b26"} Feb 16 21:24:35.229306 master-0 kubenswrapper[38936]: I0216 21:24:35.229313 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"58ba23764f202279218ccf3bf73b285090b052171b54b77d2a945271b86dcdd0"} Feb 16 21:24:35.229941 master-0 kubenswrapper[38936]: I0216 21:24:35.229328 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"fc18e5a1f59becb3ec25b7893c98ac3446a88b2c46178e3447c841122291aeed"} Feb 16 21:24:35.229941 master-0 kubenswrapper[38936]: I0216 21:24:35.229570 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:24:35.249485 master-0 kubenswrapper[38936]: I0216 21:24:35.249395 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=20.249373821 podStartE2EDuration="20.249373821s" podCreationTimestamp="2026-02-16 21:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:24:35.246181756 +0000 UTC m=+105.598185118" 
watchObservedRunningTime="2026-02-16 21:24:35.249373821 +0000 UTC m=+105.601377193" Feb 16 21:24:37.493506 master-0 kubenswrapper[38936]: I0216 21:24:37.493416 38936 patch_prober.go:28] interesting pod/console-5dbf689d64-pgglg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Feb 16 21:24:37.493506 master-0 kubenswrapper[38936]: I0216 21:24:37.493499 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5dbf689d64-pgglg" podUID="55ec365e-5ef8-4291-9c01-7713bdd6f294" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Feb 16 21:24:38.254458 master-0 kubenswrapper[38936]: I0216 21:24:38.254382 38936 generic.go:334] "Generic (PLEG): container finished" podID="1489d1b6-d8a1-453a-bff3-8adfd4335903" containerID="25ee620a91a11cdfcf10f317458e9833777a7250c9af0cd0962ed366c5d07a92" exitCode=0 Feb 16 21:24:38.254458 master-0 kubenswrapper[38936]: I0216 21:24:38.254452 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" event={"ID":"1489d1b6-d8a1-453a-bff3-8adfd4335903","Type":"ContainerDied","Data":"25ee620a91a11cdfcf10f317458e9833777a7250c9af0cd0962ed366c5d07a92"} Feb 16 21:24:38.255053 master-0 kubenswrapper[38936]: I0216 21:24:38.254988 38936 scope.go:117] "RemoveContainer" containerID="25ee620a91a11cdfcf10f317458e9833777a7250c9af0cd0962ed366c5d07a92" Feb 16 21:24:39.263826 master-0 kubenswrapper[38936]: I0216 21:24:39.263779 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_d730a9d3-3e5b-4676-a707-8d2ed41502be/installer/0.log" Feb 16 21:24:39.264775 master-0 kubenswrapper[38936]: I0216 21:24:39.264729 38936 generic.go:334] "Generic (PLEG): container 
finished" podID="d730a9d3-3e5b-4676-a707-8d2ed41502be" containerID="9b3b1a3ef50c5958a54eb15cef494f2d653137ad8b30b4100b19790bcf34ae26" exitCode=137 Feb 16 21:24:39.265034 master-0 kubenswrapper[38936]: I0216 21:24:39.264828 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"d730a9d3-3e5b-4676-a707-8d2ed41502be","Type":"ContainerDied","Data":"9b3b1a3ef50c5958a54eb15cef494f2d653137ad8b30b4100b19790bcf34ae26"} Feb 16 21:24:39.267469 master-0 kubenswrapper[38936]: I0216 21:24:39.267425 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" event={"ID":"1489d1b6-d8a1-453a-bff3-8adfd4335903","Type":"ContainerStarted","Data":"871f46e938656ef846c5525d2292afdd15ba15225bc063c38e05de3503244dc1"} Feb 16 21:24:39.268132 master-0 kubenswrapper[38936]: I0216 21:24:39.268070 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" Feb 16 21:24:39.272905 master-0 kubenswrapper[38936]: I0216 21:24:39.272805 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" Feb 16 21:24:39.866781 master-0 kubenswrapper[38936]: I0216 21:24:39.866675 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_d730a9d3-3e5b-4676-a707-8d2ed41502be/installer/0.log" Feb 16 21:24:39.866781 master-0 kubenswrapper[38936]: I0216 21:24:39.866769 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 21:24:40.024563 master-0 kubenswrapper[38936]: I0216 21:24:40.024477 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d730a9d3-3e5b-4676-a707-8d2ed41502be-var-lock\") pod \"d730a9d3-3e5b-4676-a707-8d2ed41502be\" (UID: \"d730a9d3-3e5b-4676-a707-8d2ed41502be\") " Feb 16 21:24:40.024865 master-0 kubenswrapper[38936]: I0216 21:24:40.024629 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d730a9d3-3e5b-4676-a707-8d2ed41502be-kube-api-access\") pod \"d730a9d3-3e5b-4676-a707-8d2ed41502be\" (UID: \"d730a9d3-3e5b-4676-a707-8d2ed41502be\") " Feb 16 21:24:40.024865 master-0 kubenswrapper[38936]: I0216 21:24:40.024700 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d730a9d3-3e5b-4676-a707-8d2ed41502be-kubelet-dir\") pod \"d730a9d3-3e5b-4676-a707-8d2ed41502be\" (UID: \"d730a9d3-3e5b-4676-a707-8d2ed41502be\") " Feb 16 21:24:40.025014 master-0 kubenswrapper[38936]: I0216 21:24:40.024868 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d730a9d3-3e5b-4676-a707-8d2ed41502be-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d730a9d3-3e5b-4676-a707-8d2ed41502be" (UID: "d730a9d3-3e5b-4676-a707-8d2ed41502be"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:24:40.025199 master-0 kubenswrapper[38936]: I0216 21:24:40.025162 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d730a9d3-3e5b-4676-a707-8d2ed41502be-var-lock" (OuterVolumeSpecName: "var-lock") pod "d730a9d3-3e5b-4676-a707-8d2ed41502be" (UID: "d730a9d3-3e5b-4676-a707-8d2ed41502be"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:24:40.033187 master-0 kubenswrapper[38936]: I0216 21:24:40.033083 38936 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d730a9d3-3e5b-4676-a707-8d2ed41502be-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 21:24:40.033322 master-0 kubenswrapper[38936]: I0216 21:24:40.033201 38936 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d730a9d3-3e5b-4676-a707-8d2ed41502be-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:24:40.033322 master-0 kubenswrapper[38936]: I0216 21:24:40.033196 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d730a9d3-3e5b-4676-a707-8d2ed41502be-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d730a9d3-3e5b-4676-a707-8d2ed41502be" (UID: "d730a9d3-3e5b-4676-a707-8d2ed41502be"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:24:40.135136 master-0 kubenswrapper[38936]: I0216 21:24:40.135012 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d730a9d3-3e5b-4676-a707-8d2ed41502be-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 21:24:40.276203 master-0 kubenswrapper[38936]: I0216 21:24:40.276167 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_d730a9d3-3e5b-4676-a707-8d2ed41502be/installer/0.log" Feb 16 21:24:40.276945 master-0 kubenswrapper[38936]: I0216 21:24:40.276873 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"d730a9d3-3e5b-4676-a707-8d2ed41502be","Type":"ContainerDied","Data":"4fa54033dcbe59cfe4901d6785a100b91aa2a655a919634b4b4ff49dbf42b27b"} Feb 16 21:24:40.277001 master-0 kubenswrapper[38936]: I0216 21:24:40.276954 38936 scope.go:117] "RemoveContainer" containerID="9b3b1a3ef50c5958a54eb15cef494f2d653137ad8b30b4100b19790bcf34ae26" Feb 16 21:24:40.277001 master-0 kubenswrapper[38936]: I0216 21:24:40.276898 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 21:24:40.316409 master-0 kubenswrapper[38936]: I0216 21:24:40.316316 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 16 21:24:40.325327 master-0 kubenswrapper[38936]: I0216 21:24:40.325257 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 16 21:24:41.881248 master-0 kubenswrapper[38936]: I0216 21:24:41.881190 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d730a9d3-3e5b-4676-a707-8d2ed41502be" path="/var/lib/kubelet/pods/d730a9d3-3e5b-4676-a707-8d2ed41502be/volumes" Feb 16 21:24:42.321496 master-0 kubenswrapper[38936]: I0216 21:24:42.321363 38936 patch_prober.go:28] interesting pod/console-7dcddfd95-nldpw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Feb 16 21:24:42.321496 master-0 kubenswrapper[38936]: I0216 21:24:42.321478 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7dcddfd95-nldpw" podUID="503aa866-c355-434a-a39c-fa6072733ea8" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Feb 16 21:24:44.500541 master-0 kubenswrapper[38936]: I0216 21:24:44.500473 38936 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 16 21:24:44.501451 master-0 kubenswrapper[38936]: I0216 21:24:44.501350 38936 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="fc19ea17c4f595b135412c661d90b9a7" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 16 21:24:44.501607 master-0 kubenswrapper[38936]: I0216 21:24:44.501491 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:24:44.502849 master-0 kubenswrapper[38936]: I0216 21:24:44.502784 38936 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 16 21:24:44.503115 master-0 kubenswrapper[38936]: I0216 21:24:44.503056 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="fc19ea17c4f595b135412c661d90b9a7" containerName="kube-controller-manager" containerID="cri-o://6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a" gracePeriod=30 Feb 16 21:24:47.493582 master-0 kubenswrapper[38936]: I0216 21:24:47.493491 38936 patch_prober.go:28] interesting pod/console-5dbf689d64-pgglg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Feb 16 21:24:47.494465 master-0 kubenswrapper[38936]: I0216 21:24:47.493600 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5dbf689d64-pgglg" podUID="55ec365e-5ef8-4291-9c01-7713bdd6f294" containerName="console" probeResult="failure" 
output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Feb 16 21:24:52.321139 master-0 kubenswrapper[38936]: I0216 21:24:52.321008 38936 patch_prober.go:28] interesting pod/console-7dcddfd95-nldpw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Feb 16 21:24:52.322073 master-0 kubenswrapper[38936]: I0216 21:24:52.321140 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7dcddfd95-nldpw" podUID="503aa866-c355-434a-a39c-fa6072733ea8" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Feb 16 21:24:57.493508 master-0 kubenswrapper[38936]: I0216 21:24:57.493413 38936 patch_prober.go:28] interesting pod/console-5dbf689d64-pgglg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Feb 16 21:24:57.494483 master-0 kubenswrapper[38936]: I0216 21:24:57.493542 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5dbf689d64-pgglg" podUID="55ec365e-5ef8-4291-9c01-7713bdd6f294" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Feb 16 21:25:02.321094 master-0 kubenswrapper[38936]: I0216 21:25:02.320999 38936 patch_prober.go:28] interesting pod/console-7dcddfd95-nldpw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Feb 16 21:25:02.321778 master-0 kubenswrapper[38936]: I0216 21:25:02.321095 38936 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-console/console-7dcddfd95-nldpw" podUID="503aa866-c355-434a-a39c-fa6072733ea8" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" Feb 16 21:25:07.493518 master-0 kubenswrapper[38936]: I0216 21:25:07.493350 38936 patch_prober.go:28] interesting pod/console-5dbf689d64-pgglg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Feb 16 21:25:07.493518 master-0 kubenswrapper[38936]: I0216 21:25:07.493491 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5dbf689d64-pgglg" podUID="55ec365e-5ef8-4291-9c01-7713bdd6f294" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Feb 16 21:25:10.685705 master-0 kubenswrapper[38936]: I0216 21:25:10.685621 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-57ddf7d868-wm6cg"] Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: E0216 21:25:10.686019 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d730a9d3-3e5b-4676-a707-8d2ed41502be" containerName="installer" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.686035 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="d730a9d3-3e5b-4676-a707-8d2ed41502be" containerName="installer" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.686318 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="d730a9d3-3e5b-4676-a707-8d2ed41502be" containerName="installer" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.686958 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.688767 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-f886f46f4-gz92q"] Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.690826 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-c0v76jahdu8si" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.690952 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.693397 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.693440 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.693850 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-z2nzd" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.693870 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.694248 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.697690 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-77f5595c8c-8jsq7"] Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.699288 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.703092 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.705677 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.706037 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.706316 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.706676 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-lxr8m" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.706677 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.707545 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.707565 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.708041 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.709751 38936 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"thanos-querier-grpc-tls-7m8u98371q9c9" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.709905 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.710094 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.710376 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.710498 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.710521 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.710657 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.710669 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-rtvdz" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.711102 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.712905 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.714885 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.716373 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.716846 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.717122 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-nztqm" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.717213 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.717257 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.717259 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.717598 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-a3un9as7vf9sv" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.717724 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.717803 38936 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.717973 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.719058 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.723396 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.725775 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.727620 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 16 21:25:10.733698 master-0 kubenswrapper[38936]: I0216 21:25:10.728627 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 16 21:25:10.800172 master-0 kubenswrapper[38936]: I0216 21:25:10.800113 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-76c9c896c-pz2bk"] Feb 16 21:25:10.800454 master-0 kubenswrapper[38936]: I0216 21:25:10.800414 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" podUID="4a9f4f96-ca31-4959-93fe-c094caf8e077" containerName="metrics-server" containerID="cri-o://717811e555354f498448a1f9bf3201dfc3fcf0b7778c716a1769b62e1e6022c7" gracePeriod=170 Feb 16 21:25:10.804108 master-0 kubenswrapper[38936]: I0216 21:25:10.802381 38936 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-controller-manager/controller-manager-6998cd96fb-bgcb2"] Feb 16 21:25:10.804108 master-0 kubenswrapper[38936]: I0216 21:25:10.802582 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" podUID="408a9364-3730-4017-b1e4-c85d6a504168" containerName="controller-manager" containerID="cri-o://998c9ae589b8ae43e110fa0bf1929dd53f4179a605ee219bd9e74970ce1b2465" gracePeriod=30 Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.823723 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de36187a-e7bd-445a-ba5e-3fcff71d0175-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.823784 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/de36187a-e7bd-445a-ba5e-3fcff71d0175-config-out\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.823813 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b54b7c9-1b32-458b-b231-7e64b91a1a93-telemeter-trusted-ca-bundle\") pod \"telemeter-client-77f5595c8c-8jsq7\" (UID: \"9b54b7c9-1b32-458b-b231-7e64b91a1a93\") " pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.823841 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-out\" (UniqueName: \"kubernetes.io/empty-dir/788d5882-22df-4b55-ae2f-4a92fba7e889-config-out\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.823863 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b54b7c9-1b32-458b-b231-7e64b91a1a93-serving-certs-ca-bundle\") pod \"telemeter-client-77f5595c8c-8jsq7\" (UID: \"9b54b7c9-1b32-458b-b231-7e64b91a1a93\") " pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.823879 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/788d5882-22df-4b55-ae2f-4a92fba7e889-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.823908 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/788d5882-22df-4b55-ae2f-4a92fba7e889-tls-assets\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.823927 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.830679 
master-0 kubenswrapper[38936]: I0216 21:25:10.823981 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a89f49f8-e2cb-40ae-b447-12aee110e1f4-secret-metrics-client-certs\") pod \"metrics-server-57ddf7d868-wm6cg\" (UID: \"a89f49f8-e2cb-40ae-b447-12aee110e1f4\") " pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.823997 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grvhp\" (UniqueName: \"kubernetes.io/projected/a89f49f8-e2cb-40ae-b447-12aee110e1f4-kube-api-access-grvhp\") pod \"metrics-server-57ddf7d868-wm6cg\" (UID: \"a89f49f8-e2cb-40ae-b447-12aee110e1f4\") " pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824017 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/788d5882-22df-4b55-ae2f-4a92fba7e889-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824033 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/9b54b7c9-1b32-458b-b231-7e64b91a1a93-telemeter-client-tls\") pod \"telemeter-client-77f5595c8c-8jsq7\" (UID: \"9b54b7c9-1b32-458b-b231-7e64b91a1a93\") " pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824052 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de36187a-e7bd-445a-ba5e-3fcff71d0175-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824069 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/a89f49f8-e2cb-40ae-b447-12aee110e1f4-metrics-server-audit-profiles\") pod \"metrics-server-57ddf7d868-wm6cg\" (UID: \"a89f49f8-e2cb-40ae-b447-12aee110e1f4\") " pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824092 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/788d5882-22df-4b55-ae2f-4a92fba7e889-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824114 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824129 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/de36187a-e7bd-445a-ba5e-3fcff71d0175-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 
21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824148 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/de36187a-e7bd-445a-ba5e-3fcff71d0175-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824163 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/9b54b7c9-1b32-458b-b231-7e64b91a1a93-federate-client-tls\") pod \"telemeter-client-77f5595c8c-8jsq7\" (UID: \"9b54b7c9-1b32-458b-b231-7e64b91a1a93\") " pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824186 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvlzc\" (UniqueName: \"kubernetes.io/projected/329b8c10-cfb4-49bc-ac25-7b4c724afa31-kube-api-access-cvlzc\") pod \"thanos-querier-f886f46f4-gz92q\" (UID: \"329b8c10-cfb4-49bc-ac25-7b4c724afa31\") " pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824209 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824227 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: 
\"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824248 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de36187a-e7bd-445a-ba5e-3fcff71d0175-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824269 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/788d5882-22df-4b55-ae2f-4a92fba7e889-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824286 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/788d5882-22df-4b55-ae2f-4a92fba7e889-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824304 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/329b8c10-cfb4-49bc-ac25-7b4c724afa31-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-f886f46f4-gz92q\" (UID: \"329b8c10-cfb4-49bc-ac25-7b4c724afa31\") " 
pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824321 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-config\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824337 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/329b8c10-cfb4-49bc-ac25-7b4c724afa31-secret-thanos-querier-tls\") pod \"thanos-querier-f886f46f4-gz92q\" (UID: \"329b8c10-cfb4-49bc-ac25-7b4c724afa31\") " pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824355 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/788d5882-22df-4b55-ae2f-4a92fba7e889-web-config\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824371 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/de36187a-e7bd-445a-ba5e-3fcff71d0175-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824388 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkkbl\" (UniqueName: 
\"kubernetes.io/projected/de36187a-e7bd-445a-ba5e-3fcff71d0175-kube-api-access-nkkbl\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824402 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/a89f49f8-e2cb-40ae-b447-12aee110e1f4-audit-log\") pod \"metrics-server-57ddf7d868-wm6cg\" (UID: \"a89f49f8-e2cb-40ae-b447-12aee110e1f4\") " pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824417 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/788d5882-22df-4b55-ae2f-4a92fba7e889-config-volume\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824434 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/788d5882-22df-4b55-ae2f-4a92fba7e889-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824450 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/788d5882-22df-4b55-ae2f-4a92fba7e889-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.830679 master-0 
kubenswrapper[38936]: I0216 21:25:10.824467 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/329b8c10-cfb4-49bc-ac25-7b4c724afa31-secret-grpc-tls\") pod \"thanos-querier-f886f46f4-gz92q\" (UID: \"329b8c10-cfb4-49bc-ac25-7b4c724afa31\") " pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824486 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de36187a-e7bd-445a-ba5e-3fcff71d0175-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824502 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9b54b7c9-1b32-458b-b231-7e64b91a1a93-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-77f5595c8c-8jsq7\" (UID: \"9b54b7c9-1b32-458b-b231-7e64b91a1a93\") " pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824522 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/9b54b7c9-1b32-458b-b231-7e64b91a1a93-secret-telemeter-client\") pod \"telemeter-client-77f5595c8c-8jsq7\" (UID: \"9b54b7c9-1b32-458b-b231-7e64b91a1a93\") " pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824543 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/a89f49f8-e2cb-40ae-b447-12aee110e1f4-secret-metrics-server-tls\") pod \"metrics-server-57ddf7d868-wm6cg\" (UID: \"a89f49f8-e2cb-40ae-b447-12aee110e1f4\") " pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824558 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824578 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824594 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/329b8c10-cfb4-49bc-ac25-7b4c724afa31-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-f886f46f4-gz92q\" (UID: \"329b8c10-cfb4-49bc-ac25-7b4c724afa31\") " pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824613 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/329b8c10-cfb4-49bc-ac25-7b4c724afa31-secret-thanos-querier-kube-rbac-proxy-metrics\") 
pod \"thanos-querier-f886f46f4-gz92q\" (UID: \"329b8c10-cfb4-49bc-ac25-7b4c724afa31\") " pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824628 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkkvs\" (UniqueName: \"kubernetes.io/projected/9b54b7c9-1b32-458b-b231-7e64b91a1a93-kube-api-access-rkkvs\") pod \"telemeter-client-77f5595c8c-8jsq7\" (UID: \"9b54b7c9-1b32-458b-b231-7e64b91a1a93\") " pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824664 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9b54b7c9-1b32-458b-b231-7e64b91a1a93-metrics-client-ca\") pod \"telemeter-client-77f5595c8c-8jsq7\" (UID: \"9b54b7c9-1b32-458b-b231-7e64b91a1a93\") " pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824681 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/329b8c10-cfb4-49bc-ac25-7b4c724afa31-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-f886f46f4-gz92q\" (UID: \"329b8c10-cfb4-49bc-ac25-7b4c724afa31\") " pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824697 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/329b8c10-cfb4-49bc-ac25-7b4c724afa31-metrics-client-ca\") pod \"thanos-querier-f886f46f4-gz92q\" (UID: \"329b8c10-cfb4-49bc-ac25-7b4c724afa31\") " pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" Feb 16 21:25:10.830679 
master-0 kubenswrapper[38936]: I0216 21:25:10.824789 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824849 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a89f49f8-e2cb-40ae-b447-12aee110e1f4-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-57ddf7d868-wm6cg\" (UID: \"a89f49f8-e2cb-40ae-b447-12aee110e1f4\") " pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.824868 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dmr7\" (UniqueName: \"kubernetes.io/projected/788d5882-22df-4b55-ae2f-4a92fba7e889-kube-api-access-9dmr7\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.825050 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-web-config\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.830679 master-0 kubenswrapper[38936]: I0216 21:25:10.825179 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a89f49f8-e2cb-40ae-b447-12aee110e1f4-client-ca-bundle\") pod \"metrics-server-57ddf7d868-wm6cg\" (UID: \"a89f49f8-e2cb-40ae-b447-12aee110e1f4\") " pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg" Feb 16 21:25:10.872679 master-0 kubenswrapper[38936]: I0216 21:25:10.872405 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-f886f46f4-gz92q"] Feb 16 21:25:10.892677 master-0 kubenswrapper[38936]: I0216 21:25:10.876557 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-77f5595c8c-8jsq7"] Feb 16 21:25:10.905832 master-0 kubenswrapper[38936]: I0216 21:25:10.899721 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-57ddf7d868-wm6cg"] Feb 16 21:25:10.914685 master-0 kubenswrapper[38936]: I0216 21:25:10.906342 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 21:25:10.922964 master-0 kubenswrapper[38936]: I0216 21:25:10.919992 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926336 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/de36187a-e7bd-445a-ba5e-3fcff71d0175-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926388 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/788d5882-22df-4b55-ae2f-4a92fba7e889-web-config\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 
21:25:10.926408 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkkbl\" (UniqueName: \"kubernetes.io/projected/de36187a-e7bd-445a-ba5e-3fcff71d0175-kube-api-access-nkkbl\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926424 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/a89f49f8-e2cb-40ae-b447-12aee110e1f4-audit-log\") pod \"metrics-server-57ddf7d868-wm6cg\" (UID: \"a89f49f8-e2cb-40ae-b447-12aee110e1f4\") " pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926444 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/788d5882-22df-4b55-ae2f-4a92fba7e889-config-volume\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926458 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/788d5882-22df-4b55-ae2f-4a92fba7e889-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926478 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/788d5882-22df-4b55-ae2f-4a92fba7e889-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " 
pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926494 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/329b8c10-cfb4-49bc-ac25-7b4c724afa31-secret-grpc-tls\") pod \"thanos-querier-f886f46f4-gz92q\" (UID: \"329b8c10-cfb4-49bc-ac25-7b4c724afa31\") " pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926514 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de36187a-e7bd-445a-ba5e-3fcff71d0175-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926529 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9b54b7c9-1b32-458b-b231-7e64b91a1a93-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-77f5595c8c-8jsq7\" (UID: \"9b54b7c9-1b32-458b-b231-7e64b91a1a93\") " pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926550 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/9b54b7c9-1b32-458b-b231-7e64b91a1a93-secret-telemeter-client\") pod \"telemeter-client-77f5595c8c-8jsq7\" (UID: \"9b54b7c9-1b32-458b-b231-7e64b91a1a93\") " pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926571 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926588 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/a89f49f8-e2cb-40ae-b447-12aee110e1f4-secret-metrics-server-tls\") pod \"metrics-server-57ddf7d868-wm6cg\" (UID: \"a89f49f8-e2cb-40ae-b447-12aee110e1f4\") " pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926603 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926618 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/329b8c10-cfb4-49bc-ac25-7b4c724afa31-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-f886f46f4-gz92q\" (UID: \"329b8c10-cfb4-49bc-ac25-7b4c724afa31\") " pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926633 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/329b8c10-cfb4-49bc-ac25-7b4c724afa31-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-f886f46f4-gz92q\" (UID: 
\"329b8c10-cfb4-49bc-ac25-7b4c724afa31\") " pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926665 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkkvs\" (UniqueName: \"kubernetes.io/projected/9b54b7c9-1b32-458b-b231-7e64b91a1a93-kube-api-access-rkkvs\") pod \"telemeter-client-77f5595c8c-8jsq7\" (UID: \"9b54b7c9-1b32-458b-b231-7e64b91a1a93\") " pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926699 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9b54b7c9-1b32-458b-b231-7e64b91a1a93-metrics-client-ca\") pod \"telemeter-client-77f5595c8c-8jsq7\" (UID: \"9b54b7c9-1b32-458b-b231-7e64b91a1a93\") " pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926718 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/329b8c10-cfb4-49bc-ac25-7b4c724afa31-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-f886f46f4-gz92q\" (UID: \"329b8c10-cfb4-49bc-ac25-7b4c724afa31\") " pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926733 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/329b8c10-cfb4-49bc-ac25-7b4c724afa31-metrics-client-ca\") pod \"thanos-querier-f886f46f4-gz92q\" (UID: \"329b8c10-cfb4-49bc-ac25-7b4c724afa31\") " pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926751 38936 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926772 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a89f49f8-e2cb-40ae-b447-12aee110e1f4-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-57ddf7d868-wm6cg\" (UID: \"a89f49f8-e2cb-40ae-b447-12aee110e1f4\") " pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926796 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dmr7\" (UniqueName: \"kubernetes.io/projected/788d5882-22df-4b55-ae2f-4a92fba7e889-kube-api-access-9dmr7\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926816 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-web-config\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926839 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a89f49f8-e2cb-40ae-b447-12aee110e1f4-client-ca-bundle\") pod \"metrics-server-57ddf7d868-wm6cg\" (UID: \"a89f49f8-e2cb-40ae-b447-12aee110e1f4\") " pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg" Feb 16 21:25:10.927068 master-0 
kubenswrapper[38936]: I0216 21:25:10.926859 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/de36187a-e7bd-445a-ba5e-3fcff71d0175-config-out\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926874 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de36187a-e7bd-445a-ba5e-3fcff71d0175-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926895 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b54b7c9-1b32-458b-b231-7e64b91a1a93-telemeter-trusted-ca-bundle\") pod \"telemeter-client-77f5595c8c-8jsq7\" (UID: \"9b54b7c9-1b32-458b-b231-7e64b91a1a93\") " pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926919 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/788d5882-22df-4b55-ae2f-4a92fba7e889-config-out\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926942 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b54b7c9-1b32-458b-b231-7e64b91a1a93-serving-certs-ca-bundle\") pod \"telemeter-client-77f5595c8c-8jsq7\" (UID: \"9b54b7c9-1b32-458b-b231-7e64b91a1a93\") " 
pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926961 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/788d5882-22df-4b55-ae2f-4a92fba7e889-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926976 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/788d5882-22df-4b55-ae2f-4a92fba7e889-tls-assets\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.926996 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grvhp\" (UniqueName: \"kubernetes.io/projected/a89f49f8-e2cb-40ae-b447-12aee110e1f4-kube-api-access-grvhp\") pod \"metrics-server-57ddf7d868-wm6cg\" (UID: \"a89f49f8-e2cb-40ae-b447-12aee110e1f4\") " pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg" Feb 16 21:25:10.927068 master-0 kubenswrapper[38936]: I0216 21:25:10.927013 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.928581 master-0 kubenswrapper[38936]: I0216 21:25:10.928075 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/a89f49f8-e2cb-40ae-b447-12aee110e1f4-audit-log\") pod 
\"metrics-server-57ddf7d868-wm6cg\" (UID: \"a89f49f8-e2cb-40ae-b447-12aee110e1f4\") " pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg" Feb 16 21:25:10.930759 master-0 kubenswrapper[38936]: I0216 21:25:10.930714 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/329b8c10-cfb4-49bc-ac25-7b4c724afa31-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-f886f46f4-gz92q\" (UID: \"329b8c10-cfb4-49bc-ac25-7b4c724afa31\") " pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" Feb 16 21:25:10.930759 master-0 kubenswrapper[38936]: I0216 21:25:10.930734 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b54b7c9-1b32-458b-b231-7e64b91a1a93-telemeter-trusted-ca-bundle\") pod \"telemeter-client-77f5595c8c-8jsq7\" (UID: \"9b54b7c9-1b32-458b-b231-7e64b91a1a93\") " pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7" Feb 16 21:25:10.931047 master-0 kubenswrapper[38936]: I0216 21:25:10.931004 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/788d5882-22df-4b55-ae2f-4a92fba7e889-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.931340 master-0 kubenswrapper[38936]: I0216 21:25:10.931303 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.931480 master-0 kubenswrapper[38936]: I0216 21:25:10.931437 38936 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/788d5882-22df-4b55-ae2f-4a92fba7e889-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.931884 master-0 kubenswrapper[38936]: I0216 21:25:10.931847 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a89f49f8-e2cb-40ae-b447-12aee110e1f4-secret-metrics-client-certs\") pod \"metrics-server-57ddf7d868-wm6cg\" (UID: \"a89f49f8-e2cb-40ae-b447-12aee110e1f4\") " pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg" Feb 16 21:25:10.931965 master-0 kubenswrapper[38936]: I0216 21:25:10.931913 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/9b54b7c9-1b32-458b-b231-7e64b91a1a93-telemeter-client-tls\") pod \"telemeter-client-77f5595c8c-8jsq7\" (UID: \"9b54b7c9-1b32-458b-b231-7e64b91a1a93\") " pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7" Feb 16 21:25:10.931965 master-0 kubenswrapper[38936]: I0216 21:25:10.931951 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/788d5882-22df-4b55-ae2f-4a92fba7e889-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.932075 master-0 kubenswrapper[38936]: I0216 21:25:10.931986 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de36187a-e7bd-445a-ba5e-3fcff71d0175-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: 
\"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.932075 master-0 kubenswrapper[38936]: I0216 21:25:10.932033 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/a89f49f8-e2cb-40ae-b447-12aee110e1f4-metrics-server-audit-profiles\") pod \"metrics-server-57ddf7d868-wm6cg\" (UID: \"a89f49f8-e2cb-40ae-b447-12aee110e1f4\") " pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg" Feb 16 21:25:10.932180 master-0 kubenswrapper[38936]: I0216 21:25:10.932078 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/788d5882-22df-4b55-ae2f-4a92fba7e889-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.932180 master-0 kubenswrapper[38936]: I0216 21:25:10.932115 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.932180 master-0 kubenswrapper[38936]: I0216 21:25:10.932143 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/de36187a-e7bd-445a-ba5e-3fcff71d0175-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.932180 master-0 kubenswrapper[38936]: I0216 21:25:10.932169 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/de36187a-e7bd-445a-ba5e-3fcff71d0175-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.932361 master-0 kubenswrapper[38936]: I0216 21:25:10.932191 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/9b54b7c9-1b32-458b-b231-7e64b91a1a93-federate-client-tls\") pod \"telemeter-client-77f5595c8c-8jsq7\" (UID: \"9b54b7c9-1b32-458b-b231-7e64b91a1a93\") " pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7" Feb 16 21:25:10.932361 master-0 kubenswrapper[38936]: I0216 21:25:10.932241 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvlzc\" (UniqueName: \"kubernetes.io/projected/329b8c10-cfb4-49bc-ac25-7b4c724afa31-kube-api-access-cvlzc\") pod \"thanos-querier-f886f46f4-gz92q\" (UID: \"329b8c10-cfb4-49bc-ac25-7b4c724afa31\") " pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" Feb 16 21:25:10.932361 master-0 kubenswrapper[38936]: I0216 21:25:10.932283 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.932361 master-0 kubenswrapper[38936]: I0216 21:25:10.932286 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/329b8c10-cfb4-49bc-ac25-7b4c724afa31-secret-grpc-tls\") pod \"thanos-querier-f886f46f4-gz92q\" (UID: \"329b8c10-cfb4-49bc-ac25-7b4c724afa31\") " pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" Feb 16 21:25:10.932361 master-0 kubenswrapper[38936]: I0216 
21:25:10.932317 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.932361 master-0 kubenswrapper[38936]: I0216 21:25:10.932355 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de36187a-e7bd-445a-ba5e-3fcff71d0175-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.932633 master-0 kubenswrapper[38936]: I0216 21:25:10.932364 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/de36187a-e7bd-445a-ba5e-3fcff71d0175-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.932633 master-0 kubenswrapper[38936]: I0216 21:25:10.932384 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/788d5882-22df-4b55-ae2f-4a92fba7e889-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 21:25:10.932633 master-0 kubenswrapper[38936]: I0216 21:25:10.932423 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/788d5882-22df-4b55-ae2f-4a92fba7e889-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 
21:25:10.932633 master-0 kubenswrapper[38936]: I0216 21:25:10.932449 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/329b8c10-cfb4-49bc-ac25-7b4c724afa31-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-f886f46f4-gz92q\" (UID: \"329b8c10-cfb4-49bc-ac25-7b4c724afa31\") " pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" Feb 16 21:25:10.932633 master-0 kubenswrapper[38936]: I0216 21:25:10.932485 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/329b8c10-cfb4-49bc-ac25-7b4c724afa31-secret-thanos-querier-tls\") pod \"thanos-querier-f886f46f4-gz92q\" (UID: \"329b8c10-cfb4-49bc-ac25-7b4c724afa31\") " pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" Feb 16 21:25:10.932633 master-0 kubenswrapper[38936]: I0216 21:25:10.932509 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-config\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.932633 master-0 kubenswrapper[38936]: I0216 21:25:10.932506 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b54b7c9-1b32-458b-b231-7e64b91a1a93-serving-certs-ca-bundle\") pod \"telemeter-client-77f5595c8c-8jsq7\" (UID: \"9b54b7c9-1b32-458b-b231-7e64b91a1a93\") " pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7" Feb 16 21:25:10.932995 master-0 kubenswrapper[38936]: I0216 21:25:10.932758 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/de36187a-e7bd-445a-ba5e-3fcff71d0175-configmap-metrics-client-ca\") 
pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:10.943425 master-0 kubenswrapper[38936]: I0216 21:25:10.943317 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/a89f49f8-e2cb-40ae-b447-12aee110e1f4-secret-metrics-server-tls\") pod \"metrics-server-57ddf7d868-wm6cg\" (UID: \"a89f49f8-e2cb-40ae-b447-12aee110e1f4\") " pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg" Feb 16 21:25:10.943923 master-0 kubenswrapper[38936]: I0216 21:25:10.943865 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/329b8c10-cfb4-49bc-ac25-7b4c724afa31-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-f886f46f4-gz92q\" (UID: \"329b8c10-cfb4-49bc-ac25-7b4c724afa31\") " pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" Feb 16 21:25:10.944635 master-0 kubenswrapper[38936]: I0216 21:25:10.944576 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/329b8c10-cfb4-49bc-ac25-7b4c724afa31-metrics-client-ca\") pod \"thanos-querier-f886f46f4-gz92q\" (UID: \"329b8c10-cfb4-49bc-ac25-7b4c724afa31\") " pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" Feb 16 21:25:10.945670 master-0 kubenswrapper[38936]: I0216 21:25:10.945620 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a89f49f8-e2cb-40ae-b447-12aee110e1f4-secret-metrics-client-certs\") pod \"metrics-server-57ddf7d868-wm6cg\" (UID: \"a89f49f8-e2cb-40ae-b447-12aee110e1f4\") " pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg" Feb 16 21:25:10.947408 master-0 kubenswrapper[38936]: I0216 21:25:10.947372 38936 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/329b8c10-cfb4-49bc-ac25-7b4c724afa31-secret-thanos-querier-tls\") pod \"thanos-querier-f886f46f4-gz92q\" (UID: \"329b8c10-cfb4-49bc-ac25-7b4c724afa31\") " pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" Feb 16 21:25:10.947705 master-0 kubenswrapper[38936]: I0216 21:25:10.947661 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a89f49f8-e2cb-40ae-b447-12aee110e1f4-client-ca-bundle\") pod \"metrics-server-57ddf7d868-wm6cg\" (UID: \"a89f49f8-e2cb-40ae-b447-12aee110e1f4\") " pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg" Feb 16 21:25:10.948792 master-0 kubenswrapper[38936]: I0216 21:25:10.948762 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/329b8c10-cfb4-49bc-ac25-7b4c724afa31-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-f886f46f4-gz92q\" (UID: \"329b8c10-cfb4-49bc-ac25-7b4c724afa31\") " pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" Feb 16 21:25:10.952644 master-0 kubenswrapper[38936]: I0216 21:25:10.952604 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/329b8c10-cfb4-49bc-ac25-7b4c724afa31-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-f886f46f4-gz92q\" (UID: \"329b8c10-cfb4-49bc-ac25-7b4c724afa31\") " pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" Feb 16 21:25:10.955279 master-0 kubenswrapper[38936]: I0216 21:25:10.955244 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de36187a-e7bd-445a-ba5e-3fcff71d0175-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " 
pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 21:25:10.955902 master-0 kubenswrapper[38936]: I0216 21:25:10.955867 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/788d5882-22df-4b55-ae2f-4a92fba7e889-web-config\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:25:10.955902 master-0 kubenswrapper[38936]: I0216 21:25:10.955878 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 21:25:10.956381 master-0 kubenswrapper[38936]: I0216 21:25:10.956350 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/788d5882-22df-4b55-ae2f-4a92fba7e889-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:25:10.956752 master-0 kubenswrapper[38936]: I0216 21:25:10.956721 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/788d5882-22df-4b55-ae2f-4a92fba7e889-config-out\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:25:10.956843 master-0 kubenswrapper[38936]: I0216 21:25:10.956821 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/788d5882-22df-4b55-ae2f-4a92fba7e889-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:25:10.957060 master-0 kubenswrapper[38936]: I0216 21:25:10.957035 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/788d5882-22df-4b55-ae2f-4a92fba7e889-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:25:10.957204 master-0 kubenswrapper[38936]: I0216 21:25:10.957174 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/788d5882-22df-4b55-ae2f-4a92fba7e889-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:25:10.957254 master-0 kubenswrapper[38936]: I0216 21:25:10.957186 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/788d5882-22df-4b55-ae2f-4a92fba7e889-config-volume\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:25:10.957352 master-0 kubenswrapper[38936]: I0216 21:25:10.957329 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/de36187a-e7bd-445a-ba5e-3fcff71d0175-config-out\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 21:25:10.957613 master-0 kubenswrapper[38936]: I0216 21:25:10.957592 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/de36187a-e7bd-445a-ba5e-3fcff71d0175-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 21:25:10.957742 master-0 kubenswrapper[38936]: I0216 21:25:10.957717 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 21:25:10.957794 master-0 kubenswrapper[38936]: I0216 21:25:10.957759 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 21:25:10.958003 master-0 kubenswrapper[38936]: I0216 21:25:10.957972 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 21:25:10.958138 master-0 kubenswrapper[38936]: I0216 21:25:10.958111 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9b54b7c9-1b32-458b-b231-7e64b91a1a93-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-77f5595c8c-8jsq7\" (UID: \"9b54b7c9-1b32-458b-b231-7e64b91a1a93\") " pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7"
Feb 16 21:25:10.958364 master-0 kubenswrapper[38936]: I0216 21:25:10.958337 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9b54b7c9-1b32-458b-b231-7e64b91a1a93-metrics-client-ca\") pod \"telemeter-client-77f5595c8c-8jsq7\" (UID: \"9b54b7c9-1b32-458b-b231-7e64b91a1a93\") " pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7"
Feb 16 21:25:10.958594 master-0 kubenswrapper[38936]: I0216 21:25:10.958568 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/9b54b7c9-1b32-458b-b231-7e64b91a1a93-federate-client-tls\") pod \"telemeter-client-77f5595c8c-8jsq7\" (UID: \"9b54b7c9-1b32-458b-b231-7e64b91a1a93\") " pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7"
Feb 16 21:25:10.958882 master-0 kubenswrapper[38936]: I0216 21:25:10.958787 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 21:25:10.958882 master-0 kubenswrapper[38936]: I0216 21:25:10.958823 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/9b54b7c9-1b32-458b-b231-7e64b91a1a93-telemeter-client-tls\") pod \"telemeter-client-77f5595c8c-8jsq7\" (UID: \"9b54b7c9-1b32-458b-b231-7e64b91a1a93\") " pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7"
Feb 16 21:25:10.958882 master-0 kubenswrapper[38936]: I0216 21:25:10.958837 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a89f49f8-e2cb-40ae-b447-12aee110e1f4-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-57ddf7d868-wm6cg\" (UID: \"a89f49f8-e2cb-40ae-b447-12aee110e1f4\") " pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg"
Feb 16 21:25:10.959228 master-0 kubenswrapper[38936]: I0216 21:25:10.959202 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/9b54b7c9-1b32-458b-b231-7e64b91a1a93-secret-telemeter-client\") pod \"telemeter-client-77f5595c8c-8jsq7\" (UID: \"9b54b7c9-1b32-458b-b231-7e64b91a1a93\") " pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7"
Feb 16 21:25:10.959744 master-0 kubenswrapper[38936]: I0216 21:25:10.959708 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/788d5882-22df-4b55-ae2f-4a92fba7e889-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:25:10.959862 master-0 kubenswrapper[38936]: I0216 21:25:10.959831 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-config\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 21:25:10.959979 master-0 kubenswrapper[38936]: I0216 21:25:10.959932 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de36187a-e7bd-445a-ba5e-3fcff71d0175-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 21:25:10.960025 master-0 kubenswrapper[38936]: I0216 21:25:10.960000 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de36187a-e7bd-445a-ba5e-3fcff71d0175-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 21:25:10.960074 master-0 kubenswrapper[38936]: I0216 21:25:10.960056 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/788d5882-22df-4b55-ae2f-4a92fba7e889-tls-assets\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:25:10.960504 master-0 kubenswrapper[38936]: I0216 21:25:10.960472 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 21:25:10.960553 master-0 kubenswrapper[38936]: I0216 21:25:10.960515 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/a89f49f8-e2cb-40ae-b447-12aee110e1f4-metrics-server-audit-profiles\") pod \"metrics-server-57ddf7d868-wm6cg\" (UID: \"a89f49f8-e2cb-40ae-b447-12aee110e1f4\") " pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg"
Feb 16 21:25:10.960884 master-0 kubenswrapper[38936]: I0216 21:25:10.960855 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/de36187a-e7bd-445a-ba5e-3fcff71d0175-web-config\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 21:25:10.964859 master-0 kubenswrapper[38936]: I0216 21:25:10.964822 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/de36187a-e7bd-445a-ba5e-3fcff71d0175-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 21:25:11.092896 master-0 kubenswrapper[38936]: I0216 21:25:11.092830 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkkbl\" (UniqueName: \"kubernetes.io/projected/de36187a-e7bd-445a-ba5e-3fcff71d0175-kube-api-access-nkkbl\") pod \"prometheus-k8s-0\" (UID: \"de36187a-e7bd-445a-ba5e-3fcff71d0175\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 21:25:11.115539 master-0 kubenswrapper[38936]: I0216 21:25:11.115466 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24"]
Feb 16 21:25:11.115893 master-0 kubenswrapper[38936]: I0216 21:25:11.115799 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" podUID="1489d1b6-d8a1-453a-bff3-8adfd4335903" containerName="route-controller-manager" containerID="cri-o://871f46e938656ef846c5525d2292afdd15ba15225bc063c38e05de3503244dc1" gracePeriod=30
Feb 16 21:25:11.117748 master-0 kubenswrapper[38936]: I0216 21:25:11.117302 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grvhp\" (UniqueName: \"kubernetes.io/projected/a89f49f8-e2cb-40ae-b447-12aee110e1f4-kube-api-access-grvhp\") pod \"metrics-server-57ddf7d868-wm6cg\" (UID: \"a89f49f8-e2cb-40ae-b447-12aee110e1f4\") " pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg"
Feb 16 21:25:11.117928 master-0 kubenswrapper[38936]: I0216 21:25:11.117828 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dmr7\" (UniqueName: \"kubernetes.io/projected/788d5882-22df-4b55-ae2f-4a92fba7e889-kube-api-access-9dmr7\") pod \"alertmanager-main-0\" (UID: \"788d5882-22df-4b55-ae2f-4a92fba7e889\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:25:11.119296 master-0 kubenswrapper[38936]: I0216 21:25:11.118839 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-5c88849d7d-xfnmp"]
Feb 16 21:25:11.120416 master-0 kubenswrapper[38936]: I0216 21:25:11.120305 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkkvs\" (UniqueName: \"kubernetes.io/projected/9b54b7c9-1b32-458b-b231-7e64b91a1a93-kube-api-access-rkkvs\") pod \"telemeter-client-77f5595c8c-8jsq7\" (UID: \"9b54b7c9-1b32-458b-b231-7e64b91a1a93\") " pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7"
Feb 16 21:25:11.123784 master-0 kubenswrapper[38936]: I0216 21:25:11.123721 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvlzc\" (UniqueName: \"kubernetes.io/projected/329b8c10-cfb4-49bc-ac25-7b4c724afa31-kube-api-access-cvlzc\") pod \"thanos-querier-f886f46f4-gz92q\" (UID: \"329b8c10-cfb4-49bc-ac25-7b4c724afa31\") " pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q"
Feb 16 21:25:11.317905 master-0 kubenswrapper[38936]: I0216 21:25:11.317833 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg"
Feb 16 21:25:11.342568 master-0 kubenswrapper[38936]: I0216 21:25:11.342521 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q"
Feb 16 21:25:11.359893 master-0 kubenswrapper[38936]: I0216 21:25:11.359843 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7"
Feb 16 21:25:11.373822 master-0 kubenswrapper[38936]: I0216 21:25:11.371238 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Feb 16 21:25:11.396949 master-0 kubenswrapper[38936]: I0216 21:25:11.396887 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 21:25:11.531946 master-0 kubenswrapper[38936]: I0216 21:25:11.531795 38936 generic.go:334] "Generic (PLEG): container finished" podID="408a9364-3730-4017-b1e4-c85d6a504168" containerID="998c9ae589b8ae43e110fa0bf1929dd53f4179a605ee219bd9e74970ce1b2465" exitCode=0
Feb 16 21:25:11.531946 master-0 kubenswrapper[38936]: I0216 21:25:11.531853 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" event={"ID":"408a9364-3730-4017-b1e4-c85d6a504168","Type":"ContainerDied","Data":"998c9ae589b8ae43e110fa0bf1929dd53f4179a605ee219bd9e74970ce1b2465"}
Feb 16 21:25:11.532472 master-0 kubenswrapper[38936]: I0216 21:25:11.532445 38936 scope.go:117] "RemoveContainer" containerID="ec8ce2b77f9d3d1712f1d9e5d59ca2196200eb54635d01b0d1caf94494809751"
Feb 16 21:25:11.535697 master-0 kubenswrapper[38936]: I0216 21:25:11.535483 38936 generic.go:334] "Generic (PLEG): container finished" podID="1489d1b6-d8a1-453a-bff3-8adfd4335903" containerID="871f46e938656ef846c5525d2292afdd15ba15225bc063c38e05de3503244dc1" exitCode=0
Feb 16 21:25:11.535697 master-0 kubenswrapper[38936]: I0216 21:25:11.535638 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" event={"ID":"1489d1b6-d8a1-453a-bff3-8adfd4335903","Type":"ContainerDied","Data":"871f46e938656ef846c5525d2292afdd15ba15225bc063c38e05de3503244dc1"}
Feb 16 21:25:11.626918 master-0 kubenswrapper[38936]: I0216 21:25:11.622658 38936 scope.go:117] "RemoveContainer" containerID="25ee620a91a11cdfcf10f317458e9833777a7250c9af0cd0962ed366c5d07a92"
Feb 16 21:25:11.788443 master-0 kubenswrapper[38936]: I0216 21:25:11.788344 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24"
Feb 16 21:25:11.951153 master-0 kubenswrapper[38936]: I0216 21:25:11.951086 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1489d1b6-d8a1-453a-bff3-8adfd4335903-config\") pod \"1489d1b6-d8a1-453a-bff3-8adfd4335903\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") "
Feb 16 21:25:11.951412 master-0 kubenswrapper[38936]: I0216 21:25:11.951268 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1489d1b6-d8a1-453a-bff3-8adfd4335903-serving-cert\") pod \"1489d1b6-d8a1-453a-bff3-8adfd4335903\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") "
Feb 16 21:25:11.951412 master-0 kubenswrapper[38936]: I0216 21:25:11.951338 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc47v\" (UniqueName: \"kubernetes.io/projected/1489d1b6-d8a1-453a-bff3-8adfd4335903-kube-api-access-xc47v\") pod \"1489d1b6-d8a1-453a-bff3-8adfd4335903\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") "
Feb 16 21:25:11.951412 master-0 kubenswrapper[38936]: I0216 21:25:11.951380 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1489d1b6-d8a1-453a-bff3-8adfd4335903-client-ca\") pod \"1489d1b6-d8a1-453a-bff3-8adfd4335903\" (UID: \"1489d1b6-d8a1-453a-bff3-8adfd4335903\") "
Feb 16 21:25:11.952091 master-0 kubenswrapper[38936]: I0216 21:25:11.952055 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1489d1b6-d8a1-453a-bff3-8adfd4335903-config" (OuterVolumeSpecName: "config") pod "1489d1b6-d8a1-453a-bff3-8adfd4335903" (UID: "1489d1b6-d8a1-453a-bff3-8adfd4335903"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:25:11.952147 master-0 kubenswrapper[38936]: I0216 21:25:11.952050 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1489d1b6-d8a1-453a-bff3-8adfd4335903-client-ca" (OuterVolumeSpecName: "client-ca") pod "1489d1b6-d8a1-453a-bff3-8adfd4335903" (UID: "1489d1b6-d8a1-453a-bff3-8adfd4335903"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:25:11.955074 master-0 kubenswrapper[38936]: I0216 21:25:11.955028 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1489d1b6-d8a1-453a-bff3-8adfd4335903-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1489d1b6-d8a1-453a-bff3-8adfd4335903" (UID: "1489d1b6-d8a1-453a-bff3-8adfd4335903"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:25:11.956364 master-0 kubenswrapper[38936]: I0216 21:25:11.956321 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1489d1b6-d8a1-453a-bff3-8adfd4335903-kube-api-access-xc47v" (OuterVolumeSpecName: "kube-api-access-xc47v") pod "1489d1b6-d8a1-453a-bff3-8adfd4335903" (UID: "1489d1b6-d8a1-453a-bff3-8adfd4335903"). InnerVolumeSpecName "kube-api-access-xc47v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:25:12.060302 master-0 kubenswrapper[38936]: I0216 21:25:12.055850 38936 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1489d1b6-d8a1-453a-bff3-8adfd4335903-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 16 21:25:12.060302 master-0 kubenswrapper[38936]: I0216 21:25:12.055899 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc47v\" (UniqueName: \"kubernetes.io/projected/1489d1b6-d8a1-453a-bff3-8adfd4335903-kube-api-access-xc47v\") on node \"master-0\" DevicePath \"\""
Feb 16 21:25:12.060302 master-0 kubenswrapper[38936]: I0216 21:25:12.055912 38936 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1489d1b6-d8a1-453a-bff3-8adfd4335903-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 16 21:25:12.060302 master-0 kubenswrapper[38936]: I0216 21:25:12.055921 38936 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1489d1b6-d8a1-453a-bff3-8adfd4335903-config\") on node \"master-0\" DevicePath \"\""
Feb 16 21:25:12.241235 master-0 kubenswrapper[38936]: I0216 21:25:12.241169 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-57ddf7d868-wm6cg"]
Feb 16 21:25:12.266063 master-0 kubenswrapper[38936]: I0216 21:25:12.262780 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-77f5595c8c-8jsq7"]
Feb 16 21:25:12.272211 master-0 kubenswrapper[38936]: I0216 21:25:12.272151 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-f886f46f4-gz92q"]
Feb 16 21:25:12.274470 master-0 kubenswrapper[38936]: W0216 21:25:12.274322 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b54b7c9_1b32_458b_b231_7e64b91a1a93.slice/crio-81d5e0fbcd17701dfceafa13ea151d6d2f9c70b33b38ea74e526fba91fa3d650 WatchSource:0}: Error finding container 81d5e0fbcd17701dfceafa13ea151d6d2f9c70b33b38ea74e526fba91fa3d650: Status 404 returned error can't find the container with id 81d5e0fbcd17701dfceafa13ea151d6d2f9c70b33b38ea74e526fba91fa3d650
Feb 16 21:25:12.277914 master-0 kubenswrapper[38936]: I0216 21:25:12.277858 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 16 21:25:12.298901 master-0 kubenswrapper[38936]: W0216 21:25:12.298833 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod788d5882_22df_4b55_ae2f_4a92fba7e889.slice/crio-0684040c2c62d68949505c623b8615fa542d64c17c17e8cddf02aad418227363 WatchSource:0}: Error finding container 0684040c2c62d68949505c623b8615fa542d64c17c17e8cddf02aad418227363: Status 404 returned error can't find the container with id 0684040c2c62d68949505c623b8615fa542d64c17c17e8cddf02aad418227363
Feb 16 21:25:12.320991 master-0 kubenswrapper[38936]: I0216 21:25:12.320946 38936 patch_prober.go:28] interesting pod/console-7dcddfd95-nldpw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body=
Feb 16 21:25:12.321137 master-0 kubenswrapper[38936]: I0216 21:25:12.321026 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7dcddfd95-nldpw" podUID="503aa866-c355-434a-a39c-fa6072733ea8" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused"
Feb 16 21:25:12.366938 master-0 kubenswrapper[38936]: I0216 21:25:12.366792 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 16 21:25:12.384393 master-0 kubenswrapper[38936]: W0216 21:25:12.384329 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde36187a_e7bd_445a_ba5e_3fcff71d0175.slice/crio-7b2c0f5baac8cd1e0e6b534f403f33163e799f2d9a2a71369a12e0d31ffb62f0 WatchSource:0}: Error finding container 7b2c0f5baac8cd1e0e6b534f403f33163e799f2d9a2a71369a12e0d31ffb62f0: Status 404 returned error can't find the container with id 7b2c0f5baac8cd1e0e6b534f403f33163e799f2d9a2a71369a12e0d31ffb62f0
Feb 16 21:25:12.416887 master-0 kubenswrapper[38936]: I0216 21:25:12.416803 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2"
Feb 16 21:25:12.544306 master-0 kubenswrapper[38936]: I0216 21:25:12.544216 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"788d5882-22df-4b55-ae2f-4a92fba7e889","Type":"ContainerStarted","Data":"0684040c2c62d68949505c623b8615fa542d64c17c17e8cddf02aad418227363"}
Feb 16 21:25:12.545578 master-0 kubenswrapper[38936]: I0216 21:25:12.545540 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7" event={"ID":"9b54b7c9-1b32-458b-b231-7e64b91a1a93","Type":"ContainerStarted","Data":"81d5e0fbcd17701dfceafa13ea151d6d2f9c70b33b38ea74e526fba91fa3d650"}
Feb 16 21:25:12.546930 master-0 kubenswrapper[38936]: I0216 21:25:12.546894 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" event={"ID":"329b8c10-cfb4-49bc-ac25-7b4c724afa31","Type":"ContainerStarted","Data":"281d2ca9574743d773500d753bbf1736985ff9aa85d53e2d08cddcf13e814ec6"}
Feb 16 21:25:12.548356 master-0 kubenswrapper[38936]: I0216 21:25:12.548321 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg" event={"ID":"a89f49f8-e2cb-40ae-b447-12aee110e1f4","Type":"ContainerStarted","Data":"f781e566f79087c3b93eb412026da3763f66763d965dd882b84bb9b043cca26d"}
Feb 16 21:25:12.548356 master-0 kubenswrapper[38936]: I0216 21:25:12.548352 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg" event={"ID":"a89f49f8-e2cb-40ae-b447-12aee110e1f4","Type":"ContainerStarted","Data":"f4c013ea3662d10188cd40457237bc137a94e56b37fa124cb699a42c97ca7987"}
Feb 16 21:25:12.550068 master-0 kubenswrapper[38936]: I0216 21:25:12.549614 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24"
Feb 16 21:25:12.550159 master-0 kubenswrapper[38936]: I0216 21:25:12.550108 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24" event={"ID":"1489d1b6-d8a1-453a-bff3-8adfd4335903","Type":"ContainerDied","Data":"d3122711a170f449cbae155070984deb894c3febeb5926b33f03b31158614e34"}
Feb 16 21:25:12.550202 master-0 kubenswrapper[38936]: I0216 21:25:12.550165 38936 scope.go:117] "RemoveContainer" containerID="871f46e938656ef846c5525d2292afdd15ba15225bc063c38e05de3503244dc1"
Feb 16 21:25:12.551621 master-0 kubenswrapper[38936]: I0216 21:25:12.551605 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"de36187a-e7bd-445a-ba5e-3fcff71d0175","Type":"ContainerStarted","Data":"7b2c0f5baac8cd1e0e6b534f403f33163e799f2d9a2a71369a12e0d31ffb62f0"}
Feb 16 21:25:12.553633 master-0 kubenswrapper[38936]: I0216 21:25:12.553111 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2" event={"ID":"408a9364-3730-4017-b1e4-c85d6a504168","Type":"ContainerDied","Data":"f6ba9fbde2ec0f2099ab53176d9410c4bf53a78507ca46eeb7e91c2f36c118ed"}
Feb 16 21:25:12.553633 master-0 kubenswrapper[38936]: I0216 21:25:12.553160 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6998cd96fb-bgcb2"
Feb 16 21:25:12.566812 master-0 kubenswrapper[38936]: I0216 21:25:12.566765 38936 scope.go:117] "RemoveContainer" containerID="998c9ae589b8ae43e110fa0bf1929dd53f4179a605ee219bd9e74970ce1b2465"
Feb 16 21:25:12.571731 master-0 kubenswrapper[38936]: I0216 21:25:12.571690 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-client-ca\") pod \"408a9364-3730-4017-b1e4-c85d6a504168\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") "
Feb 16 21:25:12.571844 master-0 kubenswrapper[38936]: I0216 21:25:12.571792 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-proxy-ca-bundles\") pod \"408a9364-3730-4017-b1e4-c85d6a504168\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") "
Feb 16 21:25:12.571904 master-0 kubenswrapper[38936]: I0216 21:25:12.571882 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-config\") pod \"408a9364-3730-4017-b1e4-c85d6a504168\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") "
Feb 16 21:25:12.571982 master-0 kubenswrapper[38936]: I0216 21:25:12.571946 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvw2m\" (UniqueName: \"kubernetes.io/projected/408a9364-3730-4017-b1e4-c85d6a504168-kube-api-access-lvw2m\") pod \"408a9364-3730-4017-b1e4-c85d6a504168\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") "
Feb 16 21:25:12.572026 master-0 kubenswrapper[38936]: I0216 21:25:12.572013 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/408a9364-3730-4017-b1e4-c85d6a504168-serving-cert\") pod \"408a9364-3730-4017-b1e4-c85d6a504168\" (UID: \"408a9364-3730-4017-b1e4-c85d6a504168\") "
Feb 16 21:25:12.572159 master-0 kubenswrapper[38936]: I0216 21:25:12.572118 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-client-ca" (OuterVolumeSpecName: "client-ca") pod "408a9364-3730-4017-b1e4-c85d6a504168" (UID: "408a9364-3730-4017-b1e4-c85d6a504168"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:25:12.572461 master-0 kubenswrapper[38936]: I0216 21:25:12.572423 38936 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 16 21:25:12.572625 master-0 kubenswrapper[38936]: I0216 21:25:12.572575 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-config" (OuterVolumeSpecName: "config") pod "408a9364-3730-4017-b1e4-c85d6a504168" (UID: "408a9364-3730-4017-b1e4-c85d6a504168"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:25:12.572957 master-0 kubenswrapper[38936]: I0216 21:25:12.572889 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "408a9364-3730-4017-b1e4-c85d6a504168" (UID: "408a9364-3730-4017-b1e4-c85d6a504168"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:25:12.575143 master-0 kubenswrapper[38936]: I0216 21:25:12.575109 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/408a9364-3730-4017-b1e4-c85d6a504168-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "408a9364-3730-4017-b1e4-c85d6a504168" (UID: "408a9364-3730-4017-b1e4-c85d6a504168"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:25:12.575607 master-0 kubenswrapper[38936]: I0216 21:25:12.575578 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/408a9364-3730-4017-b1e4-c85d6a504168-kube-api-access-lvw2m" (OuterVolumeSpecName: "kube-api-access-lvw2m") pod "408a9364-3730-4017-b1e4-c85d6a504168" (UID: "408a9364-3730-4017-b1e4-c85d6a504168"). InnerVolumeSpecName "kube-api-access-lvw2m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:25:12.593561 master-0 kubenswrapper[38936]: I0216 21:25:12.592187 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-767b668bb8-vflj5"]
Feb 16 21:25:12.593561 master-0 kubenswrapper[38936]: E0216 21:25:12.592854 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="408a9364-3730-4017-b1e4-c85d6a504168" containerName="controller-manager"
Feb 16 21:25:12.593561 master-0 kubenswrapper[38936]: I0216 21:25:12.592875 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="408a9364-3730-4017-b1e4-c85d6a504168" containerName="controller-manager"
Feb 16 21:25:12.593561 master-0 kubenswrapper[38936]: E0216 21:25:12.592894 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1489d1b6-d8a1-453a-bff3-8adfd4335903" containerName="route-controller-manager"
Feb 16 21:25:12.593561 master-0 kubenswrapper[38936]: I0216 21:25:12.592936 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="1489d1b6-d8a1-453a-bff3-8adfd4335903" containerName="route-controller-manager"
Feb 16 21:25:12.593561 master-0 kubenswrapper[38936]: I0216 21:25:12.593197 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="408a9364-3730-4017-b1e4-c85d6a504168" containerName="controller-manager"
Feb 16 21:25:12.593561 master-0 kubenswrapper[38936]: I0216 21:25:12.593255 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="1489d1b6-d8a1-453a-bff3-8adfd4335903" containerName="route-controller-manager"
Feb 16 21:25:12.596021 master-0 kubenswrapper[38936]: I0216 21:25:12.595292 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-767b668bb8-vflj5"
Feb 16 21:25:12.602667 master-0 kubenswrapper[38936]: I0216 21:25:12.599804 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b4758c6d4-lhfjb"]
Feb 16 21:25:12.602667 master-0 kubenswrapper[38936]: E0216 21:25:12.600092 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1489d1b6-d8a1-453a-bff3-8adfd4335903" containerName="route-controller-manager"
Feb 16 21:25:12.602667 master-0 kubenswrapper[38936]: I0216 21:25:12.600104 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="1489d1b6-d8a1-453a-bff3-8adfd4335903" containerName="route-controller-manager"
Feb 16 21:25:12.602667 master-0 kubenswrapper[38936]: E0216 21:25:12.600151 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="408a9364-3730-4017-b1e4-c85d6a504168" containerName="controller-manager"
Feb 16 21:25:12.602667 master-0 kubenswrapper[38936]: I0216 21:25:12.600157 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="408a9364-3730-4017-b1e4-c85d6a504168" containerName="controller-manager"
Feb 16 21:25:12.602667 master-0 kubenswrapper[38936]: I0216 21:25:12.600288 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="408a9364-3730-4017-b1e4-c85d6a504168" containerName="controller-manager"
Feb 16 21:25:12.602667 master-0 kubenswrapper[38936]: I0216 21:25:12.600315 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="1489d1b6-d8a1-453a-bff3-8adfd4335903" containerName="route-controller-manager"
Feb 16 21:25:12.602667 master-0 kubenswrapper[38936]: I0216 21:25:12.600785 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b4758c6d4-lhfjb"
Feb 16 21:25:12.603453 master-0 kubenswrapper[38936]: I0216 21:25:12.603400 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-vvh6n"
Feb 16 21:25:12.603727 master-0 kubenswrapper[38936]: I0216 21:25:12.603501 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 21:25:12.611721 master-0 kubenswrapper[38936]: I0216 21:25:12.604477 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 21:25:12.611721 master-0 kubenswrapper[38936]: I0216 21:25:12.604715 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 21:25:12.611721 master-0 kubenswrapper[38936]: I0216 21:25:12.604945 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 16 21:25:12.611721 master-0 kubenswrapper[38936]: I0216 21:25:12.609013 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 21:25:12.678422 master-0 kubenswrapper[38936]: I0216 21:25:12.677198 38936 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-config\") on node \"master-0\" DevicePath \"\""
Feb 16 21:25:12.678422 master-0 kubenswrapper[38936]: I0216 21:25:12.677245 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvw2m\" (UniqueName: \"kubernetes.io/projected/408a9364-3730-4017-b1e4-c85d6a504168-kube-api-access-lvw2m\") on node \"master-0\" DevicePath \"\""
Feb 16 21:25:12.678422 master-0 kubenswrapper[38936]: I0216 21:25:12.677259 38936 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/408a9364-3730-4017-b1e4-c85d6a504168-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 16 21:25:12.678422 master-0 kubenswrapper[38936]: I0216 21:25:12.677272 38936 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/408a9364-3730-4017-b1e4-c85d6a504168-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Feb 16 21:25:12.685528 master-0 kubenswrapper[38936]: I0216 21:25:12.685059 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b4758c6d4-lhfjb"]
Feb 16 21:25:12.688439 master-0 kubenswrapper[38936]: I0216 21:25:12.688383 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-767b668bb8-vflj5"]
Feb 16 21:25:12.778212 master-0 kubenswrapper[38936]: I0216 21:25:12.778085 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21b01cc5-f6a8-4fc3-bb22-c48928697987-client-ca\") pod \"route-controller-manager-b4758c6d4-lhfjb\" (UID: \"21b01cc5-f6a8-4fc3-bb22-c48928697987\") " pod="openshift-route-controller-manager/route-controller-manager-b4758c6d4-lhfjb"
Feb 16 21:25:12.778426 master-0 kubenswrapper[38936]: I0216 21:25:12.778252 38936 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8abfc87-0fa3-4b75-8676-50d933c39580-serving-cert\") pod \"controller-manager-767b668bb8-vflj5\" (UID: \"d8abfc87-0fa3-4b75-8676-50d933c39580\") " pod="openshift-controller-manager/controller-manager-767b668bb8-vflj5" Feb 16 21:25:12.778426 master-0 kubenswrapper[38936]: I0216 21:25:12.778301 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8abfc87-0fa3-4b75-8676-50d933c39580-client-ca\") pod \"controller-manager-767b668bb8-vflj5\" (UID: \"d8abfc87-0fa3-4b75-8676-50d933c39580\") " pod="openshift-controller-manager/controller-manager-767b668bb8-vflj5" Feb 16 21:25:12.778499 master-0 kubenswrapper[38936]: I0216 21:25:12.778463 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8abfc87-0fa3-4b75-8676-50d933c39580-proxy-ca-bundles\") pod \"controller-manager-767b668bb8-vflj5\" (UID: \"d8abfc87-0fa3-4b75-8676-50d933c39580\") " pod="openshift-controller-manager/controller-manager-767b668bb8-vflj5" Feb 16 21:25:12.778678 master-0 kubenswrapper[38936]: I0216 21:25:12.778638 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21b01cc5-f6a8-4fc3-bb22-c48928697987-serving-cert\") pod \"route-controller-manager-b4758c6d4-lhfjb\" (UID: \"21b01cc5-f6a8-4fc3-bb22-c48928697987\") " pod="openshift-route-controller-manager/route-controller-manager-b4758c6d4-lhfjb" Feb 16 21:25:12.778828 master-0 kubenswrapper[38936]: I0216 21:25:12.778798 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b01cc5-f6a8-4fc3-bb22-c48928697987-config\") pod 
\"route-controller-manager-b4758c6d4-lhfjb\" (UID: \"21b01cc5-f6a8-4fc3-bb22-c48928697987\") " pod="openshift-route-controller-manager/route-controller-manager-b4758c6d4-lhfjb" Feb 16 21:25:12.778884 master-0 kubenswrapper[38936]: I0216 21:25:12.778841 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7f2x\" (UniqueName: \"kubernetes.io/projected/d8abfc87-0fa3-4b75-8676-50d933c39580-kube-api-access-q7f2x\") pod \"controller-manager-767b668bb8-vflj5\" (UID: \"d8abfc87-0fa3-4b75-8676-50d933c39580\") " pod="openshift-controller-manager/controller-manager-767b668bb8-vflj5" Feb 16 21:25:12.778884 master-0 kubenswrapper[38936]: I0216 21:25:12.778862 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txzg4\" (UniqueName: \"kubernetes.io/projected/21b01cc5-f6a8-4fc3-bb22-c48928697987-kube-api-access-txzg4\") pod \"route-controller-manager-b4758c6d4-lhfjb\" (UID: \"21b01cc5-f6a8-4fc3-bb22-c48928697987\") " pod="openshift-route-controller-manager/route-controller-manager-b4758c6d4-lhfjb" Feb 16 21:25:12.778995 master-0 kubenswrapper[38936]: I0216 21:25:12.778971 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8abfc87-0fa3-4b75-8676-50d933c39580-config\") pod \"controller-manager-767b668bb8-vflj5\" (UID: \"d8abfc87-0fa3-4b75-8676-50d933c39580\") " pod="openshift-controller-manager/controller-manager-767b668bb8-vflj5" Feb 16 21:25:12.844313 master-0 kubenswrapper[38936]: I0216 21:25:12.844239 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24"] Feb 16 21:25:12.857328 master-0 kubenswrapper[38936]: I0216 21:25:12.857256 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24"] Feb 16 21:25:12.880600 master-0 kubenswrapper[38936]: I0216 21:25:12.880519 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8abfc87-0fa3-4b75-8676-50d933c39580-serving-cert\") pod \"controller-manager-767b668bb8-vflj5\" (UID: \"d8abfc87-0fa3-4b75-8676-50d933c39580\") " pod="openshift-controller-manager/controller-manager-767b668bb8-vflj5" Feb 16 21:25:12.880600 master-0 kubenswrapper[38936]: I0216 21:25:12.880595 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8abfc87-0fa3-4b75-8676-50d933c39580-client-ca\") pod \"controller-manager-767b668bb8-vflj5\" (UID: \"d8abfc87-0fa3-4b75-8676-50d933c39580\") " pod="openshift-controller-manager/controller-manager-767b668bb8-vflj5" Feb 16 21:25:12.880910 master-0 kubenswrapper[38936]: I0216 21:25:12.880857 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8abfc87-0fa3-4b75-8676-50d933c39580-proxy-ca-bundles\") pod \"controller-manager-767b668bb8-vflj5\" (UID: \"d8abfc87-0fa3-4b75-8676-50d933c39580\") " pod="openshift-controller-manager/controller-manager-767b668bb8-vflj5" Feb 16 21:25:12.881166 master-0 kubenswrapper[38936]: I0216 21:25:12.881138 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21b01cc5-f6a8-4fc3-bb22-c48928697987-serving-cert\") pod \"route-controller-manager-b4758c6d4-lhfjb\" (UID: \"21b01cc5-f6a8-4fc3-bb22-c48928697987\") " pod="openshift-route-controller-manager/route-controller-manager-b4758c6d4-lhfjb" Feb 16 21:25:12.881316 master-0 kubenswrapper[38936]: I0216 21:25:12.881298 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/21b01cc5-f6a8-4fc3-bb22-c48928697987-config\") pod \"route-controller-manager-b4758c6d4-lhfjb\" (UID: \"21b01cc5-f6a8-4fc3-bb22-c48928697987\") " pod="openshift-route-controller-manager/route-controller-manager-b4758c6d4-lhfjb" Feb 16 21:25:12.881506 master-0 kubenswrapper[38936]: I0216 21:25:12.881338 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7f2x\" (UniqueName: \"kubernetes.io/projected/d8abfc87-0fa3-4b75-8676-50d933c39580-kube-api-access-q7f2x\") pod \"controller-manager-767b668bb8-vflj5\" (UID: \"d8abfc87-0fa3-4b75-8676-50d933c39580\") " pod="openshift-controller-manager/controller-manager-767b668bb8-vflj5" Feb 16 21:25:12.881506 master-0 kubenswrapper[38936]: I0216 21:25:12.881365 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txzg4\" (UniqueName: \"kubernetes.io/projected/21b01cc5-f6a8-4fc3-bb22-c48928697987-kube-api-access-txzg4\") pod \"route-controller-manager-b4758c6d4-lhfjb\" (UID: \"21b01cc5-f6a8-4fc3-bb22-c48928697987\") " pod="openshift-route-controller-manager/route-controller-manager-b4758c6d4-lhfjb" Feb 16 21:25:12.881506 master-0 kubenswrapper[38936]: I0216 21:25:12.881410 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8abfc87-0fa3-4b75-8676-50d933c39580-config\") pod \"controller-manager-767b668bb8-vflj5\" (UID: \"d8abfc87-0fa3-4b75-8676-50d933c39580\") " pod="openshift-controller-manager/controller-manager-767b668bb8-vflj5" Feb 16 21:25:12.881506 master-0 kubenswrapper[38936]: I0216 21:25:12.881439 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21b01cc5-f6a8-4fc3-bb22-c48928697987-client-ca\") pod \"route-controller-manager-b4758c6d4-lhfjb\" (UID: \"21b01cc5-f6a8-4fc3-bb22-c48928697987\") " 
pod="openshift-route-controller-manager/route-controller-manager-b4758c6d4-lhfjb" Feb 16 21:25:12.883256 master-0 kubenswrapper[38936]: I0216 21:25:12.883220 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/21b01cc5-f6a8-4fc3-bb22-c48928697987-client-ca\") pod \"route-controller-manager-b4758c6d4-lhfjb\" (UID: \"21b01cc5-f6a8-4fc3-bb22-c48928697987\") " pod="openshift-route-controller-manager/route-controller-manager-b4758c6d4-lhfjb" Feb 16 21:25:12.883932 master-0 kubenswrapper[38936]: I0216 21:25:12.883887 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8abfc87-0fa3-4b75-8676-50d933c39580-config\") pod \"controller-manager-767b668bb8-vflj5\" (UID: \"d8abfc87-0fa3-4b75-8676-50d933c39580\") " pod="openshift-controller-manager/controller-manager-767b668bb8-vflj5" Feb 16 21:25:12.884470 master-0 kubenswrapper[38936]: I0216 21:25:12.884430 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b01cc5-f6a8-4fc3-bb22-c48928697987-config\") pod \"route-controller-manager-b4758c6d4-lhfjb\" (UID: \"21b01cc5-f6a8-4fc3-bb22-c48928697987\") " pod="openshift-route-controller-manager/route-controller-manager-b4758c6d4-lhfjb" Feb 16 21:25:12.885142 master-0 kubenswrapper[38936]: I0216 21:25:12.885089 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8abfc87-0fa3-4b75-8676-50d933c39580-proxy-ca-bundles\") pod \"controller-manager-767b668bb8-vflj5\" (UID: \"d8abfc87-0fa3-4b75-8676-50d933c39580\") " pod="openshift-controller-manager/controller-manager-767b668bb8-vflj5" Feb 16 21:25:12.885583 master-0 kubenswrapper[38936]: I0216 21:25:12.885537 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d8abfc87-0fa3-4b75-8676-50d933c39580-client-ca\") pod \"controller-manager-767b668bb8-vflj5\" (UID: \"d8abfc87-0fa3-4b75-8676-50d933c39580\") " pod="openshift-controller-manager/controller-manager-767b668bb8-vflj5" Feb 16 21:25:12.886146 master-0 kubenswrapper[38936]: I0216 21:25:12.886109 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8abfc87-0fa3-4b75-8676-50d933c39580-serving-cert\") pod \"controller-manager-767b668bb8-vflj5\" (UID: \"d8abfc87-0fa3-4b75-8676-50d933c39580\") " pod="openshift-controller-manager/controller-manager-767b668bb8-vflj5" Feb 16 21:25:12.886307 master-0 kubenswrapper[38936]: I0216 21:25:12.886269 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21b01cc5-f6a8-4fc3-bb22-c48928697987-serving-cert\") pod \"route-controller-manager-b4758c6d4-lhfjb\" (UID: \"21b01cc5-f6a8-4fc3-bb22-c48928697987\") " pod="openshift-route-controller-manager/route-controller-manager-b4758c6d4-lhfjb" Feb 16 21:25:12.916231 master-0 kubenswrapper[38936]: I0216 21:25:12.916190 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7f2x\" (UniqueName: \"kubernetes.io/projected/d8abfc87-0fa3-4b75-8676-50d933c39580-kube-api-access-q7f2x\") pod \"controller-manager-767b668bb8-vflj5\" (UID: \"d8abfc87-0fa3-4b75-8676-50d933c39580\") " pod="openshift-controller-manager/controller-manager-767b668bb8-vflj5" Feb 16 21:25:12.916520 master-0 kubenswrapper[38936]: I0216 21:25:12.916254 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txzg4\" (UniqueName: \"kubernetes.io/projected/21b01cc5-f6a8-4fc3-bb22-c48928697987-kube-api-access-txzg4\") pod \"route-controller-manager-b4758c6d4-lhfjb\" (UID: \"21b01cc5-f6a8-4fc3-bb22-c48928697987\") " pod="openshift-route-controller-manager/route-controller-manager-b4758c6d4-lhfjb" 
Feb 16 21:25:12.924366 master-0 kubenswrapper[38936]: I0216 21:25:12.924290 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-767b668bb8-vflj5" Feb 16 21:25:12.940214 master-0 kubenswrapper[38936]: I0216 21:25:12.940132 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-b4758c6d4-lhfjb" Feb 16 21:25:12.945730 master-0 kubenswrapper[38936]: I0216 21:25:12.945690 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6998cd96fb-bgcb2"] Feb 16 21:25:13.124543 master-0 kubenswrapper[38936]: I0216 21:25:13.121993 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6998cd96fb-bgcb2"] Feb 16 21:25:13.383955 master-0 kubenswrapper[38936]: I0216 21:25:13.383810 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-767b668bb8-vflj5"] Feb 16 21:25:13.405154 master-0 kubenswrapper[38936]: W0216 21:25:13.405089 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8abfc87_0fa3_4b75_8676_50d933c39580.slice/crio-39446af47dc678a192da75f2c1b59b3f4f952005d5ed1737a7d936fe9da956dc WatchSource:0}: Error finding container 39446af47dc678a192da75f2c1b59b3f4f952005d5ed1737a7d936fe9da956dc: Status 404 returned error can't find the container with id 39446af47dc678a192da75f2c1b59b3f4f952005d5ed1737a7d936fe9da956dc Feb 16 21:25:13.449464 master-0 kubenswrapper[38936]: I0216 21:25:13.449384 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-b4758c6d4-lhfjb"] Feb 16 21:25:13.466140 master-0 kubenswrapper[38936]: W0216 21:25:13.466072 38936 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21b01cc5_f6a8_4fc3_bb22_c48928697987.slice/crio-798f9cb0973cd91bdfeba39962818d93a5dd9ac9be504c27d3946e3070cb0294 WatchSource:0}: Error finding container 798f9cb0973cd91bdfeba39962818d93a5dd9ac9be504c27d3946e3070cb0294: Status 404 returned error can't find the container with id 798f9cb0973cd91bdfeba39962818d93a5dd9ac9be504c27d3946e3070cb0294 Feb 16 21:25:13.570321 master-0 kubenswrapper[38936]: I0216 21:25:13.570267 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-767b668bb8-vflj5" event={"ID":"d8abfc87-0fa3-4b75-8676-50d933c39580","Type":"ContainerStarted","Data":"39446af47dc678a192da75f2c1b59b3f4f952005d5ed1737a7d936fe9da956dc"} Feb 16 21:25:13.572676 master-0 kubenswrapper[38936]: I0216 21:25:13.572606 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b4758c6d4-lhfjb" event={"ID":"21b01cc5-f6a8-4fc3-bb22-c48928697987","Type":"ContainerStarted","Data":"798f9cb0973cd91bdfeba39962818d93a5dd9ac9be504c27d3946e3070cb0294"} Feb 16 21:25:13.598923 master-0 kubenswrapper[38936]: I0216 21:25:13.598219 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg" podStartSLOduration=3.598193153 podStartE2EDuration="3.598193153s" podCreationTimestamp="2026-02-16 21:25:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:25:13.591872074 +0000 UTC m=+143.943875436" watchObservedRunningTime="2026-02-16 21:25:13.598193153 +0000 UTC m=+143.950196515" Feb 16 21:25:13.884696 master-0 kubenswrapper[38936]: I0216 21:25:13.884621 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1489d1b6-d8a1-453a-bff3-8adfd4335903" path="/var/lib/kubelet/pods/1489d1b6-d8a1-453a-bff3-8adfd4335903/volumes" Feb 16 
21:25:13.885721 master-0 kubenswrapper[38936]: I0216 21:25:13.885240 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="408a9364-3730-4017-b1e4-c85d6a504168" path="/var/lib/kubelet/pods/408a9364-3730-4017-b1e4-c85d6a504168/volumes" Feb 16 21:25:14.583321 master-0 kubenswrapper[38936]: I0216 21:25:14.583231 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-767b668bb8-vflj5" event={"ID":"d8abfc87-0fa3-4b75-8676-50d933c39580","Type":"ContainerStarted","Data":"b14b8b44c52b289458340c9cb89063415b564ac725fa0f7c62e92921a2f5aa1f"} Feb 16 21:25:14.583980 master-0 kubenswrapper[38936]: I0216 21:25:14.583863 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-767b668bb8-vflj5" Feb 16 21:25:14.585115 master-0 kubenswrapper[38936]: I0216 21:25:14.585022 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-b4758c6d4-lhfjb" event={"ID":"21b01cc5-f6a8-4fc3-bb22-c48928697987","Type":"ContainerStarted","Data":"90d739a285349bd9c7c8499170a2591c851fee67d37ff9bddd814d4cd281baab"} Feb 16 21:25:14.589974 master-0 kubenswrapper[38936]: I0216 21:25:14.589915 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-767b668bb8-vflj5" Feb 16 21:25:14.605162 master-0 kubenswrapper[38936]: I0216 21:25:14.604895 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-767b668bb8-vflj5" podStartSLOduration=3.604874514 podStartE2EDuration="3.604874514s" podCreationTimestamp="2026-02-16 21:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:25:14.601225907 +0000 UTC m=+144.953229279" watchObservedRunningTime="2026-02-16 21:25:14.604874514 
+0000 UTC m=+144.956877876" Feb 16 21:25:14.646143 master-0 kubenswrapper[38936]: I0216 21:25:14.646028 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-b4758c6d4-lhfjb" podStartSLOduration=3.646009154 podStartE2EDuration="3.646009154s" podCreationTimestamp="2026-02-16 21:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:25:14.642190212 +0000 UTC m=+144.994193584" watchObservedRunningTime="2026-02-16 21:25:14.646009154 +0000 UTC m=+144.998012516" Feb 16 21:25:15.594522 master-0 kubenswrapper[38936]: I0216 21:25:15.594445 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" event={"ID":"329b8c10-cfb4-49bc-ac25-7b4c724afa31","Type":"ContainerStarted","Data":"86f3fb7ccb9d841aa81425e060fcfd11ec5bf4dbad8f0bca4ab425f996755207"} Feb 16 21:25:15.594522 master-0 kubenswrapper[38936]: I0216 21:25:15.594510 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" event={"ID":"329b8c10-cfb4-49bc-ac25-7b4c724afa31","Type":"ContainerStarted","Data":"557b84500a03cf6079cd49f038a48baa02f478a40354fa8ad0c9990dfda6301c"} Feb 16 21:25:15.599931 master-0 kubenswrapper[38936]: I0216 21:25:15.599836 38936 generic.go:334] "Generic (PLEG): container finished" podID="de36187a-e7bd-445a-ba5e-3fcff71d0175" containerID="1f4153fb09b67b0a6012de54d34e740a453e9387d7233f77fe1b930d258bb8fb" exitCode=0 Feb 16 21:25:15.599931 master-0 kubenswrapper[38936]: I0216 21:25:15.599897 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"de36187a-e7bd-445a-ba5e-3fcff71d0175","Type":"ContainerDied","Data":"1f4153fb09b67b0a6012de54d34e740a453e9387d7233f77fe1b930d258bb8fb"} Feb 16 21:25:15.603852 master-0 kubenswrapper[38936]: I0216 
21:25:15.603584 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_fc19ea17c4f595b135412c661d90b9a7/kube-controller-manager/0.log" Feb 16 21:25:15.603852 master-0 kubenswrapper[38936]: I0216 21:25:15.603645 38936 generic.go:334] "Generic (PLEG): container finished" podID="fc19ea17c4f595b135412c661d90b9a7" containerID="6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a" exitCode=137 Feb 16 21:25:15.603852 master-0 kubenswrapper[38936]: I0216 21:25:15.603729 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"fc19ea17c4f595b135412c661d90b9a7","Type":"ContainerDied","Data":"6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a"} Feb 16 21:25:15.603852 master-0 kubenswrapper[38936]: I0216 21:25:15.603758 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"fc19ea17c4f595b135412c661d90b9a7","Type":"ContainerStarted","Data":"9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460"} Feb 16 21:25:15.606317 master-0 kubenswrapper[38936]: I0216 21:25:15.606266 38936 generic.go:334] "Generic (PLEG): container finished" podID="788d5882-22df-4b55-ae2f-4a92fba7e889" containerID="2a87390e48fa999e5092695e815928e15035072750f48392fe6b423d264cb340" exitCode=0 Feb 16 21:25:15.606474 master-0 kubenswrapper[38936]: I0216 21:25:15.606311 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"788d5882-22df-4b55-ae2f-4a92fba7e889","Type":"ContainerDied","Data":"2a87390e48fa999e5092695e815928e15035072750f48392fe6b423d264cb340"} Feb 16 21:25:15.606697 master-0 kubenswrapper[38936]: I0216 21:25:15.606600 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-b4758c6d4-lhfjb" Feb 16 21:25:15.613268 master-0 kubenswrapper[38936]: I0216 21:25:15.612822 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-b4758c6d4-lhfjb" Feb 16 21:25:16.617794 master-0 kubenswrapper[38936]: I0216 21:25:16.617692 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" event={"ID":"329b8c10-cfb4-49bc-ac25-7b4c724afa31","Type":"ContainerStarted","Data":"9b0d1ea55eb36bad42ca4d6bfa18ee0886c38d95e8b71fb6b55a7d7fe6a29e07"} Feb 16 21:25:17.493284 master-0 kubenswrapper[38936]: I0216 21:25:17.493226 38936 patch_prober.go:28] interesting pod/console-5dbf689d64-pgglg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Feb 16 21:25:17.493556 master-0 kubenswrapper[38936]: I0216 21:25:17.493290 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5dbf689d64-pgglg" podUID="55ec365e-5ef8-4291-9c01-7713bdd6f294" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Feb 16 21:25:17.629930 master-0 kubenswrapper[38936]: I0216 21:25:17.629754 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7" event={"ID":"9b54b7c9-1b32-458b-b231-7e64b91a1a93","Type":"ContainerStarted","Data":"3ed3f4fe8587da858e21528390f6d23d76eae8b36171d690e8efdcc747c0194c"} Feb 16 21:25:17.629930 master-0 kubenswrapper[38936]: I0216 21:25:17.629853 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7" 
event={"ID":"9b54b7c9-1b32-458b-b231-7e64b91a1a93","Type":"ContainerStarted","Data":"48322c6a4ec7ae737e88c33bcc3784dafd96ade5eba572bfc8126eb657804934"} Feb 16 21:25:17.629930 master-0 kubenswrapper[38936]: I0216 21:25:17.629872 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7" event={"ID":"9b54b7c9-1b32-458b-b231-7e64b91a1a93","Type":"ContainerStarted","Data":"83cd2f658c997325f8df7803eed8439fc4800a587ec3626670e62e56855651e8"} Feb 16 21:25:17.800256 master-0 kubenswrapper[38936]: I0216 21:25:17.800180 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-77f5595c8c-8jsq7" podStartSLOduration=3.338283625 podStartE2EDuration="7.800161354s" podCreationTimestamp="2026-02-16 21:25:10 +0000 UTC" firstStartedPulling="2026-02-16 21:25:12.278082662 +0000 UTC m=+142.630086024" lastFinishedPulling="2026-02-16 21:25:16.739960391 +0000 UTC m=+147.091963753" observedRunningTime="2026-02-16 21:25:17.795111619 +0000 UTC m=+148.147115001" watchObservedRunningTime="2026-02-16 21:25:17.800161354 +0000 UTC m=+148.152164716" Feb 16 21:25:19.184129 master-0 kubenswrapper[38936]: I0216 21:25:19.184049 38936 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 16 21:25:19.185179 master-0 kubenswrapper[38936]: I0216 21:25:19.185150 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 21:25:19.185881 master-0 kubenswrapper[38936]: I0216 21:25:19.185823 38936 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 16 21:25:19.187232 master-0 kubenswrapper[38936]: I0216 21:25:19.187194 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-check-endpoints" containerID="cri-o://cec55103f622a77ab12fa57f750df0c27ed12429c768750f0232ad3fcd0b846d" gracePeriod=15
Feb 16 21:25:19.187305 master-0 kubenswrapper[38936]: I0216 21:25:19.187290 38936 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 16 21:25:19.187373 master-0 kubenswrapper[38936]: I0216 21:25:19.187251 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://bfe9ba5fbd345f504666307fee0f4efea9887cea358915d2cd30f77f36401ef0" gracePeriod=15
Feb 16 21:25:19.188026 master-0 kubenswrapper[38936]: I0216 21:25:19.187386 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://fa4ce6271b82f17286a47605f4c5e94255ab02a39e6bf3a19833f194eb3c8cf9" gracePeriod=15
Feb 16 21:25:19.188026 master-0 kubenswrapper[38936]: I0216 21:25:19.187465 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-cert-syncer" containerID="cri-o://8b155d07f9276ca9dee1a2c069bd169ef79dcdd4f2443697c8d7415636c8e58c" gracePeriod=15
Feb 16 21:25:19.188026 master-0 kubenswrapper[38936]: E0216 21:25:19.187502 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-check-endpoints"
Feb 16 21:25:19.188026 master-0 kubenswrapper[38936]: I0216 21:25:19.187519 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-check-endpoints"
Feb 16 21:25:19.188026 master-0 kubenswrapper[38936]: E0216 21:25:19.187540 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-cert-regeneration-controller"
Feb 16 21:25:19.188026 master-0 kubenswrapper[38936]: I0216 21:25:19.187548 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-cert-regeneration-controller"
Feb 16 21:25:19.188026 master-0 kubenswrapper[38936]: E0216 21:25:19.187560 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver"
Feb 16 21:25:19.188026 master-0 kubenswrapper[38936]: I0216 21:25:19.187568 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver"
Feb 16 21:25:19.188026 master-0 kubenswrapper[38936]: E0216 21:25:19.187579 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-insecure-readyz"
Feb 16 21:25:19.188026 master-0 kubenswrapper[38936]: I0216 21:25:19.187586 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-insecure-readyz"
Feb 16 21:25:19.188026 master-0 kubenswrapper[38936]: E0216 21:25:19.187606 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-cert-syncer"
Feb 16 21:25:19.188026 master-0 kubenswrapper[38936]: I0216 21:25:19.187614 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-cert-syncer"
Feb 16 21:25:19.188026 master-0 kubenswrapper[38936]: E0216 21:25:19.187641 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-check-endpoints"
Feb 16 21:25:19.188026 master-0 kubenswrapper[38936]: I0216 21:25:19.187607 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver" containerID="cri-o://e606b2dabd52c10f2beae5590e83886f4cb1a2570803dbd7c5fe0c5d33fc926e" gracePeriod=15
Feb 16 21:25:19.188026 master-0 kubenswrapper[38936]: I0216 21:25:19.187668 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-check-endpoints"
Feb 16 21:25:19.188026 master-0 kubenswrapper[38936]: E0216 21:25:19.187795 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="setup"
Feb 16 21:25:19.188026 master-0 kubenswrapper[38936]: I0216 21:25:19.187829 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="setup"
Feb 16 21:25:19.190458 master-0 kubenswrapper[38936]: I0216 21:25:19.188315 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-insecure-readyz"
Feb 16 21:25:19.190458 master-0 kubenswrapper[38936]: I0216 21:25:19.188330 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver"
Feb 16 21:25:19.190458 master-0 kubenswrapper[38936]: I0216 21:25:19.188357 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-cert-regeneration-controller"
Feb 16 21:25:19.190458 master-0 kubenswrapper[38936]: I0216 21:25:19.188367 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-check-endpoints"
Feb 16 21:25:19.190458 master-0 kubenswrapper[38936]: I0216 21:25:19.188422 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-check-endpoints"
Feb 16 21:25:19.190458 master-0 kubenswrapper[38936]: I0216 21:25:19.188433 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-cert-syncer"
Feb 16 21:25:19.293231 master-0 kubenswrapper[38936]: I0216 21:25:19.293114 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 21:25:19.293489 master-0 kubenswrapper[38936]: I0216 21:25:19.293253 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 21:25:19.293489 master-0 kubenswrapper[38936]: I0216 21:25:19.293408 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 21:25:19.293489 master-0 kubenswrapper[38936]: I0216 21:25:19.293428 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 21:25:19.293616 master-0 kubenswrapper[38936]: I0216 21:25:19.293526 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 21:25:19.293616 master-0 kubenswrapper[38936]: I0216 21:25:19.293548 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 21:25:19.293616 master-0 kubenswrapper[38936]: I0216 21:25:19.293596 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 21:25:19.294006 master-0 kubenswrapper[38936]: I0216 21:25:19.293928 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 21:25:19.395391 master-0 kubenswrapper[38936]: I0216 21:25:19.395247 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 21:25:19.395391 master-0 kubenswrapper[38936]: I0216 21:25:19.395334 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 21:25:19.395391 master-0 kubenswrapper[38936]: I0216 21:25:19.395370 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 21:25:19.395391 master-0 kubenswrapper[38936]: I0216 21:25:19.395337 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 21:25:19.395863 master-0 kubenswrapper[38936]: I0216 21:25:19.395425 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 21:25:19.395863 master-0 kubenswrapper[38936]: I0216 21:25:19.395471 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 21:25:19.395863 master-0 kubenswrapper[38936]: I0216 21:25:19.395493 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 21:25:19.395863 master-0 kubenswrapper[38936]: I0216 21:25:19.395528 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 21:25:19.395863 master-0 kubenswrapper[38936]: I0216 21:25:19.395531 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 21:25:19.395863 master-0 kubenswrapper[38936]: I0216 21:25:19.395572 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 21:25:19.395863 master-0 kubenswrapper[38936]: I0216 21:25:19.395597 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 21:25:19.395863 master-0 kubenswrapper[38936]: I0216 21:25:19.395627 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 21:25:19.395863 master-0 kubenswrapper[38936]: I0216 21:25:19.395672 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 21:25:19.395863 master-0 kubenswrapper[38936]: I0216 21:25:19.395688 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 21:25:19.395863 master-0 kubenswrapper[38936]: I0216 21:25:19.395699 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 21:25:19.395863 master-0 kubenswrapper[38936]: I0216 21:25:19.395827 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 21:25:19.554782 master-0 kubenswrapper[38936]: I0216 21:25:19.551058 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 21:25:19.554782 master-0 kubenswrapper[38936]: I0216 21:25:19.554697 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 16 21:25:19.662835 master-0 kubenswrapper[38936]: I0216 21:25:19.662769 38936 generic.go:334] "Generic (PLEG): container finished" podID="6862f5f5-da61-4347-9a9e-cb47b7e1261f" containerID="effb44c0b670182b0b03c2bef5b66ad309f4287e406f57386e4e7b0fc68ea709" exitCode=0
Feb 16 21:25:19.663039 master-0 kubenswrapper[38936]: I0216 21:25:19.662937 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"6862f5f5-da61-4347-9a9e-cb47b7e1261f","Type":"ContainerDied","Data":"effb44c0b670182b0b03c2bef5b66ad309f4287e406f57386e4e7b0fc68ea709"}
Feb 16 21:25:19.665572 master-0 kubenswrapper[38936]: I0216 21:25:19.665532 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_e300ec3a145c1339a627607b3c84b99d/kube-apiserver-check-endpoints/0.log"
Feb 16 21:25:19.667159 master-0 kubenswrapper[38936]: I0216 21:25:19.667105 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_e300ec3a145c1339a627607b3c84b99d/kube-apiserver-cert-syncer/0.log"
Feb 16 21:25:19.667881 master-0 kubenswrapper[38936]: I0216 21:25:19.667829 38936 generic.go:334] "Generic (PLEG): container finished" podID="e300ec3a145c1339a627607b3c84b99d" containerID="cec55103f622a77ab12fa57f750df0c27ed12429c768750f0232ad3fcd0b846d" exitCode=0
Feb 16 21:25:19.667943 master-0 kubenswrapper[38936]: I0216 21:25:19.667880 38936 generic.go:334] "Generic (PLEG): container finished" podID="e300ec3a145c1339a627607b3c84b99d" containerID="bfe9ba5fbd345f504666307fee0f4efea9887cea358915d2cd30f77f36401ef0" exitCode=0
Feb 16 21:25:19.667943 master-0 kubenswrapper[38936]: I0216 21:25:19.667896 38936 generic.go:334] "Generic (PLEG): container finished" podID="e300ec3a145c1339a627607b3c84b99d" containerID="fa4ce6271b82f17286a47605f4c5e94255ab02a39e6bf3a19833f194eb3c8cf9" exitCode=0
Feb 16 21:25:19.667943 master-0 kubenswrapper[38936]: I0216 21:25:19.667911 38936 generic.go:334] "Generic (PLEG): container finished" podID="e300ec3a145c1339a627607b3c84b99d" containerID="8b155d07f9276ca9dee1a2c069bd169ef79dcdd4f2443697c8d7415636c8e58c" exitCode=2
Feb 16 21:25:19.667943 master-0 kubenswrapper[38936]: I0216 21:25:19.667926 38936 scope.go:117] "RemoveContainer" containerID="43047bae0f2dd351891e082f8932168325d435e7cb25fa3bae528c469bde358f"
Feb 16 21:25:19.671005 master-0 kubenswrapper[38936]: I0216 21:25:19.670945 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" event={"ID":"329b8c10-cfb4-49bc-ac25-7b4c724afa31","Type":"ContainerStarted","Data":"5e2127224c5f59ce9acb2b62bbd437a05f877c4d5b3ed5866eafb8dd300af453"}
Feb 16 21:25:19.803365 master-0 kubenswrapper[38936]: W0216 21:25:19.803300 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32286c81635de6de1cf7f328273c1a49.slice/crio-2214a9b066f1b1272301c68355d89039e47f057da4b6af3f8f42723377f03023 WatchSource:0}: Error finding container 2214a9b066f1b1272301c68355d89039e47f057da4b6af3f8f42723377f03023: Status 404 returned error can't find the container with id 2214a9b066f1b1272301c68355d89039e47f057da4b6af3f8f42723377f03023
Feb 16 21:25:20.680137 master-0 kubenswrapper[38936]: I0216 21:25:20.680072 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"788d5882-22df-4b55-ae2f-4a92fba7e889","Type":"ContainerStarted","Data":"bf502a811dfd122b7283c8ee26189e6a0a1346c52cb7ecc735ba7e91046196c1"}
Feb 16 21:25:20.680137 master-0 kubenswrapper[38936]: I0216 21:25:20.680130 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"788d5882-22df-4b55-ae2f-4a92fba7e889","Type":"ContainerStarted","Data":"89999d4e17bc48eb9001f365e3ad71b91267d5ef7fa4cc847b930c77c0cbb9fa"}
Feb 16 21:25:20.681285 master-0 kubenswrapper[38936]: I0216 21:25:20.681214 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"32286c81635de6de1cf7f328273c1a49","Type":"ContainerStarted","Data":"a57496ea837967c5d008c03839f8820699ee50556c7191b90bd527ade4ba19ad"}
Feb 16 21:25:20.681285 master-0 kubenswrapper[38936]: I0216 21:25:20.681245 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"32286c81635de6de1cf7f328273c1a49","Type":"ContainerStarted","Data":"2214a9b066f1b1272301c68355d89039e47f057da4b6af3f8f42723377f03023"}
Feb 16 21:25:20.684213 master-0 kubenswrapper[38936]: I0216 21:25:20.684177 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_e300ec3a145c1339a627607b3c84b99d/kube-apiserver-cert-syncer/0.log"
Feb 16 21:25:20.688939 master-0 kubenswrapper[38936]: I0216 21:25:20.688744 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" event={"ID":"329b8c10-cfb4-49bc-ac25-7b4c724afa31","Type":"ContainerStarted","Data":"5c78e22638d965de6c611decd9ec4d2009946fd6a5bdc350c127829efb918edd"}
Feb 16 21:25:20.688939 master-0 kubenswrapper[38936]: I0216 21:25:20.688786 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" event={"ID":"329b8c10-cfb4-49bc-ac25-7b4c724afa31","Type":"ContainerStarted","Data":"d63a9965320775b5982919562014e3c599692fc32bdc64ee84cfc47e68424f55"}
Feb 16 21:25:20.689073 master-0 kubenswrapper[38936]: I0216 21:25:20.689010 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q"
Feb 16 21:25:20.697257 master-0 kubenswrapper[38936]: I0216 21:25:20.697195 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q"
Feb 16 21:25:21.869709 master-0 kubenswrapper[38936]: I0216 21:25:21.869085 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Feb 16 21:25:21.969375 master-0 kubenswrapper[38936]: I0216 21:25:21.969338 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6862f5f5-da61-4347-9a9e-cb47b7e1261f-kube-api-access\") pod \"6862f5f5-da61-4347-9a9e-cb47b7e1261f\" (UID: \"6862f5f5-da61-4347-9a9e-cb47b7e1261f\") "
Feb 16 21:25:21.970996 master-0 kubenswrapper[38936]: I0216 21:25:21.970944 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6862f5f5-da61-4347-9a9e-cb47b7e1261f-kubelet-dir\") pod \"6862f5f5-da61-4347-9a9e-cb47b7e1261f\" (UID: \"6862f5f5-da61-4347-9a9e-cb47b7e1261f\") "
Feb 16 21:25:21.971083 master-0 kubenswrapper[38936]: I0216 21:25:21.971058 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6862f5f5-da61-4347-9a9e-cb47b7e1261f-var-lock\") pod \"6862f5f5-da61-4347-9a9e-cb47b7e1261f\" (UID: \"6862f5f5-da61-4347-9a9e-cb47b7e1261f\") "
Feb 16 21:25:21.971083 master-0 kubenswrapper[38936]: I0216 21:25:21.971044 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6862f5f5-da61-4347-9a9e-cb47b7e1261f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6862f5f5-da61-4347-9a9e-cb47b7e1261f" (UID: "6862f5f5-da61-4347-9a9e-cb47b7e1261f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:25:21.971174 master-0 kubenswrapper[38936]: I0216 21:25:21.971165 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6862f5f5-da61-4347-9a9e-cb47b7e1261f-var-lock" (OuterVolumeSpecName: "var-lock") pod "6862f5f5-da61-4347-9a9e-cb47b7e1261f" (UID: "6862f5f5-da61-4347-9a9e-cb47b7e1261f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:25:21.973083 master-0 kubenswrapper[38936]: I0216 21:25:21.973010 38936 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6862f5f5-da61-4347-9a9e-cb47b7e1261f-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 21:25:21.973083 master-0 kubenswrapper[38936]: I0216 21:25:21.973080 38936 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6862f5f5-da61-4347-9a9e-cb47b7e1261f-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 16 21:25:21.973796 master-0 kubenswrapper[38936]: I0216 21:25:21.973774 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6862f5f5-da61-4347-9a9e-cb47b7e1261f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6862f5f5-da61-4347-9a9e-cb47b7e1261f" (UID: "6862f5f5-da61-4347-9a9e-cb47b7e1261f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:25:22.075659 master-0 kubenswrapper[38936]: I0216 21:25:22.075586 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6862f5f5-da61-4347-9a9e-cb47b7e1261f-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 16 21:25:22.320691 master-0 kubenswrapper[38936]: I0216 21:25:22.320626 38936 patch_prober.go:28] interesting pod/console-7dcddfd95-nldpw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body=
Feb 16 21:25:22.320844 master-0 kubenswrapper[38936]: I0216 21:25:22.320705 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7dcddfd95-nldpw" podUID="503aa866-c355-434a-a39c-fa6072733ea8" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused"
Feb 16 21:25:22.533219 master-0 kubenswrapper[38936]: I0216 21:25:22.533099 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_e300ec3a145c1339a627607b3c84b99d/kube-apiserver-cert-syncer/0.log"
Feb 16 21:25:22.534318 master-0 kubenswrapper[38936]: I0216 21:25:22.534247 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 21:25:22.686709 master-0 kubenswrapper[38936]: I0216 21:25:22.684916 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-audit-dir\") pod \"e300ec3a145c1339a627607b3c84b99d\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") "
Feb 16 21:25:22.686709 master-0 kubenswrapper[38936]: I0216 21:25:22.684979 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-cert-dir\") pod \"e300ec3a145c1339a627607b3c84b99d\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") "
Feb 16 21:25:22.686709 master-0 kubenswrapper[38936]: I0216 21:25:22.685080 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-resource-dir\") pod \"e300ec3a145c1339a627607b3c84b99d\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") "
Feb 16 21:25:22.686709 master-0 kubenswrapper[38936]: I0216 21:25:22.685333 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "e300ec3a145c1339a627607b3c84b99d" (UID: "e300ec3a145c1339a627607b3c84b99d"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:25:22.686709 master-0 kubenswrapper[38936]: I0216 21:25:22.685364 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e300ec3a145c1339a627607b3c84b99d" (UID: "e300ec3a145c1339a627607b3c84b99d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:25:22.686709 master-0 kubenswrapper[38936]: I0216 21:25:22.685383 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "e300ec3a145c1339a627607b3c84b99d" (UID: "e300ec3a145c1339a627607b3c84b99d"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:25:22.707692 master-0 kubenswrapper[38936]: I0216 21:25:22.707601 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"de36187a-e7bd-445a-ba5e-3fcff71d0175","Type":"ContainerStarted","Data":"5614b65bac79413a97e788d4fe58752e231b73b15b8a2bdb7221f95f1db95980"}
Feb 16 21:25:22.708077 master-0 kubenswrapper[38936]: I0216 21:25:22.707734 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"de36187a-e7bd-445a-ba5e-3fcff71d0175","Type":"ContainerStarted","Data":"fa57aed89278fd743a16a608136983e1deed22fd2b88420b87ee949171f615f0"}
Feb 16 21:25:22.709942 master-0 kubenswrapper[38936]: I0216 21:25:22.709888 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"6862f5f5-da61-4347-9a9e-cb47b7e1261f","Type":"ContainerDied","Data":"3ebe760bafe9315ab2ea1b58f42b1697bbdfd54e84a1ad8ed4872146b45fb3fa"}
Feb 16 21:25:22.709942 master-0 kubenswrapper[38936]: I0216 21:25:22.709934 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ebe760bafe9315ab2ea1b58f42b1697bbdfd54e84a1ad8ed4872146b45fb3fa"
Feb 16 21:25:22.709942 master-0 kubenswrapper[38936]: I0216 21:25:22.709904 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Feb 16 21:25:22.715333 master-0 kubenswrapper[38936]: I0216 21:25:22.715282 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"788d5882-22df-4b55-ae2f-4a92fba7e889","Type":"ContainerStarted","Data":"9bfa7f8d29d41679001e32822a3389b7b2c4a19e9c42562d7d89a3fe028366f6"}
Feb 16 21:25:22.715333 master-0 kubenswrapper[38936]: I0216 21:25:22.715334 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"788d5882-22df-4b55-ae2f-4a92fba7e889","Type":"ContainerStarted","Data":"087817340b7a85ece9149d559092313e78c48f49c3ddf8f648c5c0c7e50c1c7d"}
Feb 16 21:25:22.715487 master-0 kubenswrapper[38936]: I0216 21:25:22.715345 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"788d5882-22df-4b55-ae2f-4a92fba7e889","Type":"ContainerStarted","Data":"be5c0ed5ff3e46476af5de88f06002fb5881d2668d8d210306d055a391b3d054"}
Feb 16 21:25:22.715487 master-0 kubenswrapper[38936]: I0216 21:25:22.715354 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"788d5882-22df-4b55-ae2f-4a92fba7e889","Type":"ContainerStarted","Data":"a48d6cc8fb31b22b146f0b4a89873c8328eb0c9b5246f547afbd41b46a6f9b80"}
Feb 16 21:25:22.719296 master-0 kubenswrapper[38936]: I0216 21:25:22.719121 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_e300ec3a145c1339a627607b3c84b99d/kube-apiserver-cert-syncer/0.log"
Feb 16 21:25:22.719963 master-0 kubenswrapper[38936]: I0216 21:25:22.719922 38936 generic.go:334] "Generic (PLEG): container finished" podID="e300ec3a145c1339a627607b3c84b99d" containerID="e606b2dabd52c10f2beae5590e83886f4cb1a2570803dbd7c5fe0c5d33fc926e" exitCode=0
Feb 16 21:25:22.720024 master-0 kubenswrapper[38936]: I0216 21:25:22.719994 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 21:25:22.720080 master-0 kubenswrapper[38936]: I0216 21:25:22.720004 38936 scope.go:117] "RemoveContainer" containerID="cec55103f622a77ab12fa57f750df0c27ed12429c768750f0232ad3fcd0b846d"
Feb 16 21:25:22.744361 master-0 kubenswrapper[38936]: I0216 21:25:22.744307 38936 scope.go:117] "RemoveContainer" containerID="bfe9ba5fbd345f504666307fee0f4efea9887cea358915d2cd30f77f36401ef0"
Feb 16 21:25:22.765982 master-0 kubenswrapper[38936]: I0216 21:25:22.765881 38936 scope.go:117] "RemoveContainer" containerID="fa4ce6271b82f17286a47605f4c5e94255ab02a39e6bf3a19833f194eb3c8cf9"
Feb 16 21:25:22.787441 master-0 kubenswrapper[38936]: I0216 21:25:22.787245 38936 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 21:25:22.788270 master-0 kubenswrapper[38936]: I0216 21:25:22.788243 38936 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-audit-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 21:25:22.788488 master-0 kubenswrapper[38936]: I0216 21:25:22.788380 38936 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-cert-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 21:25:22.817916 master-0 kubenswrapper[38936]: I0216 21:25:22.815574 38936 scope.go:117] "RemoveContainer" containerID="8b155d07f9276ca9dee1a2c069bd169ef79dcdd4f2443697c8d7415636c8e58c"
Feb 16 21:25:22.851993 master-0 kubenswrapper[38936]: I0216 21:25:22.851942 38936 scope.go:117] "RemoveContainer" containerID="e606b2dabd52c10f2beae5590e83886f4cb1a2570803dbd7c5fe0c5d33fc926e"
Feb 16 21:25:22.886053 master-0 kubenswrapper[38936]: I0216 21:25:22.886013 38936 scope.go:117] "RemoveContainer" containerID="8a83fac7d6d5ae1a1f48df3b9f649957515ab488499c5a4e72d3372e82e2e891"
Feb 16 21:25:22.928794 master-0 kubenswrapper[38936]: I0216 21:25:22.928733 38936 scope.go:117] "RemoveContainer" containerID="cec55103f622a77ab12fa57f750df0c27ed12429c768750f0232ad3fcd0b846d"
Feb 16 21:25:22.929719 master-0 kubenswrapper[38936]: E0216 21:25:22.929621 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cec55103f622a77ab12fa57f750df0c27ed12429c768750f0232ad3fcd0b846d\": container with ID starting with cec55103f622a77ab12fa57f750df0c27ed12429c768750f0232ad3fcd0b846d not found: ID does not exist" containerID="cec55103f622a77ab12fa57f750df0c27ed12429c768750f0232ad3fcd0b846d"
Feb 16 21:25:22.929719 master-0 kubenswrapper[38936]: I0216 21:25:22.929677 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cec55103f622a77ab12fa57f750df0c27ed12429c768750f0232ad3fcd0b846d"} err="failed to get container status \"cec55103f622a77ab12fa57f750df0c27ed12429c768750f0232ad3fcd0b846d\": rpc error: code = NotFound desc = could not find container \"cec55103f622a77ab12fa57f750df0c27ed12429c768750f0232ad3fcd0b846d\": container with ID starting with cec55103f622a77ab12fa57f750df0c27ed12429c768750f0232ad3fcd0b846d not found: ID does not exist"
Feb 16 21:25:22.929719 master-0 kubenswrapper[38936]: I0216 21:25:22.929709 38936 scope.go:117] "RemoveContainer" containerID="bfe9ba5fbd345f504666307fee0f4efea9887cea358915d2cd30f77f36401ef0"
Feb 16 21:25:22.930037 master-0 kubenswrapper[38936]: E0216 21:25:22.930007 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfe9ba5fbd345f504666307fee0f4efea9887cea358915d2cd30f77f36401ef0\": container with ID starting with bfe9ba5fbd345f504666307fee0f4efea9887cea358915d2cd30f77f36401ef0 not found: ID does not exist" containerID="bfe9ba5fbd345f504666307fee0f4efea9887cea358915d2cd30f77f36401ef0"
Feb 16 21:25:22.930105 master-0 kubenswrapper[38936]: I0216 21:25:22.930029 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfe9ba5fbd345f504666307fee0f4efea9887cea358915d2cd30f77f36401ef0"} err="failed to get container status \"bfe9ba5fbd345f504666307fee0f4efea9887cea358915d2cd30f77f36401ef0\": rpc error: code = NotFound desc = could not find container \"bfe9ba5fbd345f504666307fee0f4efea9887cea358915d2cd30f77f36401ef0\": container with ID starting with bfe9ba5fbd345f504666307fee0f4efea9887cea358915d2cd30f77f36401ef0 not found: ID does not exist"
Feb 16 21:25:22.930105 master-0 kubenswrapper[38936]: I0216 21:25:22.930070 38936 scope.go:117] "RemoveContainer" containerID="fa4ce6271b82f17286a47605f4c5e94255ab02a39e6bf3a19833f194eb3c8cf9"
Feb 16 21:25:22.930366 master-0 kubenswrapper[38936]: E0216 21:25:22.930332 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa4ce6271b82f17286a47605f4c5e94255ab02a39e6bf3a19833f194eb3c8cf9\": container with ID starting with fa4ce6271b82f17286a47605f4c5e94255ab02a39e6bf3a19833f194eb3c8cf9 not found: ID does not exist" containerID="fa4ce6271b82f17286a47605f4c5e94255ab02a39e6bf3a19833f194eb3c8cf9"
Feb 16 21:25:22.930366 master-0 kubenswrapper[38936]: I0216 21:25:22.930357 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa4ce6271b82f17286a47605f4c5e94255ab02a39e6bf3a19833f194eb3c8cf9"} err="failed to get container status \"fa4ce6271b82f17286a47605f4c5e94255ab02a39e6bf3a19833f194eb3c8cf9\": rpc error: code = NotFound desc = could not find container \"fa4ce6271b82f17286a47605f4c5e94255ab02a39e6bf3a19833f194eb3c8cf9\": container with ID starting with fa4ce6271b82f17286a47605f4c5e94255ab02a39e6bf3a19833f194eb3c8cf9 not found: ID does not exist"
Feb 16 21:25:22.930486 master-0 kubenswrapper[38936]: I0216 21:25:22.930405 38936 scope.go:117] "RemoveContainer" containerID="8b155d07f9276ca9dee1a2c069bd169ef79dcdd4f2443697c8d7415636c8e58c"
Feb 16 21:25:22.930740 master-0 kubenswrapper[38936]: E0216 21:25:22.930715 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b155d07f9276ca9dee1a2c069bd169ef79dcdd4f2443697c8d7415636c8e58c\": container with ID starting with 8b155d07f9276ca9dee1a2c069bd169ef79dcdd4f2443697c8d7415636c8e58c not found: ID does not exist" containerID="8b155d07f9276ca9dee1a2c069bd169ef79dcdd4f2443697c8d7415636c8e58c"
Feb 16 21:25:22.930812 master-0 kubenswrapper[38936]: I0216 21:25:22.930757 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b155d07f9276ca9dee1a2c069bd169ef79dcdd4f2443697c8d7415636c8e58c"} err="failed to get container status \"8b155d07f9276ca9dee1a2c069bd169ef79dcdd4f2443697c8d7415636c8e58c\": rpc error: code = NotFound desc = could not find container \"8b155d07f9276ca9dee1a2c069bd169ef79dcdd4f2443697c8d7415636c8e58c\": container with ID starting with 8b155d07f9276ca9dee1a2c069bd169ef79dcdd4f2443697c8d7415636c8e58c not found: ID does not exist"
Feb 16 21:25:22.930812 master-0 kubenswrapper[38936]: I0216 21:25:22.930774 38936 scope.go:117] "RemoveContainer" containerID="e606b2dabd52c10f2beae5590e83886f4cb1a2570803dbd7c5fe0c5d33fc926e"
Feb 16 21:25:22.931167 master-0 kubenswrapper[38936]: E0216 21:25:22.931139 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e606b2dabd52c10f2beae5590e83886f4cb1a2570803dbd7c5fe0c5d33fc926e\": container with ID starting with e606b2dabd52c10f2beae5590e83886f4cb1a2570803dbd7c5fe0c5d33fc926e not found: ID does not exist" containerID="e606b2dabd52c10f2beae5590e83886f4cb1a2570803dbd7c5fe0c5d33fc926e"
Feb 16 21:25:22.931288 master-0 kubenswrapper[38936]: I0216
21:25:22.931264 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e606b2dabd52c10f2beae5590e83886f4cb1a2570803dbd7c5fe0c5d33fc926e"} err="failed to get container status \"e606b2dabd52c10f2beae5590e83886f4cb1a2570803dbd7c5fe0c5d33fc926e\": rpc error: code = NotFound desc = could not find container \"e606b2dabd52c10f2beae5590e83886f4cb1a2570803dbd7c5fe0c5d33fc926e\": container with ID starting with e606b2dabd52c10f2beae5590e83886f4cb1a2570803dbd7c5fe0c5d33fc926e not found: ID does not exist" Feb 16 21:25:22.931369 master-0 kubenswrapper[38936]: I0216 21:25:22.931355 38936 scope.go:117] "RemoveContainer" containerID="8a83fac7d6d5ae1a1f48df3b9f649957515ab488499c5a4e72d3372e82e2e891" Feb 16 21:25:22.931965 master-0 kubenswrapper[38936]: E0216 21:25:22.931941 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a83fac7d6d5ae1a1f48df3b9f649957515ab488499c5a4e72d3372e82e2e891\": container with ID starting with 8a83fac7d6d5ae1a1f48df3b9f649957515ab488499c5a4e72d3372e82e2e891 not found: ID does not exist" containerID="8a83fac7d6d5ae1a1f48df3b9f649957515ab488499c5a4e72d3372e82e2e891" Feb 16 21:25:22.932033 master-0 kubenswrapper[38936]: I0216 21:25:22.931962 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a83fac7d6d5ae1a1f48df3b9f649957515ab488499c5a4e72d3372e82e2e891"} err="failed to get container status \"8a83fac7d6d5ae1a1f48df3b9f649957515ab488499c5a4e72d3372e82e2e891\": rpc error: code = NotFound desc = could not find container \"8a83fac7d6d5ae1a1f48df3b9f649957515ab488499c5a4e72d3372e82e2e891\": container with ID starting with 8a83fac7d6d5ae1a1f48df3b9f649957515ab488499c5a4e72d3372e82e2e891 not found: ID does not exist" Feb 16 21:25:23.018425 master-0 kubenswrapper[38936]: E0216 21:25:23.018363 38936 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:23.019378 master-0 kubenswrapper[38936]: E0216 21:25:23.019252 38936 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:23.020490 master-0 kubenswrapper[38936]: E0216 21:25:23.020443 38936 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:23.021041 master-0 kubenswrapper[38936]: E0216 21:25:23.020965 38936 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:23.022711 master-0 kubenswrapper[38936]: E0216 21:25:23.022570 38936 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:23.023078 master-0 kubenswrapper[38936]: I0216 21:25:23.022608 38936 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 16 21:25:23.023757 master-0 kubenswrapper[38936]: E0216 21:25:23.023718 38936 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: 
connection refused" interval="200ms" Feb 16 21:25:23.225194 master-0 kubenswrapper[38936]: E0216 21:25:23.225150 38936 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Feb 16 21:25:23.626760 master-0 kubenswrapper[38936]: E0216 21:25:23.626687 38936 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Feb 16 21:25:23.742886 master-0 kubenswrapper[38936]: I0216 21:25:23.742816 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"de36187a-e7bd-445a-ba5e-3fcff71d0175","Type":"ContainerStarted","Data":"ea5c120d02bc210692c134a444fc147c78171521c91ed58bc78a3409ec6b5fb5"} Feb 16 21:25:23.743199 master-0 kubenswrapper[38936]: I0216 21:25:23.742887 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"de36187a-e7bd-445a-ba5e-3fcff71d0175","Type":"ContainerStarted","Data":"034760b4993cc337193aefc43470ff58a17b0639062dda6c752dd27aed53e80f"} Feb 16 21:25:23.743199 master-0 kubenswrapper[38936]: I0216 21:25:23.742917 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"de36187a-e7bd-445a-ba5e-3fcff71d0175","Type":"ContainerStarted","Data":"2b40dc745f1d795a3809b9622ffb95bbdcee7c58cf8ef0e2fad17babeaa3ff9f"} Feb 16 21:25:23.743199 master-0 kubenswrapper[38936]: I0216 21:25:23.742934 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"de36187a-e7bd-445a-ba5e-3fcff71d0175","Type":"ContainerStarted","Data":"75703d725d7b385bd68c1e700e2f434e5e29281608078b7ef85d24eda28a7dc2"} Feb 16 21:25:23.889317 master-0 kubenswrapper[38936]: I0216 21:25:23.889140 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e300ec3a145c1339a627607b3c84b99d" path="/var/lib/kubelet/pods/e300ec3a145c1339a627607b3c84b99d/volumes" Feb 16 21:25:24.428973 master-0 kubenswrapper[38936]: E0216 21:25:24.428885 38936 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Feb 16 21:25:24.499207 master-0 kubenswrapper[38936]: I0216 21:25:24.498974 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:25:24.499207 master-0 kubenswrapper[38936]: I0216 21:25:24.499047 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:25:24.506069 master-0 kubenswrapper[38936]: I0216 21:25:24.506009 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:25:24.685645 master-0 kubenswrapper[38936]: E0216 21:25:24.685365 38936 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-master-0.1894d727bff3cbf5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:e300ec3a145c1339a627607b3c84b99d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Killing,Message:Stopping container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 21:25:19.187569653 +0000 UTC m=+149.539573025,LastTimestamp:2026-02-16 21:25:19.187569653 +0000 UTC m=+149.539573025,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 21:25:24.698376 master-0 kubenswrapper[38936]: I0216 21:25:24.698187 38936 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a42f90b6-8fc7-43bf-ae7b-a8eea1c68c0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:25:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:25:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:25:19Z\\\",\\\"message\\\":\\\"containers with unready status: [startup-monitor]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:25:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[startup-monitor]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T21:25:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"startup-monitor\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"manifests\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources/secrets\\\",\\\"name\\\":\\\"pod-resource-dir\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources/configmaps\\\",\\\"name\\\":\\\"pod-resource-dir\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lock\\\",\\\"name\\\":\\\"var-lock\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"var-log\\\"}]}],\\\"hostIP\\\":\\\"192.168.32.10\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.32.10\\\"}],\\\"podIP\\\":\\\"192.168.32.10\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.32.10\\\"}],\\\"startTime\\\":\\\"2026-02-16T21:25:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-startup-monitor-master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0/status\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:24.698940 master-0 kubenswrapper[38936]: I0216 21:25:24.698881 38936 
status_manager.go:851] "Failed to get status for pod" podUID="6862f5f5-da61-4347-9a9e-cb47b7e1261f" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:24.699429 master-0 kubenswrapper[38936]: I0216 21:25:24.699372 38936 status_manager.go:851] "Failed to get status for pod" podUID="e300ec3a145c1339a627607b3c84b99d" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:24.700151 master-0 kubenswrapper[38936]: I0216 21:25:24.700112 38936 status_manager.go:851] "Failed to get status for pod" podUID="329b8c10-cfb4-49bc-ac25-7b4c724afa31" pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/thanos-querier-f886f46f4-gz92q\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:24.700883 master-0 kubenswrapper[38936]: I0216 21:25:24.700833 38936 status_manager.go:851] "Failed to get status for pod" podUID="de36187a-e7bd-445a-ba5e-3fcff71d0175" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:24.701686 master-0 kubenswrapper[38936]: I0216 21:25:24.701579 38936 status_manager.go:851] "Failed to get status for pod" podUID="fc19ea17c4f595b135412c661d90b9a7" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: 
connect: connection refused" Feb 16 21:25:24.702233 master-0 kubenswrapper[38936]: I0216 21:25:24.702188 38936 status_manager.go:851] "Failed to get status for pod" podUID="788d5882-22df-4b55-ae2f-4a92fba7e889" pod="openshift-monitoring/alertmanager-main-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:24.702825 master-0 kubenswrapper[38936]: I0216 21:25:24.702776 38936 status_manager.go:851] "Failed to get status for pod" podUID="32286c81635de6de1cf7f328273c1a49" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:24.703367 master-0 kubenswrapper[38936]: I0216 21:25:24.703320 38936 status_manager.go:851] "Failed to get status for pod" podUID="6862f5f5-da61-4347-9a9e-cb47b7e1261f" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:24.710457 master-0 kubenswrapper[38936]: I0216 21:25:24.710388 38936 status_manager.go:851] "Failed to get status for pod" podUID="de36187a-e7bd-445a-ba5e-3fcff71d0175" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:24.711324 master-0 kubenswrapper[38936]: I0216 21:25:24.711268 38936 status_manager.go:851] "Failed to get status for pod" podUID="fc19ea17c4f595b135412c661d90b9a7" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:24.712099 master-0 kubenswrapper[38936]: I0216 21:25:24.712039 38936 status_manager.go:851] "Failed to get status for pod" podUID="788d5882-22df-4b55-ae2f-4a92fba7e889" pod="openshift-monitoring/alertmanager-main-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:24.712670 master-0 kubenswrapper[38936]: I0216 21:25:24.712600 38936 status_manager.go:851] "Failed to get status for pod" podUID="32286c81635de6de1cf7f328273c1a49" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:24.713112 master-0 kubenswrapper[38936]: I0216 21:25:24.713067 38936 status_manager.go:851] "Failed to get status for pod" podUID="6862f5f5-da61-4347-9a9e-cb47b7e1261f" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:24.713967 master-0 kubenswrapper[38936]: I0216 21:25:24.713899 38936 status_manager.go:851] "Failed to get status for pod" podUID="329b8c10-cfb4-49bc-ac25-7b4c724afa31" pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/thanos-querier-f886f46f4-gz92q\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:25.602342 master-0 kubenswrapper[38936]: E0216 21:25:25.602172 38936 event.go:368] "Unable to write 
event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-master-0.1894d727bff3cbf5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:e300ec3a145c1339a627607b3c84b99d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Killing,Message:Stopping container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 21:25:19.187569653 +0000 UTC m=+149.539573025,LastTimestamp:2026-02-16 21:25:19.187569653 +0000 UTC m=+149.539573025,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 21:25:25.946923 master-0 kubenswrapper[38936]: I0216 21:25:25.946774 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 21:25:25.948440 master-0 kubenswrapper[38936]: I0216 21:25:25.948355 38936 status_manager.go:851] "Failed to get status for pod" podUID="952766c3a88fd12345a552f1277199f9" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:25.949267 master-0 kubenswrapper[38936]: I0216 21:25:25.949186 38936 status_manager.go:851] "Failed to get status for pod" podUID="32286c81635de6de1cf7f328273c1a49" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:25.949838 master-0 kubenswrapper[38936]: I0216 21:25:25.949778 38936 status_manager.go:851] "Failed to get status for pod" podUID="6862f5f5-da61-4347-9a9e-cb47b7e1261f" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:25.950458 master-0 kubenswrapper[38936]: I0216 21:25:25.950402 38936 status_manager.go:851] "Failed to get status for pod" podUID="329b8c10-cfb4-49bc-ac25-7b4c724afa31" pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/thanos-querier-f886f46f4-gz92q\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:25.951096 master-0 kubenswrapper[38936]: I0216 21:25:25.951039 38936 status_manager.go:851] "Failed to get status for pod" podUID="de36187a-e7bd-445a-ba5e-3fcff71d0175" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:25.951837 master-0 kubenswrapper[38936]: I0216 21:25:25.951747 38936 status_manager.go:851] "Failed to get status for pod" podUID="fc19ea17c4f595b135412c661d90b9a7" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:25.952577 master-0 kubenswrapper[38936]: I0216 21:25:25.952521 38936 status_manager.go:851] "Failed to get 
status for pod" podUID="788d5882-22df-4b55-ae2f-4a92fba7e889" pod="openshift-monitoring/alertmanager-main-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:26.030050 master-0 kubenswrapper[38936]: E0216 21:25:26.029987 38936 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Feb 16 21:25:26.398416 master-0 kubenswrapper[38936]: I0216 21:25:26.398335 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:25:27.493908 master-0 kubenswrapper[38936]: I0216 21:25:27.493834 38936 patch_prober.go:28] interesting pod/console-5dbf689d64-pgglg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Feb 16 21:25:27.494895 master-0 kubenswrapper[38936]: I0216 21:25:27.494573 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5dbf689d64-pgglg" podUID="55ec365e-5ef8-4291-9c01-7713bdd6f294" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Feb 16 21:25:29.231963 master-0 kubenswrapper[38936]: E0216 21:25:29.231877 38936 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Feb 16 21:25:29.878020 master-0 kubenswrapper[38936]: I0216 21:25:29.877906 38936 
status_manager.go:851] "Failed to get status for pod" podUID="952766c3a88fd12345a552f1277199f9" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:29.878627 master-0 kubenswrapper[38936]: I0216 21:25:29.878553 38936 status_manager.go:851] "Failed to get status for pod" podUID="32286c81635de6de1cf7f328273c1a49" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:29.879244 master-0 kubenswrapper[38936]: I0216 21:25:29.879192 38936 status_manager.go:851] "Failed to get status for pod" podUID="6862f5f5-da61-4347-9a9e-cb47b7e1261f" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:29.879727 master-0 kubenswrapper[38936]: I0216 21:25:29.879689 38936 status_manager.go:851] "Failed to get status for pod" podUID="329b8c10-cfb4-49bc-ac25-7b4c724afa31" pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/thanos-querier-f886f46f4-gz92q\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:29.880195 master-0 kubenswrapper[38936]: I0216 21:25:29.880120 38936 status_manager.go:851] "Failed to get status for pod" podUID="de36187a-e7bd-445a-ba5e-3fcff71d0175" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 
192.168.32.10:6443: connect: connection refused" Feb 16 21:25:29.880615 master-0 kubenswrapper[38936]: I0216 21:25:29.880570 38936 status_manager.go:851] "Failed to get status for pod" podUID="fc19ea17c4f595b135412c661d90b9a7" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:29.881011 master-0 kubenswrapper[38936]: I0216 21:25:29.880971 38936 status_manager.go:851] "Failed to get status for pod" podUID="788d5882-22df-4b55-ae2f-4a92fba7e889" pod="openshift-monitoring/alertmanager-main-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:31.319122 master-0 kubenswrapper[38936]: I0216 21:25:31.319034 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg" Feb 16 21:25:31.319122 master-0 kubenswrapper[38936]: I0216 21:25:31.319084 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg" Feb 16 21:25:32.321394 master-0 kubenswrapper[38936]: I0216 21:25:32.321323 38936 patch_prober.go:28] interesting pod/console-7dcddfd95-nldpw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Feb 16 21:25:32.321920 master-0 kubenswrapper[38936]: I0216 21:25:32.321394 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7dcddfd95-nldpw" podUID="503aa866-c355-434a-a39c-fa6072733ea8" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 
10.128.0.93:8443: connect: connection refused" Feb 16 21:25:32.874435 master-0 kubenswrapper[38936]: I0216 21:25:32.874231 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:25:32.876043 master-0 kubenswrapper[38936]: I0216 21:25:32.875939 38936 status_manager.go:851] "Failed to get status for pod" podUID="32286c81635de6de1cf7f328273c1a49" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:32.877094 master-0 kubenswrapper[38936]: I0216 21:25:32.876998 38936 status_manager.go:851] "Failed to get status for pod" podUID="6862f5f5-da61-4347-9a9e-cb47b7e1261f" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:32.878211 master-0 kubenswrapper[38936]: I0216 21:25:32.878118 38936 status_manager.go:851] "Failed to get status for pod" podUID="329b8c10-cfb4-49bc-ac25-7b4c724afa31" pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/thanos-querier-f886f46f4-gz92q\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:32.879225 master-0 kubenswrapper[38936]: I0216 21:25:32.879142 38936 status_manager.go:851] "Failed to get status for pod" podUID="de36187a-e7bd-445a-ba5e-3fcff71d0175" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:32.880199 master-0 kubenswrapper[38936]: 
I0216 21:25:32.880119 38936 status_manager.go:851] "Failed to get status for pod" podUID="fc19ea17c4f595b135412c661d90b9a7" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:32.881083 master-0 kubenswrapper[38936]: I0216 21:25:32.881029 38936 status_manager.go:851] "Failed to get status for pod" podUID="788d5882-22df-4b55-ae2f-4a92fba7e889" pod="openshift-monitoring/alertmanager-main-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:32.882048 master-0 kubenswrapper[38936]: I0216 21:25:32.881960 38936 status_manager.go:851] "Failed to get status for pod" podUID="952766c3a88fd12345a552f1277199f9" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:32.900318 master-0 kubenswrapper[38936]: I0216 21:25:32.900223 38936 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="9ea6ac45-4501-4996-a4c8-5d666dfbe587" Feb 16 21:25:32.900318 master-0 kubenswrapper[38936]: I0216 21:25:32.900290 38936 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="9ea6ac45-4501-4996-a4c8-5d666dfbe587" Feb 16 21:25:32.901429 master-0 kubenswrapper[38936]: E0216 21:25:32.901362 38936 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: 
connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:25:32.902404 master-0 kubenswrapper[38936]: I0216 21:25:32.902349 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:25:33.829258 master-0 kubenswrapper[38936]: I0216 21:25:33.829078 38936 generic.go:334] "Generic (PLEG): container finished" podID="10e298020284b0e8ffa6a0bc184059d9" containerID="156de39e5ba24b07b817f20277d86ee3530f7a70260dab1c5fec85f3c7d1bed4" exitCode=0 Feb 16 21:25:33.829258 master-0 kubenswrapper[38936]: I0216 21:25:33.829131 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerDied","Data":"156de39e5ba24b07b817f20277d86ee3530f7a70260dab1c5fec85f3c7d1bed4"} Feb 16 21:25:33.829258 master-0 kubenswrapper[38936]: I0216 21:25:33.829160 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"6faf417149a82504ba4e0f2d61e564efaa072409bc46d69a6a9a706ca1fc3955"} Feb 16 21:25:33.830263 master-0 kubenswrapper[38936]: I0216 21:25:33.829410 38936 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="9ea6ac45-4501-4996-a4c8-5d666dfbe587" Feb 16 21:25:33.830263 master-0 kubenswrapper[38936]: I0216 21:25:33.829428 38936 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="9ea6ac45-4501-4996-a4c8-5d666dfbe587" Feb 16 21:25:33.830263 master-0 kubenswrapper[38936]: I0216 21:25:33.830248 38936 status_manager.go:851] "Failed to get status for pod" podUID="952766c3a88fd12345a552f1277199f9" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:33.830478 master-0 kubenswrapper[38936]: E0216 21:25:33.830334 38936 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:25:33.831073 master-0 kubenswrapper[38936]: I0216 21:25:33.831024 38936 status_manager.go:851] "Failed to get status for pod" podUID="32286c81635de6de1cf7f328273c1a49" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:33.831819 master-0 kubenswrapper[38936]: I0216 21:25:33.831632 38936 status_manager.go:851] "Failed to get status for pod" podUID="6862f5f5-da61-4347-9a9e-cb47b7e1261f" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:33.832560 master-0 kubenswrapper[38936]: I0216 21:25:33.832473 38936 status_manager.go:851] "Failed to get status for pod" podUID="329b8c10-cfb4-49bc-ac25-7b4c724afa31" pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/thanos-querier-f886f46f4-gz92q\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:33.833403 master-0 kubenswrapper[38936]: I0216 21:25:33.833315 38936 status_manager.go:851] "Failed to get status for pod" 
podUID="de36187a-e7bd-445a-ba5e-3fcff71d0175" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:33.834397 master-0 kubenswrapper[38936]: I0216 21:25:33.834135 38936 status_manager.go:851] "Failed to get status for pod" podUID="fc19ea17c4f595b135412c661d90b9a7" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:33.834912 master-0 kubenswrapper[38936]: I0216 21:25:33.834834 38936 status_manager.go:851] "Failed to get status for pod" podUID="788d5882-22df-4b55-ae2f-4a92fba7e889" pod="openshift-monitoring/alertmanager-main-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 21:25:34.504107 master-0 kubenswrapper[38936]: I0216 21:25:34.504049 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:25:34.840079 master-0 kubenswrapper[38936]: I0216 21:25:34.840033 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"cda351814f52243a832331f401b0f1aeecde08a953e2bd349da6c32db0b7306a"} Feb 16 21:25:34.840079 master-0 kubenswrapper[38936]: I0216 21:25:34.840080 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"842ddcbbe13b838e58efd56154c7fef51db9a69a49a52f767b1f4f3989d2d8bf"} Feb 16 21:25:34.841052 master-0 kubenswrapper[38936]: I0216 21:25:34.840090 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"bfe4d5615ed8a0f4a1feb9997411083a59b4ec13086988dd35f8986047be78cd"} Feb 16 21:25:35.848634 master-0 kubenswrapper[38936]: I0216 21:25:35.848586 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"659890abdd78396f1f488efc0d7b0f7f1b90f79a34922fd733f03fbfdc00eb15"} Feb 16 21:25:35.849238 master-0 kubenswrapper[38936]: I0216 21:25:35.849216 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"3182e11ae122df5bcd6aa51a7416736c32626313a86083526f5dbd9e3d798417"} Feb 16 21:25:35.849340 master-0 kubenswrapper[38936]: I0216 21:25:35.849325 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:25:35.849433 master-0 kubenswrapper[38936]: I0216 21:25:35.848821 38936 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="9ea6ac45-4501-4996-a4c8-5d666dfbe587" Feb 16 21:25:35.849495 master-0 kubenswrapper[38936]: I0216 21:25:35.849447 38936 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="9ea6ac45-4501-4996-a4c8-5d666dfbe587" Feb 16 21:25:36.148793 master-0 kubenswrapper[38936]: I0216 21:25:36.148640 38936 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp" podUID="46f9f317-a78e-4d18-b1c1-882631cfc6eb" containerName="oauth-openshift" containerID="cri-o://e18816755558e6495af87791dac2fcd00a9c915b58f12fb7787b0658f8e2f642" gracePeriod=15 Feb 16 21:25:36.798884 master-0 kubenswrapper[38936]: I0216 21:25:36.798836 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp" Feb 16 21:25:36.862157 master-0 kubenswrapper[38936]: I0216 21:25:36.862055 38936 generic.go:334] "Generic (PLEG): container finished" podID="46f9f317-a78e-4d18-b1c1-882631cfc6eb" containerID="e18816755558e6495af87791dac2fcd00a9c915b58f12fb7787b0658f8e2f642" exitCode=0 Feb 16 21:25:36.862991 master-0 kubenswrapper[38936]: I0216 21:25:36.862146 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp" event={"ID":"46f9f317-a78e-4d18-b1c1-882631cfc6eb","Type":"ContainerDied","Data":"e18816755558e6495af87791dac2fcd00a9c915b58f12fb7787b0658f8e2f642"} Feb 16 21:25:36.862991 master-0 kubenswrapper[38936]: I0216 21:25:36.862210 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp" event={"ID":"46f9f317-a78e-4d18-b1c1-882631cfc6eb","Type":"ContainerDied","Data":"45b3be3fe305d9669dd5870c24af6ff6fd509c22355554d98f17b1456597bad2"} Feb 16 21:25:36.862991 master-0 kubenswrapper[38936]: I0216 21:25:36.862230 38936 scope.go:117] "RemoveContainer" containerID="e18816755558e6495af87791dac2fcd00a9c915b58f12fb7787b0658f8e2f642" Feb 16 21:25:36.862991 master-0 kubenswrapper[38936]: I0216 21:25:36.862364 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5c88849d7d-xfnmp" Feb 16 21:25:36.892481 master-0 kubenswrapper[38936]: I0216 21:25:36.892425 38936 scope.go:117] "RemoveContainer" containerID="e18816755558e6495af87791dac2fcd00a9c915b58f12fb7787b0658f8e2f642" Feb 16 21:25:36.892816 master-0 kubenswrapper[38936]: E0216 21:25:36.892784 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e18816755558e6495af87791dac2fcd00a9c915b58f12fb7787b0658f8e2f642\": container with ID starting with e18816755558e6495af87791dac2fcd00a9c915b58f12fb7787b0658f8e2f642 not found: ID does not exist" containerID="e18816755558e6495af87791dac2fcd00a9c915b58f12fb7787b0658f8e2f642" Feb 16 21:25:36.892878 master-0 kubenswrapper[38936]: I0216 21:25:36.892822 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e18816755558e6495af87791dac2fcd00a9c915b58f12fb7787b0658f8e2f642"} err="failed to get container status \"e18816755558e6495af87791dac2fcd00a9c915b58f12fb7787b0658f8e2f642\": rpc error: code = NotFound desc = could not find container \"e18816755558e6495af87791dac2fcd00a9c915b58f12fb7787b0658f8e2f642\": container with ID starting with e18816755558e6495af87791dac2fcd00a9c915b58f12fb7787b0658f8e2f642 not found: ID does not exist" Feb 16 21:25:36.913315 master-0 kubenswrapper[38936]: I0216 21:25:36.913285 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-router-certs\") pod \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " Feb 16 21:25:36.913446 master-0 kubenswrapper[38936]: I0216 21:25:36.913432 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtkxj\" (UniqueName: 
\"kubernetes.io/projected/46f9f317-a78e-4d18-b1c1-882631cfc6eb-kube-api-access-jtkxj\") pod \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " Feb 16 21:25:36.913537 master-0 kubenswrapper[38936]: I0216 21:25:36.913525 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-audit-policies\") pod \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " Feb 16 21:25:36.913629 master-0 kubenswrapper[38936]: I0216 21:25:36.913614 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-trusted-ca-bundle\") pod \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " Feb 16 21:25:36.913792 master-0 kubenswrapper[38936]: I0216 21:25:36.913777 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-user-template-login\") pod \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " Feb 16 21:25:36.913910 master-0 kubenswrapper[38936]: I0216 21:25:36.913896 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-session\") pod \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " Feb 16 21:25:36.914022 master-0 kubenswrapper[38936]: I0216 21:25:36.914008 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-user-template-provider-selection\") pod \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " Feb 16 21:25:36.914137 master-0 kubenswrapper[38936]: I0216 21:25:36.914121 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-cliconfig\") pod \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " Feb 16 21:25:36.914267 master-0 kubenswrapper[38936]: I0216 21:25:36.914249 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-user-template-error\") pod \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " Feb 16 21:25:36.914412 master-0 kubenswrapper[38936]: I0216 21:25:36.914393 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-ocp-branding-template\") pod \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " Feb 16 21:25:36.914529 master-0 kubenswrapper[38936]: I0216 21:25:36.914511 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-service-ca\") pod \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " Feb 16 21:25:36.914637 master-0 kubenswrapper[38936]: I0216 21:25:36.914620 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-serving-cert\") pod \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " Feb 16 21:25:36.914784 master-0 kubenswrapper[38936]: I0216 21:25:36.914170 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "46f9f317-a78e-4d18-b1c1-882631cfc6eb" (UID: "46f9f317-a78e-4d18-b1c1-882631cfc6eb"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:25:36.914885 master-0 kubenswrapper[38936]: I0216 21:25:36.914201 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "46f9f317-a78e-4d18-b1c1-882631cfc6eb" (UID: "46f9f317-a78e-4d18-b1c1-882631cfc6eb"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:25:36.914966 master-0 kubenswrapper[38936]: I0216 21:25:36.914574 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "46f9f317-a78e-4d18-b1c1-882631cfc6eb" (UID: "46f9f317-a78e-4d18-b1c1-882631cfc6eb"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:25:36.915049 master-0 kubenswrapper[38936]: I0216 21:25:36.915008 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/46f9f317-a78e-4d18-b1c1-882631cfc6eb-audit-dir\") pod \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\" (UID: \"46f9f317-a78e-4d18-b1c1-882631cfc6eb\") " Feb 16 21:25:36.915234 master-0 kubenswrapper[38936]: I0216 21:25:36.915043 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f9f317-a78e-4d18-b1c1-882631cfc6eb-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "46f9f317-a78e-4d18-b1c1-882631cfc6eb" (UID: "46f9f317-a78e-4d18-b1c1-882631cfc6eb"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:25:36.915319 master-0 kubenswrapper[38936]: I0216 21:25:36.915272 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "46f9f317-a78e-4d18-b1c1-882631cfc6eb" (UID: "46f9f317-a78e-4d18-b1c1-882631cfc6eb"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:25:36.915993 master-0 kubenswrapper[38936]: I0216 21:25:36.915975 38936 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/46f9f317-a78e-4d18-b1c1-882631cfc6eb-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:25:36.916129 master-0 kubenswrapper[38936]: I0216 21:25:36.916114 38936 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-audit-policies\") on node \"master-0\" DevicePath \"\"" Feb 16 21:25:36.916359 master-0 kubenswrapper[38936]: I0216 21:25:36.916323 38936 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:25:36.916473 master-0 kubenswrapper[38936]: I0216 21:25:36.916447 38936 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Feb 16 21:25:36.916560 master-0 kubenswrapper[38936]: I0216 21:25:36.916546 38936 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 21:25:36.916896 master-0 kubenswrapper[38936]: I0216 21:25:36.916852 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "46f9f317-a78e-4d18-b1c1-882631cfc6eb" (UID: "46f9f317-a78e-4d18-b1c1-882631cfc6eb"). 
InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:25:36.917106 master-0 kubenswrapper[38936]: I0216 21:25:36.917075 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "46f9f317-a78e-4d18-b1c1-882631cfc6eb" (UID: "46f9f317-a78e-4d18-b1c1-882631cfc6eb"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:25:36.917360 master-0 kubenswrapper[38936]: I0216 21:25:36.917320 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "46f9f317-a78e-4d18-b1c1-882631cfc6eb" (UID: "46f9f317-a78e-4d18-b1c1-882631cfc6eb"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:25:36.917577 master-0 kubenswrapper[38936]: I0216 21:25:36.917545 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "46f9f317-a78e-4d18-b1c1-882631cfc6eb" (UID: "46f9f317-a78e-4d18-b1c1-882631cfc6eb"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:25:36.917687 master-0 kubenswrapper[38936]: I0216 21:25:36.917604 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "46f9f317-a78e-4d18-b1c1-882631cfc6eb" (UID: "46f9f317-a78e-4d18-b1c1-882631cfc6eb"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:25:36.918069 master-0 kubenswrapper[38936]: I0216 21:25:36.917994 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "46f9f317-a78e-4d18-b1c1-882631cfc6eb" (UID: "46f9f317-a78e-4d18-b1c1-882631cfc6eb"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:25:36.918574 master-0 kubenswrapper[38936]: I0216 21:25:36.918532 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "46f9f317-a78e-4d18-b1c1-882631cfc6eb" (UID: "46f9f317-a78e-4d18-b1c1-882631cfc6eb"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:25:36.924293 master-0 kubenswrapper[38936]: I0216 21:25:36.924255 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46f9f317-a78e-4d18-b1c1-882631cfc6eb-kube-api-access-jtkxj" (OuterVolumeSpecName: "kube-api-access-jtkxj") pod "46f9f317-a78e-4d18-b1c1-882631cfc6eb" (UID: "46f9f317-a78e-4d18-b1c1-882631cfc6eb"). 
InnerVolumeSpecName "kube-api-access-jtkxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:25:37.018359 master-0 kubenswrapper[38936]: I0216 21:25:37.018210 38936 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Feb 16 21:25:37.018359 master-0 kubenswrapper[38936]: I0216 21:25:37.018250 38936 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Feb 16 21:25:37.018359 master-0 kubenswrapper[38936]: I0216 21:25:37.018264 38936 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 21:25:37.018359 master-0 kubenswrapper[38936]: I0216 21:25:37.018275 38936 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:25:37.018359 master-0 kubenswrapper[38936]: I0216 21:25:37.018287 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtkxj\" (UniqueName: \"kubernetes.io/projected/46f9f317-a78e-4d18-b1c1-882631cfc6eb-kube-api-access-jtkxj\") on node \"master-0\" DevicePath \"\"" Feb 16 21:25:37.018359 master-0 kubenswrapper[38936]: I0216 21:25:37.018297 38936 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-user-template-login\") on node \"master-0\" 
DevicePath \"\"" Feb 16 21:25:37.018359 master-0 kubenswrapper[38936]: I0216 21:25:37.018325 38936 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Feb 16 21:25:37.018359 master-0 kubenswrapper[38936]: I0216 21:25:37.018335 38936 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/46f9f317-a78e-4d18-b1c1-882631cfc6eb-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Feb 16 21:25:37.493636 master-0 kubenswrapper[38936]: I0216 21:25:37.493503 38936 patch_prober.go:28] interesting pod/console-5dbf689d64-pgglg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Feb 16 21:25:37.493636 master-0 kubenswrapper[38936]: I0216 21:25:37.493572 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5dbf689d64-pgglg" podUID="55ec365e-5ef8-4291-9c01-7713bdd6f294" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Feb 16 21:25:37.903306 master-0 kubenswrapper[38936]: I0216 21:25:37.903229 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:25:37.903306 master-0 kubenswrapper[38936]: I0216 21:25:37.903296 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:25:37.909506 master-0 kubenswrapper[38936]: I0216 21:25:37.909427 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:25:40.866298 master-0 kubenswrapper[38936]: I0216 21:25:40.866243 38936 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:25:40.898430 master-0 kubenswrapper[38936]: I0216 21:25:40.898129 38936 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="9ea6ac45-4501-4996-a4c8-5d666dfbe587" Feb 16 21:25:40.898430 master-0 kubenswrapper[38936]: I0216 21:25:40.898175 38936 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="9ea6ac45-4501-4996-a4c8-5d666dfbe587" Feb 16 21:25:40.901966 master-0 kubenswrapper[38936]: I0216 21:25:40.901927 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 21:25:40.905487 master-0 kubenswrapper[38936]: I0216 21:25:40.905432 38936 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="10e298020284b0e8ffa6a0bc184059d9" podUID="02d4450b-4c97-4375-a853-ceb34700199d" Feb 16 21:25:41.067479 master-0 kubenswrapper[38936]: E0216 21:25:41.065237 38936 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Feb 16 21:25:41.905224 master-0 kubenswrapper[38936]: I0216 21:25:41.905169 38936 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="9ea6ac45-4501-4996-a4c8-5d666dfbe587" Feb 16 21:25:41.905224 master-0 kubenswrapper[38936]: I0216 21:25:41.905213 38936 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="9ea6ac45-4501-4996-a4c8-5d666dfbe587" Feb 16 21:25:42.320835 master-0 kubenswrapper[38936]: 
I0216 21:25:42.320743 38936 patch_prober.go:28] interesting pod/console-7dcddfd95-nldpw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body=
Feb 16 21:25:42.320835 master-0 kubenswrapper[38936]: I0216 21:25:42.320820 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7dcddfd95-nldpw" podUID="503aa866-c355-434a-a39c-fa6072733ea8" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused"
Feb 16 21:25:47.493332 master-0 kubenswrapper[38936]: I0216 21:25:47.493246 38936 patch_prober.go:28] interesting pod/console-5dbf689d64-pgglg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Feb 16 21:25:47.494036 master-0 kubenswrapper[38936]: I0216 21:25:47.493338 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5dbf689d64-pgglg" podUID="55ec365e-5ef8-4291-9c01-7713bdd6f294" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Feb 16 21:25:49.908504 master-0 kubenswrapper[38936]: I0216 21:25:49.908386 38936 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="10e298020284b0e8ffa6a0bc184059d9" podUID="02d4450b-4c97-4375-a853-ceb34700199d"
Feb 16 21:25:50.001772 master-0 kubenswrapper[38936]: I0216 21:25:50.001689 38936 scope.go:117] "RemoveContainer" containerID="6ae1597534c852a1aae5585dadba4c16b6d817d6984c35ca98940b0dfe1fcd77"
Feb 16 21:25:50.156670 master-0 kubenswrapper[38936]: I0216 21:25:50.156584 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 16 21:25:50.221045 master-0 kubenswrapper[38936]: I0216 21:25:50.220930 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Feb 16 21:25:50.539081 master-0 kubenswrapper[38936]: I0216 21:25:50.539018 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Feb 16 21:25:51.024011 master-0 kubenswrapper[38936]: I0216 21:25:51.023879 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-rtvdz"
Feb 16 21:25:51.164899 master-0 kubenswrapper[38936]: I0216 21:25:51.164832 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 16 21:25:51.328863 master-0 kubenswrapper[38936]: I0216 21:25:51.328682 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg"
Feb 16 21:25:51.334614 master-0 kubenswrapper[38936]: I0216 21:25:51.334548 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-57ddf7d868-wm6cg"
Feb 16 21:25:51.417828 master-0 kubenswrapper[38936]: I0216 21:25:51.417772 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 16 21:25:51.488293 master-0 kubenswrapper[38936]: I0216 21:25:51.488228 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Feb 16 21:25:51.609197 master-0 kubenswrapper[38936]: I0216 21:25:51.609074 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Feb 16 21:25:51.629074 master-0 kubenswrapper[38936]: I0216 21:25:51.629011 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 16 21:25:51.713783 master-0 kubenswrapper[38936]: I0216 21:25:51.713665 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-f8prb"
Feb 16 21:25:51.832268 master-0 kubenswrapper[38936]: I0216 21:25:51.832211 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Feb 16 21:25:51.944155 master-0 kubenswrapper[38936]: I0216 21:25:51.944028 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 16 21:25:52.049853 master-0 kubenswrapper[38936]: I0216 21:25:52.049785 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 16 21:25:52.183457 master-0 kubenswrapper[38936]: I0216 21:25:52.183404 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 16 21:25:52.203321 master-0 kubenswrapper[38936]: I0216 21:25:52.203191 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-457l2"
Feb 16 21:25:52.221604 master-0 kubenswrapper[38936]: I0216 21:25:52.221520 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 21:25:52.244316 master-0 kubenswrapper[38936]: I0216 21:25:52.244252 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 16 21:25:52.321222 master-0 kubenswrapper[38936]: I0216 21:25:52.321149 38936 patch_prober.go:28] interesting pod/console-7dcddfd95-nldpw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body=
Feb 16 21:25:52.321561 master-0 kubenswrapper[38936]: I0216 21:25:52.321250 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7dcddfd95-nldpw" podUID="503aa866-c355-434a-a39c-fa6072733ea8" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused"
Feb 16 21:25:52.448111 master-0 kubenswrapper[38936]: I0216 21:25:52.448021 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Feb 16 21:25:52.524429 master-0 kubenswrapper[38936]: I0216 21:25:52.524362 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 16 21:25:52.556420 master-0 kubenswrapper[38936]: I0216 21:25:52.556359 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 16 21:25:52.643266 master-0 kubenswrapper[38936]: I0216 21:25:52.643204 38936 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 16 21:25:52.745348 master-0 kubenswrapper[38936]: I0216 21:25:52.745268 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 16 21:25:52.776553 master-0 kubenswrapper[38936]: I0216 21:25:52.776390 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-2t7md"
Feb 16 21:25:52.810895 master-0 kubenswrapper[38936]: I0216 21:25:52.810839 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Feb 16 21:25:52.849481 master-0 kubenswrapper[38936]: I0216 21:25:52.849421 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-xg8bz"
Feb 16 21:25:52.888715 master-0 kubenswrapper[38936]: I0216 21:25:52.888669 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-7m8u98371q9c9"
Feb 16 21:25:52.952951 master-0 kubenswrapper[38936]: I0216 21:25:52.952891 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 16 21:25:52.966951 master-0 kubenswrapper[38936]: I0216 21:25:52.966912 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-vvh6n"
Feb 16 21:25:53.168830 master-0 kubenswrapper[38936]: I0216 21:25:53.168699 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-sg7xc"
Feb 16 21:25:53.298402 master-0 kubenswrapper[38936]: I0216 21:25:53.298343 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 16 21:25:53.403175 master-0 kubenswrapper[38936]: I0216 21:25:53.403109 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Feb 16 21:25:53.420878 master-0 kubenswrapper[38936]: I0216 21:25:53.420797 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 16 21:25:53.481346 master-0 kubenswrapper[38936]: I0216 21:25:53.481272 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 16 21:25:53.571611 master-0 kubenswrapper[38936]: I0216 21:25:53.571550 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 16 21:25:53.660771 master-0 kubenswrapper[38936]: I0216 21:25:53.660631 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Feb 16 21:25:53.876917 master-0 kubenswrapper[38936]: I0216 21:25:53.876592 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 16 21:25:53.953581 master-0 kubenswrapper[38936]: I0216 21:25:53.948906 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-nztqm"
Feb 16 21:25:53.961708 master-0 kubenswrapper[38936]: I0216 21:25:53.961613 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Feb 16 21:25:54.036334 master-0 kubenswrapper[38936]: I0216 21:25:54.036269 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-6thqgv1l637aa"
Feb 16 21:25:54.153167 master-0 kubenswrapper[38936]: I0216 21:25:54.153040 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 21:25:54.166725 master-0 kubenswrapper[38936]: I0216 21:25:54.166685 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 16 21:25:54.183579 master-0 kubenswrapper[38936]: I0216 21:25:54.183522 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 16 21:25:54.364240 master-0 kubenswrapper[38936]: I0216 21:25:54.364168 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Feb 16 21:25:54.426208 master-0 kubenswrapper[38936]: I0216 21:25:54.423479 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 16 21:25:54.572084 master-0 kubenswrapper[38936]: I0216 21:25:54.572023 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 16 21:25:54.575111 master-0 kubenswrapper[38936]: I0216 21:25:54.575073 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Feb 16 21:25:54.716630 master-0 kubenswrapper[38936]: I0216 21:25:54.716516 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 16 21:25:54.790407 master-0 kubenswrapper[38936]: I0216 21:25:54.790328 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 16 21:25:54.871118 master-0 kubenswrapper[38936]: I0216 21:25:54.871037 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Feb 16 21:25:54.936250 master-0 kubenswrapper[38936]: I0216 21:25:54.936166 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 16 21:25:55.093827 master-0 kubenswrapper[38936]: I0216 21:25:55.093484 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Feb 16 21:25:55.127473 master-0 kubenswrapper[38936]: I0216 21:25:55.127394 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 16 21:25:55.189335 master-0 kubenswrapper[38936]: I0216 21:25:55.189256 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Feb 16 21:25:55.290204 master-0 kubenswrapper[38936]: I0216 21:25:55.290109 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 16 21:25:55.298355 master-0 kubenswrapper[38936]: I0216 21:25:55.298263 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 16 21:25:55.311965 master-0 kubenswrapper[38936]: I0216 21:25:55.311903 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Feb 16 21:25:55.325138 master-0 kubenswrapper[38936]: I0216 21:25:55.325070 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x77sl"
Feb 16 21:25:55.333317 master-0 kubenswrapper[38936]: I0216 21:25:55.333280 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Feb 16 21:25:55.359140 master-0 kubenswrapper[38936]: I0216 21:25:55.358966 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 16 21:25:55.394990 master-0 kubenswrapper[38936]: I0216 21:25:55.394888 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 16 21:25:55.583962 master-0 kubenswrapper[38936]: I0216 21:25:55.583891 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 16 21:25:55.692987 master-0 kubenswrapper[38936]: I0216 21:25:55.692864 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 16 21:25:55.725115 master-0 kubenswrapper[38936]: I0216 21:25:55.725057 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Feb 16 21:25:55.909563 master-0 kubenswrapper[38936]: I0216 21:25:55.909497 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Feb 16 21:25:55.911366 master-0 kubenswrapper[38936]: I0216 21:25:55.911338 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 16 21:25:55.956309 master-0 kubenswrapper[38936]: I0216 21:25:55.956183 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 16 21:25:56.008289 master-0 kubenswrapper[38936]: I0216 21:25:56.008231 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-mz2hl"
Feb 16 21:25:56.018298 master-0 kubenswrapper[38936]: I0216 21:25:56.018247 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 16 21:25:56.035524 master-0 kubenswrapper[38936]: I0216 21:25:56.035420 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Feb 16 21:25:56.073243 master-0 kubenswrapper[38936]: I0216 21:25:56.073112 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 16 21:25:56.085695 master-0 kubenswrapper[38936]: I0216 21:25:56.085550 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 16 21:25:56.232338 master-0 kubenswrapper[38936]: I0216 21:25:56.232100 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 16 21:25:56.263312 master-0 kubenswrapper[38936]: I0216 21:25:56.263249 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 16 21:25:56.295799 master-0 kubenswrapper[38936]: I0216 21:25:56.295744 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 16 21:25:56.304601 master-0 kubenswrapper[38936]: I0216 21:25:56.304545 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 16 21:25:56.362252 master-0 kubenswrapper[38936]: I0216 21:25:56.362215 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 16 21:25:56.397539 master-0 kubenswrapper[38936]: I0216 21:25:56.397493 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-q7pxf"
Feb 16 21:25:56.422543 master-0 kubenswrapper[38936]: I0216 21:25:56.422479 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 16 21:25:56.485198 master-0 kubenswrapper[38936]: I0216 21:25:56.485077 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 16 21:25:56.625271 master-0 kubenswrapper[38936]: I0216 21:25:56.625216 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 21:25:56.672866 master-0 kubenswrapper[38936]: I0216 21:25:56.672816 38936 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 16 21:25:56.673639 master-0 kubenswrapper[38936]: I0216 21:25:56.673573 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=36.786780498 podStartE2EDuration="46.673553425s" podCreationTimestamp="2026-02-16 21:25:10 +0000 UTC" firstStartedPulling="2026-02-16 21:25:12.38683849 +0000 UTC m=+142.738841852" lastFinishedPulling="2026-02-16 21:25:22.273611397 +0000 UTC m=+152.625614779" observedRunningTime="2026-02-16 21:25:40.548082476 +0000 UTC m=+170.900085838" watchObservedRunningTime="2026-02-16 21:25:56.673553425 +0000 UTC m=+187.025556787"
Feb 16 21:25:56.677436 master-0 kubenswrapper[38936]: I0216 21:25:56.677380 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=39.174461256 podStartE2EDuration="46.67736434s" podCreationTimestamp="2026-02-16 21:25:10 +0000 UTC" firstStartedPulling="2026-02-16 21:25:12.300846731 +0000 UTC m=+142.652850093" lastFinishedPulling="2026-02-16 21:25:19.803749815 +0000 UTC m=+150.155753177" observedRunningTime="2026-02-16 21:25:40.589302668 +0000 UTC m=+170.941306020" watchObservedRunningTime="2026-02-16 21:25:56.67736434 +0000 UTC m=+187.029367702"
Feb 16 21:25:56.679664 master-0 kubenswrapper[38936]: I0216 21:25:56.679615 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-f886f46f4-gz92q" podStartSLOduration=40.122609117 podStartE2EDuration="46.679606446s" podCreationTimestamp="2026-02-16 21:25:10 +0000 UTC" firstStartedPulling="2026-02-16 21:25:12.28512143 +0000 UTC m=+142.637124792" lastFinishedPulling="2026-02-16 21:25:18.842118769 +0000 UTC m=+149.194122121" observedRunningTime="2026-02-16 21:25:40.512926665 +0000 UTC m=+170.864930027" watchObservedRunningTime="2026-02-16 21:25:56.679606446 +0000 UTC m=+187.031609808"
Feb 16 21:25:56.683087 master-0 kubenswrapper[38936]: I0216 21:25:56.680936 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=37.680925948 podStartE2EDuration="37.680925948s" podCreationTimestamp="2026-02-16 21:25:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:25:40.436634915 +0000 UTC m=+170.788638297" watchObservedRunningTime="2026-02-16 21:25:56.680925948 +0000 UTC m=+187.032929310"
Feb 16 21:25:56.683087 master-0 kubenswrapper[38936]: I0216 21:25:56.682025 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-5c88849d7d-xfnmp","openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 16 21:25:56.683087 master-0 kubenswrapper[38936]: I0216 21:25:56.682092 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 16 21:25:56.687332 master-0 kubenswrapper[38936]: I0216 21:25:56.687278 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Feb 16 21:25:56.690133 master-0 kubenswrapper[38936]: I0216 21:25:56.690101 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 21:25:56.705201 master-0 kubenswrapper[38936]: I0216 21:25:56.705129 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=16.705019175 podStartE2EDuration="16.705019175s" podCreationTimestamp="2026-02-16 21:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:25:56.701255672 +0000 UTC m=+187.053259034" watchObservedRunningTime="2026-02-16 21:25:56.705019175 +0000 UTC m=+187.057022537"
Feb 16 21:25:56.708983 master-0 kubenswrapper[38936]: I0216 21:25:56.708936 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 16 21:25:56.854491 master-0 kubenswrapper[38936]: I0216 21:25:56.854441 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 16 21:25:56.863662 master-0 kubenswrapper[38936]: I0216 21:25:56.863602 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 21:25:56.935535 master-0 kubenswrapper[38936]: I0216 21:25:56.935484 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 16 21:25:56.967982 master-0 kubenswrapper[38936]: I0216 21:25:56.967926 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 16 21:25:56.998086 master-0 kubenswrapper[38936]: I0216 21:25:56.998031 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Feb 16 21:25:57.100302 master-0 kubenswrapper[38936]: I0216 21:25:57.100249 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-zlh9q"
Feb 16 21:25:57.102961 master-0 kubenswrapper[38936]: I0216 21:25:57.102930 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 16 21:25:57.126294 master-0 kubenswrapper[38936]: I0216 21:25:57.126173 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs"
Feb 16 21:25:57.165486 master-0 kubenswrapper[38936]: I0216 21:25:57.165438 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-2mlkm"
Feb 16 21:25:57.190329 master-0 kubenswrapper[38936]: I0216 21:25:57.190294 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 16 21:25:57.248479 master-0 kubenswrapper[38936]: I0216 21:25:57.248430 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 16 21:25:57.301911 master-0 kubenswrapper[38936]: I0216 21:25:57.301873 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 16 21:25:57.304097 master-0 kubenswrapper[38936]: I0216 21:25:57.304064 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 16 21:25:57.309399 master-0 kubenswrapper[38936]: I0216 21:25:57.309370 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 16 21:25:57.310166 master-0 kubenswrapper[38936]: I0216 21:25:57.310148 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Feb 16 21:25:57.314496 master-0 kubenswrapper[38936]: I0216 21:25:57.314481 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Feb 16 21:25:57.339001 master-0 kubenswrapper[38936]: I0216 21:25:57.338956 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-lbttq"
Feb 16 21:25:57.369320 master-0 kubenswrapper[38936]: I0216 21:25:57.369142 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Feb 16 21:25:57.376561 master-0 kubenswrapper[38936]: I0216 21:25:57.376487 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 16 21:25:57.435947 master-0 kubenswrapper[38936]: I0216 21:25:57.435904 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 16 21:25:57.448929 master-0 kubenswrapper[38936]: I0216 21:25:57.448869 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 16 21:25:57.471257 master-0 kubenswrapper[38936]: I0216 21:25:57.471216 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 16 21:25:57.471888 master-0 kubenswrapper[38936]: I0216 21:25:57.471849 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 16 21:25:57.493742 master-0 kubenswrapper[38936]: I0216 21:25:57.493613 38936 patch_prober.go:28] interesting pod/console-5dbf689d64-pgglg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Feb 16 21:25:57.493844 master-0 kubenswrapper[38936]: I0216 21:25:57.493806 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5dbf689d64-pgglg" podUID="55ec365e-5ef8-4291-9c01-7713bdd6f294" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Feb 16 21:25:57.541733 master-0 kubenswrapper[38936]: I0216 21:25:57.541672 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Feb 16 21:25:57.603354 master-0 kubenswrapper[38936]: I0216 21:25:57.603314 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Feb 16 21:25:57.655070 master-0 kubenswrapper[38936]: I0216 21:25:57.654948 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-a3un9as7vf9sv"
Feb 16 21:25:57.658805 master-0 kubenswrapper[38936]: I0216 21:25:57.658773 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 16 21:25:57.685206 master-0 kubenswrapper[38936]: I0216 21:25:57.685144 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Feb 16 21:25:57.703574 master-0 kubenswrapper[38936]: I0216 21:25:57.703487 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 16 21:25:57.763783 master-0 kubenswrapper[38936]: I0216 21:25:57.763454 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-jswsr"
Feb 16 21:25:57.792025 master-0 kubenswrapper[38936]: I0216 21:25:57.791988 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-zhm6n"
Feb 16 21:25:57.849178 master-0 kubenswrapper[38936]: I0216 21:25:57.849125 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 16 21:25:57.886421 master-0 kubenswrapper[38936]: I0216 21:25:57.886378 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46f9f317-a78e-4d18-b1c1-882631cfc6eb" path="/var/lib/kubelet/pods/46f9f317-a78e-4d18-b1c1-882631cfc6eb/volumes"
Feb 16 21:25:57.908286 master-0 kubenswrapper[38936]: I0216 21:25:57.908186 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 16 21:25:57.919778 master-0 kubenswrapper[38936]: I0216 21:25:57.919727 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 16 21:25:57.981006 master-0 kubenswrapper[38936]: I0216 21:25:57.980942 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Feb 16 21:25:57.990595 master-0 kubenswrapper[38936]: I0216 21:25:57.990326 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 16 21:25:58.061798 master-0 kubenswrapper[38936]: I0216 21:25:58.061746 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 16 21:25:58.074537 master-0 kubenswrapper[38936]: I0216 21:25:58.073875 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 16 21:25:58.081600 master-0 kubenswrapper[38936]: I0216 21:25:58.081556 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 16 21:25:58.117773 master-0 kubenswrapper[38936]: I0216 21:25:58.117716 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-z2nzd"
Feb 16 21:25:58.156524 master-0 kubenswrapper[38936]: I0216 21:25:58.156478 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 16 21:25:58.266700 master-0 kubenswrapper[38936]: I0216 21:25:58.266583 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 16 21:25:58.316326 master-0 kubenswrapper[38936]: I0216 21:25:58.316273 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-c0v76jahdu8si"
Feb 16 21:25:58.333076 master-0 kubenswrapper[38936]: I0216 21:25:58.333028 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Feb 16 21:25:58.397915 master-0 kubenswrapper[38936]: I0216 21:25:58.397838 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 16 21:25:58.422867 master-0 kubenswrapper[38936]: I0216 21:25:58.422206 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 16 21:25:58.449803 master-0 kubenswrapper[38936]: I0216 21:25:58.449764 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Feb 16 21:25:58.477451 master-0 kubenswrapper[38936]: I0216 21:25:58.477401 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Feb 16 21:25:58.488162 master-0 kubenswrapper[38936]: I0216 21:25:58.488134 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Feb 16 21:25:58.538799 master-0 kubenswrapper[38936]: I0216 21:25:58.538626 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 21:25:58.593144 master-0 kubenswrapper[38936]: I0216 21:25:58.593057 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 16 21:25:58.602799 master-0 kubenswrapper[38936]: I0216 21:25:58.602761 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-rmw54"
Feb 16 21:25:58.624662 master-0 kubenswrapper[38936]: I0216 21:25:58.624558 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 16 21:25:58.649578 master-0 kubenswrapper[38936]: I0216 21:25:58.649504 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 16 21:25:58.721221 master-0 kubenswrapper[38936]: I0216 21:25:58.721001 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 16 21:25:58.721731 master-0 kubenswrapper[38936]: I0216 21:25:58.721274 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Feb 16 21:25:58.792682 master-0 kubenswrapper[38936]: I0216 21:25:58.792442 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Feb 16 21:25:58.825570 master-0 kubenswrapper[38936]: I0216 21:25:58.825028 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 16 21:25:58.850970 master-0 kubenswrapper[38936]: I0216 21:25:58.850902 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 16 21:25:58.876696 master-0 kubenswrapper[38936]: I0216 21:25:58.876594 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 16 21:25:58.975773 master-0 kubenswrapper[38936]: I0216 21:25:58.975690 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Feb 16 21:25:59.062755 master-0 kubenswrapper[38936]: I0216 21:25:59.062561 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Feb 16 21:25:59.108672 master-0 kubenswrapper[38936]: I0216 21:25:59.108594 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 16 21:25:59.142451 master-0 kubenswrapper[38936]: I0216 21:25:59.142383 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 16 21:25:59.195308 master-0 kubenswrapper[38936]: I0216 21:25:59.195248 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 21:25:59.347473 master-0 kubenswrapper[38936]: I0216 21:25:59.347355 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Feb 16 21:25:59.431184 master-0 kubenswrapper[38936]: I0216 21:25:59.431117 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 16 21:25:59.457971 master-0 kubenswrapper[38936]: I0216 21:25:59.457919 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-6xcjr"
Feb 16 21:25:59.484641 master-0 kubenswrapper[38936]: I0216 21:25:59.484563 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 16 21:25:59.522850 master-0 kubenswrapper[38936]: I0216 21:25:59.522806 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 16 21:25:59.668744 master-0 kubenswrapper[38936]: I0216 21:25:59.668610 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 16 21:25:59.680606 master-0 kubenswrapper[38936]: I0216 21:25:59.680540 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Feb 16 21:25:59.682858 master-0 kubenswrapper[38936]: I0216 21:25:59.682815 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 16 21:25:59.690386 master-0 kubenswrapper[38936]: I0216 21:25:59.690336 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 16 21:25:59.834639 master-0 kubenswrapper[38936]: I0216 21:25:59.834596 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Feb 16 21:25:59.884597 master-0 kubenswrapper[38936]: I0216 21:25:59.884545 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 16 21:25:59.960226 master-0 kubenswrapper[38936]: I0216 21:25:59.960122 38936 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 21:25:59.964732 master-0 kubenswrapper[38936]: I0216 21:25:59.964679 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 16 21:25:59.994615 master-0 kubenswrapper[38936]: I0216 21:25:59.994578 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 16 21:26:00.007580 master-0 kubenswrapper[38936]: I0216 21:26:00.007533 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 21:26:00.054672 master-0 kubenswrapper[38936]: I0216 21:26:00.054614 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 21:26:00.060908 master-0 kubenswrapper[38936]: I0216 21:26:00.060664 38936 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 21:26:00.125697 master-0 kubenswrapper[38936]: I0216 21:26:00.125606 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Feb 16 21:26:00.136564 master-0 kubenswrapper[38936]: I0216 21:26:00.136500 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 21:26:00.155958 master-0 kubenswrapper[38936]: I0216 21:26:00.155908 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 16 21:26:00.192993 master-0 kubenswrapper[38936]: I0216 21:26:00.192935 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 21:26:00.215824 master-0 kubenswrapper[38936]: I0216 21:26:00.215696 38936 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 21:26:00.224892 master-0 kubenswrapper[38936]: I0216 21:26:00.224841 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 21:26:00.246861 master-0 kubenswrapper[38936]: I0216 21:26:00.246812 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 21:26:00.261506 master-0 kubenswrapper[38936]: I0216 21:26:00.261449 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Feb 16 21:26:00.274962 master-0 kubenswrapper[38936]: I0216 21:26:00.274941 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 21:26:00.283769 master-0 kubenswrapper[38936]: I0216 21:26:00.283734 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 21:26:00.394053 master-0 kubenswrapper[38936]: I0216 21:26:00.393979 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 21:26:00.420610 master-0 kubenswrapper[38936]: I0216 21:26:00.420555 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 21:26:00.431164 master-0 kubenswrapper[38936]: I0216 21:26:00.431126 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 21:26:00.458887 master-0 kubenswrapper[38936]: I0216 21:26:00.458784 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 21:26:00.467910 master-0 kubenswrapper[38936]: I0216 21:26:00.467792 38936 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 16 21:26:00.471993 master-0 kubenswrapper[38936]: I0216 21:26:00.471955 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-5tbmx" Feb 16 21:26:00.510772 master-0 kubenswrapper[38936]: I0216 21:26:00.510703 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 16 21:26:00.543172 master-0 kubenswrapper[38936]: I0216 21:26:00.543086 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 21:26:00.576324 master-0 kubenswrapper[38936]: I0216 21:26:00.576258 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 16 21:26:00.596112 master-0 kubenswrapper[38936]: I0216 21:26:00.596069 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 21:26:00.693869 master-0 kubenswrapper[38936]: I0216 21:26:00.693786 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-4brnj" Feb 16 21:26:00.735643 master-0 kubenswrapper[38936]: I0216 21:26:00.735482 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 21:26:00.766547 master-0 kubenswrapper[38936]: I0216 21:26:00.766472 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 16 21:26:00.900431 master-0 kubenswrapper[38936]: I0216 21:26:00.900385 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 16 21:26:00.952964 master-0 kubenswrapper[38936]: I0216 21:26:00.952888 38936 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 21:26:00.965909 master-0 kubenswrapper[38936]: I0216 21:26:00.965846 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 21:26:00.981861 master-0 kubenswrapper[38936]: I0216 21:26:00.981799 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 21:26:00.988877 master-0 kubenswrapper[38936]: I0216 21:26:00.988761 38936 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 21:26:01.008738 master-0 kubenswrapper[38936]: I0216 21:26:01.008644 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-vqmt8" Feb 16 21:26:01.023453 master-0 kubenswrapper[38936]: I0216 21:26:01.023399 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 16 21:26:01.025707 master-0 kubenswrapper[38936]: I0216 21:26:01.025636 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 21:26:01.049824 master-0 kubenswrapper[38936]: I0216 21:26:01.049774 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 16 21:26:01.095388 master-0 kubenswrapper[38936]: I0216 21:26:01.095349 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 21:26:01.099703 master-0 kubenswrapper[38936]: I0216 21:26:01.099666 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 16 21:26:01.129987 master-0 kubenswrapper[38936]: I0216 21:26:01.129915 
38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 21:26:01.190382 master-0 kubenswrapper[38936]: I0216 21:26:01.190336 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 21:26:01.222179 master-0 kubenswrapper[38936]: I0216 21:26:01.222135 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Feb 16 21:26:01.225361 master-0 kubenswrapper[38936]: I0216 21:26:01.225322 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 16 21:26:01.260483 master-0 kubenswrapper[38936]: I0216 21:26:01.260420 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 21:26:01.260874 master-0 kubenswrapper[38936]: I0216 21:26:01.260521 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 16 21:26:01.262718 master-0 kubenswrapper[38936]: I0216 21:26:01.262526 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 16 21:26:01.274571 master-0 kubenswrapper[38936]: I0216 21:26:01.274531 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 21:26:01.281884 master-0 kubenswrapper[38936]: I0216 21:26:01.281839 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 16 21:26:01.414135 master-0 kubenswrapper[38936]: I0216 21:26:01.414073 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 21:26:01.455580 
master-0 kubenswrapper[38936]: I0216 21:26:01.455526 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Feb 16 21:26:01.470507 master-0 kubenswrapper[38936]: I0216 21:26:01.470354 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 16 21:26:01.504021 master-0 kubenswrapper[38936]: I0216 21:26:01.503943 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 21:26:01.513599 master-0 kubenswrapper[38936]: I0216 21:26:01.513471 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-89d7ddf6d-l48q5"] Feb 16 21:26:01.513946 master-0 kubenswrapper[38936]: E0216 21:26:01.513899 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6862f5f5-da61-4347-9a9e-cb47b7e1261f" containerName="installer" Feb 16 21:26:01.513946 master-0 kubenswrapper[38936]: I0216 21:26:01.513923 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="6862f5f5-da61-4347-9a9e-cb47b7e1261f" containerName="installer" Feb 16 21:26:01.514061 master-0 kubenswrapper[38936]: E0216 21:26:01.513982 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f9f317-a78e-4d18-b1c1-882631cfc6eb" containerName="oauth-openshift" Feb 16 21:26:01.514061 master-0 kubenswrapper[38936]: I0216 21:26:01.514011 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f9f317-a78e-4d18-b1c1-882631cfc6eb" containerName="oauth-openshift" Feb 16 21:26:01.514329 master-0 kubenswrapper[38936]: I0216 21:26:01.514294 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f9f317-a78e-4d18-b1c1-882631cfc6eb" containerName="oauth-openshift" Feb 16 21:26:01.514640 master-0 kubenswrapper[38936]: I0216 21:26:01.514601 38936 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6862f5f5-da61-4347-9a9e-cb47b7e1261f" containerName="installer" Feb 16 21:26:01.515476 master-0 kubenswrapper[38936]: I0216 21:26:01.515391 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.520984 master-0 kubenswrapper[38936]: I0216 21:26:01.520715 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-2t2pz" Feb 16 21:26:01.520984 master-0 kubenswrapper[38936]: I0216 21:26:01.520786 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 21:26:01.520984 master-0 kubenswrapper[38936]: I0216 21:26:01.520913 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 21:26:01.520984 master-0 kubenswrapper[38936]: I0216 21:26:01.520949 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 21:26:01.521393 master-0 kubenswrapper[38936]: I0216 21:26:01.521094 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 21:26:01.521986 master-0 kubenswrapper[38936]: I0216 21:26:01.521850 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 21:26:01.522437 master-0 kubenswrapper[38936]: I0216 21:26:01.522125 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 21:26:01.522437 master-0 kubenswrapper[38936]: I0216 21:26:01.522250 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 21:26:01.522437 master-0 kubenswrapper[38936]: I0216 21:26:01.522271 38936 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 21:26:01.522437 master-0 kubenswrapper[38936]: I0216 21:26:01.522295 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 21:26:01.522437 master-0 kubenswrapper[38936]: I0216 21:26:01.522304 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 21:26:01.522437 master-0 kubenswrapper[38936]: I0216 21:26:01.522333 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 21:26:01.527058 master-0 kubenswrapper[38936]: I0216 21:26:01.527008 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-89d7ddf6d-l48q5"] Feb 16 21:26:01.527656 master-0 kubenswrapper[38936]: I0216 21:26:01.527617 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 21:26:01.529694 master-0 kubenswrapper[38936]: I0216 21:26:01.529581 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-gpdzh" Feb 16 21:26:01.533007 master-0 kubenswrapper[38936]: I0216 21:26:01.532347 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 21:26:01.559728 master-0 kubenswrapper[38936]: I0216 21:26:01.559273 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 21:26:01.588448 master-0 kubenswrapper[38936]: I0216 21:26:01.588386 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 21:26:01.624019 master-0 kubenswrapper[38936]: I0216 
21:26:01.623959 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-system-service-ca\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.624019 master-0 kubenswrapper[38936]: I0216 21:26:01.624015 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0699e005-3049-4dbe-8b68-bbdeae5c9174-audit-dir\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.624266 master-0 kubenswrapper[38936]: I0216 21:26:01.624036 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-user-template-login\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.624266 master-0 kubenswrapper[38936]: I0216 21:26:01.624061 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-system-cliconfig\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.624266 master-0 kubenswrapper[38936]: I0216 21:26:01.624083 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.624266 master-0 kubenswrapper[38936]: I0216 21:26:01.624103 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-system-serving-cert\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.624266 master-0 kubenswrapper[38936]: I0216 21:26:01.624124 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-system-session\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.624266 master-0 kubenswrapper[38936]: I0216 21:26:01.624162 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-system-router-certs\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.624266 master-0 kubenswrapper[38936]: I0216 21:26:01.624185 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/0699e005-3049-4dbe-8b68-bbdeae5c9174-audit-policies\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.624266 master-0 kubenswrapper[38936]: I0216 21:26:01.624212 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-user-template-error\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.624266 master-0 kubenswrapper[38936]: I0216 21:26:01.624238 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.624266 master-0 kubenswrapper[38936]: I0216 21:26:01.624254 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.624569 master-0 kubenswrapper[38936]: I0216 21:26:01.624281 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkgmm\" (UniqueName: \"kubernetes.io/projected/0699e005-3049-4dbe-8b68-bbdeae5c9174-kube-api-access-nkgmm\") pod 
\"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.645265 master-0 kubenswrapper[38936]: I0216 21:26:01.645109 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Feb 16 21:26:01.680569 master-0 kubenswrapper[38936]: I0216 21:26:01.680503 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 21:26:01.726150 master-0 kubenswrapper[38936]: I0216 21:26:01.726082 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-system-cliconfig\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.726350 master-0 kubenswrapper[38936]: I0216 21:26:01.726263 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.726350 master-0 kubenswrapper[38936]: I0216 21:26:01.726290 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-system-serving-cert\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.726350 master-0 kubenswrapper[38936]: I0216 21:26:01.726314 
38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-system-session\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.726467 master-0 kubenswrapper[38936]: I0216 21:26:01.726374 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-system-router-certs\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.726677 master-0 kubenswrapper[38936]: I0216 21:26:01.726628 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0699e005-3049-4dbe-8b68-bbdeae5c9174-audit-policies\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.726725 master-0 kubenswrapper[38936]: I0216 21:26:01.726700 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-user-template-error\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.726760 master-0 kubenswrapper[38936]: I0216 21:26:01.726740 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.726842 master-0 kubenswrapper[38936]: I0216 21:26:01.726770 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.727104 master-0 kubenswrapper[38936]: I0216 21:26:01.727049 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-system-cliconfig\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.727104 master-0 kubenswrapper[38936]: I0216 21:26:01.727086 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkgmm\" (UniqueName: \"kubernetes.io/projected/0699e005-3049-4dbe-8b68-bbdeae5c9174-kube-api-access-nkgmm\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.727227 master-0 kubenswrapper[38936]: I0216 21:26:01.727189 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-system-service-ca\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") 
" pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.727279 master-0 kubenswrapper[38936]: I0216 21:26:01.727245 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0699e005-3049-4dbe-8b68-bbdeae5c9174-audit-dir\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.727279 master-0 kubenswrapper[38936]: I0216 21:26:01.727273 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-user-template-login\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.727419 master-0 kubenswrapper[38936]: I0216 21:26:01.727358 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0699e005-3049-4dbe-8b68-bbdeae5c9174-audit-dir\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.727900 master-0 kubenswrapper[38936]: I0216 21:26:01.727872 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-system-service-ca\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.728206 master-0 kubenswrapper[38936]: I0216 21:26:01.728172 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/0699e005-3049-4dbe-8b68-bbdeae5c9174-audit-policies\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.728606 master-0 kubenswrapper[38936]: I0216 21:26:01.728564 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.729642 master-0 kubenswrapper[38936]: I0216 21:26:01.729612 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-system-serving-cert\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.730051 master-0 kubenswrapper[38936]: I0216 21:26:01.730023 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.730586 master-0 kubenswrapper[38936]: I0216 21:26:01.730548 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-user-template-login\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " 
pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.730775 master-0 kubenswrapper[38936]: I0216 21:26:01.730750 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-system-session\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.730849 master-0 kubenswrapper[38936]: I0216 21:26:01.730805 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-system-router-certs\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.731125 master-0 kubenswrapper[38936]: I0216 21:26:01.731058 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.731343 master-0 kubenswrapper[38936]: I0216 21:26:01.731317 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0699e005-3049-4dbe-8b68-bbdeae5c9174-v4-0-config-user-template-error\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.744997 master-0 kubenswrapper[38936]: I0216 21:26:01.744917 38936 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-nkgmm\" (UniqueName: \"kubernetes.io/projected/0699e005-3049-4dbe-8b68-bbdeae5c9174-kube-api-access-nkgmm\") pod \"oauth-openshift-89d7ddf6d-l48q5\" (UID: \"0699e005-3049-4dbe-8b68-bbdeae5c9174\") " pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.800983 master-0 kubenswrapper[38936]: I0216 21:26:01.800867 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 21:26:01.821545 master-0 kubenswrapper[38936]: I0216 21:26:01.821480 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Feb 16 21:26:01.843496 master-0 kubenswrapper[38936]: I0216 21:26:01.843425 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:01.971783 master-0 kubenswrapper[38936]: I0216 21:26:01.968133 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 21:26:01.986763 master-0 kubenswrapper[38936]: I0216 21:26:01.986552 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 21:26:01.988396 master-0 kubenswrapper[38936]: I0216 21:26:01.988349 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 16 21:26:02.007476 master-0 kubenswrapper[38936]: I0216 21:26:02.007427 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 21:26:02.026220 master-0 kubenswrapper[38936]: I0216 21:26:02.026167 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Feb 16 21:26:02.051265 master-0 kubenswrapper[38936]: I0216 
21:26:02.051179 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 16 21:26:02.052488 master-0 kubenswrapper[38936]: I0216 21:26:02.052311 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 21:26:02.054142 master-0 kubenswrapper[38936]: I0216 21:26:02.054015 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-pt7pr" Feb 16 21:26:02.183245 master-0 kubenswrapper[38936]: I0216 21:26:02.183171 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 21:26:02.250950 master-0 kubenswrapper[38936]: I0216 21:26:02.250843 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 21:26:02.265776 master-0 kubenswrapper[38936]: I0216 21:26:02.265725 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-89d7ddf6d-l48q5"] Feb 16 21:26:02.282148 master-0 kubenswrapper[38936]: I0216 21:26:02.282108 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 21:26:02.321143 master-0 kubenswrapper[38936]: I0216 21:26:02.321085 38936 patch_prober.go:28] interesting pod/console-7dcddfd95-nldpw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Feb 16 21:26:02.321344 master-0 kubenswrapper[38936]: I0216 21:26:02.321159 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7dcddfd95-nldpw" podUID="503aa866-c355-434a-a39c-fa6072733ea8" containerName="console" probeResult="failure" output="Get \"https://10.128.0.93:8443/health\": dial tcp 10.128.0.93:8443: connect: 
connection refused" Feb 16 21:26:02.342367 master-0 kubenswrapper[38936]: I0216 21:26:02.342311 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 21:26:02.344240 master-0 kubenswrapper[38936]: I0216 21:26:02.344052 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 21:26:02.345943 master-0 kubenswrapper[38936]: I0216 21:26:02.345751 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 21:26:02.394660 master-0 kubenswrapper[38936]: I0216 21:26:02.393866 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 21:26:02.402184 master-0 kubenswrapper[38936]: I0216 21:26:02.402134 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 21:26:02.447666 master-0 kubenswrapper[38936]: I0216 21:26:02.447606 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 21:26:02.463589 master-0 kubenswrapper[38936]: I0216 21:26:02.463558 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 16 21:26:02.531868 master-0 kubenswrapper[38936]: I0216 21:26:02.531818 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 21:26:02.637050 master-0 kubenswrapper[38936]: I0216 21:26:02.636928 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 21:26:02.764331 master-0 kubenswrapper[38936]: I0216 21:26:02.764267 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" 
Feb 16 21:26:02.765132 master-0 kubenswrapper[38936]: I0216 21:26:02.765101 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Feb 16 21:26:02.863827 master-0 kubenswrapper[38936]: I0216 21:26:02.863766 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 16 21:26:02.871810 master-0 kubenswrapper[38936]: I0216 21:26:02.871752 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 21:26:02.951120 master-0 kubenswrapper[38936]: I0216 21:26:02.951003 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 21:26:02.995389 master-0 kubenswrapper[38936]: I0216 21:26:02.995335 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 16 21:26:03.071803 master-0 kubenswrapper[38936]: I0216 21:26:03.071750 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" event={"ID":"0699e005-3049-4dbe-8b68-bbdeae5c9174","Type":"ContainerStarted","Data":"ca58f0bbca17a00a9ae4f60d29700608693fc61ad9de309b7e24280829b35cf8"} Feb 16 21:26:03.071803 master-0 kubenswrapper[38936]: I0216 21:26:03.071801 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" event={"ID":"0699e005-3049-4dbe-8b68-bbdeae5c9174","Type":"ContainerStarted","Data":"064b63413e63582f2d6b0f3a69c6e0d01f2f6dceeeeb59631d879005d02821e6"} Feb 16 21:26:03.072097 master-0 kubenswrapper[38936]: I0216 21:26:03.072067 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:03.077205 master-0 kubenswrapper[38936]: I0216 21:26:03.077154 38936 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" Feb 16 21:26:03.099228 master-0 kubenswrapper[38936]: I0216 21:26:03.099172 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 16 21:26:03.108742 master-0 kubenswrapper[38936]: I0216 21:26:03.108658 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-89d7ddf6d-l48q5" podStartSLOduration=52.108638076 podStartE2EDuration="52.108638076s" podCreationTimestamp="2026-02-16 21:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:26:03.101335756 +0000 UTC m=+193.453339118" watchObservedRunningTime="2026-02-16 21:26:03.108638076 +0000 UTC m=+193.460641438" Feb 16 21:26:03.109993 master-0 kubenswrapper[38936]: I0216 21:26:03.109965 38936 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 16 21:26:03.110198 master-0 kubenswrapper[38936]: I0216 21:26:03.110171 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="32286c81635de6de1cf7f328273c1a49" containerName="startup-monitor" containerID="cri-o://a57496ea837967c5d008c03839f8820699ee50556c7191b90bd527ade4ba19ad" gracePeriod=5 Feb 16 21:26:03.118719 master-0 kubenswrapper[38936]: I0216 21:26:03.118665 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 21:26:03.158492 master-0 kubenswrapper[38936]: I0216 21:26:03.158440 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 21:26:03.365943 master-0 kubenswrapper[38936]: I0216 21:26:03.365888 38936 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 21:26:03.432811 master-0 kubenswrapper[38936]: I0216 21:26:03.432761 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 21:26:03.566367 master-0 kubenswrapper[38936]: I0216 21:26:03.566285 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 21:26:03.602691 master-0 kubenswrapper[38936]: I0216 21:26:03.602157 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 21:26:03.758370 master-0 kubenswrapper[38936]: I0216 21:26:03.758330 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 16 21:26:03.799262 master-0 kubenswrapper[38936]: I0216 21:26:03.799197 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 16 21:26:03.973523 master-0 kubenswrapper[38936]: I0216 21:26:03.973458 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 21:26:03.975172 master-0 kubenswrapper[38936]: I0216 21:26:03.975137 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 21:26:03.982938 master-0 kubenswrapper[38936]: I0216 21:26:03.982897 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 16 21:26:04.000217 master-0 kubenswrapper[38936]: I0216 21:26:04.000186 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 16 21:26:04.138533 master-0 kubenswrapper[38936]: 
I0216 21:26:04.138413 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Feb 16 21:26:04.185637 master-0 kubenswrapper[38936]: I0216 21:26:04.185555 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-lxr8m" Feb 16 21:26:04.202640 master-0 kubenswrapper[38936]: I0216 21:26:04.202544 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 16 21:26:04.215940 master-0 kubenswrapper[38936]: I0216 21:26:04.215846 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 21:26:04.275314 master-0 kubenswrapper[38936]: I0216 21:26:04.275231 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 21:26:04.308018 master-0 kubenswrapper[38936]: I0216 21:26:04.307962 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 21:26:04.426972 master-0 kubenswrapper[38936]: I0216 21:26:04.426823 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 21:26:04.645492 master-0 kubenswrapper[38936]: I0216 21:26:04.645416 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 16 21:26:04.682406 master-0 kubenswrapper[38936]: I0216 21:26:04.682263 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 21:26:04.692038 master-0 kubenswrapper[38936]: I0216 21:26:04.691991 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 21:26:04.753976 master-0 kubenswrapper[38936]: I0216 
21:26:04.753896 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 21:26:04.768574 master-0 kubenswrapper[38936]: I0216 21:26:04.768510 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-lhkmd" Feb 16 21:26:04.798969 master-0 kubenswrapper[38936]: I0216 21:26:04.798910 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 21:26:04.925990 master-0 kubenswrapper[38936]: I0216 21:26:04.925909 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 21:26:04.963962 master-0 kubenswrapper[38936]: I0216 21:26:04.963810 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 21:26:05.019920 master-0 kubenswrapper[38936]: I0216 21:26:05.019848 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 21:26:05.118304 master-0 kubenswrapper[38936]: I0216 21:26:05.116611 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 21:26:05.145914 master-0 kubenswrapper[38936]: I0216 21:26:05.145856 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 21:26:05.176470 master-0 kubenswrapper[38936]: I0216 21:26:05.176397 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 21:26:05.279486 master-0 kubenswrapper[38936]: I0216 21:26:05.279434 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 16 21:26:05.324301 master-0 kubenswrapper[38936]: I0216 
21:26:05.324223 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 16 21:26:05.448994 master-0 kubenswrapper[38936]: I0216 21:26:05.448906 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 16 21:26:05.465678 master-0 kubenswrapper[38936]: I0216 21:26:05.465609 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-qjg6f" Feb 16 21:26:05.886499 master-0 kubenswrapper[38936]: I0216 21:26:05.886455 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 21:26:05.977405 master-0 kubenswrapper[38936]: I0216 21:26:05.977356 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-knxzz" Feb 16 21:26:06.153165 master-0 kubenswrapper[38936]: I0216 21:26:06.153017 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 21:26:06.252534 master-0 kubenswrapper[38936]: I0216 21:26:06.252471 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 21:26:06.331573 master-0 kubenswrapper[38936]: I0216 21:26:06.331478 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 21:26:06.412837 master-0 kubenswrapper[38936]: I0216 21:26:06.412714 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 21:26:06.598332 master-0 kubenswrapper[38936]: I0216 21:26:06.598230 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 16 
21:26:06.636859 master-0 kubenswrapper[38936]: I0216 21:26:06.636786 38936 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 21:26:06.641290 master-0 kubenswrapper[38936]: I0216 21:26:06.641241 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 21:26:06.718804 master-0 kubenswrapper[38936]: I0216 21:26:06.718356 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 21:26:06.871567 master-0 kubenswrapper[38936]: I0216 21:26:06.871465 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 21:26:06.942883 master-0 kubenswrapper[38936]: I0216 21:26:06.941233 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 16 21:26:07.003211 master-0 kubenswrapper[38936]: I0216 21:26:07.003132 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 21:26:07.149772 master-0 kubenswrapper[38936]: I0216 21:26:07.149732 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 21:26:07.174906 master-0 kubenswrapper[38936]: I0216 21:26:07.174871 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 16 21:26:07.493449 master-0 kubenswrapper[38936]: I0216 21:26:07.493300 38936 patch_prober.go:28] interesting pod/console-5dbf689d64-pgglg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Feb 16 21:26:07.493449 master-0 kubenswrapper[38936]: I0216 21:26:07.493374 38936 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-console/console-5dbf689d64-pgglg" podUID="55ec365e-5ef8-4291-9c01-7713bdd6f294" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Feb 16 21:26:07.601742 master-0 kubenswrapper[38936]: I0216 21:26:07.601632 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 21:26:07.604596 master-0 kubenswrapper[38936]: I0216 21:26:07.604547 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 21:26:07.756339 master-0 kubenswrapper[38936]: I0216 21:26:07.756285 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 21:26:08.012505 master-0 kubenswrapper[38936]: I0216 21:26:08.012284 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 21:26:08.197174 master-0 kubenswrapper[38936]: I0216 21:26:08.197125 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 21:26:08.299118 master-0 kubenswrapper[38936]: I0216 21:26:08.298927 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 21:26:08.381793 master-0 kubenswrapper[38936]: I0216 21:26:08.381733 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 16 21:26:08.692591 master-0 kubenswrapper[38936]: I0216 21:26:08.692533 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_32286c81635de6de1cf7f328273c1a49/startup-monitor/0.log" Feb 16 21:26:08.692941 master-0 kubenswrapper[38936]: I0216 
21:26:08.692609 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:26:08.841475 master-0 kubenswrapper[38936]: I0216 21:26:08.841380 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-resource-dir\") pod \"32286c81635de6de1cf7f328273c1a49\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " Feb 16 21:26:08.841785 master-0 kubenswrapper[38936]: I0216 21:26:08.841494 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "32286c81635de6de1cf7f328273c1a49" (UID: "32286c81635de6de1cf7f328273c1a49"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:26:08.841785 master-0 kubenswrapper[38936]: I0216 21:26:08.841505 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-lock\") pod \"32286c81635de6de1cf7f328273c1a49\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " Feb 16 21:26:08.841785 master-0 kubenswrapper[38936]: I0216 21:26:08.841598 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-log\") pod \"32286c81635de6de1cf7f328273c1a49\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " Feb 16 21:26:08.841785 master-0 kubenswrapper[38936]: I0216 21:26:08.841692 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-manifests\") pod \"32286c81635de6de1cf7f328273c1a49\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " Feb 16 
21:26:08.841785 master-0 kubenswrapper[38936]: I0216 21:26:08.841743 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-lock" (OuterVolumeSpecName: "var-lock") pod "32286c81635de6de1cf7f328273c1a49" (UID: "32286c81635de6de1cf7f328273c1a49"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:26:08.841785 master-0 kubenswrapper[38936]: I0216 21:26:08.841779 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-log" (OuterVolumeSpecName: "var-log") pod "32286c81635de6de1cf7f328273c1a49" (UID: "32286c81635de6de1cf7f328273c1a49"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:26:08.842068 master-0 kubenswrapper[38936]: I0216 21:26:08.841808 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-pod-resource-dir\") pod \"32286c81635de6de1cf7f328273c1a49\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " Feb 16 21:26:08.842068 master-0 kubenswrapper[38936]: I0216 21:26:08.841833 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-manifests" (OuterVolumeSpecName: "manifests") pod "32286c81635de6de1cf7f328273c1a49" (UID: "32286c81635de6de1cf7f328273c1a49"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:26:08.842412 master-0 kubenswrapper[38936]: I0216 21:26:08.842375 38936 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:26:08.842484 master-0 kubenswrapper[38936]: I0216 21:26:08.842414 38936 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 21:26:08.842484 master-0 kubenswrapper[38936]: I0216 21:26:08.842435 38936 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-log\") on node \"master-0\" DevicePath \"\"" Feb 16 21:26:08.842484 master-0 kubenswrapper[38936]: I0216 21:26:08.842455 38936 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-manifests\") on node \"master-0\" DevicePath \"\"" Feb 16 21:26:08.857780 master-0 kubenswrapper[38936]: I0216 21:26:08.857588 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "32286c81635de6de1cf7f328273c1a49" (UID: "32286c81635de6de1cf7f328273c1a49"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:26:08.944532 master-0 kubenswrapper[38936]: I0216 21:26:08.944389 38936 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:26:09.127349 master-0 kubenswrapper[38936]: I0216 21:26:09.127274 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_32286c81635de6de1cf7f328273c1a49/startup-monitor/0.log" Feb 16 21:26:09.128048 master-0 kubenswrapper[38936]: I0216 21:26:09.127395 38936 generic.go:334] "Generic (PLEG): container finished" podID="32286c81635de6de1cf7f328273c1a49" containerID="a57496ea837967c5d008c03839f8820699ee50556c7191b90bd527ade4ba19ad" exitCode=137 Feb 16 21:26:09.128048 master-0 kubenswrapper[38936]: I0216 21:26:09.127478 38936 scope.go:117] "RemoveContainer" containerID="a57496ea837967c5d008c03839f8820699ee50556c7191b90bd527ade4ba19ad" Feb 16 21:26:09.128048 master-0 kubenswrapper[38936]: I0216 21:26:09.127567 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 21:26:09.157101 master-0 kubenswrapper[38936]: I0216 21:26:09.157052 38936 scope.go:117] "RemoveContainer" containerID="a57496ea837967c5d008c03839f8820699ee50556c7191b90bd527ade4ba19ad" Feb 16 21:26:09.157732 master-0 kubenswrapper[38936]: E0216 21:26:09.157620 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a57496ea837967c5d008c03839f8820699ee50556c7191b90bd527ade4ba19ad\": container with ID starting with a57496ea837967c5d008c03839f8820699ee50556c7191b90bd527ade4ba19ad not found: ID does not exist" containerID="a57496ea837967c5d008c03839f8820699ee50556c7191b90bd527ade4ba19ad" Feb 16 21:26:09.157732 master-0 kubenswrapper[38936]: I0216 21:26:09.157697 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a57496ea837967c5d008c03839f8820699ee50556c7191b90bd527ade4ba19ad"} err="failed to get container status \"a57496ea837967c5d008c03839f8820699ee50556c7191b90bd527ade4ba19ad\": rpc error: code = NotFound desc = could not find container \"a57496ea837967c5d008c03839f8820699ee50556c7191b90bd527ade4ba19ad\": container with ID starting with a57496ea837967c5d008c03839f8820699ee50556c7191b90bd527ade4ba19ad not found: ID does not exist" Feb 16 21:26:09.884865 master-0 kubenswrapper[38936]: I0216 21:26:09.884788 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32286c81635de6de1cf7f328273c1a49" path="/var/lib/kubelet/pods/32286c81635de6de1cf7f328273c1a49/volumes" Feb 16 21:26:09.885211 master-0 kubenswrapper[38936]: I0216 21:26:09.885071 38936 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Feb 16 21:26:09.905149 master-0 kubenswrapper[38936]: I0216 21:26:09.905076 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 16 21:26:09.905516 master-0 kubenswrapper[38936]: I0216 21:26:09.905481 38936 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="a42f90b6-8fc7-43bf-ae7b-a8eea1c68c0b" Feb 16 21:26:09.911213 master-0 kubenswrapper[38936]: I0216 21:26:09.911132 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 16 21:26:09.911213 master-0 kubenswrapper[38936]: I0216 21:26:09.911188 38936 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="a42f90b6-8fc7-43bf-ae7b-a8eea1c68c0b" Feb 16 21:26:11.397835 master-0 kubenswrapper[38936]: I0216 21:26:11.397761 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:26:11.428950 master-0 kubenswrapper[38936]: I0216 21:26:11.428909 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:26:12.191051 master-0 kubenswrapper[38936]: I0216 21:26:12.190964 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 21:26:12.325500 master-0 kubenswrapper[38936]: I0216 21:26:12.325372 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:26:12.329469 master-0 kubenswrapper[38936]: I0216 21:26:12.329404 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:26:17.500390 master-0 kubenswrapper[38936]: I0216 21:26:17.500312 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-console/console-5dbf689d64-pgglg" Feb 16 21:26:17.505095 master-0 kubenswrapper[38936]: I0216 21:26:17.504957 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5dbf689d64-pgglg" Feb 16 21:26:17.606490 master-0 kubenswrapper[38936]: I0216 21:26:17.606312 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7dcddfd95-nldpw"] Feb 16 21:26:42.649779 master-0 kubenswrapper[38936]: I0216 21:26:42.649669 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7dcddfd95-nldpw" podUID="503aa866-c355-434a-a39c-fa6072733ea8" containerName="console" containerID="cri-o://af18d2993ae3387589e2da61f5c3ac7d0eac8cab034fa7f17941a3d802dd5feb" gracePeriod=15 Feb 16 21:26:43.003961 master-0 kubenswrapper[38936]: I0216 21:26:43.003905 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7dcddfd95-nldpw_503aa866-c355-434a-a39c-fa6072733ea8/console/0.log" Feb 16 21:26:43.003961 master-0 kubenswrapper[38936]: I0216 21:26:43.003954 38936 generic.go:334] "Generic (PLEG): container finished" podID="503aa866-c355-434a-a39c-fa6072733ea8" containerID="af18d2993ae3387589e2da61f5c3ac7d0eac8cab034fa7f17941a3d802dd5feb" exitCode=2 Feb 16 21:26:43.004230 master-0 kubenswrapper[38936]: I0216 21:26:43.003983 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7dcddfd95-nldpw" event={"ID":"503aa866-c355-434a-a39c-fa6072733ea8","Type":"ContainerDied","Data":"af18d2993ae3387589e2da61f5c3ac7d0eac8cab034fa7f17941a3d802dd5feb"} Feb 16 21:26:43.149015 master-0 kubenswrapper[38936]: I0216 21:26:43.148926 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7dcddfd95-nldpw_503aa866-c355-434a-a39c-fa6072733ea8/console/0.log" Feb 16 21:26:43.149015 master-0 kubenswrapper[38936]: I0216 21:26:43.149026 38936 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:26:43.265597 master-0 kubenswrapper[38936]: I0216 21:26:43.265464 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/503aa866-c355-434a-a39c-fa6072733ea8-trusted-ca-bundle\") pod \"503aa866-c355-434a-a39c-fa6072733ea8\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " Feb 16 21:26:43.265597 master-0 kubenswrapper[38936]: I0216 21:26:43.265535 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/503aa866-c355-434a-a39c-fa6072733ea8-console-serving-cert\") pod \"503aa866-c355-434a-a39c-fa6072733ea8\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " Feb 16 21:26:43.265597 master-0 kubenswrapper[38936]: I0216 21:26:43.265576 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/503aa866-c355-434a-a39c-fa6072733ea8-oauth-serving-cert\") pod \"503aa866-c355-434a-a39c-fa6072733ea8\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " Feb 16 21:26:43.265957 master-0 kubenswrapper[38936]: I0216 21:26:43.265724 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/503aa866-c355-434a-a39c-fa6072733ea8-console-oauth-config\") pod \"503aa866-c355-434a-a39c-fa6072733ea8\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " Feb 16 21:26:43.265957 master-0 kubenswrapper[38936]: I0216 21:26:43.265760 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwx2q\" (UniqueName: \"kubernetes.io/projected/503aa866-c355-434a-a39c-fa6072733ea8-kube-api-access-dwx2q\") pod \"503aa866-c355-434a-a39c-fa6072733ea8\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " Feb 16 21:26:43.265957 master-0 
kubenswrapper[38936]: I0216 21:26:43.265787 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/503aa866-c355-434a-a39c-fa6072733ea8-service-ca\") pod \"503aa866-c355-434a-a39c-fa6072733ea8\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " Feb 16 21:26:43.265957 master-0 kubenswrapper[38936]: I0216 21:26:43.265810 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/503aa866-c355-434a-a39c-fa6072733ea8-console-config\") pod \"503aa866-c355-434a-a39c-fa6072733ea8\" (UID: \"503aa866-c355-434a-a39c-fa6072733ea8\") " Feb 16 21:26:43.266263 master-0 kubenswrapper[38936]: I0216 21:26:43.266198 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/503aa866-c355-434a-a39c-fa6072733ea8-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "503aa866-c355-434a-a39c-fa6072733ea8" (UID: "503aa866-c355-434a-a39c-fa6072733ea8"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:26:43.266526 master-0 kubenswrapper[38936]: I0216 21:26:43.266493 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/503aa866-c355-434a-a39c-fa6072733ea8-console-config" (OuterVolumeSpecName: "console-config") pod "503aa866-c355-434a-a39c-fa6072733ea8" (UID: "503aa866-c355-434a-a39c-fa6072733ea8"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:26:43.266577 master-0 kubenswrapper[38936]: I0216 21:26:43.266549 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/503aa866-c355-434a-a39c-fa6072733ea8-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "503aa866-c355-434a-a39c-fa6072733ea8" (UID: "503aa866-c355-434a-a39c-fa6072733ea8"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:26:43.266895 master-0 kubenswrapper[38936]: I0216 21:26:43.266870 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/503aa866-c355-434a-a39c-fa6072733ea8-service-ca" (OuterVolumeSpecName: "service-ca") pod "503aa866-c355-434a-a39c-fa6072733ea8" (UID: "503aa866-c355-434a-a39c-fa6072733ea8"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:26:43.272136 master-0 kubenswrapper[38936]: I0216 21:26:43.268301 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/503aa866-c355-434a-a39c-fa6072733ea8-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "503aa866-c355-434a-a39c-fa6072733ea8" (UID: "503aa866-c355-434a-a39c-fa6072733ea8"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:26:43.272136 master-0 kubenswrapper[38936]: I0216 21:26:43.270536 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/503aa866-c355-434a-a39c-fa6072733ea8-kube-api-access-dwx2q" (OuterVolumeSpecName: "kube-api-access-dwx2q") pod "503aa866-c355-434a-a39c-fa6072733ea8" (UID: "503aa866-c355-434a-a39c-fa6072733ea8"). InnerVolumeSpecName "kube-api-access-dwx2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:26:43.275312 master-0 kubenswrapper[38936]: I0216 21:26:43.275244 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/503aa866-c355-434a-a39c-fa6072733ea8-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "503aa866-c355-434a-a39c-fa6072733ea8" (UID: "503aa866-c355-434a-a39c-fa6072733ea8"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:26:43.368147 master-0 kubenswrapper[38936]: I0216 21:26:43.368050 38936 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/503aa866-c355-434a-a39c-fa6072733ea8-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:26:43.368147 master-0 kubenswrapper[38936]: I0216 21:26:43.368123 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwx2q\" (UniqueName: \"kubernetes.io/projected/503aa866-c355-434a-a39c-fa6072733ea8-kube-api-access-dwx2q\") on node \"master-0\" DevicePath \"\"" Feb 16 21:26:43.368147 master-0 kubenswrapper[38936]: I0216 21:26:43.368149 38936 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/503aa866-c355-434a-a39c-fa6072733ea8-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 21:26:43.368472 master-0 kubenswrapper[38936]: I0216 21:26:43.368167 38936 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/503aa866-c355-434a-a39c-fa6072733ea8-console-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:26:43.368472 master-0 kubenswrapper[38936]: I0216 21:26:43.368188 38936 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/503aa866-c355-434a-a39c-fa6072733ea8-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:26:43.368472 master-0 kubenswrapper[38936]: I0216 21:26:43.368207 38936 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/503aa866-c355-434a-a39c-fa6072733ea8-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 21:26:43.368472 master-0 kubenswrapper[38936]: I0216 21:26:43.368227 38936 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/503aa866-c355-434a-a39c-fa6072733ea8-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 21:26:44.015066 master-0 kubenswrapper[38936]: I0216 21:26:44.015006 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7dcddfd95-nldpw_503aa866-c355-434a-a39c-fa6072733ea8/console/0.log" Feb 16 21:26:44.015786 master-0 kubenswrapper[38936]: I0216 21:26:44.015081 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7dcddfd95-nldpw" event={"ID":"503aa866-c355-434a-a39c-fa6072733ea8","Type":"ContainerDied","Data":"af571952b96712d423a80c1e7802ea002bb54b6b874b81c738e367b6a15e642a"} Feb 16 21:26:44.015786 master-0 kubenswrapper[38936]: I0216 21:26:44.015129 38936 scope.go:117] "RemoveContainer" containerID="af18d2993ae3387589e2da61f5c3ac7d0eac8cab034fa7f17941a3d802dd5feb" Feb 16 21:26:44.015786 master-0 kubenswrapper[38936]: I0216 21:26:44.015224 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7dcddfd95-nldpw" Feb 16 21:26:44.046385 master-0 kubenswrapper[38936]: I0216 21:26:44.046257 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7dcddfd95-nldpw"] Feb 16 21:26:44.051294 master-0 kubenswrapper[38936]: I0216 21:26:44.051198 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7dcddfd95-nldpw"] Feb 16 21:26:45.888797 master-0 kubenswrapper[38936]: I0216 21:26:45.888500 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="503aa866-c355-434a-a39c-fa6072733ea8" path="/var/lib/kubelet/pods/503aa866-c355-434a-a39c-fa6072733ea8/volumes" Feb 16 21:26:49.516671 master-0 kubenswrapper[38936]: I0216 21:26:49.516550 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-75f89cd5b8-wc2s4"] Feb 16 21:26:49.517607 master-0 kubenswrapper[38936]: E0216 21:26:49.516983 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="503aa866-c355-434a-a39c-fa6072733ea8" containerName="console" Feb 16 21:26:49.517607 master-0 kubenswrapper[38936]: I0216 21:26:49.517004 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="503aa866-c355-434a-a39c-fa6072733ea8" containerName="console" Feb 16 21:26:49.517607 master-0 kubenswrapper[38936]: E0216 21:26:49.517047 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32286c81635de6de1cf7f328273c1a49" containerName="startup-monitor" Feb 16 21:26:49.517607 master-0 kubenswrapper[38936]: I0216 21:26:49.517056 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="32286c81635de6de1cf7f328273c1a49" containerName="startup-monitor" Feb 16 21:26:49.517607 master-0 kubenswrapper[38936]: I0216 21:26:49.517222 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="503aa866-c355-434a-a39c-fa6072733ea8" containerName="console" Feb 16 21:26:49.517607 master-0 kubenswrapper[38936]: I0216 21:26:49.517249 38936 
memory_manager.go:354] "RemoveStaleState removing state" podUID="32286c81635de6de1cf7f328273c1a49" containerName="startup-monitor" Feb 16 21:26:49.517977 master-0 kubenswrapper[38936]: I0216 21:26:49.517936 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:26:49.528761 master-0 kubenswrapper[38936]: I0216 21:26:49.528694 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-75f89cd5b8-wc2s4"] Feb 16 21:26:49.582159 master-0 kubenswrapper[38936]: I0216 21:26:49.582050 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e94f9961-bf52-463f-8143-2ec1caa6cdf1-service-ca\") pod \"console-75f89cd5b8-wc2s4\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:26:49.582159 master-0 kubenswrapper[38936]: I0216 21:26:49.582118 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e94f9961-bf52-463f-8143-2ec1caa6cdf1-console-oauth-config\") pod \"console-75f89cd5b8-wc2s4\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:26:49.582698 master-0 kubenswrapper[38936]: I0216 21:26:49.582350 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e94f9961-bf52-463f-8143-2ec1caa6cdf1-console-config\") pod \"console-75f89cd5b8-wc2s4\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:26:49.582698 master-0 kubenswrapper[38936]: I0216 21:26:49.582463 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/e94f9961-bf52-463f-8143-2ec1caa6cdf1-oauth-serving-cert\") pod \"console-75f89cd5b8-wc2s4\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:26:49.582698 master-0 kubenswrapper[38936]: I0216 21:26:49.582525 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e94f9961-bf52-463f-8143-2ec1caa6cdf1-console-serving-cert\") pod \"console-75f89cd5b8-wc2s4\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:26:49.582698 master-0 kubenswrapper[38936]: I0216 21:26:49.582626 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e94f9961-bf52-463f-8143-2ec1caa6cdf1-trusted-ca-bundle\") pod \"console-75f89cd5b8-wc2s4\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:26:49.582698 master-0 kubenswrapper[38936]: I0216 21:26:49.582697 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v87r\" (UniqueName: \"kubernetes.io/projected/e94f9961-bf52-463f-8143-2ec1caa6cdf1-kube-api-access-2v87r\") pod \"console-75f89cd5b8-wc2s4\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:26:49.684613 master-0 kubenswrapper[38936]: I0216 21:26:49.684518 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e94f9961-bf52-463f-8143-2ec1caa6cdf1-console-serving-cert\") pod \"console-75f89cd5b8-wc2s4\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:26:49.685033 master-0 kubenswrapper[38936]: I0216 21:26:49.684758 
38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e94f9961-bf52-463f-8143-2ec1caa6cdf1-trusted-ca-bundle\") pod \"console-75f89cd5b8-wc2s4\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:26:49.685033 master-0 kubenswrapper[38936]: I0216 21:26:49.684785 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v87r\" (UniqueName: \"kubernetes.io/projected/e94f9961-bf52-463f-8143-2ec1caa6cdf1-kube-api-access-2v87r\") pod \"console-75f89cd5b8-wc2s4\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:26:49.688492 master-0 kubenswrapper[38936]: I0216 21:26:49.688444 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e94f9961-bf52-463f-8143-2ec1caa6cdf1-trusted-ca-bundle\") pod \"console-75f89cd5b8-wc2s4\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:26:49.688759 master-0 kubenswrapper[38936]: I0216 21:26:49.688716 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e94f9961-bf52-463f-8143-2ec1caa6cdf1-service-ca\") pod \"console-75f89cd5b8-wc2s4\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:26:49.688866 master-0 kubenswrapper[38936]: I0216 21:26:49.688795 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e94f9961-bf52-463f-8143-2ec1caa6cdf1-console-oauth-config\") pod \"console-75f89cd5b8-wc2s4\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:26:49.688946 master-0 
kubenswrapper[38936]: I0216 21:26:49.688898 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e94f9961-bf52-463f-8143-2ec1caa6cdf1-console-serving-cert\") pod \"console-75f89cd5b8-wc2s4\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:26:49.689454 master-0 kubenswrapper[38936]: I0216 21:26:49.689403 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e94f9961-bf52-463f-8143-2ec1caa6cdf1-console-config\") pod \"console-75f89cd5b8-wc2s4\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:26:49.689534 master-0 kubenswrapper[38936]: I0216 21:26:49.689488 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e94f9961-bf52-463f-8143-2ec1caa6cdf1-oauth-serving-cert\") pod \"console-75f89cd5b8-wc2s4\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:26:49.689607 master-0 kubenswrapper[38936]: I0216 21:26:49.689581 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e94f9961-bf52-463f-8143-2ec1caa6cdf1-service-ca\") pod \"console-75f89cd5b8-wc2s4\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:26:49.690342 master-0 kubenswrapper[38936]: I0216 21:26:49.690302 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e94f9961-bf52-463f-8143-2ec1caa6cdf1-console-config\") pod \"console-75f89cd5b8-wc2s4\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:26:49.691050 master-0 
kubenswrapper[38936]: I0216 21:26:49.691003 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e94f9961-bf52-463f-8143-2ec1caa6cdf1-oauth-serving-cert\") pod \"console-75f89cd5b8-wc2s4\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:26:49.691811 master-0 kubenswrapper[38936]: I0216 21:26:49.691758 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e94f9961-bf52-463f-8143-2ec1caa6cdf1-console-oauth-config\") pod \"console-75f89cd5b8-wc2s4\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:26:49.701103 master-0 kubenswrapper[38936]: I0216 21:26:49.701022 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v87r\" (UniqueName: \"kubernetes.io/projected/e94f9961-bf52-463f-8143-2ec1caa6cdf1-kube-api-access-2v87r\") pod \"console-75f89cd5b8-wc2s4\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:26:49.847792 master-0 kubenswrapper[38936]: I0216 21:26:49.847619 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:26:50.382163 master-0 kubenswrapper[38936]: I0216 21:26:50.381910 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-75f89cd5b8-wc2s4"] Feb 16 21:26:50.385461 master-0 kubenswrapper[38936]: W0216 21:26:50.385284 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode94f9961_bf52_463f_8143_2ec1caa6cdf1.slice/crio-154eb1a2522caebf89fc1b6cfb7671c9513139e6d120d3bcef24f161da88b5cf WatchSource:0}: Error finding container 154eb1a2522caebf89fc1b6cfb7671c9513139e6d120d3bcef24f161da88b5cf: Status 404 returned error can't find the container with id 154eb1a2522caebf89fc1b6cfb7671c9513139e6d120d3bcef24f161da88b5cf Feb 16 21:26:51.073314 master-0 kubenswrapper[38936]: I0216 21:26:51.073249 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75f89cd5b8-wc2s4" event={"ID":"e94f9961-bf52-463f-8143-2ec1caa6cdf1","Type":"ContainerStarted","Data":"aa78c04a3cb02906e07f4491dd9fb77e4b5367e1ce931548974f46de3862a11b"} Feb 16 21:26:51.073314 master-0 kubenswrapper[38936]: I0216 21:26:51.073304 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75f89cd5b8-wc2s4" event={"ID":"e94f9961-bf52-463f-8143-2ec1caa6cdf1","Type":"ContainerStarted","Data":"154eb1a2522caebf89fc1b6cfb7671c9513139e6d120d3bcef24f161da88b5cf"} Feb 16 21:26:51.100056 master-0 kubenswrapper[38936]: I0216 21:26:51.099956 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-75f89cd5b8-wc2s4" podStartSLOduration=2.099934082 podStartE2EDuration="2.099934082s" podCreationTimestamp="2026-02-16 21:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:26:51.093287998 +0000 UTC m=+241.445291390" 
watchObservedRunningTime="2026-02-16 21:26:51.099934082 +0000 UTC m=+241.451937464" Feb 16 21:26:59.848347 master-0 kubenswrapper[38936]: I0216 21:26:59.848261 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:26:59.848918 master-0 kubenswrapper[38936]: I0216 21:26:59.848365 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:26:59.855462 master-0 kubenswrapper[38936]: I0216 21:26:59.855408 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:27:00.146548 master-0 kubenswrapper[38936]: I0216 21:27:00.146323 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:27:00.235404 master-0 kubenswrapper[38936]: I0216 21:27:00.235321 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5dbf689d64-pgglg"] Feb 16 21:27:20.265611 master-0 kubenswrapper[38936]: I0216 21:27:20.265537 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-q92j7"] Feb 16 21:27:20.268452 master-0 kubenswrapper[38936]: I0216 21:27:20.268405 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-q92j7" Feb 16 21:27:20.273559 master-0 kubenswrapper[38936]: I0216 21:27:20.273469 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-fjf29" Feb 16 21:27:20.273938 master-0 kubenswrapper[38936]: I0216 21:27:20.273899 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 21:27:20.329739 master-0 kubenswrapper[38936]: I0216 21:27:20.328227 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/85e47807-309c-4cdb-a687-bfb4d0d72a4a-serviceca\") pod \"node-ca-q92j7\" (UID: \"85e47807-309c-4cdb-a687-bfb4d0d72a4a\") " pod="openshift-image-registry/node-ca-q92j7" Feb 16 21:27:20.329739 master-0 kubenswrapper[38936]: I0216 21:27:20.328334 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/85e47807-309c-4cdb-a687-bfb4d0d72a4a-host\") pod \"node-ca-q92j7\" (UID: \"85e47807-309c-4cdb-a687-bfb4d0d72a4a\") " pod="openshift-image-registry/node-ca-q92j7" Feb 16 21:27:20.329739 master-0 kubenswrapper[38936]: I0216 21:27:20.328466 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q686d\" (UniqueName: \"kubernetes.io/projected/85e47807-309c-4cdb-a687-bfb4d0d72a4a-kube-api-access-q686d\") pod \"node-ca-q92j7\" (UID: \"85e47807-309c-4cdb-a687-bfb4d0d72a4a\") " pod="openshift-image-registry/node-ca-q92j7" Feb 16 21:27:20.429451 master-0 kubenswrapper[38936]: I0216 21:27:20.429393 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/85e47807-309c-4cdb-a687-bfb4d0d72a4a-serviceca\") pod \"node-ca-q92j7\" (UID: 
\"85e47807-309c-4cdb-a687-bfb4d0d72a4a\") " pod="openshift-image-registry/node-ca-q92j7" Feb 16 21:27:20.429741 master-0 kubenswrapper[38936]: I0216 21:27:20.429726 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/85e47807-309c-4cdb-a687-bfb4d0d72a4a-host\") pod \"node-ca-q92j7\" (UID: \"85e47807-309c-4cdb-a687-bfb4d0d72a4a\") " pod="openshift-image-registry/node-ca-q92j7" Feb 16 21:27:20.429912 master-0 kubenswrapper[38936]: I0216 21:27:20.429857 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/85e47807-309c-4cdb-a687-bfb4d0d72a4a-host\") pod \"node-ca-q92j7\" (UID: \"85e47807-309c-4cdb-a687-bfb4d0d72a4a\") " pod="openshift-image-registry/node-ca-q92j7" Feb 16 21:27:20.429984 master-0 kubenswrapper[38936]: I0216 21:27:20.429897 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q686d\" (UniqueName: \"kubernetes.io/projected/85e47807-309c-4cdb-a687-bfb4d0d72a4a-kube-api-access-q686d\") pod \"node-ca-q92j7\" (UID: \"85e47807-309c-4cdb-a687-bfb4d0d72a4a\") " pod="openshift-image-registry/node-ca-q92j7" Feb 16 21:27:20.430271 master-0 kubenswrapper[38936]: I0216 21:27:20.430225 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/85e47807-309c-4cdb-a687-bfb4d0d72a4a-serviceca\") pod \"node-ca-q92j7\" (UID: \"85e47807-309c-4cdb-a687-bfb4d0d72a4a\") " pod="openshift-image-registry/node-ca-q92j7" Feb 16 21:27:20.446932 master-0 kubenswrapper[38936]: I0216 21:27:20.446891 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q686d\" (UniqueName: \"kubernetes.io/projected/85e47807-309c-4cdb-a687-bfb4d0d72a4a-kube-api-access-q686d\") pod \"node-ca-q92j7\" (UID: \"85e47807-309c-4cdb-a687-bfb4d0d72a4a\") " pod="openshift-image-registry/node-ca-q92j7" Feb 16 
21:27:20.607548 master-0 kubenswrapper[38936]: I0216 21:27:20.607430 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-q92j7" Feb 16 21:27:21.330440 master-0 kubenswrapper[38936]: I0216 21:27:21.330327 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-q92j7" event={"ID":"85e47807-309c-4cdb-a687-bfb4d0d72a4a","Type":"ContainerStarted","Data":"3aa8e980ae9ce2c05bc0c888824d2be29aaf7043ee79f19acd902fbda55a2771"} Feb 16 21:27:23.348610 master-0 kubenswrapper[38936]: I0216 21:27:23.348524 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-q92j7" event={"ID":"85e47807-309c-4cdb-a687-bfb4d0d72a4a","Type":"ContainerStarted","Data":"fafe05d7d5594d3b12e2a6da7a11d2c2a2ee2baab40aadc9eb6b6017d098223b"} Feb 16 21:27:23.368010 master-0 kubenswrapper[38936]: I0216 21:27:23.367893 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-q92j7" podStartSLOduration=1.007440239 podStartE2EDuration="3.367872934s" podCreationTimestamp="2026-02-16 21:27:20 +0000 UTC" firstStartedPulling="2026-02-16 21:27:20.638763035 +0000 UTC m=+270.990766397" lastFinishedPulling="2026-02-16 21:27:22.99919573 +0000 UTC m=+273.351199092" observedRunningTime="2026-02-16 21:27:23.363801924 +0000 UTC m=+273.715805286" watchObservedRunningTime="2026-02-16 21:27:23.367872934 +0000 UTC m=+273.719876296" Feb 16 21:27:25.280304 master-0 kubenswrapper[38936]: I0216 21:27:25.280208 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5dbf689d64-pgglg" podUID="55ec365e-5ef8-4291-9c01-7713bdd6f294" containerName="console" containerID="cri-o://c333efe4a65c92970928725775af66d9a74ddd1c665aea5d73198a7cfae1a56f" gracePeriod=15 Feb 16 21:27:25.758353 master-0 kubenswrapper[38936]: I0216 21:27:25.758309 38936 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-5dbf689d64-pgglg_55ec365e-5ef8-4291-9c01-7713bdd6f294/console/0.log" Feb 16 21:27:25.758661 master-0 kubenswrapper[38936]: I0216 21:27:25.758394 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5dbf689d64-pgglg" Feb 16 21:27:25.823282 master-0 kubenswrapper[38936]: I0216 21:27:25.823228 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/55ec365e-5ef8-4291-9c01-7713bdd6f294-service-ca\") pod \"55ec365e-5ef8-4291-9c01-7713bdd6f294\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " Feb 16 21:27:25.823607 master-0 kubenswrapper[38936]: I0216 21:27:25.823591 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5llvz\" (UniqueName: \"kubernetes.io/projected/55ec365e-5ef8-4291-9c01-7713bdd6f294-kube-api-access-5llvz\") pod \"55ec365e-5ef8-4291-9c01-7713bdd6f294\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " Feb 16 21:27:25.823748 master-0 kubenswrapper[38936]: I0216 21:27:25.823734 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/55ec365e-5ef8-4291-9c01-7713bdd6f294-console-oauth-config\") pod \"55ec365e-5ef8-4291-9c01-7713bdd6f294\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " Feb 16 21:27:25.824368 master-0 kubenswrapper[38936]: I0216 21:27:25.824064 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ec365e-5ef8-4291-9c01-7713bdd6f294-service-ca" (OuterVolumeSpecName: "service-ca") pod "55ec365e-5ef8-4291-9c01-7713bdd6f294" (UID: "55ec365e-5ef8-4291-9c01-7713bdd6f294"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:27:25.824430 master-0 kubenswrapper[38936]: I0216 21:27:25.824321 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55ec365e-5ef8-4291-9c01-7713bdd6f294-trusted-ca-bundle\") pod \"55ec365e-5ef8-4291-9c01-7713bdd6f294\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " Feb 16 21:27:25.824570 master-0 kubenswrapper[38936]: I0216 21:27:25.824536 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/55ec365e-5ef8-4291-9c01-7713bdd6f294-oauth-serving-cert\") pod \"55ec365e-5ef8-4291-9c01-7713bdd6f294\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " Feb 16 21:27:25.824938 master-0 kubenswrapper[38936]: I0216 21:27:25.824883 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/55ec365e-5ef8-4291-9c01-7713bdd6f294-console-serving-cert\") pod \"55ec365e-5ef8-4291-9c01-7713bdd6f294\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " Feb 16 21:27:25.825007 master-0 kubenswrapper[38936]: I0216 21:27:25.824960 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/55ec365e-5ef8-4291-9c01-7713bdd6f294-console-config\") pod \"55ec365e-5ef8-4291-9c01-7713bdd6f294\" (UID: \"55ec365e-5ef8-4291-9c01-7713bdd6f294\") " Feb 16 21:27:25.825182 master-0 kubenswrapper[38936]: I0216 21:27:25.825146 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ec365e-5ef8-4291-9c01-7713bdd6f294-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "55ec365e-5ef8-4291-9c01-7713bdd6f294" (UID: "55ec365e-5ef8-4291-9c01-7713bdd6f294"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:27:25.825471 master-0 kubenswrapper[38936]: I0216 21:27:25.825452 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ec365e-5ef8-4291-9c01-7713bdd6f294-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "55ec365e-5ef8-4291-9c01-7713bdd6f294" (UID: "55ec365e-5ef8-4291-9c01-7713bdd6f294"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:27:25.825578 master-0 kubenswrapper[38936]: I0216 21:27:25.825534 38936 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/55ec365e-5ef8-4291-9c01-7713bdd6f294-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 21:27:25.825686 master-0 kubenswrapper[38936]: I0216 21:27:25.825673 38936 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/55ec365e-5ef8-4291-9c01-7713bdd6f294-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 21:27:25.825772 master-0 kubenswrapper[38936]: I0216 21:27:25.825564 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ec365e-5ef8-4291-9c01-7713bdd6f294-console-config" (OuterVolumeSpecName: "console-config") pod "55ec365e-5ef8-4291-9c01-7713bdd6f294" (UID: "55ec365e-5ef8-4291-9c01-7713bdd6f294"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:27:25.827178 master-0 kubenswrapper[38936]: I0216 21:27:25.827147 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55ec365e-5ef8-4291-9c01-7713bdd6f294-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "55ec365e-5ef8-4291-9c01-7713bdd6f294" (UID: "55ec365e-5ef8-4291-9c01-7713bdd6f294"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:27:25.827355 master-0 kubenswrapper[38936]: I0216 21:27:25.827281 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55ec365e-5ef8-4291-9c01-7713bdd6f294-kube-api-access-5llvz" (OuterVolumeSpecName: "kube-api-access-5llvz") pod "55ec365e-5ef8-4291-9c01-7713bdd6f294" (UID: "55ec365e-5ef8-4291-9c01-7713bdd6f294"). InnerVolumeSpecName "kube-api-access-5llvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:27:25.827805 master-0 kubenswrapper[38936]: I0216 21:27:25.827766 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55ec365e-5ef8-4291-9c01-7713bdd6f294-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "55ec365e-5ef8-4291-9c01-7713bdd6f294" (UID: "55ec365e-5ef8-4291-9c01-7713bdd6f294"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:27:25.928587 master-0 kubenswrapper[38936]: I0216 21:27:25.928495 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5llvz\" (UniqueName: \"kubernetes.io/projected/55ec365e-5ef8-4291-9c01-7713bdd6f294-kube-api-access-5llvz\") on node \"master-0\" DevicePath \"\"" Feb 16 21:27:25.928587 master-0 kubenswrapper[38936]: I0216 21:27:25.928567 38936 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/55ec365e-5ef8-4291-9c01-7713bdd6f294-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:27:25.928587 master-0 kubenswrapper[38936]: I0216 21:27:25.928587 38936 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55ec365e-5ef8-4291-9c01-7713bdd6f294-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:27:25.928587 master-0 kubenswrapper[38936]: I0216 21:27:25.928607 38936 reconciler_common.go:293] 
"Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/55ec365e-5ef8-4291-9c01-7713bdd6f294-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 21:27:25.929135 master-0 kubenswrapper[38936]: I0216 21:27:25.928629 38936 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/55ec365e-5ef8-4291-9c01-7713bdd6f294-console-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:27:26.374513 master-0 kubenswrapper[38936]: I0216 21:27:26.374452 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5dbf689d64-pgglg_55ec365e-5ef8-4291-9c01-7713bdd6f294/console/0.log" Feb 16 21:27:26.376087 master-0 kubenswrapper[38936]: I0216 21:27:26.374525 38936 generic.go:334] "Generic (PLEG): container finished" podID="55ec365e-5ef8-4291-9c01-7713bdd6f294" containerID="c333efe4a65c92970928725775af66d9a74ddd1c665aea5d73198a7cfae1a56f" exitCode=2 Feb 16 21:27:26.376087 master-0 kubenswrapper[38936]: I0216 21:27:26.374565 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5dbf689d64-pgglg" event={"ID":"55ec365e-5ef8-4291-9c01-7713bdd6f294","Type":"ContainerDied","Data":"c333efe4a65c92970928725775af66d9a74ddd1c665aea5d73198a7cfae1a56f"} Feb 16 21:27:26.376087 master-0 kubenswrapper[38936]: I0216 21:27:26.374597 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5dbf689d64-pgglg" event={"ID":"55ec365e-5ef8-4291-9c01-7713bdd6f294","Type":"ContainerDied","Data":"23ea2654dab91558abab5c98e19baa003e5825635fff827296e08581fedb6094"} Feb 16 21:27:26.376087 master-0 kubenswrapper[38936]: I0216 21:27:26.374607 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5dbf689d64-pgglg" Feb 16 21:27:26.376087 master-0 kubenswrapper[38936]: I0216 21:27:26.374638 38936 scope.go:117] "RemoveContainer" containerID="c333efe4a65c92970928725775af66d9a74ddd1c665aea5d73198a7cfae1a56f" Feb 16 21:27:26.395630 master-0 kubenswrapper[38936]: I0216 21:27:26.395572 38936 scope.go:117] "RemoveContainer" containerID="c333efe4a65c92970928725775af66d9a74ddd1c665aea5d73198a7cfae1a56f" Feb 16 21:27:26.396244 master-0 kubenswrapper[38936]: E0216 21:27:26.396199 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c333efe4a65c92970928725775af66d9a74ddd1c665aea5d73198a7cfae1a56f\": container with ID starting with c333efe4a65c92970928725775af66d9a74ddd1c665aea5d73198a7cfae1a56f not found: ID does not exist" containerID="c333efe4a65c92970928725775af66d9a74ddd1c665aea5d73198a7cfae1a56f" Feb 16 21:27:26.396377 master-0 kubenswrapper[38936]: I0216 21:27:26.396255 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c333efe4a65c92970928725775af66d9a74ddd1c665aea5d73198a7cfae1a56f"} err="failed to get container status \"c333efe4a65c92970928725775af66d9a74ddd1c665aea5d73198a7cfae1a56f\": rpc error: code = NotFound desc = could not find container \"c333efe4a65c92970928725775af66d9a74ddd1c665aea5d73198a7cfae1a56f\": container with ID starting with c333efe4a65c92970928725775af66d9a74ddd1c665aea5d73198a7cfae1a56f not found: ID does not exist" Feb 16 21:27:26.401963 master-0 kubenswrapper[38936]: I0216 21:27:26.401889 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5dbf689d64-pgglg"] Feb 16 21:27:26.406964 master-0 kubenswrapper[38936]: I0216 21:27:26.406908 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5dbf689d64-pgglg"] Feb 16 21:27:27.884351 master-0 kubenswrapper[38936]: I0216 21:27:27.883956 38936 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55ec365e-5ef8-4291-9c01-7713bdd6f294" path="/var/lib/kubelet/pods/55ec365e-5ef8-4291-9c01-7713bdd6f294/volumes" Feb 16 21:27:28.187750 master-0 kubenswrapper[38936]: I0216 21:27:28.178235 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-67b7649c44-qv4gx"] Feb 16 21:27:28.187750 master-0 kubenswrapper[38936]: E0216 21:27:28.178877 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55ec365e-5ef8-4291-9c01-7713bdd6f294" containerName="console" Feb 16 21:27:28.187750 master-0 kubenswrapper[38936]: I0216 21:27:28.178891 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="55ec365e-5ef8-4291-9c01-7713bdd6f294" containerName="console" Feb 16 21:27:28.187750 master-0 kubenswrapper[38936]: I0216 21:27:28.179070 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="55ec365e-5ef8-4291-9c01-7713bdd6f294" containerName="console" Feb 16 21:27:28.187750 master-0 kubenswrapper[38936]: I0216 21:27:28.180156 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:28.214438 master-0 kubenswrapper[38936]: I0216 21:27:28.212277 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67b7649c44-qv4gx"] Feb 16 21:27:28.268233 master-0 kubenswrapper[38936]: I0216 21:27:28.268171 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4a5c39e0-b7fc-49c3-b662-451027f68ab8-console-oauth-config\") pod \"console-67b7649c44-qv4gx\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:28.268444 master-0 kubenswrapper[38936]: I0216 21:27:28.268250 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a5c39e0-b7fc-49c3-b662-451027f68ab8-trusted-ca-bundle\") pod \"console-67b7649c44-qv4gx\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:28.268482 master-0 kubenswrapper[38936]: I0216 21:27:28.268437 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4a5c39e0-b7fc-49c3-b662-451027f68ab8-service-ca\") pod \"console-67b7649c44-qv4gx\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:28.268682 master-0 kubenswrapper[38936]: I0216 21:27:28.268601 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a5c39e0-b7fc-49c3-b662-451027f68ab8-console-serving-cert\") pod \"console-67b7649c44-qv4gx\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:28.268814 master-0 
kubenswrapper[38936]: I0216 21:27:28.268777 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4a5c39e0-b7fc-49c3-b662-451027f68ab8-oauth-serving-cert\") pod \"console-67b7649c44-qv4gx\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:28.268853 master-0 kubenswrapper[38936]: I0216 21:27:28.268825 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc5kv\" (UniqueName: \"kubernetes.io/projected/4a5c39e0-b7fc-49c3-b662-451027f68ab8-kube-api-access-gc5kv\") pod \"console-67b7649c44-qv4gx\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:28.268888 master-0 kubenswrapper[38936]: I0216 21:27:28.268873 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4a5c39e0-b7fc-49c3-b662-451027f68ab8-console-config\") pod \"console-67b7649c44-qv4gx\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:28.370543 master-0 kubenswrapper[38936]: I0216 21:27:28.370460 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4a5c39e0-b7fc-49c3-b662-451027f68ab8-service-ca\") pod \"console-67b7649c44-qv4gx\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:28.370543 master-0 kubenswrapper[38936]: I0216 21:27:28.370525 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a5c39e0-b7fc-49c3-b662-451027f68ab8-console-serving-cert\") pod \"console-67b7649c44-qv4gx\" (UID: 
\"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:28.370912 master-0 kubenswrapper[38936]: I0216 21:27:28.370771 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4a5c39e0-b7fc-49c3-b662-451027f68ab8-oauth-serving-cert\") pod \"console-67b7649c44-qv4gx\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:28.370962 master-0 kubenswrapper[38936]: I0216 21:27:28.370913 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc5kv\" (UniqueName: \"kubernetes.io/projected/4a5c39e0-b7fc-49c3-b662-451027f68ab8-kube-api-access-gc5kv\") pod \"console-67b7649c44-qv4gx\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:28.371008 master-0 kubenswrapper[38936]: I0216 21:27:28.370976 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4a5c39e0-b7fc-49c3-b662-451027f68ab8-console-config\") pod \"console-67b7649c44-qv4gx\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:28.371167 master-0 kubenswrapper[38936]: I0216 21:27:28.371133 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4a5c39e0-b7fc-49c3-b662-451027f68ab8-console-oauth-config\") pod \"console-67b7649c44-qv4gx\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:28.371307 master-0 kubenswrapper[38936]: I0216 21:27:28.371275 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/4a5c39e0-b7fc-49c3-b662-451027f68ab8-trusted-ca-bundle\") pod \"console-67b7649c44-qv4gx\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:28.371816 master-0 kubenswrapper[38936]: I0216 21:27:28.371786 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4a5c39e0-b7fc-49c3-b662-451027f68ab8-oauth-serving-cert\") pod \"console-67b7649c44-qv4gx\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:28.371925 master-0 kubenswrapper[38936]: I0216 21:27:28.371858 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4a5c39e0-b7fc-49c3-b662-451027f68ab8-service-ca\") pod \"console-67b7649c44-qv4gx\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:28.371984 master-0 kubenswrapper[38936]: I0216 21:27:28.371939 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4a5c39e0-b7fc-49c3-b662-451027f68ab8-console-config\") pod \"console-67b7649c44-qv4gx\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:28.373830 master-0 kubenswrapper[38936]: I0216 21:27:28.373694 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a5c39e0-b7fc-49c3-b662-451027f68ab8-trusted-ca-bundle\") pod \"console-67b7649c44-qv4gx\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:28.375898 master-0 kubenswrapper[38936]: I0216 21:27:28.375838 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/4a5c39e0-b7fc-49c3-b662-451027f68ab8-console-oauth-config\") pod \"console-67b7649c44-qv4gx\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:28.380525 master-0 kubenswrapper[38936]: I0216 21:27:28.380465 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a5c39e0-b7fc-49c3-b662-451027f68ab8-console-serving-cert\") pod \"console-67b7649c44-qv4gx\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:28.387686 master-0 kubenswrapper[38936]: I0216 21:27:28.387612 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc5kv\" (UniqueName: \"kubernetes.io/projected/4a5c39e0-b7fc-49c3-b662-451027f68ab8-kube-api-access-gc5kv\") pod \"console-67b7649c44-qv4gx\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:28.514031 master-0 kubenswrapper[38936]: I0216 21:27:28.513959 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:28.769980 master-0 kubenswrapper[38936]: W0216 21:27:28.768317 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a5c39e0_b7fc_49c3_b662_451027f68ab8.slice/crio-e9a3370419775c754ea2ac9716b520ed89a4438e3ca569cb6ef90e5e185628c5 WatchSource:0}: Error finding container e9a3370419775c754ea2ac9716b520ed89a4438e3ca569cb6ef90e5e185628c5: Status 404 returned error can't find the container with id e9a3370419775c754ea2ac9716b520ed89a4438e3ca569cb6ef90e5e185628c5 Feb 16 21:27:28.772322 master-0 kubenswrapper[38936]: I0216 21:27:28.772224 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67b7649c44-qv4gx"] Feb 16 21:27:29.399009 master-0 kubenswrapper[38936]: I0216 21:27:29.398956 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67b7649c44-qv4gx" event={"ID":"4a5c39e0-b7fc-49c3-b662-451027f68ab8","Type":"ContainerStarted","Data":"cdbc65c1c9d28a230556f90a8929e6e290f5eba7a4cf63fbc814e7b3fc06b31a"} Feb 16 21:27:29.399009 master-0 kubenswrapper[38936]: I0216 21:27:29.399014 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67b7649c44-qv4gx" event={"ID":"4a5c39e0-b7fc-49c3-b662-451027f68ab8","Type":"ContainerStarted","Data":"e9a3370419775c754ea2ac9716b520ed89a4438e3ca569cb6ef90e5e185628c5"} Feb 16 21:27:29.428786 master-0 kubenswrapper[38936]: I0216 21:27:29.428667 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-67b7649c44-qv4gx" podStartSLOduration=1.428587977 podStartE2EDuration="1.428587977s" podCreationTimestamp="2026-02-16 21:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:27:29.419245305 +0000 UTC m=+279.771248667" 
watchObservedRunningTime="2026-02-16 21:27:29.428587977 +0000 UTC m=+279.780591369" Feb 16 21:27:38.514816 master-0 kubenswrapper[38936]: I0216 21:27:38.514758 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:38.515524 master-0 kubenswrapper[38936]: I0216 21:27:38.515505 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:38.524131 master-0 kubenswrapper[38936]: I0216 21:27:38.524078 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:39.470673 master-0 kubenswrapper[38936]: I0216 21:27:39.470572 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:27:39.549480 master-0 kubenswrapper[38936]: I0216 21:27:39.549418 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-75f89cd5b8-wc2s4"] Feb 16 21:27:41.255357 master-0 kubenswrapper[38936]: I0216 21:27:41.255300 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:27:41.285405 master-0 kubenswrapper[38936]: I0216 21:27:41.285348 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/4a9f4f96-ca31-4959-93fe-c094caf8e077-audit-log\") pod \"4a9f4f96-ca31-4959-93fe-c094caf8e077\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " Feb 16 21:27:41.285728 master-0 kubenswrapper[38936]: I0216 21:27:41.285707 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-client-ca-bundle\") pod \"4a9f4f96-ca31-4959-93fe-c094caf8e077\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " Feb 16 21:27:41.285879 master-0 kubenswrapper[38936]: I0216 21:27:41.285829 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a9f4f96-ca31-4959-93fe-c094caf8e077-audit-log" (OuterVolumeSpecName: "audit-log") pod "4a9f4f96-ca31-4959-93fe-c094caf8e077" (UID: "4a9f4f96-ca31-4959-93fe-c094caf8e077"). InnerVolumeSpecName "audit-log". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:27:41.286023 master-0 kubenswrapper[38936]: I0216 21:27:41.286003 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-client-certs\") pod \"4a9f4f96-ca31-4959-93fe-c094caf8e077\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " Feb 16 21:27:41.286261 master-0 kubenswrapper[38936]: I0216 21:27:41.286240 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-metrics-server-audit-profiles\") pod \"4a9f4f96-ca31-4959-93fe-c094caf8e077\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " Feb 16 21:27:41.286435 master-0 kubenswrapper[38936]: I0216 21:27:41.286415 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrc4z\" (UniqueName: \"kubernetes.io/projected/4a9f4f96-ca31-4959-93fe-c094caf8e077-kube-api-access-xrc4z\") pod \"4a9f4f96-ca31-4959-93fe-c094caf8e077\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " Feb 16 21:27:41.286619 master-0 kubenswrapper[38936]: I0216 21:27:41.286601 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-server-tls\") pod \"4a9f4f96-ca31-4959-93fe-c094caf8e077\" (UID: \"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " Feb 16 21:27:41.286818 master-0 kubenswrapper[38936]: I0216 21:27:41.286799 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-configmap-kubelet-serving-ca-bundle\") pod \"4a9f4f96-ca31-4959-93fe-c094caf8e077\" (UID: 
\"4a9f4f96-ca31-4959-93fe-c094caf8e077\") " Feb 16 21:27:41.286958 master-0 kubenswrapper[38936]: I0216 21:27:41.286901 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "4a9f4f96-ca31-4959-93fe-c094caf8e077" (UID: "4a9f4f96-ca31-4959-93fe-c094caf8e077"). InnerVolumeSpecName "metrics-server-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:27:41.287406 master-0 kubenswrapper[38936]: I0216 21:27:41.287380 38936 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/4a9f4f96-ca31-4959-93fe-c094caf8e077-audit-log\") on node \"master-0\" DevicePath \"\"" Feb 16 21:27:41.287543 master-0 kubenswrapper[38936]: I0216 21:27:41.287525 38936 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-metrics-server-audit-profiles\") on node \"master-0\" DevicePath \"\"" Feb 16 21:27:41.287670 master-0 kubenswrapper[38936]: I0216 21:27:41.287366 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "4a9f4f96-ca31-4959-93fe-c094caf8e077" (UID: "4a9f4f96-ca31-4959-93fe-c094caf8e077"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:27:41.328356 master-0 kubenswrapper[38936]: I0216 21:27:41.328289 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "4a9f4f96-ca31-4959-93fe-c094caf8e077" (UID: "4a9f4f96-ca31-4959-93fe-c094caf8e077"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:27:41.329070 master-0 kubenswrapper[38936]: I0216 21:27:41.328883 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "4a9f4f96-ca31-4959-93fe-c094caf8e077" (UID: "4a9f4f96-ca31-4959-93fe-c094caf8e077"). InnerVolumeSpecName "client-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:27:41.329070 master-0 kubenswrapper[38936]: I0216 21:27:41.328970 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "4a9f4f96-ca31-4959-93fe-c094caf8e077" (UID: "4a9f4f96-ca31-4959-93fe-c094caf8e077"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:27:41.329316 master-0 kubenswrapper[38936]: I0216 21:27:41.329255 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a9f4f96-ca31-4959-93fe-c094caf8e077-kube-api-access-xrc4z" (OuterVolumeSpecName: "kube-api-access-xrc4z") pod "4a9f4f96-ca31-4959-93fe-c094caf8e077" (UID: "4a9f4f96-ca31-4959-93fe-c094caf8e077"). InnerVolumeSpecName "kube-api-access-xrc4z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:27:41.388599 master-0 kubenswrapper[38936]: I0216 21:27:41.388536 38936 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-client-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:27:41.388599 master-0 kubenswrapper[38936]: I0216 21:27:41.388579 38936 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:27:41.388599 master-0 kubenswrapper[38936]: I0216 21:27:41.388591 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrc4z\" (UniqueName: \"kubernetes.io/projected/4a9f4f96-ca31-4959-93fe-c094caf8e077-kube-api-access-xrc4z\") on node \"master-0\" DevicePath \"\"" Feb 16 21:27:41.388599 master-0 kubenswrapper[38936]: I0216 21:27:41.388602 38936 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/4a9f4f96-ca31-4959-93fe-c094caf8e077-secret-metrics-server-tls\") on node \"master-0\" DevicePath \"\"" Feb 16 21:27:41.388599 master-0 kubenswrapper[38936]: I0216 21:27:41.388612 38936 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a9f4f96-ca31-4959-93fe-c094caf8e077-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:27:41.483775 master-0 kubenswrapper[38936]: I0216 21:27:41.483628 38936 generic.go:334] "Generic (PLEG): container finished" podID="4a9f4f96-ca31-4959-93fe-c094caf8e077" containerID="717811e555354f498448a1f9bf3201dfc3fcf0b7778c716a1769b62e1e6022c7" exitCode=0 Feb 16 21:27:41.483775 master-0 kubenswrapper[38936]: I0216 21:27:41.483703 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" event={"ID":"4a9f4f96-ca31-4959-93fe-c094caf8e077","Type":"ContainerDied","Data":"717811e555354f498448a1f9bf3201dfc3fcf0b7778c716a1769b62e1e6022c7"} Feb 16 21:27:41.484221 master-0 kubenswrapper[38936]: I0216 21:27:41.483737 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" Feb 16 21:27:41.484221 master-0 kubenswrapper[38936]: I0216 21:27:41.483773 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-76c9c896c-pz2bk" event={"ID":"4a9f4f96-ca31-4959-93fe-c094caf8e077","Type":"ContainerDied","Data":"b4ab6f7d6521695677ac09385923bea0cfde2c320361c5f6cbe98ce64b7475b2"} Feb 16 21:27:41.484221 master-0 kubenswrapper[38936]: I0216 21:27:41.483791 38936 scope.go:117] "RemoveContainer" containerID="717811e555354f498448a1f9bf3201dfc3fcf0b7778c716a1769b62e1e6022c7" Feb 16 21:27:41.502971 master-0 kubenswrapper[38936]: I0216 21:27:41.502875 38936 scope.go:117] "RemoveContainer" containerID="717811e555354f498448a1f9bf3201dfc3fcf0b7778c716a1769b62e1e6022c7" Feb 16 21:27:41.504354 master-0 kubenswrapper[38936]: E0216 21:27:41.504272 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"717811e555354f498448a1f9bf3201dfc3fcf0b7778c716a1769b62e1e6022c7\": container with ID starting with 717811e555354f498448a1f9bf3201dfc3fcf0b7778c716a1769b62e1e6022c7 not found: ID does not exist" containerID="717811e555354f498448a1f9bf3201dfc3fcf0b7778c716a1769b62e1e6022c7" Feb 16 21:27:41.504533 master-0 kubenswrapper[38936]: I0216 21:27:41.504348 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"717811e555354f498448a1f9bf3201dfc3fcf0b7778c716a1769b62e1e6022c7"} err="failed to get container status \"717811e555354f498448a1f9bf3201dfc3fcf0b7778c716a1769b62e1e6022c7\": rpc error: code = 
NotFound desc = could not find container \"717811e555354f498448a1f9bf3201dfc3fcf0b7778c716a1769b62e1e6022c7\": container with ID starting with 717811e555354f498448a1f9bf3201dfc3fcf0b7778c716a1769b62e1e6022c7 not found: ID does not exist" Feb 16 21:27:41.535888 master-0 kubenswrapper[38936]: I0216 21:27:41.535813 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-76c9c896c-pz2bk"] Feb 16 21:27:41.541305 master-0 kubenswrapper[38936]: I0216 21:27:41.541248 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-76c9c896c-pz2bk"] Feb 16 21:27:41.886346 master-0 kubenswrapper[38936]: I0216 21:27:41.886284 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a9f4f96-ca31-4959-93fe-c094caf8e077" path="/var/lib/kubelet/pods/4a9f4f96-ca31-4959-93fe-c094caf8e077/volumes" Feb 16 21:27:49.837850 master-0 kubenswrapper[38936]: I0216 21:27:49.837764 38936 kubelet.go:1505] "Image garbage collection succeeded" Feb 16 21:28:04.623387 master-0 kubenswrapper[38936]: I0216 21:28:04.623319 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-75f89cd5b8-wc2s4" podUID="e94f9961-bf52-463f-8143-2ec1caa6cdf1" containerName="console" containerID="cri-o://aa78c04a3cb02906e07f4491dd9fb77e4b5367e1ce931548974f46de3862a11b" gracePeriod=15 Feb 16 21:28:05.191362 master-0 kubenswrapper[38936]: I0216 21:28:05.191301 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-75f89cd5b8-wc2s4_e94f9961-bf52-463f-8143-2ec1caa6cdf1/console/0.log" Feb 16 21:28:05.191590 master-0 kubenswrapper[38936]: I0216 21:28:05.191395 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:28:05.354219 master-0 kubenswrapper[38936]: I0216 21:28:05.354089 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v87r\" (UniqueName: \"kubernetes.io/projected/e94f9961-bf52-463f-8143-2ec1caa6cdf1-kube-api-access-2v87r\") pod \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " Feb 16 21:28:05.354464 master-0 kubenswrapper[38936]: I0216 21:28:05.354258 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e94f9961-bf52-463f-8143-2ec1caa6cdf1-console-serving-cert\") pod \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " Feb 16 21:28:05.354464 master-0 kubenswrapper[38936]: I0216 21:28:05.354293 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e94f9961-bf52-463f-8143-2ec1caa6cdf1-trusted-ca-bundle\") pod \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " Feb 16 21:28:05.354464 master-0 kubenswrapper[38936]: I0216 21:28:05.354430 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e94f9961-bf52-463f-8143-2ec1caa6cdf1-oauth-serving-cert\") pod \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " Feb 16 21:28:05.354591 master-0 kubenswrapper[38936]: I0216 21:28:05.354558 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e94f9961-bf52-463f-8143-2ec1caa6cdf1-console-oauth-config\") pod \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " Feb 16 21:28:05.354591 master-0 
kubenswrapper[38936]: I0216 21:28:05.354581 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e94f9961-bf52-463f-8143-2ec1caa6cdf1-service-ca\") pod \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " Feb 16 21:28:05.355040 master-0 kubenswrapper[38936]: I0216 21:28:05.355011 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e94f9961-bf52-463f-8143-2ec1caa6cdf1-console-config\") pod \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\" (UID: \"e94f9961-bf52-463f-8143-2ec1caa6cdf1\") " Feb 16 21:28:05.355411 master-0 kubenswrapper[38936]: I0216 21:28:05.355352 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e94f9961-bf52-463f-8143-2ec1caa6cdf1-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e94f9961-bf52-463f-8143-2ec1caa6cdf1" (UID: "e94f9961-bf52-463f-8143-2ec1caa6cdf1"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:28:05.355495 master-0 kubenswrapper[38936]: I0216 21:28:05.355442 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e94f9961-bf52-463f-8143-2ec1caa6cdf1-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e94f9961-bf52-463f-8143-2ec1caa6cdf1" (UID: "e94f9961-bf52-463f-8143-2ec1caa6cdf1"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:28:05.355495 master-0 kubenswrapper[38936]: I0216 21:28:05.355484 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e94f9961-bf52-463f-8143-2ec1caa6cdf1-service-ca" (OuterVolumeSpecName: "service-ca") pod "e94f9961-bf52-463f-8143-2ec1caa6cdf1" (UID: "e94f9961-bf52-463f-8143-2ec1caa6cdf1"). 
InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:28:05.355601 master-0 kubenswrapper[38936]: I0216 21:28:05.355456 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e94f9961-bf52-463f-8143-2ec1caa6cdf1-console-config" (OuterVolumeSpecName: "console-config") pod "e94f9961-bf52-463f-8143-2ec1caa6cdf1" (UID: "e94f9961-bf52-463f-8143-2ec1caa6cdf1"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:28:05.358578 master-0 kubenswrapper[38936]: I0216 21:28:05.357865 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e94f9961-bf52-463f-8143-2ec1caa6cdf1-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e94f9961-bf52-463f-8143-2ec1caa6cdf1" (UID: "e94f9961-bf52-463f-8143-2ec1caa6cdf1"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:28:05.358578 master-0 kubenswrapper[38936]: I0216 21:28:05.358247 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e94f9961-bf52-463f-8143-2ec1caa6cdf1-kube-api-access-2v87r" (OuterVolumeSpecName: "kube-api-access-2v87r") pod "e94f9961-bf52-463f-8143-2ec1caa6cdf1" (UID: "e94f9961-bf52-463f-8143-2ec1caa6cdf1"). InnerVolumeSpecName "kube-api-access-2v87r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:28:05.358578 master-0 kubenswrapper[38936]: I0216 21:28:05.358242 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e94f9961-bf52-463f-8143-2ec1caa6cdf1-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e94f9961-bf52-463f-8143-2ec1caa6cdf1" (UID: "e94f9961-bf52-463f-8143-2ec1caa6cdf1"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:28:05.457527 master-0 kubenswrapper[38936]: I0216 21:28:05.457394 38936 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e94f9961-bf52-463f-8143-2ec1caa6cdf1-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 21:28:05.457527 master-0 kubenswrapper[38936]: I0216 21:28:05.457462 38936 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e94f9961-bf52-463f-8143-2ec1caa6cdf1-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:28:05.457527 master-0 kubenswrapper[38936]: I0216 21:28:05.457484 38936 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e94f9961-bf52-463f-8143-2ec1caa6cdf1-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 21:28:05.457527 master-0 kubenswrapper[38936]: I0216 21:28:05.457502 38936 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e94f9961-bf52-463f-8143-2ec1caa6cdf1-console-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:28:05.457527 master-0 kubenswrapper[38936]: I0216 21:28:05.457520 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2v87r\" (UniqueName: \"kubernetes.io/projected/e94f9961-bf52-463f-8143-2ec1caa6cdf1-kube-api-access-2v87r\") on node \"master-0\" DevicePath \"\"" Feb 16 21:28:05.457829 master-0 kubenswrapper[38936]: I0216 21:28:05.457540 38936 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e94f9961-bf52-463f-8143-2ec1caa6cdf1-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 21:28:05.457829 master-0 kubenswrapper[38936]: I0216 21:28:05.457558 38936 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e94f9961-bf52-463f-8143-2ec1caa6cdf1-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:28:05.682290 master-0 kubenswrapper[38936]: I0216 21:28:05.682238 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-75f89cd5b8-wc2s4_e94f9961-bf52-463f-8143-2ec1caa6cdf1/console/0.log" Feb 16 21:28:05.682875 master-0 kubenswrapper[38936]: I0216 21:28:05.682317 38936 generic.go:334] "Generic (PLEG): container finished" podID="e94f9961-bf52-463f-8143-2ec1caa6cdf1" containerID="aa78c04a3cb02906e07f4491dd9fb77e4b5367e1ce931548974f46de3862a11b" exitCode=2 Feb 16 21:28:05.682875 master-0 kubenswrapper[38936]: I0216 21:28:05.682355 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75f89cd5b8-wc2s4" event={"ID":"e94f9961-bf52-463f-8143-2ec1caa6cdf1","Type":"ContainerDied","Data":"aa78c04a3cb02906e07f4491dd9fb77e4b5367e1ce931548974f46de3862a11b"} Feb 16 21:28:05.682875 master-0 kubenswrapper[38936]: I0216 21:28:05.682389 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75f89cd5b8-wc2s4" event={"ID":"e94f9961-bf52-463f-8143-2ec1caa6cdf1","Type":"ContainerDied","Data":"154eb1a2522caebf89fc1b6cfb7671c9513139e6d120d3bcef24f161da88b5cf"} Feb 16 21:28:05.682875 master-0 kubenswrapper[38936]: I0216 21:28:05.682413 38936 scope.go:117] "RemoveContainer" containerID="aa78c04a3cb02906e07f4491dd9fb77e4b5367e1ce931548974f46de3862a11b" Feb 16 21:28:05.682875 master-0 kubenswrapper[38936]: I0216 21:28:05.682435 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-75f89cd5b8-wc2s4" Feb 16 21:28:05.707702 master-0 kubenswrapper[38936]: I0216 21:28:05.707604 38936 scope.go:117] "RemoveContainer" containerID="aa78c04a3cb02906e07f4491dd9fb77e4b5367e1ce931548974f46de3862a11b" Feb 16 21:28:05.708060 master-0 kubenswrapper[38936]: E0216 21:28:05.708012 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa78c04a3cb02906e07f4491dd9fb77e4b5367e1ce931548974f46de3862a11b\": container with ID starting with aa78c04a3cb02906e07f4491dd9fb77e4b5367e1ce931548974f46de3862a11b not found: ID does not exist" containerID="aa78c04a3cb02906e07f4491dd9fb77e4b5367e1ce931548974f46de3862a11b" Feb 16 21:28:05.708135 master-0 kubenswrapper[38936]: I0216 21:28:05.708065 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa78c04a3cb02906e07f4491dd9fb77e4b5367e1ce931548974f46de3862a11b"} err="failed to get container status \"aa78c04a3cb02906e07f4491dd9fb77e4b5367e1ce931548974f46de3862a11b\": rpc error: code = NotFound desc = could not find container \"aa78c04a3cb02906e07f4491dd9fb77e4b5367e1ce931548974f46de3862a11b\": container with ID starting with aa78c04a3cb02906e07f4491dd9fb77e4b5367e1ce931548974f46de3862a11b not found: ID does not exist" Feb 16 21:28:05.751161 master-0 kubenswrapper[38936]: I0216 21:28:05.751111 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-75f89cd5b8-wc2s4"] Feb 16 21:28:05.771752 master-0 kubenswrapper[38936]: I0216 21:28:05.768600 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-75f89cd5b8-wc2s4"] Feb 16 21:28:05.887941 master-0 kubenswrapper[38936]: I0216 21:28:05.887862 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e94f9961-bf52-463f-8143-2ec1caa6cdf1" path="/var/lib/kubelet/pods/e94f9961-bf52-463f-8143-2ec1caa6cdf1/volumes" Feb 16 
21:28:42.972705 master-0 kubenswrapper[38936]: I0216 21:28:42.972590 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Feb 16 21:28:42.973534 master-0 kubenswrapper[38936]: E0216 21:28:42.973268 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a9f4f96-ca31-4959-93fe-c094caf8e077" containerName="metrics-server" Feb 16 21:28:42.973534 master-0 kubenswrapper[38936]: I0216 21:28:42.973293 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a9f4f96-ca31-4959-93fe-c094caf8e077" containerName="metrics-server" Feb 16 21:28:42.973534 master-0 kubenswrapper[38936]: E0216 21:28:42.973310 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e94f9961-bf52-463f-8143-2ec1caa6cdf1" containerName="console" Feb 16 21:28:42.973534 master-0 kubenswrapper[38936]: I0216 21:28:42.973322 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="e94f9961-bf52-463f-8143-2ec1caa6cdf1" containerName="console" Feb 16 21:28:42.973949 master-0 kubenswrapper[38936]: I0216 21:28:42.973903 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="e94f9961-bf52-463f-8143-2ec1caa6cdf1" containerName="console" Feb 16 21:28:42.974006 master-0 kubenswrapper[38936]: I0216 21:28:42.973975 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a9f4f96-ca31-4959-93fe-c094caf8e077" containerName="metrics-server" Feb 16 21:28:42.974870 master-0 kubenswrapper[38936]: I0216 21:28:42.974832 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 21:28:42.977586 master-0 kubenswrapper[38936]: I0216 21:28:42.977523 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-czn7h" Feb 16 21:28:42.977955 master-0 kubenswrapper[38936]: I0216 21:28:42.977824 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 16 21:28:42.993946 master-0 kubenswrapper[38936]: I0216 21:28:42.993863 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Feb 16 21:28:43.038788 master-0 kubenswrapper[38936]: I0216 21:28:43.038707 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94fca51c-425f-436b-b260-123e8baca2c0-kube-api-access\") pod \"installer-4-master-0\" (UID: \"94fca51c-425f-436b-b260-123e8baca2c0\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 21:28:43.038788 master-0 kubenswrapper[38936]: I0216 21:28:43.038772 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94fca51c-425f-436b-b260-123e8baca2c0-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"94fca51c-425f-436b-b260-123e8baca2c0\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 21:28:43.039025 master-0 kubenswrapper[38936]: I0216 21:28:43.038943 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/94fca51c-425f-436b-b260-123e8baca2c0-var-lock\") pod \"installer-4-master-0\" (UID: \"94fca51c-425f-436b-b260-123e8baca2c0\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 21:28:43.140700 master-0 
kubenswrapper[38936]: I0216 21:28:43.140594 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94fca51c-425f-436b-b260-123e8baca2c0-kube-api-access\") pod \"installer-4-master-0\" (UID: \"94fca51c-425f-436b-b260-123e8baca2c0\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 21:28:43.140968 master-0 kubenswrapper[38936]: I0216 21:28:43.140919 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94fca51c-425f-436b-b260-123e8baca2c0-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"94fca51c-425f-436b-b260-123e8baca2c0\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 21:28:43.141062 master-0 kubenswrapper[38936]: I0216 21:28:43.141026 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94fca51c-425f-436b-b260-123e8baca2c0-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"94fca51c-425f-436b-b260-123e8baca2c0\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 21:28:43.141386 master-0 kubenswrapper[38936]: I0216 21:28:43.141348 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/94fca51c-425f-436b-b260-123e8baca2c0-var-lock\") pod \"installer-4-master-0\" (UID: \"94fca51c-425f-436b-b260-123e8baca2c0\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 21:28:43.141550 master-0 kubenswrapper[38936]: I0216 21:28:43.141483 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/94fca51c-425f-436b-b260-123e8baca2c0-var-lock\") pod \"installer-4-master-0\" (UID: \"94fca51c-425f-436b-b260-123e8baca2c0\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 21:28:43.166680 
master-0 kubenswrapper[38936]: I0216 21:28:43.166602 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94fca51c-425f-436b-b260-123e8baca2c0-kube-api-access\") pod \"installer-4-master-0\" (UID: \"94fca51c-425f-436b-b260-123e8baca2c0\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 21:28:43.300593 master-0 kubenswrapper[38936]: I0216 21:28:43.300494 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 21:28:43.755252 master-0 kubenswrapper[38936]: I0216 21:28:43.755184 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Feb 16 21:28:43.761428 master-0 kubenswrapper[38936]: W0216 21:28:43.760755 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod94fca51c_425f_436b_b260_123e8baca2c0.slice/crio-fb4aa87c55da02a74a2341be0b832d568f78f04de2d2bb2d220f7257eaa6a873 WatchSource:0}: Error finding container fb4aa87c55da02a74a2341be0b832d568f78f04de2d2bb2d220f7257eaa6a873: Status 404 returned error can't find the container with id fb4aa87c55da02a74a2341be0b832d568f78f04de2d2bb2d220f7257eaa6a873 Feb 16 21:28:44.073072 master-0 kubenswrapper[38936]: I0216 21:28:44.072980 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"94fca51c-425f-436b-b260-123e8baca2c0","Type":"ContainerStarted","Data":"fb4aa87c55da02a74a2341be0b832d568f78f04de2d2bb2d220f7257eaa6a873"} Feb 16 21:28:45.102917 master-0 kubenswrapper[38936]: I0216 21:28:45.102829 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"94fca51c-425f-436b-b260-123e8baca2c0","Type":"ContainerStarted","Data":"b4a44b3a50542fbf463ed37fa6c5a6567124a3f9103a819cf1b133fa1c735bc0"} 
Feb 16 21:28:45.130482 master-0 kubenswrapper[38936]: I0216 21:28:45.130329 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=3.130296398 podStartE2EDuration="3.130296398s" podCreationTimestamp="2026-02-16 21:28:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:28:45.123768333 +0000 UTC m=+355.475771705" watchObservedRunningTime="2026-02-16 21:28:45.130296398 +0000 UTC m=+355.482299810" Feb 16 21:28:45.380548 master-0 kubenswrapper[38936]: I0216 21:28:45.380313 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-8c88f"] Feb 16 21:28:45.383047 master-0 kubenswrapper[38936]: I0216 21:28:45.382969 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-58f4c9b998-8c88f" Feb 16 21:28:45.388493 master-0 kubenswrapper[38936]: I0216 21:28:45.388414 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-8c88f"] Feb 16 21:28:45.426530 master-0 kubenswrapper[38936]: I0216 21:28:45.426445 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"kube-root-ca.crt" Feb 16 21:28:45.426530 master-0 kubenswrapper[38936]: I0216 21:28:45.426483 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config" Feb 16 21:28:45.426986 master-0 kubenswrapper[38936]: I0216 21:28:45.426930 38936 reflector.go:368] Caches populated for *v1.Secret from object-"sushy-emulator"/"os-client-config" Feb 16 21:28:45.427240 master-0 kubenswrapper[38936]: I0216 21:28:45.427184 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"openshift-service-ca.crt" Feb 16 21:28:45.525431 master-0 kubenswrapper[38936]: I0216 21:28:45.525359 38936 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/ee0e3566-8d48-46f0-8f11-d044fecd942a-os-client-config\") pod \"sushy-emulator-58f4c9b998-8c88f\" (UID: \"ee0e3566-8d48-46f0-8f11-d044fecd942a\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-8c88f" Feb 16 21:28:45.525637 master-0 kubenswrapper[38936]: I0216 21:28:45.525477 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/ee0e3566-8d48-46f0-8f11-d044fecd942a-sushy-emulator-config\") pod \"sushy-emulator-58f4c9b998-8c88f\" (UID: \"ee0e3566-8d48-46f0-8f11-d044fecd942a\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-8c88f" Feb 16 21:28:45.525637 master-0 kubenswrapper[38936]: I0216 21:28:45.525513 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmr94\" (UniqueName: \"kubernetes.io/projected/ee0e3566-8d48-46f0-8f11-d044fecd942a-kube-api-access-xmr94\") pod \"sushy-emulator-58f4c9b998-8c88f\" (UID: \"ee0e3566-8d48-46f0-8f11-d044fecd942a\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-8c88f" Feb 16 21:28:45.626960 master-0 kubenswrapper[38936]: I0216 21:28:45.626861 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/ee0e3566-8d48-46f0-8f11-d044fecd942a-sushy-emulator-config\") pod \"sushy-emulator-58f4c9b998-8c88f\" (UID: \"ee0e3566-8d48-46f0-8f11-d044fecd942a\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-8c88f" Feb 16 21:28:45.626960 master-0 kubenswrapper[38936]: I0216 21:28:45.626945 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmr94\" (UniqueName: \"kubernetes.io/projected/ee0e3566-8d48-46f0-8f11-d044fecd942a-kube-api-access-xmr94\") pod \"sushy-emulator-58f4c9b998-8c88f\" 
(UID: \"ee0e3566-8d48-46f0-8f11-d044fecd942a\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-8c88f" Feb 16 21:28:45.627316 master-0 kubenswrapper[38936]: I0216 21:28:45.627015 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/ee0e3566-8d48-46f0-8f11-d044fecd942a-os-client-config\") pod \"sushy-emulator-58f4c9b998-8c88f\" (UID: \"ee0e3566-8d48-46f0-8f11-d044fecd942a\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-8c88f" Feb 16 21:28:45.628567 master-0 kubenswrapper[38936]: I0216 21:28:45.628516 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/ee0e3566-8d48-46f0-8f11-d044fecd942a-sushy-emulator-config\") pod \"sushy-emulator-58f4c9b998-8c88f\" (UID: \"ee0e3566-8d48-46f0-8f11-d044fecd942a\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-8c88f" Feb 16 21:28:45.630973 master-0 kubenswrapper[38936]: I0216 21:28:45.630882 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/ee0e3566-8d48-46f0-8f11-d044fecd942a-os-client-config\") pod \"sushy-emulator-58f4c9b998-8c88f\" (UID: \"ee0e3566-8d48-46f0-8f11-d044fecd942a\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-8c88f" Feb 16 21:28:45.642425 master-0 kubenswrapper[38936]: I0216 21:28:45.642383 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmr94\" (UniqueName: \"kubernetes.io/projected/ee0e3566-8d48-46f0-8f11-d044fecd942a-kube-api-access-xmr94\") pod \"sushy-emulator-58f4c9b998-8c88f\" (UID: \"ee0e3566-8d48-46f0-8f11-d044fecd942a\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-8c88f" Feb 16 21:28:45.781720 master-0 kubenswrapper[38936]: I0216 21:28:45.781647 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-58f4c9b998-8c88f" Feb 16 21:28:46.255018 master-0 kubenswrapper[38936]: I0216 21:28:46.254968 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-8c88f"] Feb 16 21:28:46.261028 master-0 kubenswrapper[38936]: W0216 21:28:46.260966 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee0e3566_8d48_46f0_8f11_d044fecd942a.slice/crio-fb1c5b2b4d80fd53196369351d9844acdbc5f400c1ef11b5a8e9ac112ce7d435 WatchSource:0}: Error finding container fb1c5b2b4d80fd53196369351d9844acdbc5f400c1ef11b5a8e9ac112ce7d435: Status 404 returned error can't find the container with id fb1c5b2b4d80fd53196369351d9844acdbc5f400c1ef11b5a8e9ac112ce7d435 Feb 16 21:28:46.262901 master-0 kubenswrapper[38936]: I0216 21:28:46.262851 38936 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:28:47.120384 master-0 kubenswrapper[38936]: I0216 21:28:47.120322 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-58f4c9b998-8c88f" event={"ID":"ee0e3566-8d48-46f0-8f11-d044fecd942a","Type":"ContainerStarted","Data":"fb1c5b2b4d80fd53196369351d9844acdbc5f400c1ef11b5a8e9ac112ce7d435"} Feb 16 21:28:50.227191 master-0 kubenswrapper[38936]: I0216 21:28:50.227123 38936 scope.go:117] "RemoveContainer" containerID="3f86128dc7a80bf0962766ba7f7979e170ef26e4e83c8289ef27c44072e56335" Feb 16 21:28:54.198277 master-0 kubenswrapper[38936]: I0216 21:28:54.198175 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-58f4c9b998-8c88f" event={"ID":"ee0e3566-8d48-46f0-8f11-d044fecd942a","Type":"ContainerStarted","Data":"0afcce254abd8c8be1869a01d306a01733a29e4e5bccc7a689f477788e4f7741"} Feb 16 21:28:54.278074 master-0 kubenswrapper[38936]: I0216 21:28:54.277924 38936 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="sushy-emulator/sushy-emulator-58f4c9b998-8c88f" podStartSLOduration=2.099499823 podStartE2EDuration="9.277895371s" podCreationTimestamp="2026-02-16 21:28:45 +0000 UTC" firstStartedPulling="2026-02-16 21:28:46.262801344 +0000 UTC m=+356.614804716" lastFinishedPulling="2026-02-16 21:28:53.441196902 +0000 UTC m=+363.793200264" observedRunningTime="2026-02-16 21:28:54.26749741 +0000 UTC m=+364.619500802" watchObservedRunningTime="2026-02-16 21:28:54.277895371 +0000 UTC m=+364.629898773" Feb 16 21:28:55.782115 master-0 kubenswrapper[38936]: I0216 21:28:55.782016 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-58f4c9b998-8c88f" Feb 16 21:28:55.782115 master-0 kubenswrapper[38936]: I0216 21:28:55.782124 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-58f4c9b998-8c88f" Feb 16 21:28:55.805237 master-0 kubenswrapper[38936]: I0216 21:28:55.805181 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-58f4c9b998-8c88f" Feb 16 21:28:56.220551 master-0 kubenswrapper[38936]: I0216 21:28:56.220376 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-58f4c9b998-8c88f" Feb 16 21:28:58.170281 master-0 kubenswrapper[38936]: I0216 21:28:58.170161 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-poller-5f88dd4d5f-tvcx2"] Feb 16 21:28:58.173807 master-0 kubenswrapper[38936]: I0216 21:28:58.172189 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-poller-5f88dd4d5f-tvcx2" Feb 16 21:28:58.185259 master-0 kubenswrapper[38936]: I0216 21:28:58.185159 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-5f88dd4d5f-tvcx2"] Feb 16 21:28:58.248356 master-0 kubenswrapper[38936]: I0216 21:28:58.248293 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bll68\" (UniqueName: \"kubernetes.io/projected/d56f2e06-156b-4484-85c2-05608fe285fd-kube-api-access-bll68\") pod \"nova-console-poller-5f88dd4d5f-tvcx2\" (UID: \"d56f2e06-156b-4484-85c2-05608fe285fd\") " pod="sushy-emulator/nova-console-poller-5f88dd4d5f-tvcx2" Feb 16 21:28:58.248612 master-0 kubenswrapper[38936]: I0216 21:28:58.248412 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/d56f2e06-156b-4484-85c2-05608fe285fd-os-client-config\") pod \"nova-console-poller-5f88dd4d5f-tvcx2\" (UID: \"d56f2e06-156b-4484-85c2-05608fe285fd\") " pod="sushy-emulator/nova-console-poller-5f88dd4d5f-tvcx2" Feb 16 21:28:58.352379 master-0 kubenswrapper[38936]: I0216 21:28:58.351019 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/d56f2e06-156b-4484-85c2-05608fe285fd-os-client-config\") pod \"nova-console-poller-5f88dd4d5f-tvcx2\" (UID: \"d56f2e06-156b-4484-85c2-05608fe285fd\") " pod="sushy-emulator/nova-console-poller-5f88dd4d5f-tvcx2" Feb 16 21:28:58.352379 master-0 kubenswrapper[38936]: I0216 21:28:58.351463 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bll68\" (UniqueName: \"kubernetes.io/projected/d56f2e06-156b-4484-85c2-05608fe285fd-kube-api-access-bll68\") pod \"nova-console-poller-5f88dd4d5f-tvcx2\" (UID: \"d56f2e06-156b-4484-85c2-05608fe285fd\") " 
pod="sushy-emulator/nova-console-poller-5f88dd4d5f-tvcx2" Feb 16 21:28:58.354439 master-0 kubenswrapper[38936]: I0216 21:28:58.354406 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/d56f2e06-156b-4484-85c2-05608fe285fd-os-client-config\") pod \"nova-console-poller-5f88dd4d5f-tvcx2\" (UID: \"d56f2e06-156b-4484-85c2-05608fe285fd\") " pod="sushy-emulator/nova-console-poller-5f88dd4d5f-tvcx2" Feb 16 21:28:58.372098 master-0 kubenswrapper[38936]: I0216 21:28:58.372026 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bll68\" (UniqueName: \"kubernetes.io/projected/d56f2e06-156b-4484-85c2-05608fe285fd-kube-api-access-bll68\") pod \"nova-console-poller-5f88dd4d5f-tvcx2\" (UID: \"d56f2e06-156b-4484-85c2-05608fe285fd\") " pod="sushy-emulator/nova-console-poller-5f88dd4d5f-tvcx2" Feb 16 21:28:58.510885 master-0 kubenswrapper[38936]: I0216 21:28:58.510802 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-poller-5f88dd4d5f-tvcx2" Feb 16 21:28:58.976748 master-0 kubenswrapper[38936]: I0216 21:28:58.976680 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-5f88dd4d5f-tvcx2"] Feb 16 21:28:58.977429 master-0 kubenswrapper[38936]: W0216 21:28:58.977380 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd56f2e06_156b_4484_85c2_05608fe285fd.slice/crio-f4943aa1f13aa4e8ec715f897661d30128a9d1d5d7d8beee89e378467b9efcec WatchSource:0}: Error finding container f4943aa1f13aa4e8ec715f897661d30128a9d1d5d7d8beee89e378467b9efcec: Status 404 returned error can't find the container with id f4943aa1f13aa4e8ec715f897661d30128a9d1d5d7d8beee89e378467b9efcec Feb 16 21:28:59.243344 master-0 kubenswrapper[38936]: I0216 21:28:59.243170 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-5f88dd4d5f-tvcx2" event={"ID":"d56f2e06-156b-4484-85c2-05608fe285fd","Type":"ContainerStarted","Data":"f4943aa1f13aa4e8ec715f897661d30128a9d1d5d7d8beee89e378467b9efcec"} Feb 16 21:29:05.294155 master-0 kubenswrapper[38936]: I0216 21:29:05.294068 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-5f88dd4d5f-tvcx2" event={"ID":"d56f2e06-156b-4484-85c2-05608fe285fd","Type":"ContainerStarted","Data":"208b921f1f2dc88f4d22d45745e9ced7ff8c12ab9ddc5496d88591ab5343164a"} Feb 16 21:29:05.294155 master-0 kubenswrapper[38936]: I0216 21:29:05.294146 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-5f88dd4d5f-tvcx2" event={"ID":"d56f2e06-156b-4484-85c2-05608fe285fd","Type":"ContainerStarted","Data":"fbce8348db9b162e4de77a61a8542aae6c342f7d2836900f5b683e36352a1737"} Feb 16 21:29:05.321094 master-0 kubenswrapper[38936]: I0216 21:29:05.320969 38936 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="sushy-emulator/nova-console-poller-5f88dd4d5f-tvcx2" podStartSLOduration=1.598615627 podStartE2EDuration="7.320937365s" podCreationTimestamp="2026-02-16 21:28:58 +0000 UTC" firstStartedPulling="2026-02-16 21:28:58.979860502 +0000 UTC m=+369.331863864" lastFinishedPulling="2026-02-16 21:29:04.70218225 +0000 UTC m=+375.054185602" observedRunningTime="2026-02-16 21:29:05.317978125 +0000 UTC m=+375.669981527" watchObservedRunningTime="2026-02-16 21:29:05.320937365 +0000 UTC m=+375.672940767" Feb 16 21:29:17.315878 master-0 kubenswrapper[38936]: E0216 21:29:17.315789 38936 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-controller-manager-pod.yaml\": /etc/kubernetes/manifests/kube-controller-manager-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Feb 16 21:29:17.317129 master-0 kubenswrapper[38936]: I0216 21:29:17.316054 38936 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 16 21:29:17.317376 master-0 kubenswrapper[38936]: I0216 21:29:17.317297 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="fc19ea17c4f595b135412c661d90b9a7" containerName="cluster-policy-controller" containerID="cri-o://fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c" gracePeriod=30 Feb 16 21:29:17.317614 master-0 kubenswrapper[38936]: I0216 21:29:17.317581 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="fc19ea17c4f595b135412c661d90b9a7" containerName="kube-controller-manager" containerID="cri-o://9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460" gracePeriod=30 Feb 16 21:29:17.317881 master-0 kubenswrapper[38936]: I0216 21:29:17.317710 38936 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="fc19ea17c4f595b135412c661d90b9a7" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241" gracePeriod=30 Feb 16 21:29:17.317881 master-0 kubenswrapper[38936]: I0216 21:29:17.317782 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="fc19ea17c4f595b135412c661d90b9a7" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05" gracePeriod=30 Feb 16 21:29:17.320448 master-0 kubenswrapper[38936]: I0216 21:29:17.320381 38936 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 16 21:29:17.320883 master-0 kubenswrapper[38936]: E0216 21:29:17.320821 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc19ea17c4f595b135412c661d90b9a7" containerName="kube-controller-manager" Feb 16 21:29:17.320883 master-0 kubenswrapper[38936]: I0216 21:29:17.320847 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc19ea17c4f595b135412c661d90b9a7" containerName="kube-controller-manager" Feb 16 21:29:17.320883 master-0 kubenswrapper[38936]: E0216 21:29:17.320882 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc19ea17c4f595b135412c661d90b9a7" containerName="kube-controller-manager-recovery-controller" Feb 16 21:29:17.320883 master-0 kubenswrapper[38936]: I0216 21:29:17.320894 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc19ea17c4f595b135412c661d90b9a7" containerName="kube-controller-manager-recovery-controller" Feb 16 21:29:17.321406 master-0 kubenswrapper[38936]: E0216 21:29:17.320921 38936 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="fc19ea17c4f595b135412c661d90b9a7" containerName="kube-controller-manager-cert-syncer" Feb 16 21:29:17.321406 master-0 kubenswrapper[38936]: I0216 21:29:17.320932 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc19ea17c4f595b135412c661d90b9a7" containerName="kube-controller-manager-cert-syncer" Feb 16 21:29:17.321406 master-0 kubenswrapper[38936]: E0216 21:29:17.320960 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc19ea17c4f595b135412c661d90b9a7" containerName="cluster-policy-controller" Feb 16 21:29:17.321406 master-0 kubenswrapper[38936]: I0216 21:29:17.320968 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc19ea17c4f595b135412c661d90b9a7" containerName="cluster-policy-controller" Feb 16 21:29:17.321406 master-0 kubenswrapper[38936]: I0216 21:29:17.321126 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc19ea17c4f595b135412c661d90b9a7" containerName="kube-controller-manager" Feb 16 21:29:17.321406 master-0 kubenswrapper[38936]: I0216 21:29:17.321188 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc19ea17c4f595b135412c661d90b9a7" containerName="cluster-policy-controller" Feb 16 21:29:17.321406 master-0 kubenswrapper[38936]: I0216 21:29:17.321208 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc19ea17c4f595b135412c661d90b9a7" containerName="kube-controller-manager-cert-syncer" Feb 16 21:29:17.321406 master-0 kubenswrapper[38936]: I0216 21:29:17.321222 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc19ea17c4f595b135412c661d90b9a7" containerName="kube-controller-manager-recovery-controller" Feb 16 21:29:17.321406 master-0 kubenswrapper[38936]: E0216 21:29:17.321406 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc19ea17c4f595b135412c661d90b9a7" containerName="kube-controller-manager" Feb 16 21:29:17.321406 master-0 kubenswrapper[38936]: I0216 21:29:17.321419 38936 
state_mem.go:107] "Deleted CPUSet assignment" podUID="fc19ea17c4f595b135412c661d90b9a7" containerName="kube-controller-manager" Feb 16 21:29:17.322427 master-0 kubenswrapper[38936]: I0216 21:29:17.321607 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc19ea17c4f595b135412c661d90b9a7" containerName="kube-controller-manager" Feb 16 21:29:17.469449 master-0 kubenswrapper[38936]: I0216 21:29:17.469374 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9ba4aeba55e35991fa1dbf1a458f10eb-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9ba4aeba55e35991fa1dbf1a458f10eb\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:29:17.469911 master-0 kubenswrapper[38936]: I0216 21:29:17.469820 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9ba4aeba55e35991fa1dbf1a458f10eb-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9ba4aeba55e35991fa1dbf1a458f10eb\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:29:17.510819 master-0 kubenswrapper[38936]: I0216 21:29:17.510775 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_fc19ea17c4f595b135412c661d90b9a7/kube-controller-manager-cert-syncer/0.log" Feb 16 21:29:17.512311 master-0 kubenswrapper[38936]: I0216 21:29:17.512278 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_fc19ea17c4f595b135412c661d90b9a7/kube-controller-manager/0.log" Feb 16 21:29:17.512405 master-0 kubenswrapper[38936]: I0216 21:29:17.512382 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:29:17.516757 master-0 kubenswrapper[38936]: I0216 21:29:17.516714 38936 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="fc19ea17c4f595b135412c661d90b9a7" podUID="9ba4aeba55e35991fa1dbf1a458f10eb" Feb 16 21:29:17.571273 master-0 kubenswrapper[38936]: I0216 21:29:17.571186 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9ba4aeba55e35991fa1dbf1a458f10eb-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9ba4aeba55e35991fa1dbf1a458f10eb\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:29:17.571535 master-0 kubenswrapper[38936]: I0216 21:29:17.571354 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9ba4aeba55e35991fa1dbf1a458f10eb-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9ba4aeba55e35991fa1dbf1a458f10eb\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:29:17.571593 master-0 kubenswrapper[38936]: I0216 21:29:17.571480 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9ba4aeba55e35991fa1dbf1a458f10eb-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9ba4aeba55e35991fa1dbf1a458f10eb\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:29:17.571674 master-0 kubenswrapper[38936]: I0216 21:29:17.571645 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9ba4aeba55e35991fa1dbf1a458f10eb-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: 
\"9ba4aeba55e35991fa1dbf1a458f10eb\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:29:17.673305 master-0 kubenswrapper[38936]: I0216 21:29:17.673189 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/fc19ea17c4f595b135412c661d90b9a7-cert-dir\") pod \"fc19ea17c4f595b135412c661d90b9a7\" (UID: \"fc19ea17c4f595b135412c661d90b9a7\") " Feb 16 21:29:17.673407 master-0 kubenswrapper[38936]: I0216 21:29:17.673356 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/fc19ea17c4f595b135412c661d90b9a7-resource-dir\") pod \"fc19ea17c4f595b135412c661d90b9a7\" (UID: \"fc19ea17c4f595b135412c661d90b9a7\") " Feb 16 21:29:17.673502 master-0 kubenswrapper[38936]: I0216 21:29:17.673469 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc19ea17c4f595b135412c661d90b9a7-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "fc19ea17c4f595b135412c661d90b9a7" (UID: "fc19ea17c4f595b135412c661d90b9a7"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:29:17.673552 master-0 kubenswrapper[38936]: I0216 21:29:17.673469 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc19ea17c4f595b135412c661d90b9a7-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "fc19ea17c4f595b135412c661d90b9a7" (UID: "fc19ea17c4f595b135412c661d90b9a7"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:29:17.674306 master-0 kubenswrapper[38936]: I0216 21:29:17.674261 38936 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/fc19ea17c4f595b135412c661d90b9a7-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:29:17.674359 master-0 kubenswrapper[38936]: I0216 21:29:17.674304 38936 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/fc19ea17c4f595b135412c661d90b9a7-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:29:17.886249 master-0 kubenswrapper[38936]: I0216 21:29:17.886096 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc19ea17c4f595b135412c661d90b9a7" path="/var/lib/kubelet/pods/fc19ea17c4f595b135412c661d90b9a7/volumes" Feb 16 21:29:18.404019 master-0 kubenswrapper[38936]: I0216 21:29:18.403905 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_fc19ea17c4f595b135412c661d90b9a7/kube-controller-manager-cert-syncer/0.log" Feb 16 21:29:18.406455 master-0 kubenswrapper[38936]: I0216 21:29:18.406423 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_fc19ea17c4f595b135412c661d90b9a7/kube-controller-manager/0.log" Feb 16 21:29:18.406630 master-0 kubenswrapper[38936]: I0216 21:29:18.406597 38936 generic.go:334] "Generic (PLEG): container finished" podID="fc19ea17c4f595b135412c661d90b9a7" containerID="9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460" exitCode=0 Feb 16 21:29:18.406775 master-0 kubenswrapper[38936]: I0216 21:29:18.406760 38936 generic.go:334] "Generic (PLEG): container finished" podID="fc19ea17c4f595b135412c661d90b9a7" containerID="6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241" exitCode=0 Feb 16 21:29:18.406879 master-0 
kubenswrapper[38936]: I0216 21:29:18.406866 38936 generic.go:334] "Generic (PLEG): container finished" podID="fc19ea17c4f595b135412c661d90b9a7" containerID="f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05" exitCode=2 Feb 16 21:29:18.406994 master-0 kubenswrapper[38936]: I0216 21:29:18.406980 38936 generic.go:334] "Generic (PLEG): container finished" podID="fc19ea17c4f595b135412c661d90b9a7" containerID="fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c" exitCode=0 Feb 16 21:29:18.407639 master-0 kubenswrapper[38936]: I0216 21:29:18.406793 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 21:29:18.407639 master-0 kubenswrapper[38936]: I0216 21:29:18.406713 38936 scope.go:117] "RemoveContainer" containerID="9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460" Feb 16 21:29:18.409786 master-0 kubenswrapper[38936]: I0216 21:29:18.409765 38936 generic.go:334] "Generic (PLEG): container finished" podID="94fca51c-425f-436b-b260-123e8baca2c0" containerID="b4a44b3a50542fbf463ed37fa6c5a6567124a3f9103a819cf1b133fa1c735bc0" exitCode=0 Feb 16 21:29:18.409879 master-0 kubenswrapper[38936]: I0216 21:29:18.409800 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"94fca51c-425f-436b-b260-123e8baca2c0","Type":"ContainerDied","Data":"b4a44b3a50542fbf463ed37fa6c5a6567124a3f9103a819cf1b133fa1c735bc0"} Feb 16 21:29:18.413888 master-0 kubenswrapper[38936]: I0216 21:29:18.413829 38936 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="fc19ea17c4f595b135412c661d90b9a7" podUID="9ba4aeba55e35991fa1dbf1a458f10eb" Feb 16 21:29:18.441345 master-0 kubenswrapper[38936]: I0216 21:29:18.441274 38936 scope.go:117] "RemoveContainer" 
containerID="6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241" Feb 16 21:29:18.464903 master-0 kubenswrapper[38936]: I0216 21:29:18.464839 38936 scope.go:117] "RemoveContainer" containerID="f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05" Feb 16 21:29:18.483019 master-0 kubenswrapper[38936]: I0216 21:29:18.482979 38936 scope.go:117] "RemoveContainer" containerID="fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c" Feb 16 21:29:18.508717 master-0 kubenswrapper[38936]: I0216 21:29:18.508666 38936 scope.go:117] "RemoveContainer" containerID="6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a" Feb 16 21:29:18.528631 master-0 kubenswrapper[38936]: I0216 21:29:18.528529 38936 scope.go:117] "RemoveContainer" containerID="9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460" Feb 16 21:29:18.529080 master-0 kubenswrapper[38936]: E0216 21:29:18.529029 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460\": container with ID starting with 9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460 not found: ID does not exist" containerID="9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460" Feb 16 21:29:18.529130 master-0 kubenswrapper[38936]: I0216 21:29:18.529070 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460"} err="failed to get container status \"9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460\": rpc error: code = NotFound desc = could not find container \"9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460\": container with ID starting with 9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460 not found: ID does not exist" Feb 16 21:29:18.529130 master-0 kubenswrapper[38936]: I0216 
21:29:18.529096 38936 scope.go:117] "RemoveContainer" containerID="6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241" Feb 16 21:29:18.529390 master-0 kubenswrapper[38936]: E0216 21:29:18.529356 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241\": container with ID starting with 6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241 not found: ID does not exist" containerID="6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241" Feb 16 21:29:18.529390 master-0 kubenswrapper[38936]: I0216 21:29:18.529387 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241"} err="failed to get container status \"6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241\": rpc error: code = NotFound desc = could not find container \"6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241\": container with ID starting with 6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241 not found: ID does not exist" Feb 16 21:29:18.529490 master-0 kubenswrapper[38936]: I0216 21:29:18.529410 38936 scope.go:117] "RemoveContainer" containerID="f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05" Feb 16 21:29:18.529945 master-0 kubenswrapper[38936]: E0216 21:29:18.529903 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05\": container with ID starting with f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05 not found: ID does not exist" containerID="f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05" Feb 16 21:29:18.529945 master-0 kubenswrapper[38936]: I0216 21:29:18.529933 38936 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05"} err="failed to get container status \"f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05\": rpc error: code = NotFound desc = could not find container \"f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05\": container with ID starting with f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05 not found: ID does not exist" Feb 16 21:29:18.529945 master-0 kubenswrapper[38936]: I0216 21:29:18.529951 38936 scope.go:117] "RemoveContainer" containerID="fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c" Feb 16 21:29:18.530319 master-0 kubenswrapper[38936]: E0216 21:29:18.530286 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c\": container with ID starting with fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c not found: ID does not exist" containerID="fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c" Feb 16 21:29:18.530364 master-0 kubenswrapper[38936]: I0216 21:29:18.530312 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c"} err="failed to get container status \"fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c\": rpc error: code = NotFound desc = could not find container \"fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c\": container with ID starting with fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c not found: ID does not exist" Feb 16 21:29:18.530364 master-0 kubenswrapper[38936]: I0216 21:29:18.530335 38936 scope.go:117] "RemoveContainer" containerID="6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a" 
Feb 16 21:29:18.530816 master-0 kubenswrapper[38936]: E0216 21:29:18.530775 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a\": container with ID starting with 6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a not found: ID does not exist" containerID="6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a"
Feb 16 21:29:18.530816 master-0 kubenswrapper[38936]: I0216 21:29:18.530807 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a"} err="failed to get container status \"6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a\": rpc error: code = NotFound desc = could not find container \"6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a\": container with ID starting with 6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a not found: ID does not exist"
Feb 16 21:29:18.530920 master-0 kubenswrapper[38936]: I0216 21:29:18.530825 38936 scope.go:117] "RemoveContainer" containerID="9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460"
Feb 16 21:29:18.531270 master-0 kubenswrapper[38936]: I0216 21:29:18.531171 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460"} err="failed to get container status \"9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460\": rpc error: code = NotFound desc = could not find container \"9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460\": container with ID starting with 9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460 not found: ID does not exist"
Feb 16 21:29:18.531742 master-0 kubenswrapper[38936]: I0216 21:29:18.531271 38936 scope.go:117] "RemoveContainer" containerID="6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241"
Feb 16 21:29:18.531855 master-0 kubenswrapper[38936]: I0216 21:29:18.531801 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241"} err="failed to get container status \"6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241\": rpc error: code = NotFound desc = could not find container \"6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241\": container with ID starting with 6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241 not found: ID does not exist"
Feb 16 21:29:18.531956 master-0 kubenswrapper[38936]: I0216 21:29:18.531857 38936 scope.go:117] "RemoveContainer" containerID="f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05"
Feb 16 21:29:18.532553 master-0 kubenswrapper[38936]: I0216 21:29:18.532515 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05"} err="failed to get container status \"f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05\": rpc error: code = NotFound desc = could not find container \"f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05\": container with ID starting with f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05 not found: ID does not exist"
Feb 16 21:29:18.532553 master-0 kubenswrapper[38936]: I0216 21:29:18.532542 38936 scope.go:117] "RemoveContainer" containerID="fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c"
Feb 16 21:29:18.532900 master-0 kubenswrapper[38936]: I0216 21:29:18.532869 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c"} err="failed to get container status \"fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c\": rpc error: code = NotFound desc = could not find container \"fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c\": container with ID starting with fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c not found: ID does not exist"
Feb 16 21:29:18.532900 master-0 kubenswrapper[38936]: I0216 21:29:18.532892 38936 scope.go:117] "RemoveContainer" containerID="6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a"
Feb 16 21:29:18.533576 master-0 kubenswrapper[38936]: I0216 21:29:18.533512 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a"} err="failed to get container status \"6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a\": rpc error: code = NotFound desc = could not find container \"6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a\": container with ID starting with 6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a not found: ID does not exist"
Feb 16 21:29:18.533576 master-0 kubenswrapper[38936]: I0216 21:29:18.533532 38936 scope.go:117] "RemoveContainer" containerID="9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460"
Feb 16 21:29:18.534044 master-0 kubenswrapper[38936]: I0216 21:29:18.534009 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460"} err="failed to get container status \"9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460\": rpc error: code = NotFound desc = could not find container \"9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460\": container with ID starting with 9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460 not found: ID does not exist"
Feb 16 21:29:18.534044 master-0 kubenswrapper[38936]: I0216 21:29:18.534034 38936 scope.go:117] "RemoveContainer" containerID="6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241"
Feb 16 21:29:18.534385 master-0 kubenswrapper[38936]: I0216 21:29:18.534351 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241"} err="failed to get container status \"6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241\": rpc error: code = NotFound desc = could not find container \"6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241\": container with ID starting with 6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241 not found: ID does not exist"
Feb 16 21:29:18.534385 master-0 kubenswrapper[38936]: I0216 21:29:18.534372 38936 scope.go:117] "RemoveContainer" containerID="f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05"
Feb 16 21:29:18.534786 master-0 kubenswrapper[38936]: I0216 21:29:18.534653 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05"} err="failed to get container status \"f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05\": rpc error: code = NotFound desc = could not find container \"f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05\": container with ID starting with f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05 not found: ID does not exist"
Feb 16 21:29:18.534786 master-0 kubenswrapper[38936]: I0216 21:29:18.534684 38936 scope.go:117] "RemoveContainer" containerID="fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c"
Feb 16 21:29:18.535089 master-0 kubenswrapper[38936]: I0216 21:29:18.534954 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c"} err="failed to get container status \"fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c\": rpc error: code = NotFound desc = could not find container \"fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c\": container with ID starting with fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c not found: ID does not exist"
Feb 16 21:29:18.535089 master-0 kubenswrapper[38936]: I0216 21:29:18.534972 38936 scope.go:117] "RemoveContainer" containerID="6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a"
Feb 16 21:29:18.535287 master-0 kubenswrapper[38936]: I0216 21:29:18.535251 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a"} err="failed to get container status \"6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a\": rpc error: code = NotFound desc = could not find container \"6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a\": container with ID starting with 6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a not found: ID does not exist"
Feb 16 21:29:18.535287 master-0 kubenswrapper[38936]: I0216 21:29:18.535278 38936 scope.go:117] "RemoveContainer" containerID="9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460"
Feb 16 21:29:18.535906 master-0 kubenswrapper[38936]: I0216 21:29:18.535845 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460"} err="failed to get container status \"9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460\": rpc error: code = NotFound desc = could not find container \"9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460\": container with ID starting with 9b437fbc46896b61998fbaa887c424f4cbaa3b9bea1ab1818d21d68ec74ad460 not found: ID does not exist"
Feb 16 21:29:18.535966 master-0 kubenswrapper[38936]: I0216 21:29:18.535906 38936 scope.go:117] "RemoveContainer" containerID="6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241"
Feb 16 21:29:18.536416 master-0 kubenswrapper[38936]: I0216 21:29:18.536381 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241"} err="failed to get container status \"6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241\": rpc error: code = NotFound desc = could not find container \"6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241\": container with ID starting with 6aa39b0da6cdb522f4b194b9bdb12c297e9e5c605fe46279cc839c2325dcd241 not found: ID does not exist"
Feb 16 21:29:18.536487 master-0 kubenswrapper[38936]: I0216 21:29:18.536405 38936 scope.go:117] "RemoveContainer" containerID="f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05"
Feb 16 21:29:18.537537 master-0 kubenswrapper[38936]: I0216 21:29:18.537490 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05"} err="failed to get container status \"f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05\": rpc error: code = NotFound desc = could not find container \"f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05\": container with ID starting with f36c9e410dfab31e5f662f518532f2c63ad69427a8c261801865bf6c6bcd2e05 not found: ID does not exist"
Feb 16 21:29:18.537691 master-0 kubenswrapper[38936]: I0216 21:29:18.537549 38936 scope.go:117] "RemoveContainer" containerID="fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c"
Feb 16 21:29:18.537934 master-0 kubenswrapper[38936]: I0216 21:29:18.537894 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c"} err="failed to get container status \"fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c\": rpc error: code = NotFound desc = could not find container \"fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c\": container with ID starting with fa976dcc1cc11104908d41145d991d77a2ae0e16bb902e681d4c3347632c080c not found: ID does not exist"
Feb 16 21:29:18.537934 master-0 kubenswrapper[38936]: I0216 21:29:18.537926 38936 scope.go:117] "RemoveContainer" containerID="6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a"
Feb 16 21:29:18.538582 master-0 kubenswrapper[38936]: I0216 21:29:18.538204 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a"} err="failed to get container status \"6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a\": rpc error: code = NotFound desc = could not find container \"6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a\": container with ID starting with 6bdab7a1818a7e24abcce89bfd61e21806d954f511bf60e271e9f710baf7ee4a not found: ID does not exist"
Feb 16 21:29:19.767789 master-0 kubenswrapper[38936]: I0216 21:29:19.764894 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 16 21:29:19.916188 master-0 kubenswrapper[38936]: I0216 21:29:19.916088 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/94fca51c-425f-436b-b260-123e8baca2c0-var-lock\") pod \"94fca51c-425f-436b-b260-123e8baca2c0\" (UID: \"94fca51c-425f-436b-b260-123e8baca2c0\") "
Feb 16 21:29:19.916551 master-0 kubenswrapper[38936]: I0216 21:29:19.916261 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94fca51c-425f-436b-b260-123e8baca2c0-kube-api-access\") pod \"94fca51c-425f-436b-b260-123e8baca2c0\" (UID: \"94fca51c-425f-436b-b260-123e8baca2c0\") "
Feb 16 21:29:19.916551 master-0 kubenswrapper[38936]: I0216 21:29:19.916257 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94fca51c-425f-436b-b260-123e8baca2c0-var-lock" (OuterVolumeSpecName: "var-lock") pod "94fca51c-425f-436b-b260-123e8baca2c0" (UID: "94fca51c-425f-436b-b260-123e8baca2c0"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:29:19.916551 master-0 kubenswrapper[38936]: I0216 21:29:19.916317 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94fca51c-425f-436b-b260-123e8baca2c0-kubelet-dir\") pod \"94fca51c-425f-436b-b260-123e8baca2c0\" (UID: \"94fca51c-425f-436b-b260-123e8baca2c0\") "
Feb 16 21:29:19.916551 master-0 kubenswrapper[38936]: I0216 21:29:19.916505 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94fca51c-425f-436b-b260-123e8baca2c0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "94fca51c-425f-436b-b260-123e8baca2c0" (UID: "94fca51c-425f-436b-b260-123e8baca2c0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:29:19.916828 master-0 kubenswrapper[38936]: I0216 21:29:19.916802 38936 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/94fca51c-425f-436b-b260-123e8baca2c0-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 16 21:29:19.916828 master-0 kubenswrapper[38936]: I0216 21:29:19.916821 38936 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94fca51c-425f-436b-b260-123e8baca2c0-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 21:29:19.921289 master-0 kubenswrapper[38936]: I0216 21:29:19.920899 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94fca51c-425f-436b-b260-123e8baca2c0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "94fca51c-425f-436b-b260-123e8baca2c0" (UID: "94fca51c-425f-436b-b260-123e8baca2c0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:29:20.018427 master-0 kubenswrapper[38936]: I0216 21:29:20.018302 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94fca51c-425f-436b-b260-123e8baca2c0-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 16 21:29:20.432606 master-0 kubenswrapper[38936]: I0216 21:29:20.432297 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"94fca51c-425f-436b-b260-123e8baca2c0","Type":"ContainerDied","Data":"fb4aa87c55da02a74a2341be0b832d568f78f04de2d2bb2d220f7257eaa6a873"}
Feb 16 21:29:20.432606 master-0 kubenswrapper[38936]: I0216 21:29:20.432376 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb4aa87c55da02a74a2341be0b832d568f78f04de2d2bb2d220f7257eaa6a873"
Feb 16 21:29:20.432606 master-0 kubenswrapper[38936]: I0216 21:29:20.432344 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 16 21:29:29.873588 master-0 kubenswrapper[38936]: I0216 21:29:29.873523 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 21:29:29.898218 master-0 kubenswrapper[38936]: I0216 21:29:29.898151 38936 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="06a8eb4b-f770-4598-aa9d-5c784e28048d"
Feb 16 21:29:29.898218 master-0 kubenswrapper[38936]: I0216 21:29:29.898214 38936 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="06a8eb4b-f770-4598-aa9d-5c784e28048d"
Feb 16 21:29:29.915048 master-0 kubenswrapper[38936]: I0216 21:29:29.914969 38936 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 21:29:29.917867 master-0 kubenswrapper[38936]: I0216 21:29:29.917821 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 16 21:29:29.925004 master-0 kubenswrapper[38936]: I0216 21:29:29.924937 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 16 21:29:29.933177 master-0 kubenswrapper[38936]: I0216 21:29:29.933107 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 21:29:29.940157 master-0 kubenswrapper[38936]: I0216 21:29:29.940088 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 16 21:29:30.532291 master-0 kubenswrapper[38936]: I0216 21:29:30.532216 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9ba4aeba55e35991fa1dbf1a458f10eb","Type":"ContainerStarted","Data":"0c1b8e26d99dee40f885f8741d4bfa737b446fd38b804ca3aeb9932336091031"}
Feb 16 21:29:30.532409 master-0 kubenswrapper[38936]: I0216 21:29:30.532301 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9ba4aeba55e35991fa1dbf1a458f10eb","Type":"ContainerStarted","Data":"e143c78ff3ad26e22d616f1f027da7875d84e053916f3f601607d499fd8fc1f7"}
Feb 16 21:29:30.532409 master-0 kubenswrapper[38936]: I0216 21:29:30.532323 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9ba4aeba55e35991fa1dbf1a458f10eb","Type":"ContainerStarted","Data":"a792402f96570745999f72289daee7558bc254938a965c1cd1f98b4094580e27"}
Feb 16 21:29:31.550255 master-0 kubenswrapper[38936]: I0216 21:29:31.550184 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9ba4aeba55e35991fa1dbf1a458f10eb","Type":"ContainerStarted","Data":"1cb203c1485f47906b023a86da38819d82f6bba895ef278b37c635da103a645e"}
Feb 16 21:29:31.550255 master-0 kubenswrapper[38936]: I0216 21:29:31.550245 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9ba4aeba55e35991fa1dbf1a458f10eb","Type":"ContainerStarted","Data":"85e9b22a2fca246aa2bc9e1c76051dc96a56067b0a6969fd0f67f04ea776f24a"}
Feb 16 21:29:31.686359 master-0 kubenswrapper[38936]: I0216 21:29:31.686256 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.686230591 podStartE2EDuration="2.686230591s" podCreationTimestamp="2026-02-16 21:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:29:31.68468404 +0000 UTC m=+402.036687422" watchObservedRunningTime="2026-02-16 21:29:31.686230591 +0000 UTC m=+402.038233953"
Feb 16 21:29:39.934078 master-0 kubenswrapper[38936]: I0216 21:29:39.934017 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 21:29:39.934078 master-0 kubenswrapper[38936]: I0216 21:29:39.934074 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 21:29:39.934078 master-0 kubenswrapper[38936]: I0216 21:29:39.934085 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 21:29:39.934078 master-0 kubenswrapper[38936]: I0216 21:29:39.934095 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 21:29:39.934967 master-0 kubenswrapper[38936]: I0216 21:29:39.934467 38936 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Feb 16 21:29:39.934967 master-0 kubenswrapper[38936]: I0216 21:29:39.934540 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="9ba4aeba55e35991fa1dbf1a458f10eb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Feb 16 21:29:39.941940 master-0 kubenswrapper[38936]: I0216 21:29:39.941888 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 21:29:40.626628 master-0 kubenswrapper[38936]: I0216 21:29:40.626548 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 21:29:49.934953 master-0 kubenswrapper[38936]: I0216 21:29:49.934855 38936 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Feb 16 21:29:49.936130 master-0 kubenswrapper[38936]: I0216 21:29:49.934973 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="9ba4aeba55e35991fa1dbf1a458f10eb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Feb 16 21:29:59.935472 master-0 kubenswrapper[38936]: I0216 21:29:59.935363 38936 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Feb 16 21:29:59.936893 master-0 kubenswrapper[38936]: I0216 21:29:59.935493 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="9ba4aeba55e35991fa1dbf1a458f10eb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Feb 16 21:29:59.936893 master-0 kubenswrapper[38936]: I0216 21:29:59.935624 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 21:29:59.937631 master-0 kubenswrapper[38936]: I0216 21:29:59.937547 38936 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"e143c78ff3ad26e22d616f1f027da7875d84e053916f3f601607d499fd8fc1f7"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Feb 16 21:29:59.939227 master-0 kubenswrapper[38936]: I0216 21:29:59.937893 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="9ba4aeba55e35991fa1dbf1a458f10eb" containerName="kube-controller-manager" containerID="cri-o://e143c78ff3ad26e22d616f1f027da7875d84e053916f3f601607d499fd8fc1f7" gracePeriod=30
Feb 16 21:30:30.049531 master-0 kubenswrapper[38936]: I0216 21:30:30.049463 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_9ba4aeba55e35991fa1dbf1a458f10eb/kube-controller-manager/0.log"
Feb 16 21:30:30.049531 master-0 kubenswrapper[38936]: I0216 21:30:30.049534 38936 generic.go:334] "Generic (PLEG): container finished" podID="9ba4aeba55e35991fa1dbf1a458f10eb" containerID="e143c78ff3ad26e22d616f1f027da7875d84e053916f3f601607d499fd8fc1f7" exitCode=137
Feb 16 21:30:30.050332 master-0 kubenswrapper[38936]: I0216 21:30:30.049569 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9ba4aeba55e35991fa1dbf1a458f10eb","Type":"ContainerDied","Data":"e143c78ff3ad26e22d616f1f027da7875d84e053916f3f601607d499fd8fc1f7"}
Feb 16 21:30:31.080717 master-0 kubenswrapper[38936]: I0216 21:30:31.080617 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_9ba4aeba55e35991fa1dbf1a458f10eb/kube-controller-manager/0.log"
Feb 16 21:30:31.080717 master-0 kubenswrapper[38936]: I0216 21:30:31.080721 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9ba4aeba55e35991fa1dbf1a458f10eb","Type":"ContainerStarted","Data":"f3771b04cccaf4a074c792ce120e5dc67d82b4ddaeb7bb5ae0d9b8772b90bd70"}
Feb 16 21:30:39.934142 master-0 kubenswrapper[38936]: I0216 21:30:39.933965 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 21:30:39.934964 master-0 kubenswrapper[38936]: I0216 21:30:39.934894 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 21:30:39.940061 master-0 kubenswrapper[38936]: I0216 21:30:39.939871 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 21:30:40.156841 master-0 kubenswrapper[38936]: I0216 21:30:40.156795 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 21:30:48.014535 master-0 kubenswrapper[38936]: I0216 21:30:48.014466 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521290-b68r4"]
Feb 16 21:30:48.015197 master-0 kubenswrapper[38936]: E0216 21:30:48.014816 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94fca51c-425f-436b-b260-123e8baca2c0" containerName="installer"
Feb 16 21:30:48.015197 master-0 kubenswrapper[38936]: I0216 21:30:48.014831 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="94fca51c-425f-436b-b260-123e8baca2c0" containerName="installer"
Feb 16 21:30:48.015197 master-0 kubenswrapper[38936]: I0216 21:30:48.014947 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="94fca51c-425f-436b-b260-123e8baca2c0" containerName="installer"
Feb 16 21:30:48.015456 master-0 kubenswrapper[38936]: I0216 21:30:48.015434 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-b68r4"
Feb 16 21:30:48.017229 master-0 kubenswrapper[38936]: I0216 21:30:48.017194 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-r6wp5"
Feb 16 21:30:48.018245 master-0 kubenswrapper[38936]: I0216 21:30:48.017966 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 21:30:48.020839 master-0 kubenswrapper[38936]: I0216 21:30:48.020800 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4"]
Feb 16 21:30:48.022793 master-0 kubenswrapper[38936]: I0216 21:30:48.022752 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4"
Feb 16 21:30:48.024967 master-0 kubenswrapper[38936]: I0216 21:30:48.024938 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-w7b8n"
Feb 16 21:30:48.027872 master-0 kubenswrapper[38936]: I0216 21:30:48.027836 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521290-b68r4"]
Feb 16 21:30:48.040092 master-0 kubenswrapper[38936]: I0216 21:30:48.040031 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4"]
Feb 16 21:30:48.082812 master-0 kubenswrapper[38936]: I0216 21:30:48.082757 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c6afa53-e159-420f-bb3a-4ad64440fa0a-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4\" (UID: \"8c6afa53-e159-420f-bb3a-4ad64440fa0a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4"
Feb 16 21:30:48.082812 master-0 kubenswrapper[38936]: I0216 21:30:48.082816 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27nd7\" (UniqueName: \"kubernetes.io/projected/8c6afa53-e159-420f-bb3a-4ad64440fa0a-kube-api-access-27nd7\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4\" (UID: \"8c6afa53-e159-420f-bb3a-4ad64440fa0a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4"
Feb 16 21:30:48.083065 master-0 kubenswrapper[38936]: I0216 21:30:48.082839 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thwzx\" (UniqueName: \"kubernetes.io/projected/24a1c7d4-4d65-4047-b972-d85cce98fe48-kube-api-access-thwzx\") pod \"collect-profiles-29521290-b68r4\" (UID: \"24a1c7d4-4d65-4047-b972-d85cce98fe48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-b68r4"
Feb 16 21:30:48.083065 master-0 kubenswrapper[38936]: I0216 21:30:48.082858 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24a1c7d4-4d65-4047-b972-d85cce98fe48-config-volume\") pod \"collect-profiles-29521290-b68r4\" (UID: \"24a1c7d4-4d65-4047-b972-d85cce98fe48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-b68r4"
Feb 16 21:30:48.083065 master-0 kubenswrapper[38936]: I0216 21:30:48.082906 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c6afa53-e159-420f-bb3a-4ad64440fa0a-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4\" (UID: \"8c6afa53-e159-420f-bb3a-4ad64440fa0a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4"
Feb 16 21:30:48.083288 master-0 kubenswrapper[38936]: I0216 21:30:48.083234 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24a1c7d4-4d65-4047-b972-d85cce98fe48-secret-volume\") pod \"collect-profiles-29521290-b68r4\" (UID: \"24a1c7d4-4d65-4047-b972-d85cce98fe48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-b68r4"
Feb 16 21:30:48.185446 master-0 kubenswrapper[38936]: I0216 21:30:48.185336 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24a1c7d4-4d65-4047-b972-d85cce98fe48-secret-volume\") pod \"collect-profiles-29521290-b68r4\" (UID: \"24a1c7d4-4d65-4047-b972-d85cce98fe48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-b68r4"
Feb 16 21:30:48.185935 master-0 kubenswrapper[38936]: I0216 21:30:48.185838 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c6afa53-e159-420f-bb3a-4ad64440fa0a-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4\" (UID: \"8c6afa53-e159-420f-bb3a-4ad64440fa0a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4"
Feb 16 21:30:48.186043 master-0 kubenswrapper[38936]: I0216 21:30:48.185991 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thwzx\" (UniqueName: \"kubernetes.io/projected/24a1c7d4-4d65-4047-b972-d85cce98fe48-kube-api-access-thwzx\") pod \"collect-profiles-29521290-b68r4\" (UID: \"24a1c7d4-4d65-4047-b972-d85cce98fe48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-b68r4"
Feb 16 21:30:48.186099 master-0 kubenswrapper[38936]: I0216 21:30:48.186079 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27nd7\" (UniqueName: \"kubernetes.io/projected/8c6afa53-e159-420f-bb3a-4ad64440fa0a-kube-api-access-27nd7\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4\" (UID: \"8c6afa53-e159-420f-bb3a-4ad64440fa0a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4"
Feb 16 21:30:48.186321 master-0 kubenswrapper[38936]: I0216 21:30:48.186279 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24a1c7d4-4d65-4047-b972-d85cce98fe48-config-volume\") pod \"collect-profiles-29521290-b68r4\" (UID: \"24a1c7d4-4d65-4047-b972-d85cce98fe48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-b68r4"
Feb 16 21:30:48.186424 master-0 kubenswrapper[38936]: I0216 21:30:48.186394 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c6afa53-e159-420f-bb3a-4ad64440fa0a-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4\" (UID: \"8c6afa53-e159-420f-bb3a-4ad64440fa0a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4"
Feb 16 21:30:48.186569 master-0 kubenswrapper[38936]: I0216 21:30:48.186537 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c6afa53-e159-420f-bb3a-4ad64440fa0a-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4\" (UID: \"8c6afa53-e159-420f-bb3a-4ad64440fa0a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4"
Feb 16 21:30:48.186888 master-0 kubenswrapper[38936]: I0216 21:30:48.186860 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c6afa53-e159-420f-bb3a-4ad64440fa0a-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4\" (UID: \"8c6afa53-e159-420f-bb3a-4ad64440fa0a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4"
Feb 16 21:30:48.187470 master-0 kubenswrapper[38936]: I0216 21:30:48.187427 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24a1c7d4-4d65-4047-b972-d85cce98fe48-config-volume\") pod \"collect-profiles-29521290-b68r4\" (UID: \"24a1c7d4-4d65-4047-b972-d85cce98fe48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-b68r4"
Feb 16 21:30:48.190295 master-0 kubenswrapper[38936]: I0216 21:30:48.190249 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24a1c7d4-4d65-4047-b972-d85cce98fe48-secret-volume\") pod \"collect-profiles-29521290-b68r4\" (UID: \"24a1c7d4-4d65-4047-b972-d85cce98fe48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-b68r4"
Feb 16 21:30:48.206338 master-0 kubenswrapper[38936]: I0216 21:30:48.206286 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thwzx\" (UniqueName: \"kubernetes.io/projected/24a1c7d4-4d65-4047-b972-d85cce98fe48-kube-api-access-thwzx\") pod \"collect-profiles-29521290-b68r4\" (UID: \"24a1c7d4-4d65-4047-b972-d85cce98fe48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-b68r4"
Feb 16 21:30:48.206980 master-0 kubenswrapper[38936]: I0216 21:30:48.206927 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27nd7\" (UniqueName: \"kubernetes.io/projected/8c6afa53-e159-420f-bb3a-4ad64440fa0a-kube-api-access-27nd7\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4\" (UID: \"8c6afa53-e159-420f-bb3a-4ad64440fa0a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4"
Feb 16 21:30:48.334721 master-0 kubenswrapper[38936]: I0216 21:30:48.334569 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-b68r4"
Feb 16 21:30:48.346830 master-0 kubenswrapper[38936]: I0216 21:30:48.346793 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4" Feb 16 21:30:48.806121 master-0 kubenswrapper[38936]: I0216 21:30:48.806021 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521290-b68r4"] Feb 16 21:30:48.809955 master-0 kubenswrapper[38936]: W0216 21:30:48.809886 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24a1c7d4_4d65_4047_b972_d85cce98fe48.slice/crio-14c0126b9b7a95e2eb957a81b3cd6fa4d885d50ee70ce4547310804969cbd337 WatchSource:0}: Error finding container 14c0126b9b7a95e2eb957a81b3cd6fa4d885d50ee70ce4547310804969cbd337: Status 404 returned error can't find the container with id 14c0126b9b7a95e2eb957a81b3cd6fa4d885d50ee70ce4547310804969cbd337 Feb 16 21:30:48.885537 master-0 kubenswrapper[38936]: I0216 21:30:48.885464 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4"] Feb 16 21:30:48.903928 master-0 kubenswrapper[38936]: W0216 21:30:48.903485 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c6afa53_e159_420f_bb3a_4ad64440fa0a.slice/crio-17e54e67691ec51bddf8f12075fbe062a19ff04fc7ebc288a335191c9744712b WatchSource:0}: Error finding container 17e54e67691ec51bddf8f12075fbe062a19ff04fc7ebc288a335191c9744712b: Status 404 returned error can't find the container with id 17e54e67691ec51bddf8f12075fbe062a19ff04fc7ebc288a335191c9744712b Feb 16 21:30:49.232672 master-0 kubenswrapper[38936]: I0216 21:30:49.232600 38936 generic.go:334] "Generic (PLEG): container finished" podID="8c6afa53-e159-420f-bb3a-4ad64440fa0a" containerID="3a1c8e026a527f4235e87cb837c520a7efb77d12a29dc36d6b8d9427ac920c73" exitCode=0 Feb 16 21:30:49.233237 master-0 kubenswrapper[38936]: I0216 
21:30:49.232673 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4" event={"ID":"8c6afa53-e159-420f-bb3a-4ad64440fa0a","Type":"ContainerDied","Data":"3a1c8e026a527f4235e87cb837c520a7efb77d12a29dc36d6b8d9427ac920c73"} Feb 16 21:30:49.233237 master-0 kubenswrapper[38936]: I0216 21:30:49.232742 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4" event={"ID":"8c6afa53-e159-420f-bb3a-4ad64440fa0a","Type":"ContainerStarted","Data":"17e54e67691ec51bddf8f12075fbe062a19ff04fc7ebc288a335191c9744712b"} Feb 16 21:30:49.235216 master-0 kubenswrapper[38936]: I0216 21:30:49.235146 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-b68r4" event={"ID":"24a1c7d4-4d65-4047-b972-d85cce98fe48","Type":"ContainerDied","Data":"51c2317c24ff00faccacb193244105e3ec64f883868aa13130510e611024da6e"} Feb 16 21:30:49.235328 master-0 kubenswrapper[38936]: I0216 21:30:49.235238 38936 generic.go:334] "Generic (PLEG): container finished" podID="24a1c7d4-4d65-4047-b972-d85cce98fe48" containerID="51c2317c24ff00faccacb193244105e3ec64f883868aa13130510e611024da6e" exitCode=0 Feb 16 21:30:49.235328 master-0 kubenswrapper[38936]: I0216 21:30:49.235311 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-b68r4" event={"ID":"24a1c7d4-4d65-4047-b972-d85cce98fe48","Type":"ContainerStarted","Data":"14c0126b9b7a95e2eb957a81b3cd6fa4d885d50ee70ce4547310804969cbd337"} Feb 16 21:30:50.601157 master-0 kubenswrapper[38936]: I0216 21:30:50.601100 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-b68r4" Feb 16 21:30:50.625256 master-0 kubenswrapper[38936]: I0216 21:30:50.625199 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24a1c7d4-4d65-4047-b972-d85cce98fe48-config-volume\") pod \"24a1c7d4-4d65-4047-b972-d85cce98fe48\" (UID: \"24a1c7d4-4d65-4047-b972-d85cce98fe48\") " Feb 16 21:30:50.625372 master-0 kubenswrapper[38936]: I0216 21:30:50.625289 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thwzx\" (UniqueName: \"kubernetes.io/projected/24a1c7d4-4d65-4047-b972-d85cce98fe48-kube-api-access-thwzx\") pod \"24a1c7d4-4d65-4047-b972-d85cce98fe48\" (UID: \"24a1c7d4-4d65-4047-b972-d85cce98fe48\") " Feb 16 21:30:50.625415 master-0 kubenswrapper[38936]: I0216 21:30:50.625403 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24a1c7d4-4d65-4047-b972-d85cce98fe48-secret-volume\") pod \"24a1c7d4-4d65-4047-b972-d85cce98fe48\" (UID: \"24a1c7d4-4d65-4047-b972-d85cce98fe48\") " Feb 16 21:30:50.625912 master-0 kubenswrapper[38936]: I0216 21:30:50.625855 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24a1c7d4-4d65-4047-b972-d85cce98fe48-config-volume" (OuterVolumeSpecName: "config-volume") pod "24a1c7d4-4d65-4047-b972-d85cce98fe48" (UID: "24a1c7d4-4d65-4047-b972-d85cce98fe48"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:30:50.630423 master-0 kubenswrapper[38936]: I0216 21:30:50.630373 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24a1c7d4-4d65-4047-b972-d85cce98fe48-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "24a1c7d4-4d65-4047-b972-d85cce98fe48" (UID: "24a1c7d4-4d65-4047-b972-d85cce98fe48"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:30:50.631404 master-0 kubenswrapper[38936]: I0216 21:30:50.631331 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24a1c7d4-4d65-4047-b972-d85cce98fe48-kube-api-access-thwzx" (OuterVolumeSpecName: "kube-api-access-thwzx") pod "24a1c7d4-4d65-4047-b972-d85cce98fe48" (UID: "24a1c7d4-4d65-4047-b972-d85cce98fe48"). InnerVolumeSpecName "kube-api-access-thwzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:30:50.726858 master-0 kubenswrapper[38936]: I0216 21:30:50.726803 38936 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24a1c7d4-4d65-4047-b972-d85cce98fe48-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 16 21:30:50.726858 master-0 kubenswrapper[38936]: I0216 21:30:50.726841 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thwzx\" (UniqueName: \"kubernetes.io/projected/24a1c7d4-4d65-4047-b972-d85cce98fe48-kube-api-access-thwzx\") on node \"master-0\" DevicePath \"\"" Feb 16 21:30:50.726858 master-0 kubenswrapper[38936]: I0216 21:30:50.726854 38936 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24a1c7d4-4d65-4047-b972-d85cce98fe48-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 16 21:30:51.250366 master-0 kubenswrapper[38936]: I0216 21:30:51.250298 38936 generic.go:334] "Generic (PLEG): container finished" 
podID="8c6afa53-e159-420f-bb3a-4ad64440fa0a" containerID="765f856019c72acbbecca339a08ff2c399c933305ca6e47efe7a96dd72fada01" exitCode=0 Feb 16 21:30:51.250588 master-0 kubenswrapper[38936]: I0216 21:30:51.250362 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4" event={"ID":"8c6afa53-e159-420f-bb3a-4ad64440fa0a","Type":"ContainerDied","Data":"765f856019c72acbbecca339a08ff2c399c933305ca6e47efe7a96dd72fada01"} Feb 16 21:30:51.252349 master-0 kubenswrapper[38936]: I0216 21:30:51.252301 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-b68r4" event={"ID":"24a1c7d4-4d65-4047-b972-d85cce98fe48","Type":"ContainerDied","Data":"14c0126b9b7a95e2eb957a81b3cd6fa4d885d50ee70ce4547310804969cbd337"} Feb 16 21:30:51.252435 master-0 kubenswrapper[38936]: I0216 21:30:51.252358 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14c0126b9b7a95e2eb957a81b3cd6fa4d885d50ee70ce4547310804969cbd337" Feb 16 21:30:51.252485 master-0 kubenswrapper[38936]: I0216 21:30:51.252436 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521290-b68r4" Feb 16 21:30:52.264312 master-0 kubenswrapper[38936]: I0216 21:30:52.264253 38936 generic.go:334] "Generic (PLEG): container finished" podID="8c6afa53-e159-420f-bb3a-4ad64440fa0a" containerID="f4fcb98d98bd77acb098967e902dc43fdf8e2e01d4d5c90cfaabd95b2f4ea02c" exitCode=0 Feb 16 21:30:52.264312 master-0 kubenswrapper[38936]: I0216 21:30:52.264304 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4" event={"ID":"8c6afa53-e159-420f-bb3a-4ad64440fa0a","Type":"ContainerDied","Data":"f4fcb98d98bd77acb098967e902dc43fdf8e2e01d4d5c90cfaabd95b2f4ea02c"} Feb 16 21:30:53.634320 master-0 kubenswrapper[38936]: I0216 21:30:53.634252 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4" Feb 16 21:30:53.674280 master-0 kubenswrapper[38936]: I0216 21:30:53.674194 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c6afa53-e159-420f-bb3a-4ad64440fa0a-util\") pod \"8c6afa53-e159-420f-bb3a-4ad64440fa0a\" (UID: \"8c6afa53-e159-420f-bb3a-4ad64440fa0a\") " Feb 16 21:30:53.674817 master-0 kubenswrapper[38936]: I0216 21:30:53.674791 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27nd7\" (UniqueName: \"kubernetes.io/projected/8c6afa53-e159-420f-bb3a-4ad64440fa0a-kube-api-access-27nd7\") pod \"8c6afa53-e159-420f-bb3a-4ad64440fa0a\" (UID: \"8c6afa53-e159-420f-bb3a-4ad64440fa0a\") " Feb 16 21:30:53.675019 master-0 kubenswrapper[38936]: I0216 21:30:53.675000 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c6afa53-e159-420f-bb3a-4ad64440fa0a-bundle\") pod 
\"8c6afa53-e159-420f-bb3a-4ad64440fa0a\" (UID: \"8c6afa53-e159-420f-bb3a-4ad64440fa0a\") " Feb 16 21:30:53.676076 master-0 kubenswrapper[38936]: I0216 21:30:53.676000 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c6afa53-e159-420f-bb3a-4ad64440fa0a-bundle" (OuterVolumeSpecName: "bundle") pod "8c6afa53-e159-420f-bb3a-4ad64440fa0a" (UID: "8c6afa53-e159-420f-bb3a-4ad64440fa0a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:30:53.677740 master-0 kubenswrapper[38936]: I0216 21:30:53.677671 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c6afa53-e159-420f-bb3a-4ad64440fa0a-kube-api-access-27nd7" (OuterVolumeSpecName: "kube-api-access-27nd7") pod "8c6afa53-e159-420f-bb3a-4ad64440fa0a" (UID: "8c6afa53-e159-420f-bb3a-4ad64440fa0a"). InnerVolumeSpecName "kube-api-access-27nd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:30:53.691422 master-0 kubenswrapper[38936]: I0216 21:30:53.691330 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c6afa53-e159-420f-bb3a-4ad64440fa0a-util" (OuterVolumeSpecName: "util") pod "8c6afa53-e159-420f-bb3a-4ad64440fa0a" (UID: "8c6afa53-e159-420f-bb3a-4ad64440fa0a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:30:53.777638 master-0 kubenswrapper[38936]: I0216 21:30:53.777542 38936 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8c6afa53-e159-420f-bb3a-4ad64440fa0a-util\") on node \"master-0\" DevicePath \"\"" Feb 16 21:30:53.777638 master-0 kubenswrapper[38936]: I0216 21:30:53.777594 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27nd7\" (UniqueName: \"kubernetes.io/projected/8c6afa53-e159-420f-bb3a-4ad64440fa0a-kube-api-access-27nd7\") on node \"master-0\" DevicePath \"\"" Feb 16 21:30:53.777638 master-0 kubenswrapper[38936]: I0216 21:30:53.777605 38936 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8c6afa53-e159-420f-bb3a-4ad64440fa0a-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:30:54.286341 master-0 kubenswrapper[38936]: I0216 21:30:54.286245 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4" event={"ID":"8c6afa53-e159-420f-bb3a-4ad64440fa0a","Type":"ContainerDied","Data":"17e54e67691ec51bddf8f12075fbe062a19ff04fc7ebc288a335191c9744712b"} Feb 16 21:30:54.286341 master-0 kubenswrapper[38936]: I0216 21:30:54.286340 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17e54e67691ec51bddf8f12075fbe062a19ff04fc7ebc288a335191c9744712b" Feb 16 21:30:54.286816 master-0 kubenswrapper[38936]: I0216 21:30:54.286419 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4" Feb 16 21:31:01.969265 master-0 kubenswrapper[38936]: I0216 21:31:01.969199 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-d88c7bb97-t9xpf"] Feb 16 21:31:01.970065 master-0 kubenswrapper[38936]: E0216 21:31:01.969495 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c6afa53-e159-420f-bb3a-4ad64440fa0a" containerName="pull" Feb 16 21:31:01.970065 master-0 kubenswrapper[38936]: I0216 21:31:01.969508 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c6afa53-e159-420f-bb3a-4ad64440fa0a" containerName="pull" Feb 16 21:31:01.970065 master-0 kubenswrapper[38936]: E0216 21:31:01.969545 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c6afa53-e159-420f-bb3a-4ad64440fa0a" containerName="util" Feb 16 21:31:01.970065 master-0 kubenswrapper[38936]: I0216 21:31:01.969551 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c6afa53-e159-420f-bb3a-4ad64440fa0a" containerName="util" Feb 16 21:31:01.970065 master-0 kubenswrapper[38936]: E0216 21:31:01.969567 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a1c7d4-4d65-4047-b972-d85cce98fe48" containerName="collect-profiles" Feb 16 21:31:01.970065 master-0 kubenswrapper[38936]: I0216 21:31:01.969574 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a1c7d4-4d65-4047-b972-d85cce98fe48" containerName="collect-profiles" Feb 16 21:31:01.970065 master-0 kubenswrapper[38936]: E0216 21:31:01.969585 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c6afa53-e159-420f-bb3a-4ad64440fa0a" containerName="extract" Feb 16 21:31:01.970065 master-0 kubenswrapper[38936]: I0216 21:31:01.969592 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c6afa53-e159-420f-bb3a-4ad64440fa0a" containerName="extract" Feb 16 21:31:01.970305 master-0 kubenswrapper[38936]: 
I0216 21:31:01.970278 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c6afa53-e159-420f-bb3a-4ad64440fa0a" containerName="extract" Feb 16 21:31:01.970348 master-0 kubenswrapper[38936]: I0216 21:31:01.970328 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="24a1c7d4-4d65-4047-b972-d85cce98fe48" containerName="collect-profiles" Feb 16 21:31:01.970881 master-0 kubenswrapper[38936]: I0216 21:31:01.970856 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-d88c7bb97-t9xpf" Feb 16 21:31:01.974494 master-0 kubenswrapper[38936]: I0216 21:31:01.974440 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt" Feb 16 21:31:01.974578 master-0 kubenswrapper[38936]: I0216 21:31:01.974530 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert" Feb 16 21:31:01.974839 master-0 kubenswrapper[38936]: I0216 21:31:01.974812 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt" Feb 16 21:31:01.975211 master-0 kubenswrapper[38936]: I0216 21:31:01.975197 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert" Feb 16 21:31:01.975622 master-0 kubenswrapper[38936]: I0216 21:31:01.975580 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert" Feb 16 21:31:01.998418 master-0 kubenswrapper[38936]: I0216 21:31:01.998167 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-d88c7bb97-t9xpf"] Feb 16 21:31:02.041713 master-0 kubenswrapper[38936]: I0216 21:31:02.041621 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghlms\" (UniqueName: 
\"kubernetes.io/projected/0190a228-73bd-4601-81ea-31b07e7ce437-kube-api-access-ghlms\") pod \"lvms-operator-d88c7bb97-t9xpf\" (UID: \"0190a228-73bd-4601-81ea-31b07e7ce437\") " pod="openshift-storage/lvms-operator-d88c7bb97-t9xpf" Feb 16 21:31:02.041713 master-0 kubenswrapper[38936]: I0216 21:31:02.041715 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/0190a228-73bd-4601-81ea-31b07e7ce437-metrics-cert\") pod \"lvms-operator-d88c7bb97-t9xpf\" (UID: \"0190a228-73bd-4601-81ea-31b07e7ce437\") " pod="openshift-storage/lvms-operator-d88c7bb97-t9xpf" Feb 16 21:31:02.041964 master-0 kubenswrapper[38936]: I0216 21:31:02.041740 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0190a228-73bd-4601-81ea-31b07e7ce437-socket-dir\") pod \"lvms-operator-d88c7bb97-t9xpf\" (UID: \"0190a228-73bd-4601-81ea-31b07e7ce437\") " pod="openshift-storage/lvms-operator-d88c7bb97-t9xpf" Feb 16 21:31:02.041964 master-0 kubenswrapper[38936]: I0216 21:31:02.041791 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0190a228-73bd-4601-81ea-31b07e7ce437-apiservice-cert\") pod \"lvms-operator-d88c7bb97-t9xpf\" (UID: \"0190a228-73bd-4601-81ea-31b07e7ce437\") " pod="openshift-storage/lvms-operator-d88c7bb97-t9xpf" Feb 16 21:31:02.041964 master-0 kubenswrapper[38936]: I0216 21:31:02.041857 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0190a228-73bd-4601-81ea-31b07e7ce437-webhook-cert\") pod \"lvms-operator-d88c7bb97-t9xpf\" (UID: \"0190a228-73bd-4601-81ea-31b07e7ce437\") " pod="openshift-storage/lvms-operator-d88c7bb97-t9xpf" Feb 16 21:31:02.144608 master-0 kubenswrapper[38936]: I0216 
21:31:02.144549 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0190a228-73bd-4601-81ea-31b07e7ce437-webhook-cert\") pod \"lvms-operator-d88c7bb97-t9xpf\" (UID: \"0190a228-73bd-4601-81ea-31b07e7ce437\") " pod="openshift-storage/lvms-operator-d88c7bb97-t9xpf" Feb 16 21:31:02.144608 master-0 kubenswrapper[38936]: I0216 21:31:02.144615 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghlms\" (UniqueName: \"kubernetes.io/projected/0190a228-73bd-4601-81ea-31b07e7ce437-kube-api-access-ghlms\") pod \"lvms-operator-d88c7bb97-t9xpf\" (UID: \"0190a228-73bd-4601-81ea-31b07e7ce437\") " pod="openshift-storage/lvms-operator-d88c7bb97-t9xpf" Feb 16 21:31:02.144908 master-0 kubenswrapper[38936]: I0216 21:31:02.144636 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/0190a228-73bd-4601-81ea-31b07e7ce437-metrics-cert\") pod \"lvms-operator-d88c7bb97-t9xpf\" (UID: \"0190a228-73bd-4601-81ea-31b07e7ce437\") " pod="openshift-storage/lvms-operator-d88c7bb97-t9xpf" Feb 16 21:31:02.144908 master-0 kubenswrapper[38936]: I0216 21:31:02.144674 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0190a228-73bd-4601-81ea-31b07e7ce437-socket-dir\") pod \"lvms-operator-d88c7bb97-t9xpf\" (UID: \"0190a228-73bd-4601-81ea-31b07e7ce437\") " pod="openshift-storage/lvms-operator-d88c7bb97-t9xpf" Feb 16 21:31:02.145073 master-0 kubenswrapper[38936]: I0216 21:31:02.145045 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0190a228-73bd-4601-81ea-31b07e7ce437-apiservice-cert\") pod \"lvms-operator-d88c7bb97-t9xpf\" (UID: \"0190a228-73bd-4601-81ea-31b07e7ce437\") " pod="openshift-storage/lvms-operator-d88c7bb97-t9xpf" Feb 
16 21:31:02.146180 master-0 kubenswrapper[38936]: I0216 21:31:02.145933 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0190a228-73bd-4601-81ea-31b07e7ce437-socket-dir\") pod \"lvms-operator-d88c7bb97-t9xpf\" (UID: \"0190a228-73bd-4601-81ea-31b07e7ce437\") " pod="openshift-storage/lvms-operator-d88c7bb97-t9xpf" Feb 16 21:31:02.148537 master-0 kubenswrapper[38936]: I0216 21:31:02.148485 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0190a228-73bd-4601-81ea-31b07e7ce437-webhook-cert\") pod \"lvms-operator-d88c7bb97-t9xpf\" (UID: \"0190a228-73bd-4601-81ea-31b07e7ce437\") " pod="openshift-storage/lvms-operator-d88c7bb97-t9xpf" Feb 16 21:31:02.149284 master-0 kubenswrapper[38936]: I0216 21:31:02.149224 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/0190a228-73bd-4601-81ea-31b07e7ce437-metrics-cert\") pod \"lvms-operator-d88c7bb97-t9xpf\" (UID: \"0190a228-73bd-4601-81ea-31b07e7ce437\") " pod="openshift-storage/lvms-operator-d88c7bb97-t9xpf" Feb 16 21:31:02.155565 master-0 kubenswrapper[38936]: I0216 21:31:02.155504 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0190a228-73bd-4601-81ea-31b07e7ce437-apiservice-cert\") pod \"lvms-operator-d88c7bb97-t9xpf\" (UID: \"0190a228-73bd-4601-81ea-31b07e7ce437\") " pod="openshift-storage/lvms-operator-d88c7bb97-t9xpf" Feb 16 21:31:02.168669 master-0 kubenswrapper[38936]: I0216 21:31:02.168585 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghlms\" (UniqueName: \"kubernetes.io/projected/0190a228-73bd-4601-81ea-31b07e7ce437-kube-api-access-ghlms\") pod \"lvms-operator-d88c7bb97-t9xpf\" (UID: \"0190a228-73bd-4601-81ea-31b07e7ce437\") " 
pod="openshift-storage/lvms-operator-d88c7bb97-t9xpf" Feb 16 21:31:02.286159 master-0 kubenswrapper[38936]: I0216 21:31:02.286089 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-d88c7bb97-t9xpf" Feb 16 21:31:02.753225 master-0 kubenswrapper[38936]: I0216 21:31:02.753160 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-d88c7bb97-t9xpf"] Feb 16 21:31:02.757885 master-0 kubenswrapper[38936]: W0216 21:31:02.757822 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0190a228_73bd_4601_81ea_31b07e7ce437.slice/crio-86d335af488593df2a2719ecf574b9e7412fd78979a60fff0967c7b69efae4c4 WatchSource:0}: Error finding container 86d335af488593df2a2719ecf574b9e7412fd78979a60fff0967c7b69efae4c4: Status 404 returned error can't find the container with id 86d335af488593df2a2719ecf574b9e7412fd78979a60fff0967c7b69efae4c4 Feb 16 21:31:03.384556 master-0 kubenswrapper[38936]: I0216 21:31:03.384466 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-d88c7bb97-t9xpf" event={"ID":"0190a228-73bd-4601-81ea-31b07e7ce437","Type":"ContainerStarted","Data":"86d335af488593df2a2719ecf574b9e7412fd78979a60fff0967c7b69efae4c4"} Feb 16 21:31:08.426118 master-0 kubenswrapper[38936]: I0216 21:31:08.426063 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-d88c7bb97-t9xpf" event={"ID":"0190a228-73bd-4601-81ea-31b07e7ce437","Type":"ContainerStarted","Data":"ce2d0ba2f8f9905ba8354daf32b3e2f76c604ca6881908d862e68096940dd9b3"} Feb 16 21:31:08.426759 master-0 kubenswrapper[38936]: I0216 21:31:08.426390 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-d88c7bb97-t9xpf" Feb 16 21:31:08.432473 master-0 kubenswrapper[38936]: I0216 21:31:08.432414 38936 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-storage/lvms-operator-d88c7bb97-t9xpf" Feb 16 21:31:08.478800 master-0 kubenswrapper[38936]: I0216 21:31:08.478686 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-d88c7bb97-t9xpf" podStartSLOduration=2.872426263 podStartE2EDuration="7.478632367s" podCreationTimestamp="2026-02-16 21:31:01 +0000 UTC" firstStartedPulling="2026-02-16 21:31:02.760729325 +0000 UTC m=+493.112732687" lastFinishedPulling="2026-02-16 21:31:07.366935419 +0000 UTC m=+497.718938791" observedRunningTime="2026-02-16 21:31:08.451674928 +0000 UTC m=+498.803678320" watchObservedRunningTime="2026-02-16 21:31:08.478632367 +0000 UTC m=+498.830635739" Feb 16 21:31:12.191520 master-0 kubenswrapper[38936]: I0216 21:31:12.191416 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj"] Feb 16 21:31:12.193220 master-0 kubenswrapper[38936]: I0216 21:31:12.193185 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj" Feb 16 21:31:12.195962 master-0 kubenswrapper[38936]: I0216 21:31:12.195911 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-w7b8n" Feb 16 21:31:12.207799 master-0 kubenswrapper[38936]: I0216 21:31:12.206489 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj"] Feb 16 21:31:12.339645 master-0 kubenswrapper[38936]: I0216 21:31:12.339374 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b997c4b8-dbd3-44f3-8b30-2617a449f54d-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj\" (UID: \"b997c4b8-dbd3-44f3-8b30-2617a449f54d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj" Feb 16 21:31:12.339901 master-0 kubenswrapper[38936]: I0216 21:31:12.339675 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b997c4b8-dbd3-44f3-8b30-2617a449f54d-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj\" (UID: \"b997c4b8-dbd3-44f3-8b30-2617a449f54d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj" Feb 16 21:31:12.339901 master-0 kubenswrapper[38936]: I0216 21:31:12.339764 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hg4g\" (UniqueName: \"kubernetes.io/projected/b997c4b8-dbd3-44f3-8b30-2617a449f54d-kube-api-access-4hg4g\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj\" (UID: \"b997c4b8-dbd3-44f3-8b30-2617a449f54d\") " 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj" Feb 16 21:31:12.441490 master-0 kubenswrapper[38936]: I0216 21:31:12.441439 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b997c4b8-dbd3-44f3-8b30-2617a449f54d-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj\" (UID: \"b997c4b8-dbd3-44f3-8b30-2617a449f54d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj" Feb 16 21:31:12.441946 master-0 kubenswrapper[38936]: I0216 21:31:12.441901 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hg4g\" (UniqueName: \"kubernetes.io/projected/b997c4b8-dbd3-44f3-8b30-2617a449f54d-kube-api-access-4hg4g\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj\" (UID: \"b997c4b8-dbd3-44f3-8b30-2617a449f54d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj" Feb 16 21:31:12.442098 master-0 kubenswrapper[38936]: I0216 21:31:12.442082 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b997c4b8-dbd3-44f3-8b30-2617a449f54d-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj\" (UID: \"b997c4b8-dbd3-44f3-8b30-2617a449f54d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj" Feb 16 21:31:12.442330 master-0 kubenswrapper[38936]: I0216 21:31:12.442274 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b997c4b8-dbd3-44f3-8b30-2617a449f54d-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj\" (UID: \"b997c4b8-dbd3-44f3-8b30-2617a449f54d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj" Feb 16 
21:31:12.443365 master-0 kubenswrapper[38936]: I0216 21:31:12.443286 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b997c4b8-dbd3-44f3-8b30-2617a449f54d-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj\" (UID: \"b997c4b8-dbd3-44f3-8b30-2617a449f54d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj" Feb 16 21:31:12.459440 master-0 kubenswrapper[38936]: I0216 21:31:12.459385 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hg4g\" (UniqueName: \"kubernetes.io/projected/b997c4b8-dbd3-44f3-8b30-2617a449f54d-kube-api-access-4hg4g\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj\" (UID: \"b997c4b8-dbd3-44f3-8b30-2617a449f54d\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj" Feb 16 21:31:12.511034 master-0 kubenswrapper[38936]: I0216 21:31:12.510945 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj" Feb 16 21:31:12.799688 master-0 kubenswrapper[38936]: I0216 21:31:12.799588 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8"] Feb 16 21:31:12.801857 master-0 kubenswrapper[38936]: I0216 21:31:12.801823 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8" Feb 16 21:31:12.810679 master-0 kubenswrapper[38936]: I0216 21:31:12.810612 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8"] Feb 16 21:31:12.955691 master-0 kubenswrapper[38936]: I0216 21:31:12.954483 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xhv4\" (UniqueName: \"kubernetes.io/projected/280362df-b3d3-406e-9f0e-e6f53aba3d53-kube-api-access-5xhv4\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8\" (UID: \"280362df-b3d3-406e-9f0e-e6f53aba3d53\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8" Feb 16 21:31:12.955691 master-0 kubenswrapper[38936]: I0216 21:31:12.954638 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/280362df-b3d3-406e-9f0e-e6f53aba3d53-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8\" (UID: \"280362df-b3d3-406e-9f0e-e6f53aba3d53\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8" Feb 16 21:31:12.955691 master-0 kubenswrapper[38936]: I0216 21:31:12.954708 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/280362df-b3d3-406e-9f0e-e6f53aba3d53-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8\" (UID: \"280362df-b3d3-406e-9f0e-e6f53aba3d53\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8" Feb 16 21:31:13.028203 master-0 kubenswrapper[38936]: I0216 21:31:13.028121 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj"] Feb 16 21:31:13.032779 master-0 kubenswrapper[38936]: W0216 21:31:13.032067 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb997c4b8_dbd3_44f3_8b30_2617a449f54d.slice/crio-03d651ec36ba79fc24b36196ae68a102bbf33ed71ad77aa95a4cde7d9e518402 WatchSource:0}: Error finding container 03d651ec36ba79fc24b36196ae68a102bbf33ed71ad77aa95a4cde7d9e518402: Status 404 returned error can't find the container with id 03d651ec36ba79fc24b36196ae68a102bbf33ed71ad77aa95a4cde7d9e518402 Feb 16 21:31:13.056572 master-0 kubenswrapper[38936]: I0216 21:31:13.056426 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xhv4\" (UniqueName: \"kubernetes.io/projected/280362df-b3d3-406e-9f0e-e6f53aba3d53-kube-api-access-5xhv4\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8\" (UID: \"280362df-b3d3-406e-9f0e-e6f53aba3d53\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8" Feb 16 21:31:13.056939 master-0 kubenswrapper[38936]: I0216 21:31:13.056571 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/280362df-b3d3-406e-9f0e-e6f53aba3d53-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8\" (UID: \"280362df-b3d3-406e-9f0e-e6f53aba3d53\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8" Feb 16 21:31:13.056939 master-0 kubenswrapper[38936]: I0216 21:31:13.056614 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/280362df-b3d3-406e-9f0e-e6f53aba3d53-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8\" (UID: \"280362df-b3d3-406e-9f0e-e6f53aba3d53\") " 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8" Feb 16 21:31:13.057350 master-0 kubenswrapper[38936]: I0216 21:31:13.057309 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/280362df-b3d3-406e-9f0e-e6f53aba3d53-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8\" (UID: \"280362df-b3d3-406e-9f0e-e6f53aba3d53\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8" Feb 16 21:31:13.057513 master-0 kubenswrapper[38936]: I0216 21:31:13.057400 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/280362df-b3d3-406e-9f0e-e6f53aba3d53-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8\" (UID: \"280362df-b3d3-406e-9f0e-e6f53aba3d53\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8" Feb 16 21:31:13.079725 master-0 kubenswrapper[38936]: I0216 21:31:13.079634 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xhv4\" (UniqueName: \"kubernetes.io/projected/280362df-b3d3-406e-9f0e-e6f53aba3d53-kube-api-access-5xhv4\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8\" (UID: \"280362df-b3d3-406e-9f0e-e6f53aba3d53\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8" Feb 16 21:31:13.118038 master-0 kubenswrapper[38936]: I0216 21:31:13.117969 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8" Feb 16 21:31:13.466380 master-0 kubenswrapper[38936]: I0216 21:31:13.466236 38936 generic.go:334] "Generic (PLEG): container finished" podID="b997c4b8-dbd3-44f3-8b30-2617a449f54d" containerID="86a49c18ac03c95f759b14a99389f01cc42893d6734a8c1e2a32a61103ff1c50" exitCode=0 Feb 16 21:31:13.466380 master-0 kubenswrapper[38936]: I0216 21:31:13.466300 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj" event={"ID":"b997c4b8-dbd3-44f3-8b30-2617a449f54d","Type":"ContainerDied","Data":"86a49c18ac03c95f759b14a99389f01cc42893d6734a8c1e2a32a61103ff1c50"} Feb 16 21:31:13.466380 master-0 kubenswrapper[38936]: I0216 21:31:13.466372 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj" event={"ID":"b997c4b8-dbd3-44f3-8b30-2617a449f54d","Type":"ContainerStarted","Data":"03d651ec36ba79fc24b36196ae68a102bbf33ed71ad77aa95a4cde7d9e518402"} Feb 16 21:31:13.508450 master-0 kubenswrapper[38936]: I0216 21:31:13.506520 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8"] Feb 16 21:31:13.508450 master-0 kubenswrapper[38936]: W0216 21:31:13.507926 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod280362df_b3d3_406e_9f0e_e6f53aba3d53.slice/crio-583129fd3b2d0955bdc7891c3ee71b32ba6e65e8d1d5bc0ce8c68b24cc3e54fc WatchSource:0}: Error finding container 583129fd3b2d0955bdc7891c3ee71b32ba6e65e8d1d5bc0ce8c68b24cc3e54fc: Status 404 returned error can't find the container with id 583129fd3b2d0955bdc7891c3ee71b32ba6e65e8d1d5bc0ce8c68b24cc3e54fc Feb 16 21:31:13.584658 master-0 kubenswrapper[38936]: I0216 21:31:13.584595 38936 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5"] Feb 16 21:31:13.586355 master-0 kubenswrapper[38936]: I0216 21:31:13.586320 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5" Feb 16 21:31:13.597413 master-0 kubenswrapper[38936]: I0216 21:31:13.597340 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5"] Feb 16 21:31:13.666830 master-0 kubenswrapper[38936]: I0216 21:31:13.666734 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l46gf\" (UniqueName: \"kubernetes.io/projected/345387c1-fe5b-48c5-86f5-b21ed7619c0d-kube-api-access-l46gf\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5\" (UID: \"345387c1-fe5b-48c5-86f5-b21ed7619c0d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5" Feb 16 21:31:13.666830 master-0 kubenswrapper[38936]: I0216 21:31:13.666829 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/345387c1-fe5b-48c5-86f5-b21ed7619c0d-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5\" (UID: \"345387c1-fe5b-48c5-86f5-b21ed7619c0d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5" Feb 16 21:31:13.667089 master-0 kubenswrapper[38936]: I0216 21:31:13.666865 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/345387c1-fe5b-48c5-86f5-b21ed7619c0d-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5\" (UID: 
\"345387c1-fe5b-48c5-86f5-b21ed7619c0d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5" Feb 16 21:31:13.768257 master-0 kubenswrapper[38936]: I0216 21:31:13.768186 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l46gf\" (UniqueName: \"kubernetes.io/projected/345387c1-fe5b-48c5-86f5-b21ed7619c0d-kube-api-access-l46gf\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5\" (UID: \"345387c1-fe5b-48c5-86f5-b21ed7619c0d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5" Feb 16 21:31:13.768257 master-0 kubenswrapper[38936]: I0216 21:31:13.768252 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/345387c1-fe5b-48c5-86f5-b21ed7619c0d-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5\" (UID: \"345387c1-fe5b-48c5-86f5-b21ed7619c0d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5" Feb 16 21:31:13.768602 master-0 kubenswrapper[38936]: I0216 21:31:13.768528 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/345387c1-fe5b-48c5-86f5-b21ed7619c0d-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5\" (UID: \"345387c1-fe5b-48c5-86f5-b21ed7619c0d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5" Feb 16 21:31:13.768825 master-0 kubenswrapper[38936]: I0216 21:31:13.768799 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/345387c1-fe5b-48c5-86f5-b21ed7619c0d-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5\" (UID: \"345387c1-fe5b-48c5-86f5-b21ed7619c0d\") " 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5" Feb 16 21:31:13.769234 master-0 kubenswrapper[38936]: I0216 21:31:13.769192 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/345387c1-fe5b-48c5-86f5-b21ed7619c0d-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5\" (UID: \"345387c1-fe5b-48c5-86f5-b21ed7619c0d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5" Feb 16 21:31:13.790009 master-0 kubenswrapper[38936]: I0216 21:31:13.789956 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l46gf\" (UniqueName: \"kubernetes.io/projected/345387c1-fe5b-48c5-86f5-b21ed7619c0d-kube-api-access-l46gf\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5\" (UID: \"345387c1-fe5b-48c5-86f5-b21ed7619c0d\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5" Feb 16 21:31:13.919679 master-0 kubenswrapper[38936]: I0216 21:31:13.919601 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5" Feb 16 21:31:14.363835 master-0 kubenswrapper[38936]: I0216 21:31:14.363775 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5"] Feb 16 21:31:14.368219 master-0 kubenswrapper[38936]: W0216 21:31:14.368075 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod345387c1_fe5b_48c5_86f5_b21ed7619c0d.slice/crio-4ed735cb1c9d83257ce42484c23f1720473daffc3dfed3108aec8a2f14733e0f WatchSource:0}: Error finding container 4ed735cb1c9d83257ce42484c23f1720473daffc3dfed3108aec8a2f14733e0f: Status 404 returned error can't find the container with id 4ed735cb1c9d83257ce42484c23f1720473daffc3dfed3108aec8a2f14733e0f Feb 16 21:31:14.475203 master-0 kubenswrapper[38936]: I0216 21:31:14.475149 38936 generic.go:334] "Generic (PLEG): container finished" podID="280362df-b3d3-406e-9f0e-e6f53aba3d53" containerID="686a7127671b57fd3aac29c6039795d0a7010b6f0d02d50eb142fe950019f7c6" exitCode=0 Feb 16 21:31:14.475768 master-0 kubenswrapper[38936]: I0216 21:31:14.475550 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8" event={"ID":"280362df-b3d3-406e-9f0e-e6f53aba3d53","Type":"ContainerDied","Data":"686a7127671b57fd3aac29c6039795d0a7010b6f0d02d50eb142fe950019f7c6"} Feb 16 21:31:14.475768 master-0 kubenswrapper[38936]: I0216 21:31:14.475608 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8" event={"ID":"280362df-b3d3-406e-9f0e-e6f53aba3d53","Type":"ContainerStarted","Data":"583129fd3b2d0955bdc7891c3ee71b32ba6e65e8d1d5bc0ce8c68b24cc3e54fc"} Feb 16 21:31:14.477761 master-0 kubenswrapper[38936]: I0216 21:31:14.477727 38936 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5" event={"ID":"345387c1-fe5b-48c5-86f5-b21ed7619c0d","Type":"ContainerStarted","Data":"4ed735cb1c9d83257ce42484c23f1720473daffc3dfed3108aec8a2f14733e0f"} Feb 16 21:31:15.486071 master-0 kubenswrapper[38936]: I0216 21:31:15.486010 38936 generic.go:334] "Generic (PLEG): container finished" podID="345387c1-fe5b-48c5-86f5-b21ed7619c0d" containerID="241865f2f56c2442a0770924134332be7d6f289dd8191cd88e38cc7bb58df3c2" exitCode=0 Feb 16 21:31:15.486071 master-0 kubenswrapper[38936]: I0216 21:31:15.486066 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5" event={"ID":"345387c1-fe5b-48c5-86f5-b21ed7619c0d","Type":"ContainerDied","Data":"241865f2f56c2442a0770924134332be7d6f289dd8191cd88e38cc7bb58df3c2"} Feb 16 21:31:17.508288 master-0 kubenswrapper[38936]: I0216 21:31:17.508216 38936 generic.go:334] "Generic (PLEG): container finished" podID="345387c1-fe5b-48c5-86f5-b21ed7619c0d" containerID="6129cee24903b242a7c19418cd7921078527520110d762807c655b70fcffbdcf" exitCode=0 Feb 16 21:31:17.508828 master-0 kubenswrapper[38936]: I0216 21:31:17.508319 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5" event={"ID":"345387c1-fe5b-48c5-86f5-b21ed7619c0d","Type":"ContainerDied","Data":"6129cee24903b242a7c19418cd7921078527520110d762807c655b70fcffbdcf"} Feb 16 21:31:17.529247 master-0 kubenswrapper[38936]: I0216 21:31:17.529182 38936 generic.go:334] "Generic (PLEG): container finished" podID="b997c4b8-dbd3-44f3-8b30-2617a449f54d" containerID="92653d481950d040e64053eb0182b06fe60cee3ebfb9b7c32d4a35a243fa4e5e" exitCode=0 Feb 16 21:31:17.529447 master-0 kubenswrapper[38936]: I0216 21:31:17.529270 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj" event={"ID":"b997c4b8-dbd3-44f3-8b30-2617a449f54d","Type":"ContainerDied","Data":"92653d481950d040e64053eb0182b06fe60cee3ebfb9b7c32d4a35a243fa4e5e"} Feb 16 21:31:17.532367 master-0 kubenswrapper[38936]: I0216 21:31:17.532328 38936 generic.go:334] "Generic (PLEG): container finished" podID="280362df-b3d3-406e-9f0e-e6f53aba3d53" containerID="a6bad8bd7d468b55076261e1a9359aec0c074d277e3830fd51a429117ccac2ec" exitCode=0 Feb 16 21:31:17.532367 master-0 kubenswrapper[38936]: I0216 21:31:17.532356 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8" event={"ID":"280362df-b3d3-406e-9f0e-e6f53aba3d53","Type":"ContainerDied","Data":"a6bad8bd7d468b55076261e1a9359aec0c074d277e3830fd51a429117ccac2ec"} Feb 16 21:31:18.543098 master-0 kubenswrapper[38936]: I0216 21:31:18.543019 38936 generic.go:334] "Generic (PLEG): container finished" podID="280362df-b3d3-406e-9f0e-e6f53aba3d53" containerID="2b9cddd5c2c08f98105ec1d2c373e3c4facc046d5d590b6a1da6d24a87f2634a" exitCode=0 Feb 16 21:31:18.543670 master-0 kubenswrapper[38936]: I0216 21:31:18.543140 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8" event={"ID":"280362df-b3d3-406e-9f0e-e6f53aba3d53","Type":"ContainerDied","Data":"2b9cddd5c2c08f98105ec1d2c373e3c4facc046d5d590b6a1da6d24a87f2634a"} Feb 16 21:31:18.545452 master-0 kubenswrapper[38936]: I0216 21:31:18.545429 38936 generic.go:334] "Generic (PLEG): container finished" podID="345387c1-fe5b-48c5-86f5-b21ed7619c0d" containerID="65576ffb1d2ce9c8aaea63a48bbd328ad12778f803de643fff75f50b796190e0" exitCode=0 Feb 16 21:31:18.545535 master-0 kubenswrapper[38936]: I0216 21:31:18.545465 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5" event={"ID":"345387c1-fe5b-48c5-86f5-b21ed7619c0d","Type":"ContainerDied","Data":"65576ffb1d2ce9c8aaea63a48bbd328ad12778f803de643fff75f50b796190e0"} Feb 16 21:31:18.548938 master-0 kubenswrapper[38936]: I0216 21:31:18.548892 38936 generic.go:334] "Generic (PLEG): container finished" podID="b997c4b8-dbd3-44f3-8b30-2617a449f54d" containerID="aefb2ae92fe492b9e821be032a1176494af90c524eb8b665541b151d1784fa7a" exitCode=0 Feb 16 21:31:18.549105 master-0 kubenswrapper[38936]: I0216 21:31:18.548988 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj" event={"ID":"b997c4b8-dbd3-44f3-8b30-2617a449f54d","Type":"ContainerDied","Data":"aefb2ae92fe492b9e821be032a1176494af90c524eb8b665541b151d1784fa7a"} Feb 16 21:31:20.117393 master-0 kubenswrapper[38936]: I0216 21:31:20.117351 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5" Feb 16 21:31:20.124276 master-0 kubenswrapper[38936]: I0216 21:31:20.124222 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8" Feb 16 21:31:20.129640 master-0 kubenswrapper[38936]: I0216 21:31:20.129595 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj" Feb 16 21:31:20.277472 master-0 kubenswrapper[38936]: I0216 21:31:20.277399 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/280362df-b3d3-406e-9f0e-e6f53aba3d53-bundle\") pod \"280362df-b3d3-406e-9f0e-e6f53aba3d53\" (UID: \"280362df-b3d3-406e-9f0e-e6f53aba3d53\") " Feb 16 21:31:20.277472 master-0 kubenswrapper[38936]: I0216 21:31:20.277465 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b997c4b8-dbd3-44f3-8b30-2617a449f54d-bundle\") pod \"b997c4b8-dbd3-44f3-8b30-2617a449f54d\" (UID: \"b997c4b8-dbd3-44f3-8b30-2617a449f54d\") " Feb 16 21:31:20.277803 master-0 kubenswrapper[38936]: I0216 21:31:20.277541 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/345387c1-fe5b-48c5-86f5-b21ed7619c0d-bundle\") pod \"345387c1-fe5b-48c5-86f5-b21ed7619c0d\" (UID: \"345387c1-fe5b-48c5-86f5-b21ed7619c0d\") " Feb 16 21:31:20.277803 master-0 kubenswrapper[38936]: I0216 21:31:20.277586 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xhv4\" (UniqueName: \"kubernetes.io/projected/280362df-b3d3-406e-9f0e-e6f53aba3d53-kube-api-access-5xhv4\") pod \"280362df-b3d3-406e-9f0e-e6f53aba3d53\" (UID: \"280362df-b3d3-406e-9f0e-e6f53aba3d53\") " Feb 16 21:31:20.277803 master-0 kubenswrapper[38936]: I0216 21:31:20.277625 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l46gf\" (UniqueName: \"kubernetes.io/projected/345387c1-fe5b-48c5-86f5-b21ed7619c0d-kube-api-access-l46gf\") pod \"345387c1-fe5b-48c5-86f5-b21ed7619c0d\" (UID: \"345387c1-fe5b-48c5-86f5-b21ed7619c0d\") " Feb 16 21:31:20.277803 master-0 kubenswrapper[38936]: I0216 
21:31:20.277676 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b997c4b8-dbd3-44f3-8b30-2617a449f54d-util\") pod \"b997c4b8-dbd3-44f3-8b30-2617a449f54d\" (UID: \"b997c4b8-dbd3-44f3-8b30-2617a449f54d\") " Feb 16 21:31:20.278803 master-0 kubenswrapper[38936]: I0216 21:31:20.278767 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b997c4b8-dbd3-44f3-8b30-2617a449f54d-bundle" (OuterVolumeSpecName: "bundle") pod "b997c4b8-dbd3-44f3-8b30-2617a449f54d" (UID: "b997c4b8-dbd3-44f3-8b30-2617a449f54d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:31:20.279054 master-0 kubenswrapper[38936]: I0216 21:31:20.279006 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/345387c1-fe5b-48c5-86f5-b21ed7619c0d-util\") pod \"345387c1-fe5b-48c5-86f5-b21ed7619c0d\" (UID: \"345387c1-fe5b-48c5-86f5-b21ed7619c0d\") " Feb 16 21:31:20.279054 master-0 kubenswrapper[38936]: I0216 21:31:20.278959 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/345387c1-fe5b-48c5-86f5-b21ed7619c0d-bundle" (OuterVolumeSpecName: "bundle") pod "345387c1-fe5b-48c5-86f5-b21ed7619c0d" (UID: "345387c1-fe5b-48c5-86f5-b21ed7619c0d"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:31:20.279157 master-0 kubenswrapper[38936]: I0216 21:31:20.279131 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hg4g\" (UniqueName: \"kubernetes.io/projected/b997c4b8-dbd3-44f3-8b30-2617a449f54d-kube-api-access-4hg4g\") pod \"b997c4b8-dbd3-44f3-8b30-2617a449f54d\" (UID: \"b997c4b8-dbd3-44f3-8b30-2617a449f54d\") " Feb 16 21:31:20.279242 master-0 kubenswrapper[38936]: I0216 21:31:20.279224 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/280362df-b3d3-406e-9f0e-e6f53aba3d53-util\") pod \"280362df-b3d3-406e-9f0e-e6f53aba3d53\" (UID: \"280362df-b3d3-406e-9f0e-e6f53aba3d53\") " Feb 16 21:31:20.279477 master-0 kubenswrapper[38936]: I0216 21:31:20.279419 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/280362df-b3d3-406e-9f0e-e6f53aba3d53-bundle" (OuterVolumeSpecName: "bundle") pod "280362df-b3d3-406e-9f0e-e6f53aba3d53" (UID: "280362df-b3d3-406e-9f0e-e6f53aba3d53"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:31:20.280335 master-0 kubenswrapper[38936]: I0216 21:31:20.280297 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/280362df-b3d3-406e-9f0e-e6f53aba3d53-kube-api-access-5xhv4" (OuterVolumeSpecName: "kube-api-access-5xhv4") pod "280362df-b3d3-406e-9f0e-e6f53aba3d53" (UID: "280362df-b3d3-406e-9f0e-e6f53aba3d53"). InnerVolumeSpecName "kube-api-access-5xhv4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:31:20.280917 master-0 kubenswrapper[38936]: I0216 21:31:20.280859 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/345387c1-fe5b-48c5-86f5-b21ed7619c0d-kube-api-access-l46gf" (OuterVolumeSpecName: "kube-api-access-l46gf") pod "345387c1-fe5b-48c5-86f5-b21ed7619c0d" (UID: "345387c1-fe5b-48c5-86f5-b21ed7619c0d"). InnerVolumeSpecName "kube-api-access-l46gf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:31:20.281282 master-0 kubenswrapper[38936]: I0216 21:31:20.281232 38936 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/280362df-b3d3-406e-9f0e-e6f53aba3d53-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:31:20.281363 master-0 kubenswrapper[38936]: I0216 21:31:20.281337 38936 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b997c4b8-dbd3-44f3-8b30-2617a449f54d-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:31:20.281552 master-0 kubenswrapper[38936]: I0216 21:31:20.281369 38936 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/345387c1-fe5b-48c5-86f5-b21ed7619c0d-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:31:20.281552 master-0 kubenswrapper[38936]: I0216 21:31:20.281454 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xhv4\" (UniqueName: \"kubernetes.io/projected/280362df-b3d3-406e-9f0e-e6f53aba3d53-kube-api-access-5xhv4\") on node \"master-0\" DevicePath \"\"" Feb 16 21:31:20.284050 master-0 kubenswrapper[38936]: I0216 21:31:20.283974 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b997c4b8-dbd3-44f3-8b30-2617a449f54d-kube-api-access-4hg4g" (OuterVolumeSpecName: "kube-api-access-4hg4g") pod "b997c4b8-dbd3-44f3-8b30-2617a449f54d" (UID: 
"b997c4b8-dbd3-44f3-8b30-2617a449f54d"). InnerVolumeSpecName "kube-api-access-4hg4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:31:20.290973 master-0 kubenswrapper[38936]: I0216 21:31:20.290677 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/280362df-b3d3-406e-9f0e-e6f53aba3d53-util" (OuterVolumeSpecName: "util") pod "280362df-b3d3-406e-9f0e-e6f53aba3d53" (UID: "280362df-b3d3-406e-9f0e-e6f53aba3d53"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:31:20.293248 master-0 kubenswrapper[38936]: I0216 21:31:20.293168 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/345387c1-fe5b-48c5-86f5-b21ed7619c0d-util" (OuterVolumeSpecName: "util") pod "345387c1-fe5b-48c5-86f5-b21ed7619c0d" (UID: "345387c1-fe5b-48c5-86f5-b21ed7619c0d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:31:20.296163 master-0 kubenswrapper[38936]: I0216 21:31:20.296087 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b997c4b8-dbd3-44f3-8b30-2617a449f54d-util" (OuterVolumeSpecName: "util") pod "b997c4b8-dbd3-44f3-8b30-2617a449f54d" (UID: "b997c4b8-dbd3-44f3-8b30-2617a449f54d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:31:20.383961 master-0 kubenswrapper[38936]: I0216 21:31:20.383745 38936 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/280362df-b3d3-406e-9f0e-e6f53aba3d53-util\") on node \"master-0\" DevicePath \"\"" Feb 16 21:31:20.383961 master-0 kubenswrapper[38936]: I0216 21:31:20.383831 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l46gf\" (UniqueName: \"kubernetes.io/projected/345387c1-fe5b-48c5-86f5-b21ed7619c0d-kube-api-access-l46gf\") on node \"master-0\" DevicePath \"\"" Feb 16 21:31:20.383961 master-0 kubenswrapper[38936]: I0216 21:31:20.383848 38936 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b997c4b8-dbd3-44f3-8b30-2617a449f54d-util\") on node \"master-0\" DevicePath \"\"" Feb 16 21:31:20.383961 master-0 kubenswrapper[38936]: I0216 21:31:20.383864 38936 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/345387c1-fe5b-48c5-86f5-b21ed7619c0d-util\") on node \"master-0\" DevicePath \"\"" Feb 16 21:31:20.383961 master-0 kubenswrapper[38936]: I0216 21:31:20.383901 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hg4g\" (UniqueName: \"kubernetes.io/projected/b997c4b8-dbd3-44f3-8b30-2617a449f54d-kube-api-access-4hg4g\") on node \"master-0\" DevicePath \"\"" Feb 16 21:31:20.572482 master-0 kubenswrapper[38936]: I0216 21:31:20.572407 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj" event={"ID":"b997c4b8-dbd3-44f3-8b30-2617a449f54d","Type":"ContainerDied","Data":"03d651ec36ba79fc24b36196ae68a102bbf33ed71ad77aa95a4cde7d9e518402"} Feb 16 21:31:20.572482 master-0 kubenswrapper[38936]: I0216 21:31:20.572481 38936 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="03d651ec36ba79fc24b36196ae68a102bbf33ed71ad77aa95a4cde7d9e518402" Feb 16 21:31:20.572783 master-0 kubenswrapper[38936]: I0216 21:31:20.572521 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj" Feb 16 21:31:20.575211 master-0 kubenswrapper[38936]: I0216 21:31:20.575166 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8" event={"ID":"280362df-b3d3-406e-9f0e-e6f53aba3d53","Type":"ContainerDied","Data":"583129fd3b2d0955bdc7891c3ee71b32ba6e65e8d1d5bc0ce8c68b24cc3e54fc"} Feb 16 21:31:20.575311 master-0 kubenswrapper[38936]: I0216 21:31:20.575297 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="583129fd3b2d0955bdc7891c3ee71b32ba6e65e8d1d5bc0ce8c68b24cc3e54fc" Feb 16 21:31:20.575446 master-0 kubenswrapper[38936]: I0216 21:31:20.575434 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8" Feb 16 21:31:20.578990 master-0 kubenswrapper[38936]: I0216 21:31:20.578937 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5" event={"ID":"345387c1-fe5b-48c5-86f5-b21ed7619c0d","Type":"ContainerDied","Data":"4ed735cb1c9d83257ce42484c23f1720473daffc3dfed3108aec8a2f14733e0f"} Feb 16 21:31:20.578990 master-0 kubenswrapper[38936]: I0216 21:31:20.578964 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed735cb1c9d83257ce42484c23f1720473daffc3dfed3108aec8a2f14733e0f" Feb 16 21:31:20.579100 master-0 kubenswrapper[38936]: I0216 21:31:20.579017 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5" Feb 16 21:31:23.265839 master-0 kubenswrapper[38936]: I0216 21:31:23.264455 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42"] Feb 16 21:31:23.265839 master-0 kubenswrapper[38936]: E0216 21:31:23.264789 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="345387c1-fe5b-48c5-86f5-b21ed7619c0d" containerName="util" Feb 16 21:31:23.265839 master-0 kubenswrapper[38936]: I0216 21:31:23.264803 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="345387c1-fe5b-48c5-86f5-b21ed7619c0d" containerName="util" Feb 16 21:31:23.265839 master-0 kubenswrapper[38936]: E0216 21:31:23.264829 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="345387c1-fe5b-48c5-86f5-b21ed7619c0d" containerName="pull" Feb 16 21:31:23.265839 master-0 kubenswrapper[38936]: I0216 21:31:23.264836 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="345387c1-fe5b-48c5-86f5-b21ed7619c0d" containerName="pull" Feb 16 21:31:23.265839 master-0 kubenswrapper[38936]: E0216 21:31:23.264861 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b997c4b8-dbd3-44f3-8b30-2617a449f54d" containerName="extract" Feb 16 21:31:23.265839 master-0 kubenswrapper[38936]: I0216 21:31:23.264869 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="b997c4b8-dbd3-44f3-8b30-2617a449f54d" containerName="extract" Feb 16 21:31:23.265839 master-0 kubenswrapper[38936]: E0216 21:31:23.264881 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="280362df-b3d3-406e-9f0e-e6f53aba3d53" containerName="util" Feb 16 21:31:23.265839 master-0 kubenswrapper[38936]: I0216 21:31:23.264890 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="280362df-b3d3-406e-9f0e-e6f53aba3d53" containerName="util" Feb 16 21:31:23.265839 master-0 
kubenswrapper[38936]: E0216 21:31:23.264902 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="280362df-b3d3-406e-9f0e-e6f53aba3d53" containerName="extract" Feb 16 21:31:23.265839 master-0 kubenswrapper[38936]: I0216 21:31:23.264908 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="280362df-b3d3-406e-9f0e-e6f53aba3d53" containerName="extract" Feb 16 21:31:23.265839 master-0 kubenswrapper[38936]: E0216 21:31:23.264922 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="280362df-b3d3-406e-9f0e-e6f53aba3d53" containerName="pull" Feb 16 21:31:23.265839 master-0 kubenswrapper[38936]: I0216 21:31:23.264927 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="280362df-b3d3-406e-9f0e-e6f53aba3d53" containerName="pull" Feb 16 21:31:23.265839 master-0 kubenswrapper[38936]: E0216 21:31:23.264940 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b997c4b8-dbd3-44f3-8b30-2617a449f54d" containerName="pull" Feb 16 21:31:23.265839 master-0 kubenswrapper[38936]: I0216 21:31:23.264945 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="b997c4b8-dbd3-44f3-8b30-2617a449f54d" containerName="pull" Feb 16 21:31:23.265839 master-0 kubenswrapper[38936]: E0216 21:31:23.264953 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b997c4b8-dbd3-44f3-8b30-2617a449f54d" containerName="util" Feb 16 21:31:23.265839 master-0 kubenswrapper[38936]: I0216 21:31:23.264960 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="b997c4b8-dbd3-44f3-8b30-2617a449f54d" containerName="util" Feb 16 21:31:23.265839 master-0 kubenswrapper[38936]: E0216 21:31:23.264974 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="345387c1-fe5b-48c5-86f5-b21ed7619c0d" containerName="extract" Feb 16 21:31:23.265839 master-0 kubenswrapper[38936]: I0216 21:31:23.264982 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="345387c1-fe5b-48c5-86f5-b21ed7619c0d" containerName="extract" Feb 
16 21:31:23.265839 master-0 kubenswrapper[38936]: I0216 21:31:23.265116 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="b997c4b8-dbd3-44f3-8b30-2617a449f54d" containerName="extract" Feb 16 21:31:23.265839 master-0 kubenswrapper[38936]: I0216 21:31:23.265155 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="280362df-b3d3-406e-9f0e-e6f53aba3d53" containerName="extract" Feb 16 21:31:23.265839 master-0 kubenswrapper[38936]: I0216 21:31:23.265168 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="345387c1-fe5b-48c5-86f5-b21ed7619c0d" containerName="extract" Feb 16 21:31:23.271668 master-0 kubenswrapper[38936]: I0216 21:31:23.270394 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42" Feb 16 21:31:23.276693 master-0 kubenswrapper[38936]: I0216 21:31:23.274821 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-w7b8n" Feb 16 21:31:23.282696 master-0 kubenswrapper[38936]: I0216 21:31:23.280851 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42"] Feb 16 21:31:23.444662 master-0 kubenswrapper[38936]: I0216 21:31:23.444516 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2bf05aa-e1d4-4447-9eab-a1334c34380f-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42\" (UID: \"f2bf05aa-e1d4-4447-9eab-a1334c34380f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42" Feb 16 21:31:23.444662 master-0 kubenswrapper[38936]: I0216 21:31:23.444667 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/f2bf05aa-e1d4-4447-9eab-a1334c34380f-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42\" (UID: \"f2bf05aa-e1d4-4447-9eab-a1334c34380f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42" Feb 16 21:31:23.444928 master-0 kubenswrapper[38936]: I0216 21:31:23.444720 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qdvl\" (UniqueName: \"kubernetes.io/projected/f2bf05aa-e1d4-4447-9eab-a1334c34380f-kube-api-access-4qdvl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42\" (UID: \"f2bf05aa-e1d4-4447-9eab-a1334c34380f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42" Feb 16 21:31:23.545883 master-0 kubenswrapper[38936]: I0216 21:31:23.545761 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2bf05aa-e1d4-4447-9eab-a1334c34380f-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42\" (UID: \"f2bf05aa-e1d4-4447-9eab-a1334c34380f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42" Feb 16 21:31:23.545883 master-0 kubenswrapper[38936]: I0216 21:31:23.545842 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2bf05aa-e1d4-4447-9eab-a1334c34380f-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42\" (UID: \"f2bf05aa-e1d4-4447-9eab-a1334c34380f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42" Feb 16 21:31:23.546333 master-0 kubenswrapper[38936]: I0216 21:31:23.546307 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2bf05aa-e1d4-4447-9eab-a1334c34380f-bundle\") pod 
\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42\" (UID: \"f2bf05aa-e1d4-4447-9eab-a1334c34380f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42" Feb 16 21:31:23.546394 master-0 kubenswrapper[38936]: I0216 21:31:23.546343 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qdvl\" (UniqueName: \"kubernetes.io/projected/f2bf05aa-e1d4-4447-9eab-a1334c34380f-kube-api-access-4qdvl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42\" (UID: \"f2bf05aa-e1d4-4447-9eab-a1334c34380f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42" Feb 16 21:31:23.546498 master-0 kubenswrapper[38936]: I0216 21:31:23.546457 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2bf05aa-e1d4-4447-9eab-a1334c34380f-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42\" (UID: \"f2bf05aa-e1d4-4447-9eab-a1334c34380f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42" Feb 16 21:31:23.573183 master-0 kubenswrapper[38936]: I0216 21:31:23.573136 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qdvl\" (UniqueName: \"kubernetes.io/projected/f2bf05aa-e1d4-4447-9eab-a1334c34380f-kube-api-access-4qdvl\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42\" (UID: \"f2bf05aa-e1d4-4447-9eab-a1334c34380f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42" Feb 16 21:31:23.593987 master-0 kubenswrapper[38936]: I0216 21:31:23.593926 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42" Feb 16 21:31:24.100881 master-0 kubenswrapper[38936]: I0216 21:31:24.100837 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42"] Feb 16 21:31:24.611379 master-0 kubenswrapper[38936]: I0216 21:31:24.611239 38936 generic.go:334] "Generic (PLEG): container finished" podID="f2bf05aa-e1d4-4447-9eab-a1334c34380f" containerID="7eab0c8c9ee909ae45557ff99ef0298b9c00f747ccd078f4387233f4ed8ec705" exitCode=0 Feb 16 21:31:24.611379 master-0 kubenswrapper[38936]: I0216 21:31:24.611288 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42" event={"ID":"f2bf05aa-e1d4-4447-9eab-a1334c34380f","Type":"ContainerDied","Data":"7eab0c8c9ee909ae45557ff99ef0298b9c00f747ccd078f4387233f4ed8ec705"} Feb 16 21:31:24.611379 master-0 kubenswrapper[38936]: I0216 21:31:24.611348 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42" event={"ID":"f2bf05aa-e1d4-4447-9eab-a1334c34380f","Type":"ContainerStarted","Data":"11058bd3f17ded89469ae3c1ae764552cd9200bdca072a7d24a1268f8500f9dc"} Feb 16 21:31:26.124707 master-0 kubenswrapper[38936]: I0216 21:31:26.124640 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rjr5z"] Feb 16 21:31:26.132815 master-0 kubenswrapper[38936]: I0216 21:31:26.132739 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rjr5z" Feb 16 21:31:26.136998 master-0 kubenswrapper[38936]: I0216 21:31:26.136926 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Feb 16 21:31:26.145715 master-0 kubenswrapper[38936]: I0216 21:31:26.145642 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Feb 16 21:31:26.189517 master-0 kubenswrapper[38936]: I0216 21:31:26.160960 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rjr5z"] Feb 16 21:31:26.314878 master-0 kubenswrapper[38936]: I0216 21:31:26.314804 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtrhj\" (UniqueName: \"kubernetes.io/projected/f5e90fc3-2b9e-4196-bbc7-9ef3fb437869-kube-api-access-vtrhj\") pod \"cert-manager-operator-controller-manager-66c8bdd694-rjr5z\" (UID: \"f5e90fc3-2b9e-4196-bbc7-9ef3fb437869\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rjr5z" Feb 16 21:31:26.315148 master-0 kubenswrapper[38936]: I0216 21:31:26.315007 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f5e90fc3-2b9e-4196-bbc7-9ef3fb437869-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-rjr5z\" (UID: \"f5e90fc3-2b9e-4196-bbc7-9ef3fb437869\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rjr5z" Feb 16 21:31:26.416497 master-0 kubenswrapper[38936]: I0216 21:31:26.416418 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f5e90fc3-2b9e-4196-bbc7-9ef3fb437869-tmp\") pod 
\"cert-manager-operator-controller-manager-66c8bdd694-rjr5z\" (UID: \"f5e90fc3-2b9e-4196-bbc7-9ef3fb437869\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rjr5z" Feb 16 21:31:26.416789 master-0 kubenswrapper[38936]: I0216 21:31:26.416639 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtrhj\" (UniqueName: \"kubernetes.io/projected/f5e90fc3-2b9e-4196-bbc7-9ef3fb437869-kube-api-access-vtrhj\") pod \"cert-manager-operator-controller-manager-66c8bdd694-rjr5z\" (UID: \"f5e90fc3-2b9e-4196-bbc7-9ef3fb437869\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rjr5z" Feb 16 21:31:26.417098 master-0 kubenswrapper[38936]: I0216 21:31:26.417050 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f5e90fc3-2b9e-4196-bbc7-9ef3fb437869-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-rjr5z\" (UID: \"f5e90fc3-2b9e-4196-bbc7-9ef3fb437869\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rjr5z" Feb 16 21:31:26.438931 master-0 kubenswrapper[38936]: I0216 21:31:26.438855 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtrhj\" (UniqueName: \"kubernetes.io/projected/f5e90fc3-2b9e-4196-bbc7-9ef3fb437869-kube-api-access-vtrhj\") pod \"cert-manager-operator-controller-manager-66c8bdd694-rjr5z\" (UID: \"f5e90fc3-2b9e-4196-bbc7-9ef3fb437869\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rjr5z" Feb 16 21:31:26.495064 master-0 kubenswrapper[38936]: I0216 21:31:26.494998 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rjr5z" Feb 16 21:31:26.629828 master-0 kubenswrapper[38936]: I0216 21:31:26.629380 38936 generic.go:334] "Generic (PLEG): container finished" podID="f2bf05aa-e1d4-4447-9eab-a1334c34380f" containerID="92e8f4850cea4118600237c6ed87c862a7b83ae358e3f737a7650a326bc15d40" exitCode=0 Feb 16 21:31:26.629828 master-0 kubenswrapper[38936]: I0216 21:31:26.629437 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42" event={"ID":"f2bf05aa-e1d4-4447-9eab-a1334c34380f","Type":"ContainerDied","Data":"92e8f4850cea4118600237c6ed87c862a7b83ae358e3f737a7650a326bc15d40"} Feb 16 21:31:26.938281 master-0 kubenswrapper[38936]: I0216 21:31:26.935375 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rjr5z"] Feb 16 21:31:27.637933 master-0 kubenswrapper[38936]: I0216 21:31:27.637857 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rjr5z" event={"ID":"f5e90fc3-2b9e-4196-bbc7-9ef3fb437869","Type":"ContainerStarted","Data":"2d7cfff7fc4bb70be8103028aca06806cbadaf8d24f0607cc9fe7c58a14d3c86"} Feb 16 21:31:27.640396 master-0 kubenswrapper[38936]: I0216 21:31:27.640356 38936 generic.go:334] "Generic (PLEG): container finished" podID="f2bf05aa-e1d4-4447-9eab-a1334c34380f" containerID="7acbaaefd304664827302afaa2e45f9fc4eee14e797147ce39187488f1f3306a" exitCode=0 Feb 16 21:31:27.640466 master-0 kubenswrapper[38936]: I0216 21:31:27.640406 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42" event={"ID":"f2bf05aa-e1d4-4447-9eab-a1334c34380f","Type":"ContainerDied","Data":"7acbaaefd304664827302afaa2e45f9fc4eee14e797147ce39187488f1f3306a"} Feb 16 
21:31:28.984356 master-0 kubenswrapper[38936]: I0216 21:31:28.984314 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42" Feb 16 21:31:29.158332 master-0 kubenswrapper[38936]: I0216 21:31:29.158271 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qdvl\" (UniqueName: \"kubernetes.io/projected/f2bf05aa-e1d4-4447-9eab-a1334c34380f-kube-api-access-4qdvl\") pod \"f2bf05aa-e1d4-4447-9eab-a1334c34380f\" (UID: \"f2bf05aa-e1d4-4447-9eab-a1334c34380f\") " Feb 16 21:31:29.158596 master-0 kubenswrapper[38936]: I0216 21:31:29.158403 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2bf05aa-e1d4-4447-9eab-a1334c34380f-bundle\") pod \"f2bf05aa-e1d4-4447-9eab-a1334c34380f\" (UID: \"f2bf05aa-e1d4-4447-9eab-a1334c34380f\") " Feb 16 21:31:29.158596 master-0 kubenswrapper[38936]: I0216 21:31:29.158443 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2bf05aa-e1d4-4447-9eab-a1334c34380f-util\") pod \"f2bf05aa-e1d4-4447-9eab-a1334c34380f\" (UID: \"f2bf05aa-e1d4-4447-9eab-a1334c34380f\") " Feb 16 21:31:29.162593 master-0 kubenswrapper[38936]: I0216 21:31:29.162518 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2bf05aa-e1d4-4447-9eab-a1334c34380f-bundle" (OuterVolumeSpecName: "bundle") pod "f2bf05aa-e1d4-4447-9eab-a1334c34380f" (UID: "f2bf05aa-e1d4-4447-9eab-a1334c34380f"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:31:29.168836 master-0 kubenswrapper[38936]: I0216 21:31:29.163983 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2bf05aa-e1d4-4447-9eab-a1334c34380f-kube-api-access-4qdvl" (OuterVolumeSpecName: "kube-api-access-4qdvl") pod "f2bf05aa-e1d4-4447-9eab-a1334c34380f" (UID: "f2bf05aa-e1d4-4447-9eab-a1334c34380f"). InnerVolumeSpecName "kube-api-access-4qdvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:31:29.174973 master-0 kubenswrapper[38936]: I0216 21:31:29.173887 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2bf05aa-e1d4-4447-9eab-a1334c34380f-util" (OuterVolumeSpecName: "util") pod "f2bf05aa-e1d4-4447-9eab-a1334c34380f" (UID: "f2bf05aa-e1d4-4447-9eab-a1334c34380f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:31:29.262085 master-0 kubenswrapper[38936]: I0216 21:31:29.261623 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qdvl\" (UniqueName: \"kubernetes.io/projected/f2bf05aa-e1d4-4447-9eab-a1334c34380f-kube-api-access-4qdvl\") on node \"master-0\" DevicePath \"\"" Feb 16 21:31:29.262085 master-0 kubenswrapper[38936]: I0216 21:31:29.261907 38936 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2bf05aa-e1d4-4447-9eab-a1334c34380f-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:31:29.262085 master-0 kubenswrapper[38936]: I0216 21:31:29.261925 38936 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2bf05aa-e1d4-4447-9eab-a1334c34380f-util\") on node \"master-0\" DevicePath \"\"" Feb 16 21:31:29.655641 master-0 kubenswrapper[38936]: I0216 21:31:29.655552 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42" event={"ID":"f2bf05aa-e1d4-4447-9eab-a1334c34380f","Type":"ContainerDied","Data":"11058bd3f17ded89469ae3c1ae764552cd9200bdca072a7d24a1268f8500f9dc"} Feb 16 21:31:29.655641 master-0 kubenswrapper[38936]: I0216 21:31:29.655601 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11058bd3f17ded89469ae3c1ae764552cd9200bdca072a7d24a1268f8500f9dc" Feb 16 21:31:29.655641 master-0 kubenswrapper[38936]: I0216 21:31:29.655624 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42" Feb 16 21:31:30.666447 master-0 kubenswrapper[38936]: I0216 21:31:30.666306 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rjr5z" event={"ID":"f5e90fc3-2b9e-4196-bbc7-9ef3fb437869","Type":"ContainerStarted","Data":"d4abde44d772cf1f508718538bf236b823d3b9464f124841e79fee4056cc51fe"} Feb 16 21:31:30.693001 master-0 kubenswrapper[38936]: I0216 21:31:30.692821 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rjr5z" podStartSLOduration=1.2523606250000001 podStartE2EDuration="4.692794259s" podCreationTimestamp="2026-02-16 21:31:26 +0000 UTC" firstStartedPulling="2026-02-16 21:31:26.952756949 +0000 UTC m=+517.304760311" lastFinishedPulling="2026-02-16 21:31:30.393190583 +0000 UTC m=+520.745193945" observedRunningTime="2026-02-16 21:31:30.690296281 +0000 UTC m=+521.042299643" watchObservedRunningTime="2026-02-16 21:31:30.692794259 +0000 UTC m=+521.044797631" Feb 16 21:31:34.931309 master-0 kubenswrapper[38936]: I0216 21:31:34.931213 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-gxffr"] Feb 16 21:31:34.932148 master-0 
kubenswrapper[38936]: E0216 21:31:34.931630 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bf05aa-e1d4-4447-9eab-a1334c34380f" containerName="extract" Feb 16 21:31:34.932148 master-0 kubenswrapper[38936]: I0216 21:31:34.931664 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bf05aa-e1d4-4447-9eab-a1334c34380f" containerName="extract" Feb 16 21:31:34.932148 master-0 kubenswrapper[38936]: E0216 21:31:34.931701 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bf05aa-e1d4-4447-9eab-a1334c34380f" containerName="util" Feb 16 21:31:34.932148 master-0 kubenswrapper[38936]: I0216 21:31:34.931712 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bf05aa-e1d4-4447-9eab-a1334c34380f" containerName="util" Feb 16 21:31:34.932148 master-0 kubenswrapper[38936]: E0216 21:31:34.931726 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bf05aa-e1d4-4447-9eab-a1334c34380f" containerName="pull" Feb 16 21:31:34.932148 master-0 kubenswrapper[38936]: I0216 21:31:34.931735 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bf05aa-e1d4-4447-9eab-a1334c34380f" containerName="pull" Feb 16 21:31:34.932148 master-0 kubenswrapper[38936]: I0216 21:31:34.931939 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2bf05aa-e1d4-4447-9eab-a1334c34380f" containerName="extract" Feb 16 21:31:34.932605 master-0 kubenswrapper[38936]: I0216 21:31:34.932554 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-gxffr" Feb 16 21:31:34.936630 master-0 kubenswrapper[38936]: I0216 21:31:34.936588 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 16 21:31:34.944676 master-0 kubenswrapper[38936]: I0216 21:31:34.943278 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 16 21:31:34.957468 master-0 kubenswrapper[38936]: I0216 21:31:34.957411 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-gxffr"] Feb 16 21:31:35.049168 master-0 kubenswrapper[38936]: I0216 21:31:35.049099 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-cjgt5"] Feb 16 21:31:35.049488 master-0 kubenswrapper[38936]: I0216 21:31:35.049428 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdkxh\" (UniqueName: \"kubernetes.io/projected/98b3de8d-c5b1-4829-a2ff-1f024c1e1a5c-kube-api-access-hdkxh\") pod \"cert-manager-webhook-6888856db4-gxffr\" (UID: \"98b3de8d-c5b1-4829-a2ff-1f024c1e1a5c\") " pod="cert-manager/cert-manager-webhook-6888856db4-gxffr" Feb 16 21:31:35.049552 master-0 kubenswrapper[38936]: I0216 21:31:35.049508 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/98b3de8d-c5b1-4829-a2ff-1f024c1e1a5c-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-gxffr\" (UID: \"98b3de8d-c5b1-4829-a2ff-1f024c1e1a5c\") " pod="cert-manager/cert-manager-webhook-6888856db4-gxffr" Feb 16 21:31:35.050055 master-0 kubenswrapper[38936]: I0216 21:31:35.050025 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-cjgt5" Feb 16 21:31:35.077220 master-0 kubenswrapper[38936]: I0216 21:31:35.077007 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-cjgt5"] Feb 16 21:31:35.151673 master-0 kubenswrapper[38936]: I0216 21:31:35.151201 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea8f3e98-8e0a-4053-9140-536f6ca70da4-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-cjgt5\" (UID: \"ea8f3e98-8e0a-4053-9140-536f6ca70da4\") " pod="cert-manager/cert-manager-cainjector-5545bd876-cjgt5" Feb 16 21:31:35.151673 master-0 kubenswrapper[38936]: I0216 21:31:35.151497 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg8b5\" (UniqueName: \"kubernetes.io/projected/ea8f3e98-8e0a-4053-9140-536f6ca70da4-kube-api-access-wg8b5\") pod \"cert-manager-cainjector-5545bd876-cjgt5\" (UID: \"ea8f3e98-8e0a-4053-9140-536f6ca70da4\") " pod="cert-manager/cert-manager-cainjector-5545bd876-cjgt5" Feb 16 21:31:35.151673 master-0 kubenswrapper[38936]: I0216 21:31:35.151525 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdkxh\" (UniqueName: \"kubernetes.io/projected/98b3de8d-c5b1-4829-a2ff-1f024c1e1a5c-kube-api-access-hdkxh\") pod \"cert-manager-webhook-6888856db4-gxffr\" (UID: \"98b3de8d-c5b1-4829-a2ff-1f024c1e1a5c\") " pod="cert-manager/cert-manager-webhook-6888856db4-gxffr" Feb 16 21:31:35.151673 master-0 kubenswrapper[38936]: I0216 21:31:35.151567 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/98b3de8d-c5b1-4829-a2ff-1f024c1e1a5c-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-gxffr\" (UID: \"98b3de8d-c5b1-4829-a2ff-1f024c1e1a5c\") " 
pod="cert-manager/cert-manager-webhook-6888856db4-gxffr"
Feb 16 21:31:35.168010 master-0 kubenswrapper[38936]: I0216 21:31:35.167761 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/98b3de8d-c5b1-4829-a2ff-1f024c1e1a5c-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-gxffr\" (UID: \"98b3de8d-c5b1-4829-a2ff-1f024c1e1a5c\") " pod="cert-manager/cert-manager-webhook-6888856db4-gxffr"
Feb 16 21:31:35.169595 master-0 kubenswrapper[38936]: I0216 21:31:35.169564 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdkxh\" (UniqueName: \"kubernetes.io/projected/98b3de8d-c5b1-4829-a2ff-1f024c1e1a5c-kube-api-access-hdkxh\") pod \"cert-manager-webhook-6888856db4-gxffr\" (UID: \"98b3de8d-c5b1-4829-a2ff-1f024c1e1a5c\") " pod="cert-manager/cert-manager-webhook-6888856db4-gxffr"
Feb 16 21:31:35.253463 master-0 kubenswrapper[38936]: I0216 21:31:35.253402 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea8f3e98-8e0a-4053-9140-536f6ca70da4-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-cjgt5\" (UID: \"ea8f3e98-8e0a-4053-9140-536f6ca70da4\") " pod="cert-manager/cert-manager-cainjector-5545bd876-cjgt5"
Feb 16 21:31:35.253712 master-0 kubenswrapper[38936]: I0216 21:31:35.253508 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg8b5\" (UniqueName: \"kubernetes.io/projected/ea8f3e98-8e0a-4053-9140-536f6ca70da4-kube-api-access-wg8b5\") pod \"cert-manager-cainjector-5545bd876-cjgt5\" (UID: \"ea8f3e98-8e0a-4053-9140-536f6ca70da4\") " pod="cert-manager/cert-manager-cainjector-5545bd876-cjgt5"
Feb 16 21:31:35.255103 master-0 kubenswrapper[38936]: I0216 21:31:35.255046 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-gxffr"
Feb 16 21:31:35.271551 master-0 kubenswrapper[38936]: I0216 21:31:35.271508 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea8f3e98-8e0a-4053-9140-536f6ca70da4-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-cjgt5\" (UID: \"ea8f3e98-8e0a-4053-9140-536f6ca70da4\") " pod="cert-manager/cert-manager-cainjector-5545bd876-cjgt5"
Feb 16 21:31:35.274038 master-0 kubenswrapper[38936]: I0216 21:31:35.273993 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg8b5\" (UniqueName: \"kubernetes.io/projected/ea8f3e98-8e0a-4053-9140-536f6ca70da4-kube-api-access-wg8b5\") pod \"cert-manager-cainjector-5545bd876-cjgt5\" (UID: \"ea8f3e98-8e0a-4053-9140-536f6ca70da4\") " pod="cert-manager/cert-manager-cainjector-5545bd876-cjgt5"
Feb 16 21:31:35.391314 master-0 kubenswrapper[38936]: I0216 21:31:35.391238 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-cjgt5"
Feb 16 21:31:35.737823 master-0 kubenswrapper[38936]: I0216 21:31:35.737769 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-gxffr"]
Feb 16 21:31:35.742087 master-0 kubenswrapper[38936]: W0216 21:31:35.741970 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98b3de8d_c5b1_4829_a2ff_1f024c1e1a5c.slice/crio-b0b110ef964465bdf620225335a9d5c3be05c6358cffccf657aec2fa7bad7f08 WatchSource:0}: Error finding container b0b110ef964465bdf620225335a9d5c3be05c6358cffccf657aec2fa7bad7f08: Status 404 returned error can't find the container with id b0b110ef964465bdf620225335a9d5c3be05c6358cffccf657aec2fa7bad7f08
Feb 16 21:31:35.865017 master-0 kubenswrapper[38936]: I0216 21:31:35.864955 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-cjgt5"]
Feb 16 21:31:35.868143 master-0 kubenswrapper[38936]: W0216 21:31:35.868080 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea8f3e98_8e0a_4053_9140_536f6ca70da4.slice/crio-6d8abd0b620efaf55e25cdca7d266f3b27c929af516bcea3c9545b663e634bda WatchSource:0}: Error finding container 6d8abd0b620efaf55e25cdca7d266f3b27c929af516bcea3c9545b663e634bda: Status 404 returned error can't find the container with id 6d8abd0b620efaf55e25cdca7d266f3b27c929af516bcea3c9545b663e634bda
Feb 16 21:31:36.724873 master-0 kubenswrapper[38936]: I0216 21:31:36.724800 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-cjgt5" event={"ID":"ea8f3e98-8e0a-4053-9140-536f6ca70da4","Type":"ContainerStarted","Data":"6d8abd0b620efaf55e25cdca7d266f3b27c929af516bcea3c9545b663e634bda"}
Feb 16 21:31:36.725905 master-0 kubenswrapper[38936]: I0216 21:31:36.725873 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-gxffr" event={"ID":"98b3de8d-c5b1-4829-a2ff-1f024c1e1a5c","Type":"ContainerStarted","Data":"b0b110ef964465bdf620225335a9d5c3be05c6358cffccf657aec2fa7bad7f08"}
Feb 16 21:31:37.761694 master-0 kubenswrapper[38936]: I0216 21:31:37.761319 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-lcxlx"]
Feb 16 21:31:37.762370 master-0 kubenswrapper[38936]: I0216 21:31:37.762336 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-lcxlx"
Feb 16 21:31:37.764624 master-0 kubenswrapper[38936]: I0216 21:31:37.764576 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Feb 16 21:31:37.764831 master-0 kubenswrapper[38936]: I0216 21:31:37.764807 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Feb 16 21:31:37.798559 master-0 kubenswrapper[38936]: I0216 21:31:37.784893 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-lcxlx"]
Feb 16 21:31:37.894976 master-0 kubenswrapper[38936]: I0216 21:31:37.894916 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5ksr\" (UniqueName: \"kubernetes.io/projected/c3b9bcd3-0320-46d4-8de9-07bd5eb2c38b-kube-api-access-f5ksr\") pod \"nmstate-operator-694c9596b7-lcxlx\" (UID: \"c3b9bcd3-0320-46d4-8de9-07bd5eb2c38b\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-lcxlx"
Feb 16 21:31:37.996872 master-0 kubenswrapper[38936]: I0216 21:31:37.996805 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5ksr\" (UniqueName: \"kubernetes.io/projected/c3b9bcd3-0320-46d4-8de9-07bd5eb2c38b-kube-api-access-f5ksr\") pod \"nmstate-operator-694c9596b7-lcxlx\" (UID: \"c3b9bcd3-0320-46d4-8de9-07bd5eb2c38b\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-lcxlx"
Feb 16 21:31:38.029513 master-0 kubenswrapper[38936]: I0216 21:31:38.021662 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5ksr\" (UniqueName: \"kubernetes.io/projected/c3b9bcd3-0320-46d4-8de9-07bd5eb2c38b-kube-api-access-f5ksr\") pod \"nmstate-operator-694c9596b7-lcxlx\" (UID: \"c3b9bcd3-0320-46d4-8de9-07bd5eb2c38b\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-lcxlx"
Feb 16 21:31:38.116739 master-0 kubenswrapper[38936]: I0216 21:31:38.116687 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-lcxlx"
Feb 16 21:31:38.558835 master-0 kubenswrapper[38936]: I0216 21:31:38.558765 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-lcxlx"]
Feb 16 21:31:38.572286 master-0 kubenswrapper[38936]: W0216 21:31:38.572226 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3b9bcd3_0320_46d4_8de9_07bd5eb2c38b.slice/crio-7e241fcd7b1b9990171ea4da997379cec882711424ba39020bf943749a5a35c6 WatchSource:0}: Error finding container 7e241fcd7b1b9990171ea4da997379cec882711424ba39020bf943749a5a35c6: Status 404 returned error can't find the container with id 7e241fcd7b1b9990171ea4da997379cec882711424ba39020bf943749a5a35c6
Feb 16 21:31:38.760849 master-0 kubenswrapper[38936]: I0216 21:31:38.760761 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-lcxlx" event={"ID":"c3b9bcd3-0320-46d4-8de9-07bd5eb2c38b","Type":"ContainerStarted","Data":"7e241fcd7b1b9990171ea4da997379cec882711424ba39020bf943749a5a35c6"}
Feb 16 21:31:41.789278 master-0 kubenswrapper[38936]: I0216 21:31:41.787962 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-cjgt5" event={"ID":"ea8f3e98-8e0a-4053-9140-536f6ca70da4","Type":"ContainerStarted","Data":"c5a049ab6ede9eb6cdc12c9905c8afd172dbe1b4794da11ce13fc07232f2276b"}
Feb 16 21:31:41.790486 master-0 kubenswrapper[38936]: I0216 21:31:41.790415 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-gxffr" event={"ID":"98b3de8d-c5b1-4829-a2ff-1f024c1e1a5c","Type":"ContainerStarted","Data":"25a682c20138816f728412d20af0f4cc2a2abdbfb07af0242c2d85a2a97ed8e1"}
Feb 16 21:31:41.791813 master-0 kubenswrapper[38936]: I0216 21:31:41.791280 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-gxffr"
Feb 16 21:31:41.820564 master-0 kubenswrapper[38936]: I0216 21:31:41.819369 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-cjgt5" podStartSLOduration=1.461477136 podStartE2EDuration="6.819347147s" podCreationTimestamp="2026-02-16 21:31:35 +0000 UTC" firstStartedPulling="2026-02-16 21:31:35.870370082 +0000 UTC m=+526.222373454" lastFinishedPulling="2026-02-16 21:31:41.228240103 +0000 UTC m=+531.580243465" observedRunningTime="2026-02-16 21:31:41.810112496 +0000 UTC m=+532.162115858" watchObservedRunningTime="2026-02-16 21:31:41.819347147 +0000 UTC m=+532.171350509"
Feb 16 21:31:41.900687 master-0 kubenswrapper[38936]: I0216 21:31:41.891846 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-gxffr" podStartSLOduration=2.384328948 podStartE2EDuration="7.891804466s" podCreationTimestamp="2026-02-16 21:31:34 +0000 UTC" firstStartedPulling="2026-02-16 21:31:35.747267442 +0000 UTC m=+526.099270804" lastFinishedPulling="2026-02-16 21:31:41.25474296 +0000 UTC m=+531.606746322" observedRunningTime="2026-02-16 21:31:41.889772552 +0000 UTC m=+532.241775914" watchObservedRunningTime="2026-02-16 21:31:41.891804466 +0000 UTC m=+532.243807828"
Feb 16 21:31:44.850678 master-0 kubenswrapper[38936]: I0216 21:31:44.850473 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-lcxlx" event={"ID":"c3b9bcd3-0320-46d4-8de9-07bd5eb2c38b","Type":"ContainerStarted","Data":"c758786ee76635bd3aecd865473d43559626e0d9dac6efb78ad488dab126c665"}
Feb 16 21:31:44.883747 master-0 kubenswrapper[38936]: I0216 21:31:44.883571 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-lcxlx" podStartSLOduration=2.793905456 podStartE2EDuration="7.88354559s" podCreationTimestamp="2026-02-16 21:31:37 +0000 UTC" firstStartedPulling="2026-02-16 21:31:38.574256528 +0000 UTC m=+528.926259890" lastFinishedPulling="2026-02-16 21:31:43.663896672 +0000 UTC m=+534.015900024" observedRunningTime="2026-02-16 21:31:44.878082832 +0000 UTC m=+535.230086194" watchObservedRunningTime="2026-02-16 21:31:44.88354559 +0000 UTC m=+535.235548942"
Feb 16 21:31:45.012785 master-0 kubenswrapper[38936]: I0216 21:31:45.012704 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-565c66c48f-6w268"]
Feb 16 21:31:45.014315 master-0 kubenswrapper[38936]: I0216 21:31:45.013855 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-565c66c48f-6w268"
Feb 16 21:31:45.017299 master-0 kubenswrapper[38936]: I0216 21:31:45.017254 38936 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Feb 16 21:31:45.017816 master-0 kubenswrapper[38936]: I0216 21:31:45.017767 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Feb 16 21:31:45.018001 master-0 kubenswrapper[38936]: I0216 21:31:45.017956 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Feb 16 21:31:45.018130 master-0 kubenswrapper[38936]: I0216 21:31:45.018097 38936 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Feb 16 21:31:45.039949 master-0 kubenswrapper[38936]: I0216 21:31:45.039855 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-565c66c48f-6w268"]
Feb 16 21:31:45.167821 master-0 kubenswrapper[38936]: I0216 21:31:45.166588 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b52f6568-c0a4-4f4d-a9f2-c2ec62dfb3ef-apiservice-cert\") pod \"metallb-operator-controller-manager-565c66c48f-6w268\" (UID: \"b52f6568-c0a4-4f4d-a9f2-c2ec62dfb3ef\") " pod="metallb-system/metallb-operator-controller-manager-565c66c48f-6w268"
Feb 16 21:31:45.167821 master-0 kubenswrapper[38936]: I0216 21:31:45.166719 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b52f6568-c0a4-4f4d-a9f2-c2ec62dfb3ef-webhook-cert\") pod \"metallb-operator-controller-manager-565c66c48f-6w268\" (UID: \"b52f6568-c0a4-4f4d-a9f2-c2ec62dfb3ef\") " pod="metallb-system/metallb-operator-controller-manager-565c66c48f-6w268"
Feb 16 21:31:45.167821 master-0 kubenswrapper[38936]: I0216 21:31:45.166741 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdjc5\" (UniqueName: \"kubernetes.io/projected/b52f6568-c0a4-4f4d-a9f2-c2ec62dfb3ef-kube-api-access-sdjc5\") pod \"metallb-operator-controller-manager-565c66c48f-6w268\" (UID: \"b52f6568-c0a4-4f4d-a9f2-c2ec62dfb3ef\") " pod="metallb-system/metallb-operator-controller-manager-565c66c48f-6w268"
Feb 16 21:31:45.268518 master-0 kubenswrapper[38936]: I0216 21:31:45.268423 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b52f6568-c0a4-4f4d-a9f2-c2ec62dfb3ef-webhook-cert\") pod \"metallb-operator-controller-manager-565c66c48f-6w268\" (UID: \"b52f6568-c0a4-4f4d-a9f2-c2ec62dfb3ef\") " pod="metallb-system/metallb-operator-controller-manager-565c66c48f-6w268"
Feb 16 21:31:45.268518 master-0 kubenswrapper[38936]: I0216 21:31:45.268482 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdjc5\" (UniqueName: \"kubernetes.io/projected/b52f6568-c0a4-4f4d-a9f2-c2ec62dfb3ef-kube-api-access-sdjc5\") pod \"metallb-operator-controller-manager-565c66c48f-6w268\" (UID: \"b52f6568-c0a4-4f4d-a9f2-c2ec62dfb3ef\") " pod="metallb-system/metallb-operator-controller-manager-565c66c48f-6w268"
Feb 16 21:31:45.269002 master-0 kubenswrapper[38936]: I0216 21:31:45.268589 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b52f6568-c0a4-4f4d-a9f2-c2ec62dfb3ef-apiservice-cert\") pod \"metallb-operator-controller-manager-565c66c48f-6w268\" (UID: \"b52f6568-c0a4-4f4d-a9f2-c2ec62dfb3ef\") " pod="metallb-system/metallb-operator-controller-manager-565c66c48f-6w268"
Feb 16 21:31:45.272495 master-0 kubenswrapper[38936]: I0216 21:31:45.272453 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b52f6568-c0a4-4f4d-a9f2-c2ec62dfb3ef-apiservice-cert\") pod \"metallb-operator-controller-manager-565c66c48f-6w268\" (UID: \"b52f6568-c0a4-4f4d-a9f2-c2ec62dfb3ef\") " pod="metallb-system/metallb-operator-controller-manager-565c66c48f-6w268"
Feb 16 21:31:45.272762 master-0 kubenswrapper[38936]: I0216 21:31:45.272697 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b52f6568-c0a4-4f4d-a9f2-c2ec62dfb3ef-webhook-cert\") pod \"metallb-operator-controller-manager-565c66c48f-6w268\" (UID: \"b52f6568-c0a4-4f4d-a9f2-c2ec62dfb3ef\") " pod="metallb-system/metallb-operator-controller-manager-565c66c48f-6w268"
Feb 16 21:31:45.791676 master-0 kubenswrapper[38936]: I0216 21:31:45.787961 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdjc5\" (UniqueName: \"kubernetes.io/projected/b52f6568-c0a4-4f4d-a9f2-c2ec62dfb3ef-kube-api-access-sdjc5\") pod \"metallb-operator-controller-manager-565c66c48f-6w268\" (UID: \"b52f6568-c0a4-4f4d-a9f2-c2ec62dfb3ef\") " pod="metallb-system/metallb-operator-controller-manager-565c66c48f-6w268"
Feb 16 21:31:45.929430 master-0 kubenswrapper[38936]: I0216 21:31:45.929353 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-565c66c48f-6w268"
Feb 16 21:31:46.132685 master-0 kubenswrapper[38936]: I0216 21:31:46.112063 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-xk5kv"]
Feb 16 21:31:46.134706 master-0 kubenswrapper[38936]: I0216 21:31:46.134635 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-xk5kv"
Feb 16 21:31:46.228201 master-0 kubenswrapper[38936]: I0216 21:31:46.228105 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-xk5kv"]
Feb 16 21:31:46.312457 master-0 kubenswrapper[38936]: I0216 21:31:46.312366 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7cee43ed-c371-40cc-82aa-f3aaa783ae5a-bound-sa-token\") pod \"cert-manager-545d4d4674-xk5kv\" (UID: \"7cee43ed-c371-40cc-82aa-f3aaa783ae5a\") " pod="cert-manager/cert-manager-545d4d4674-xk5kv"
Feb 16 21:31:46.312457 master-0 kubenswrapper[38936]: I0216 21:31:46.312449 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp8jj\" (UniqueName: \"kubernetes.io/projected/7cee43ed-c371-40cc-82aa-f3aaa783ae5a-kube-api-access-mp8jj\") pod \"cert-manager-545d4d4674-xk5kv\" (UID: \"7cee43ed-c371-40cc-82aa-f3aaa783ae5a\") " pod="cert-manager/cert-manager-545d4d4674-xk5kv"
Feb 16 21:31:46.419317 master-0 kubenswrapper[38936]: I0216 21:31:46.418612 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7cee43ed-c371-40cc-82aa-f3aaa783ae5a-bound-sa-token\") pod \"cert-manager-545d4d4674-xk5kv\" (UID: \"7cee43ed-c371-40cc-82aa-f3aaa783ae5a\") " pod="cert-manager/cert-manager-545d4d4674-xk5kv"
Feb 16 21:31:46.419317 master-0 kubenswrapper[38936]: I0216 21:31:46.418733 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp8jj\" (UniqueName: \"kubernetes.io/projected/7cee43ed-c371-40cc-82aa-f3aaa783ae5a-kube-api-access-mp8jj\") pod \"cert-manager-545d4d4674-xk5kv\" (UID: \"7cee43ed-c371-40cc-82aa-f3aaa783ae5a\") " pod="cert-manager/cert-manager-545d4d4674-xk5kv"
Feb 16 21:31:46.478564 master-0 kubenswrapper[38936]: I0216 21:31:46.475437 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp8jj\" (UniqueName: \"kubernetes.io/projected/7cee43ed-c371-40cc-82aa-f3aaa783ae5a-kube-api-access-mp8jj\") pod \"cert-manager-545d4d4674-xk5kv\" (UID: \"7cee43ed-c371-40cc-82aa-f3aaa783ae5a\") " pod="cert-manager/cert-manager-545d4d4674-xk5kv"
Feb 16 21:31:46.486295 master-0 kubenswrapper[38936]: I0216 21:31:46.486182 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7cee43ed-c371-40cc-82aa-f3aaa783ae5a-bound-sa-token\") pod \"cert-manager-545d4d4674-xk5kv\" (UID: \"7cee43ed-c371-40cc-82aa-f3aaa783ae5a\") " pod="cert-manager/cert-manager-545d4d4674-xk5kv"
Feb 16 21:31:46.610160 master-0 kubenswrapper[38936]: I0216 21:31:46.609995 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-xk5kv"
Feb 16 21:31:46.625681 master-0 kubenswrapper[38936]: I0216 21:31:46.624618 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-cc569959-rrghc"]
Feb 16 21:31:46.625681 master-0 kubenswrapper[38936]: I0216 21:31:46.625629 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-cc569959-rrghc"
Feb 16 21:31:46.673295 master-0 kubenswrapper[38936]: I0216 21:31:46.673240 38936 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 16 21:31:46.674567 master-0 kubenswrapper[38936]: I0216 21:31:46.673460 38936 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Feb 16 21:31:46.687421 master-0 kubenswrapper[38936]: I0216 21:31:46.687344 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-cc569959-rrghc"]
Feb 16 21:31:46.712357 master-0 kubenswrapper[38936]: I0216 21:31:46.712292 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-565c66c48f-6w268"]
Feb 16 21:31:46.751692 master-0 kubenswrapper[38936]: I0216 21:31:46.751562 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc1bf574-b727-4d45-a63d-96c563bf046e-webhook-cert\") pod \"metallb-operator-webhook-server-cc569959-rrghc\" (UID: \"bc1bf574-b727-4d45-a63d-96c563bf046e\") " pod="metallb-system/metallb-operator-webhook-server-cc569959-rrghc"
Feb 16 21:31:46.751692 master-0 kubenswrapper[38936]: I0216 21:31:46.751623 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdwgz\" (UniqueName: \"kubernetes.io/projected/bc1bf574-b727-4d45-a63d-96c563bf046e-kube-api-access-jdwgz\") pod \"metallb-operator-webhook-server-cc569959-rrghc\" (UID: \"bc1bf574-b727-4d45-a63d-96c563bf046e\") " pod="metallb-system/metallb-operator-webhook-server-cc569959-rrghc"
Feb 16 21:31:46.751869 master-0 kubenswrapper[38936]: I0216 21:31:46.751727 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc1bf574-b727-4d45-a63d-96c563bf046e-apiservice-cert\") pod \"metallb-operator-webhook-server-cc569959-rrghc\" (UID: \"bc1bf574-b727-4d45-a63d-96c563bf046e\") " pod="metallb-system/metallb-operator-webhook-server-cc569959-rrghc"
Feb 16 21:31:46.869705 master-0 kubenswrapper[38936]: I0216 21:31:46.869354 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc1bf574-b727-4d45-a63d-96c563bf046e-webhook-cert\") pod \"metallb-operator-webhook-server-cc569959-rrghc\" (UID: \"bc1bf574-b727-4d45-a63d-96c563bf046e\") " pod="metallb-system/metallb-operator-webhook-server-cc569959-rrghc"
Feb 16 21:31:46.869705 master-0 kubenswrapper[38936]: I0216 21:31:46.869430 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdwgz\" (UniqueName: \"kubernetes.io/projected/bc1bf574-b727-4d45-a63d-96c563bf046e-kube-api-access-jdwgz\") pod \"metallb-operator-webhook-server-cc569959-rrghc\" (UID: \"bc1bf574-b727-4d45-a63d-96c563bf046e\") " pod="metallb-system/metallb-operator-webhook-server-cc569959-rrghc"
Feb 16 21:31:46.869705 master-0 kubenswrapper[38936]: I0216 21:31:46.869559 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc1bf574-b727-4d45-a63d-96c563bf046e-apiservice-cert\") pod \"metallb-operator-webhook-server-cc569959-rrghc\" (UID: \"bc1bf574-b727-4d45-a63d-96c563bf046e\") " pod="metallb-system/metallb-operator-webhook-server-cc569959-rrghc"
Feb 16 21:31:46.891563 master-0 kubenswrapper[38936]: I0216 21:31:46.884457 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc1bf574-b727-4d45-a63d-96c563bf046e-webhook-cert\") pod \"metallb-operator-webhook-server-cc569959-rrghc\" (UID: \"bc1bf574-b727-4d45-a63d-96c563bf046e\") " pod="metallb-system/metallb-operator-webhook-server-cc569959-rrghc"
Feb 16 21:31:46.893852 master-0 kubenswrapper[38936]: I0216 21:31:46.892763 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc1bf574-b727-4d45-a63d-96c563bf046e-apiservice-cert\") pod \"metallb-operator-webhook-server-cc569959-rrghc\" (UID: \"bc1bf574-b727-4d45-a63d-96c563bf046e\") " pod="metallb-system/metallb-operator-webhook-server-cc569959-rrghc"
Feb 16 21:31:46.904520 master-0 kubenswrapper[38936]: I0216 21:31:46.900245 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-565c66c48f-6w268" event={"ID":"b52f6568-c0a4-4f4d-a9f2-c2ec62dfb3ef","Type":"ContainerStarted","Data":"00a07e795d3d3a1d70e35c923b2f7331ac6f123a759871a9fee04e0b0e751a0a"}
Feb 16 21:31:46.930891 master-0 kubenswrapper[38936]: I0216 21:31:46.930831 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdwgz\" (UniqueName: \"kubernetes.io/projected/bc1bf574-b727-4d45-a63d-96c563bf046e-kube-api-access-jdwgz\") pod \"metallb-operator-webhook-server-cc569959-rrghc\" (UID: \"bc1bf574-b727-4d45-a63d-96c563bf046e\") " pod="metallb-system/metallb-operator-webhook-server-cc569959-rrghc"
Feb 16 21:31:47.006033 master-0 kubenswrapper[38936]: I0216 21:31:47.005729 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-cc569959-rrghc"
Feb 16 21:31:47.402432 master-0 kubenswrapper[38936]: I0216 21:31:47.400592 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-xk5kv"]
Feb 16 21:31:47.678656 master-0 kubenswrapper[38936]: I0216 21:31:47.678585 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-cc569959-rrghc"]
Feb 16 21:31:47.689283 master-0 kubenswrapper[38936]: W0216 21:31:47.689198 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc1bf574_b727_4d45_a63d_96c563bf046e.slice/crio-d40f7c1161ac8888c5cd7f8452ebf641ac5640daf0f171b9e927e06d33dcab03 WatchSource:0}: Error finding container d40f7c1161ac8888c5cd7f8452ebf641ac5640daf0f171b9e927e06d33dcab03: Status 404 returned error can't find the container with id d40f7c1161ac8888c5cd7f8452ebf641ac5640daf0f171b9e927e06d33dcab03
Feb 16 21:31:47.910130 master-0 kubenswrapper[38936]: I0216 21:31:47.909941 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-cc569959-rrghc" event={"ID":"bc1bf574-b727-4d45-a63d-96c563bf046e","Type":"ContainerStarted","Data":"d40f7c1161ac8888c5cd7f8452ebf641ac5640daf0f171b9e927e06d33dcab03"}
Feb 16 21:31:47.911855 master-0 kubenswrapper[38936]: I0216 21:31:47.911803 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-xk5kv" event={"ID":"7cee43ed-c371-40cc-82aa-f3aaa783ae5a","Type":"ContainerStarted","Data":"677e8780ac19adabbd73d85cb859c21af1e600e9049682d031f082697fff7284"}
Feb 16 21:31:47.911855 master-0 kubenswrapper[38936]: I0216 21:31:47.911855 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-xk5kv" event={"ID":"7cee43ed-c371-40cc-82aa-f3aaa783ae5a","Type":"ContainerStarted","Data":"dfa19d9ad7cb8a1293af76d6c3eaf86cee76d0ab31a5c3410c4b279240b4c431"}
Feb 16 21:31:47.956795 master-0 kubenswrapper[38936]: I0216 21:31:47.956719 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-xk5kv" podStartSLOduration=2.9566992069999998 podStartE2EDuration="2.956699207s" podCreationTimestamp="2026-02-16 21:31:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:31:47.9549556 +0000 UTC m=+538.306958962" watchObservedRunningTime="2026-02-16 21:31:47.956699207 +0000 UTC m=+538.308702569"
Feb 16 21:31:50.265775 master-0 kubenswrapper[38936]: I0216 21:31:50.265343 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-gxffr"
Feb 16 21:31:53.555476 master-0 kubenswrapper[38936]: I0216 21:31:53.549592 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-fb7lf"]
Feb 16 21:31:53.558056 master-0 kubenswrapper[38936]: I0216 21:31:53.558024 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fb7lf"
Feb 16 21:31:53.561019 master-0 kubenswrapper[38936]: I0216 21:31:53.560758 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Feb 16 21:31:53.561095 master-0 kubenswrapper[38936]: I0216 21:31:53.561029 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Feb 16 21:31:53.580004 master-0 kubenswrapper[38936]: I0216 21:31:53.579938 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-fb7lf"]
Feb 16 21:31:53.672524 master-0 kubenswrapper[38936]: I0216 21:31:53.672468 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh"]
Feb 16 21:31:53.673425 master-0 kubenswrapper[38936]: I0216 21:31:53.673405 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh"
Feb 16 21:31:53.683030 master-0 kubenswrapper[38936]: I0216 21:31:53.682799 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Feb 16 21:31:53.687909 master-0 kubenswrapper[38936]: I0216 21:31:53.687829 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfgnh\" (UniqueName: \"kubernetes.io/projected/2ea6b6f8-3caa-490a-a287-a4879620bc3f-kube-api-access-wfgnh\") pod \"obo-prometheus-operator-68bc856cb9-fb7lf\" (UID: \"2ea6b6f8-3caa-490a-a287-a4879620bc3f\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fb7lf"
Feb 16 21:31:53.701390 master-0 kubenswrapper[38936]: I0216 21:31:53.701303 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp"]
Feb 16 21:31:53.703375 master-0 kubenswrapper[38936]: I0216 21:31:53.702788 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp"
Feb 16 21:31:53.724376 master-0 kubenswrapper[38936]: I0216 21:31:53.724325 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh"]
Feb 16 21:31:53.739203 master-0 kubenswrapper[38936]: I0216 21:31:53.739138 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp"]
Feb 16 21:31:53.788903 master-0 kubenswrapper[38936]: I0216 21:31:53.788838 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfgnh\" (UniqueName: \"kubernetes.io/projected/2ea6b6f8-3caa-490a-a287-a4879620bc3f-kube-api-access-wfgnh\") pod \"obo-prometheus-operator-68bc856cb9-fb7lf\" (UID: \"2ea6b6f8-3caa-490a-a287-a4879620bc3f\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fb7lf"
Feb 16 21:31:53.789146 master-0 kubenswrapper[38936]: I0216 21:31:53.788936 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7f47676-5b56-4ae9-814a-fbff90fbc497-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp\" (UID: \"a7f47676-5b56-4ae9-814a-fbff90fbc497\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp"
Feb 16 21:31:53.789146 master-0 kubenswrapper[38936]: I0216 21:31:53.789000 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c73aa8f-b221-423a-95c2-d3a255335f5d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh\" (UID: \"5c73aa8f-b221-423a-95c2-d3a255335f5d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh"
Feb 16 21:31:53.789146 master-0 kubenswrapper[38936]: I0216 21:31:53.789030 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c73aa8f-b221-423a-95c2-d3a255335f5d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh\" (UID: \"5c73aa8f-b221-423a-95c2-d3a255335f5d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh"
Feb 16 21:31:53.789146 master-0 kubenswrapper[38936]: I0216 21:31:53.789054 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7f47676-5b56-4ae9-814a-fbff90fbc497-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp\" (UID: \"a7f47676-5b56-4ae9-814a-fbff90fbc497\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp"
Feb 16 21:31:53.828263 master-0 kubenswrapper[38936]: I0216 21:31:53.828142 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfgnh\" (UniqueName: \"kubernetes.io/projected/2ea6b6f8-3caa-490a-a287-a4879620bc3f-kube-api-access-wfgnh\") pod \"obo-prometheus-operator-68bc856cb9-fb7lf\" (UID: \"2ea6b6f8-3caa-490a-a287-a4879620bc3f\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fb7lf"
Feb 16 21:31:53.849554 master-0 kubenswrapper[38936]: I0216 21:31:53.848791 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-6zqfb"]
Feb 16 21:31:53.849904 master-0 kubenswrapper[38936]: I0216 21:31:53.849862 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-6zqfb"
Feb 16 21:31:53.852859 master-0 kubenswrapper[38936]: I0216 21:31:53.852316 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Feb 16 21:31:53.895445 master-0 kubenswrapper[38936]: I0216 21:31:53.892391 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7f47676-5b56-4ae9-814a-fbff90fbc497-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp\" (UID: \"a7f47676-5b56-4ae9-814a-fbff90fbc497\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp"
Feb 16 21:31:53.895445 master-0 kubenswrapper[38936]: I0216 21:31:53.892470 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ab3a497-bb0b-422a-bf01-570e4aceb014-observability-operator-tls\") pod \"observability-operator-59bdc8b94-6zqfb\" (UID: \"8ab3a497-bb0b-422a-bf01-570e4aceb014\") " pod="openshift-operators/observability-operator-59bdc8b94-6zqfb"
Feb 16 21:31:53.895445 master-0 kubenswrapper[38936]: I0216 21:31:53.892526 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkpnl\" (UniqueName: \"kubernetes.io/projected/8ab3a497-bb0b-422a-bf01-570e4aceb014-kube-api-access-vkpnl\") pod \"observability-operator-59bdc8b94-6zqfb\" (UID: \"8ab3a497-bb0b-422a-bf01-570e4aceb014\") " pod="openshift-operators/observability-operator-59bdc8b94-6zqfb"
Feb 16 21:31:53.895445 master-0 kubenswrapper[38936]: I0216 21:31:53.892574 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7f47676-5b56-4ae9-814a-fbff90fbc497-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp\" (UID: \"a7f47676-5b56-4ae9-814a-fbff90fbc497\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp"
Feb 16 21:31:53.895445 master-0 kubenswrapper[38936]: I0216 21:31:53.892688 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c73aa8f-b221-423a-95c2-d3a255335f5d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh\" (UID: \"5c73aa8f-b221-423a-95c2-d3a255335f5d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh"
Feb 16 21:31:53.895445 master-0 kubenswrapper[38936]: I0216 21:31:53.892725 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c73aa8f-b221-423a-95c2-d3a255335f5d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh\" (UID: \"5c73aa8f-b221-423a-95c2-d3a255335f5d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh"
Feb 16 21:31:53.899852 master-0 kubenswrapper[38936]: I0216 21:31:53.899810 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-6zqfb"]
Feb 16 21:31:53.900280 master-0 kubenswrapper[38936]: I0216 21:31:53.900242 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7f47676-5b56-4ae9-814a-fbff90fbc497-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp\" (UID: \"a7f47676-5b56-4ae9-814a-fbff90fbc497\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp"
Feb 16 21:31:53.900704 master-0 kubenswrapper[38936]: I0216 21:31:53.900672 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName:
\"kubernetes.io/secret/a7f47676-5b56-4ae9-814a-fbff90fbc497-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp\" (UID: \"a7f47676-5b56-4ae9-814a-fbff90fbc497\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp" Feb 16 21:31:53.902158 master-0 kubenswrapper[38936]: I0216 21:31:53.902098 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fb7lf" Feb 16 21:31:53.916135 master-0 kubenswrapper[38936]: I0216 21:31:53.909445 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c73aa8f-b221-423a-95c2-d3a255335f5d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh\" (UID: \"5c73aa8f-b221-423a-95c2-d3a255335f5d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh" Feb 16 21:31:53.916135 master-0 kubenswrapper[38936]: I0216 21:31:53.910208 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c73aa8f-b221-423a-95c2-d3a255335f5d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh\" (UID: \"5c73aa8f-b221-423a-95c2-d3a255335f5d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh" Feb 16 21:31:53.994671 master-0 kubenswrapper[38936]: I0216 21:31:53.993995 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ab3a497-bb0b-422a-bf01-570e4aceb014-observability-operator-tls\") pod \"observability-operator-59bdc8b94-6zqfb\" (UID: \"8ab3a497-bb0b-422a-bf01-570e4aceb014\") " pod="openshift-operators/observability-operator-59bdc8b94-6zqfb" Feb 16 21:31:53.994671 master-0 kubenswrapper[38936]: I0216 21:31:53.994065 38936 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vkpnl\" (UniqueName: \"kubernetes.io/projected/8ab3a497-bb0b-422a-bf01-570e4aceb014-kube-api-access-vkpnl\") pod \"observability-operator-59bdc8b94-6zqfb\" (UID: \"8ab3a497-bb0b-422a-bf01-570e4aceb014\") " pod="openshift-operators/observability-operator-59bdc8b94-6zqfb" Feb 16 21:31:53.998900 master-0 kubenswrapper[38936]: I0216 21:31:53.996935 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh" Feb 16 21:31:54.002467 master-0 kubenswrapper[38936]: I0216 21:31:54.002426 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ab3a497-bb0b-422a-bf01-570e4aceb014-observability-operator-tls\") pod \"observability-operator-59bdc8b94-6zqfb\" (UID: \"8ab3a497-bb0b-422a-bf01-570e4aceb014\") " pod="openshift-operators/observability-operator-59bdc8b94-6zqfb" Feb 16 21:31:54.019685 master-0 kubenswrapper[38936]: I0216 21:31:54.013669 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkpnl\" (UniqueName: \"kubernetes.io/projected/8ab3a497-bb0b-422a-bf01-570e4aceb014-kube-api-access-vkpnl\") pod \"observability-operator-59bdc8b94-6zqfb\" (UID: \"8ab3a497-bb0b-422a-bf01-570e4aceb014\") " pod="openshift-operators/observability-operator-59bdc8b94-6zqfb" Feb 16 21:31:54.046946 master-0 kubenswrapper[38936]: I0216 21:31:54.045117 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-55r4l"] Feb 16 21:31:54.047036 master-0 kubenswrapper[38936]: I0216 21:31:54.046970 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-55r4l" Feb 16 21:31:54.077693 master-0 kubenswrapper[38936]: I0216 21:31:54.072770 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp" Feb 16 21:31:54.116200 master-0 kubenswrapper[38936]: I0216 21:31:54.091068 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-55r4l"] Feb 16 21:31:54.116200 master-0 kubenswrapper[38936]: I0216 21:31:54.101006 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rgw8\" (UniqueName: \"kubernetes.io/projected/5d6a3421-3d34-4ac9-8ea4-469558dd0aba-kube-api-access-7rgw8\") pod \"perses-operator-5bf474d74f-55r4l\" (UID: \"5d6a3421-3d34-4ac9-8ea4-469558dd0aba\") " pod="openshift-operators/perses-operator-5bf474d74f-55r4l" Feb 16 21:31:54.116200 master-0 kubenswrapper[38936]: I0216 21:31:54.101150 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5d6a3421-3d34-4ac9-8ea4-469558dd0aba-openshift-service-ca\") pod \"perses-operator-5bf474d74f-55r4l\" (UID: \"5d6a3421-3d34-4ac9-8ea4-469558dd0aba\") " pod="openshift-operators/perses-operator-5bf474d74f-55r4l" Feb 16 21:31:54.116200 master-0 kubenswrapper[38936]: I0216 21:31:54.103362 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-6zqfb" Feb 16 21:31:54.204052 master-0 kubenswrapper[38936]: I0216 21:31:54.203359 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rgw8\" (UniqueName: \"kubernetes.io/projected/5d6a3421-3d34-4ac9-8ea4-469558dd0aba-kube-api-access-7rgw8\") pod \"perses-operator-5bf474d74f-55r4l\" (UID: \"5d6a3421-3d34-4ac9-8ea4-469558dd0aba\") " pod="openshift-operators/perses-operator-5bf474d74f-55r4l" Feb 16 21:31:54.204052 master-0 kubenswrapper[38936]: I0216 21:31:54.203471 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5d6a3421-3d34-4ac9-8ea4-469558dd0aba-openshift-service-ca\") pod \"perses-operator-5bf474d74f-55r4l\" (UID: \"5d6a3421-3d34-4ac9-8ea4-469558dd0aba\") " pod="openshift-operators/perses-operator-5bf474d74f-55r4l" Feb 16 21:31:54.204692 master-0 kubenswrapper[38936]: I0216 21:31:54.204433 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5d6a3421-3d34-4ac9-8ea4-469558dd0aba-openshift-service-ca\") pod \"perses-operator-5bf474d74f-55r4l\" (UID: \"5d6a3421-3d34-4ac9-8ea4-469558dd0aba\") " pod="openshift-operators/perses-operator-5bf474d74f-55r4l" Feb 16 21:31:54.233784 master-0 kubenswrapper[38936]: I0216 21:31:54.232419 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rgw8\" (UniqueName: \"kubernetes.io/projected/5d6a3421-3d34-4ac9-8ea4-469558dd0aba-kube-api-access-7rgw8\") pod \"perses-operator-5bf474d74f-55r4l\" (UID: \"5d6a3421-3d34-4ac9-8ea4-469558dd0aba\") " pod="openshift-operators/perses-operator-5bf474d74f-55r4l" Feb 16 21:31:54.422783 master-0 kubenswrapper[38936]: I0216 21:31:54.420280 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-55r4l" Feb 16 21:31:54.522822 master-0 kubenswrapper[38936]: I0216 21:31:54.519932 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh"] Feb 16 21:31:54.528366 master-0 kubenswrapper[38936]: I0216 21:31:54.526291 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-fb7lf"] Feb 16 21:31:54.534824 master-0 kubenswrapper[38936]: W0216 21:31:54.531794 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c73aa8f_b221_423a_95c2_d3a255335f5d.slice/crio-80637f760c3642debd76948857fa59559104cca8555dd22af94865d3a9f4f99d WatchSource:0}: Error finding container 80637f760c3642debd76948857fa59559104cca8555dd22af94865d3a9f4f99d: Status 404 returned error can't find the container with id 80637f760c3642debd76948857fa59559104cca8555dd22af94865d3a9f4f99d Feb 16 21:31:54.537966 master-0 kubenswrapper[38936]: W0216 21:31:54.537633 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ea6b6f8_3caa_490a_a287_a4879620bc3f.slice/crio-124aa4de740f526fe80d24293663e70dd9c023a73921009725694cc2005f8cdc WatchSource:0}: Error finding container 124aa4de740f526fe80d24293663e70dd9c023a73921009725694cc2005f8cdc: Status 404 returned error can't find the container with id 124aa4de740f526fe80d24293663e70dd9c023a73921009725694cc2005f8cdc Feb 16 21:31:54.821311 master-0 kubenswrapper[38936]: I0216 21:31:54.821203 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp"] Feb 16 21:31:54.943702 master-0 kubenswrapper[38936]: I0216 21:31:54.943618 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operators/observability-operator-59bdc8b94-6zqfb"] Feb 16 21:31:54.996805 master-0 kubenswrapper[38936]: I0216 21:31:54.996478 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh" event={"ID":"5c73aa8f-b221-423a-95c2-d3a255335f5d","Type":"ContainerStarted","Data":"80637f760c3642debd76948857fa59559104cca8555dd22af94865d3a9f4f99d"} Feb 16 21:31:55.010680 master-0 kubenswrapper[38936]: I0216 21:31:55.009773 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-55r4l"] Feb 16 21:31:55.010680 master-0 kubenswrapper[38936]: I0216 21:31:55.010503 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-565c66c48f-6w268" event={"ID":"b52f6568-c0a4-4f4d-a9f2-c2ec62dfb3ef","Type":"ContainerStarted","Data":"22fef4fe665d1d7e437bd017068e287b46464ff5cc71c448907809ef854c901e"} Feb 16 21:31:55.010680 master-0 kubenswrapper[38936]: I0216 21:31:55.010608 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-565c66c48f-6w268" Feb 16 21:31:55.014766 master-0 kubenswrapper[38936]: I0216 21:31:55.012351 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp" event={"ID":"a7f47676-5b56-4ae9-814a-fbff90fbc497","Type":"ContainerStarted","Data":"a55bf5c45805d98faaa97bffa55264f419d2d8310a6006b991b9ae33d650cc78"} Feb 16 21:31:55.014766 master-0 kubenswrapper[38936]: I0216 21:31:55.014146 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-cc569959-rrghc" event={"ID":"bc1bf574-b727-4d45-a63d-96c563bf046e","Type":"ContainerStarted","Data":"e9022fe9999ce52848a9110da2c7096eb4f4cfd956ae019a12f737674e9181de"} Feb 16 21:31:55.014766 master-0 kubenswrapper[38936]: I0216 
21:31:55.014259 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-cc569959-rrghc" Feb 16 21:31:55.018671 master-0 kubenswrapper[38936]: I0216 21:31:55.015308 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fb7lf" event={"ID":"2ea6b6f8-3caa-490a-a287-a4879620bc3f","Type":"ContainerStarted","Data":"124aa4de740f526fe80d24293663e70dd9c023a73921009725694cc2005f8cdc"} Feb 16 21:31:55.018671 master-0 kubenswrapper[38936]: I0216 21:31:55.016180 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-6zqfb" event={"ID":"8ab3a497-bb0b-422a-bf01-570e4aceb014","Type":"ContainerStarted","Data":"67bd30b0e4f9d9f386e63949443392bc211276d91c5e9851909630db259f6942"} Feb 16 21:31:55.052683 master-0 kubenswrapper[38936]: I0216 21:31:55.052571 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-565c66c48f-6w268" podStartSLOduration=4.185699775 podStartE2EDuration="11.052556543s" podCreationTimestamp="2026-02-16 21:31:44 +0000 UTC" firstStartedPulling="2026-02-16 21:31:46.747167952 +0000 UTC m=+537.099171314" lastFinishedPulling="2026-02-16 21:31:53.614024719 +0000 UTC m=+543.966028082" observedRunningTime="2026-02-16 21:31:55.048212534 +0000 UTC m=+545.400215886" watchObservedRunningTime="2026-02-16 21:31:55.052556543 +0000 UTC m=+545.404559905" Feb 16 21:31:56.034857 master-0 kubenswrapper[38936]: I0216 21:31:56.034404 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-55r4l" event={"ID":"5d6a3421-3d34-4ac9-8ea4-469558dd0aba","Type":"ContainerStarted","Data":"9fad3caa029b023749920ffada1504767c2c4756e0315cf2c33855d4e6aedc84"} Feb 16 21:31:59.929529 master-0 kubenswrapper[38936]: I0216 21:31:59.929399 38936 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="metallb-system/metallb-operator-webhook-server-cc569959-rrghc" podStartSLOduration=7.970766074 podStartE2EDuration="13.929371942s" podCreationTimestamp="2026-02-16 21:31:46 +0000 UTC" firstStartedPulling="2026-02-16 21:31:47.693300921 +0000 UTC m=+538.045304283" lastFinishedPulling="2026-02-16 21:31:53.651906789 +0000 UTC m=+544.003910151" observedRunningTime="2026-02-16 21:31:55.07464239 +0000 UTC m=+545.426645752" watchObservedRunningTime="2026-02-16 21:31:59.929371942 +0000 UTC m=+550.281375304" Feb 16 21:32:07.032620 master-0 kubenswrapper[38936]: I0216 21:32:07.032556 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-cc569959-rrghc" Feb 16 21:32:08.227306 master-0 kubenswrapper[38936]: I0216 21:32:08.227232 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh" event={"ID":"5c73aa8f-b221-423a-95c2-d3a255335f5d","Type":"ContainerStarted","Data":"c8907fc8da67647af79d61215460827c78e1f57e544a393db00aef467b587066"} Feb 16 21:32:08.231304 master-0 kubenswrapper[38936]: I0216 21:32:08.231232 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp" event={"ID":"a7f47676-5b56-4ae9-814a-fbff90fbc497","Type":"ContainerStarted","Data":"5ea451a1fd34681acaa038f12b533d8f4f8eedf65d7c18f8d9aa0bc1af9836a6"} Feb 16 21:32:08.240717 master-0 kubenswrapper[38936]: I0216 21:32:08.234782 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fb7lf" event={"ID":"2ea6b6f8-3caa-490a-a287-a4879620bc3f","Type":"ContainerStarted","Data":"9a84faeaeda2183ddf3fe5e7f09776bb6cc0f5cd420fd5bc8c992951fdb01b2d"} Feb 16 21:32:08.240717 master-0 kubenswrapper[38936]: I0216 21:32:08.237759 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/perses-operator-5bf474d74f-55r4l" event={"ID":"5d6a3421-3d34-4ac9-8ea4-469558dd0aba","Type":"ContainerStarted","Data":"d7472fe4769d05b3c2f6c1cdd34dde6117368f70cd75ba1cec36aca40bf0f6e6"} Feb 16 21:32:08.240717 master-0 kubenswrapper[38936]: I0216 21:32:08.238331 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-55r4l" Feb 16 21:32:08.241623 master-0 kubenswrapper[38936]: I0216 21:32:08.241560 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-6zqfb" event={"ID":"8ab3a497-bb0b-422a-bf01-570e4aceb014","Type":"ContainerStarted","Data":"60fc6dac2bab54fc0a8f86a66d0a780ef7f8c69c1b7cf79e1c4a65d25ffde04b"} Feb 16 21:32:08.242006 master-0 kubenswrapper[38936]: I0216 21:32:08.241951 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-6zqfb" Feb 16 21:32:08.254533 master-0 kubenswrapper[38936]: I0216 21:32:08.252755 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-6zqfb" Feb 16 21:32:08.267358 master-0 kubenswrapper[38936]: I0216 21:32:08.267226 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh" podStartSLOduration=2.879440194 podStartE2EDuration="15.267190441s" podCreationTimestamp="2026-02-16 21:31:53 +0000 UTC" firstStartedPulling="2026-02-16 21:31:54.549121841 +0000 UTC m=+544.901125203" lastFinishedPulling="2026-02-16 21:32:06.936872088 +0000 UTC m=+557.288875450" observedRunningTime="2026-02-16 21:32:08.255573082 +0000 UTC m=+558.607576444" watchObservedRunningTime="2026-02-16 21:32:08.267190441 +0000 UTC m=+558.619193813" Feb 16 21:32:08.298415 master-0 kubenswrapper[38936]: I0216 21:32:08.295148 38936 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-6zqfb" podStartSLOduration=3.245137435 podStartE2EDuration="15.295124948s" podCreationTimestamp="2026-02-16 21:31:53 +0000 UTC" firstStartedPulling="2026-02-16 21:31:54.942695457 +0000 UTC m=+545.294698819" lastFinishedPulling="2026-02-16 21:32:06.99268297 +0000 UTC m=+557.344686332" observedRunningTime="2026-02-16 21:32:08.289452192 +0000 UTC m=+558.641455554" watchObservedRunningTime="2026-02-16 21:32:08.295124948 +0000 UTC m=+558.647128310" Feb 16 21:32:08.358477 master-0 kubenswrapper[38936]: I0216 21:32:08.357443 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp" podStartSLOduration=3.283869027 podStartE2EDuration="15.357415717s" podCreationTimestamp="2026-02-16 21:31:53 +0000 UTC" firstStartedPulling="2026-02-16 21:31:54.853691573 +0000 UTC m=+545.205694935" lastFinishedPulling="2026-02-16 21:32:06.927238263 +0000 UTC m=+557.279241625" observedRunningTime="2026-02-16 21:32:08.343190287 +0000 UTC m=+558.695193649" watchObservedRunningTime="2026-02-16 21:32:08.357415717 +0000 UTC m=+558.709419079" Feb 16 21:32:08.375915 master-0 kubenswrapper[38936]: I0216 21:32:08.375298 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-fb7lf" podStartSLOduration=3.000766875 podStartE2EDuration="15.375277448s" podCreationTimestamp="2026-02-16 21:31:53 +0000 UTC" firstStartedPulling="2026-02-16 21:31:54.547154047 +0000 UTC m=+544.899157409" lastFinishedPulling="2026-02-16 21:32:06.92166462 +0000 UTC m=+557.273667982" observedRunningTime="2026-02-16 21:32:08.374042284 +0000 UTC m=+558.726045636" watchObservedRunningTime="2026-02-16 21:32:08.375277448 +0000 UTC m=+558.727280810" Feb 16 21:32:08.405024 master-0 kubenswrapper[38936]: I0216 21:32:08.404933 38936 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-operators/perses-operator-5bf474d74f-55r4l" podStartSLOduration=3.462352078 podStartE2EDuration="15.404905621s" podCreationTimestamp="2026-02-16 21:31:53 +0000 UTC" firstStartedPulling="2026-02-16 21:31:55.025674695 +0000 UTC m=+545.377678057" lastFinishedPulling="2026-02-16 21:32:06.968228238 +0000 UTC m=+557.320231600" observedRunningTime="2026-02-16 21:32:08.398250049 +0000 UTC m=+558.750253431" watchObservedRunningTime="2026-02-16 21:32:08.404905621 +0000 UTC m=+558.756909003" Feb 16 21:32:14.424098 master-0 kubenswrapper[38936]: I0216 21:32:14.424022 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-55r4l" Feb 16 21:32:25.932352 master-0 kubenswrapper[38936]: I0216 21:32:25.932254 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-565c66c48f-6w268" Feb 16 21:32:33.172238 master-0 kubenswrapper[38936]: I0216 21:32:33.172153 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-q2682"] Feb 16 21:32:33.178778 master-0 kubenswrapper[38936]: I0216 21:32:33.178731 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q2682" Feb 16 21:32:33.185115 master-0 kubenswrapper[38936]: I0216 21:32:33.185033 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-q2682"] Feb 16 21:32:33.185301 master-0 kubenswrapper[38936]: I0216 21:32:33.185205 38936 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 16 21:32:33.196997 master-0 kubenswrapper[38936]: I0216 21:32:33.196913 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-fw88b"] Feb 16 21:32:33.206620 master-0 kubenswrapper[38936]: I0216 21:32:33.206554 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-fw88b" Feb 16 21:32:33.210463 master-0 kubenswrapper[38936]: I0216 21:32:33.210109 38936 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 16 21:32:33.211267 master-0 kubenswrapper[38936]: I0216 21:32:33.210768 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 16 21:32:33.253224 master-0 kubenswrapper[38936]: I0216 21:32:33.253062 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-t6g4d"] Feb 16 21:32:33.254794 master-0 kubenswrapper[38936]: I0216 21:32:33.254766 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-t6g4d" Feb 16 21:32:33.256297 master-0 kubenswrapper[38936]: I0216 21:32:33.256261 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 16 21:32:33.258000 master-0 kubenswrapper[38936]: I0216 21:32:33.257972 38936 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 16 21:32:33.258183 master-0 kubenswrapper[38936]: I0216 21:32:33.258158 38936 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 16 21:32:33.279910 master-0 kubenswrapper[38936]: I0216 21:32:33.275993 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-r5mh6"] Feb 16 21:32:33.279910 master-0 kubenswrapper[38936]: I0216 21:32:33.277632 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-r5mh6" Feb 16 21:32:33.284435 master-0 kubenswrapper[38936]: I0216 21:32:33.284387 38936 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 16 21:32:33.294366 master-0 kubenswrapper[38936]: I0216 21:32:33.293169 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-r5mh6"] Feb 16 21:32:33.318852 master-0 kubenswrapper[38936]: I0216 21:32:33.318087 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7-reloader\") pod \"frr-k8s-fw88b\" (UID: \"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7\") " pod="metallb-system/frr-k8s-fw88b" Feb 16 21:32:33.319420 master-0 kubenswrapper[38936]: I0216 21:32:33.319228 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7-frr-sockets\") pod \"frr-k8s-fw88b\" (UID: \"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7\") " pod="metallb-system/frr-k8s-fw88b" Feb 16 21:32:33.319420 master-0 kubenswrapper[38936]: I0216 21:32:33.319334 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv482\" (UniqueName: \"kubernetes.io/projected/4fa88970-4939-4e43-8806-58b58525e2f9-kube-api-access-nv482\") pod \"frr-k8s-webhook-server-78b44bf5bb-q2682\" (UID: \"4fa88970-4939-4e43-8806-58b58525e2f9\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q2682" Feb 16 21:32:33.319528 master-0 kubenswrapper[38936]: I0216 21:32:33.319477 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt7sc\" (UniqueName: \"kubernetes.io/projected/e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7-kube-api-access-jt7sc\") pod \"frr-k8s-fw88b\" (UID: \"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7\") " pod="metallb-system/frr-k8s-fw88b" Feb 16 21:32:33.319567 master-0 kubenswrapper[38936]: I0216 21:32:33.319523 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fa88970-4939-4e43-8806-58b58525e2f9-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-q2682\" (UID: \"4fa88970-4939-4e43-8806-58b58525e2f9\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q2682" Feb 16 21:32:33.319603 master-0 kubenswrapper[38936]: I0216 21:32:33.319562 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7-metrics-certs\") pod \"frr-k8s-fw88b\" (UID: \"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7\") " pod="metallb-system/frr-k8s-fw88b" Feb 16 21:32:33.319603 master-0 kubenswrapper[38936]: I0216 21:32:33.319593 38936 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7-frr-startup\") pod \"frr-k8s-fw88b\" (UID: \"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7\") " pod="metallb-system/frr-k8s-fw88b"
Feb 16 21:32:33.319790 master-0 kubenswrapper[38936]: I0216 21:32:33.319716 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7-frr-conf\") pod \"frr-k8s-fw88b\" (UID: \"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7\") " pod="metallb-system/frr-k8s-fw88b"
Feb 16 21:32:33.319929 master-0 kubenswrapper[38936]: I0216 21:32:33.319901 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7-metrics\") pod \"frr-k8s-fw88b\" (UID: \"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7\") " pod="metallb-system/frr-k8s-fw88b"
Feb 16 21:32:33.437573 master-0 kubenswrapper[38936]: I0216 21:32:33.437384 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/69f77310-8724-4ec0-878f-dcdd5d84a51f-metallb-excludel2\") pod \"speaker-t6g4d\" (UID: \"69f77310-8724-4ec0-878f-dcdd5d84a51f\") " pod="metallb-system/speaker-t6g4d"
Feb 16 21:32:33.438033 master-0 kubenswrapper[38936]: I0216 21:32:33.438003 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7-reloader\") pod \"frr-k8s-fw88b\" (UID: \"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7\") " pod="metallb-system/frr-k8s-fw88b"
Feb 16 21:32:33.438209 master-0 kubenswrapper[38936]: I0216 21:32:33.438171 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/69f77310-8724-4ec0-878f-dcdd5d84a51f-memberlist\") pod \"speaker-t6g4d\" (UID: \"69f77310-8724-4ec0-878f-dcdd5d84a51f\") " pod="metallb-system/speaker-t6g4d"
Feb 16 21:32:33.438360 master-0 kubenswrapper[38936]: I0216 21:32:33.438341 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7-frr-sockets\") pod \"frr-k8s-fw88b\" (UID: \"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7\") " pod="metallb-system/frr-k8s-fw88b"
Feb 16 21:32:33.438493 master-0 kubenswrapper[38936]: I0216 21:32:33.438470 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8v2q\" (UniqueName: \"kubernetes.io/projected/69f77310-8724-4ec0-878f-dcdd5d84a51f-kube-api-access-n8v2q\") pod \"speaker-t6g4d\" (UID: \"69f77310-8724-4ec0-878f-dcdd5d84a51f\") " pod="metallb-system/speaker-t6g4d"
Feb 16 21:32:33.438638 master-0 kubenswrapper[38936]: I0216 21:32:33.438618 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv482\" (UniqueName: \"kubernetes.io/projected/4fa88970-4939-4e43-8806-58b58525e2f9-kube-api-access-nv482\") pod \"frr-k8s-webhook-server-78b44bf5bb-q2682\" (UID: \"4fa88970-4939-4e43-8806-58b58525e2f9\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q2682"
Feb 16 21:32:33.438976 master-0 kubenswrapper[38936]: I0216 21:32:33.438891 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6a20f55-b6e2-4473-8ea0-7b04865962f7-cert\") pod \"controller-69bbfbf88f-r5mh6\" (UID: \"d6a20f55-b6e2-4473-8ea0-7b04865962f7\") " pod="metallb-system/controller-69bbfbf88f-r5mh6"
Feb 16 21:32:33.439103 master-0 kubenswrapper[38936]: I0216 21:32:33.439076 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt7sc\" (UniqueName: \"kubernetes.io/projected/e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7-kube-api-access-jt7sc\") pod \"frr-k8s-fw88b\" (UID: \"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7\") " pod="metallb-system/frr-k8s-fw88b"
Feb 16 21:32:33.439174 master-0 kubenswrapper[38936]: I0216 21:32:33.439129 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fa88970-4939-4e43-8806-58b58525e2f9-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-q2682\" (UID: \"4fa88970-4939-4e43-8806-58b58525e2f9\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q2682"
Feb 16 21:32:33.439230 master-0 kubenswrapper[38936]: I0216 21:32:33.439170 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlt92\" (UniqueName: \"kubernetes.io/projected/d6a20f55-b6e2-4473-8ea0-7b04865962f7-kube-api-access-wlt92\") pod \"controller-69bbfbf88f-r5mh6\" (UID: \"d6a20f55-b6e2-4473-8ea0-7b04865962f7\") " pod="metallb-system/controller-69bbfbf88f-r5mh6"
Feb 16 21:32:33.439230 master-0 kubenswrapper[38936]: I0216 21:32:33.439200 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6a20f55-b6e2-4473-8ea0-7b04865962f7-metrics-certs\") pod \"controller-69bbfbf88f-r5mh6\" (UID: \"d6a20f55-b6e2-4473-8ea0-7b04865962f7\") " pod="metallb-system/controller-69bbfbf88f-r5mh6"
Feb 16 21:32:33.439329 master-0 kubenswrapper[38936]: I0216 21:32:33.439236 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7-metrics-certs\") pod \"frr-k8s-fw88b\" (UID: \"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7\") " pod="metallb-system/frr-k8s-fw88b"
Feb 16 21:32:33.439329 master-0 kubenswrapper[38936]: I0216 21:32:33.439273 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7-frr-startup\") pod \"frr-k8s-fw88b\" (UID: \"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7\") " pod="metallb-system/frr-k8s-fw88b"
Feb 16 21:32:33.439329 master-0 kubenswrapper[38936]: I0216 21:32:33.439297 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69f77310-8724-4ec0-878f-dcdd5d84a51f-metrics-certs\") pod \"speaker-t6g4d\" (UID: \"69f77310-8724-4ec0-878f-dcdd5d84a51f\") " pod="metallb-system/speaker-t6g4d"
Feb 16 21:32:33.439329 master-0 kubenswrapper[38936]: I0216 21:32:33.439325 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7-frr-conf\") pod \"frr-k8s-fw88b\" (UID: \"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7\") " pod="metallb-system/frr-k8s-fw88b"
Feb 16 21:32:33.439536 master-0 kubenswrapper[38936]: I0216 21:32:33.439403 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7-metrics\") pod \"frr-k8s-fw88b\" (UID: \"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7\") " pod="metallb-system/frr-k8s-fw88b"
Feb 16 21:32:33.440015 master-0 kubenswrapper[38936]: I0216 21:32:33.439984 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7-metrics\") pod \"frr-k8s-fw88b\" (UID: \"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7\") " pod="metallb-system/frr-k8s-fw88b"
Feb 16 21:32:33.447684 master-0 kubenswrapper[38936]: I0216 21:32:33.444038 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7-reloader\") pod \"frr-k8s-fw88b\" (UID: \"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7\") " pod="metallb-system/frr-k8s-fw88b"
Feb 16 21:32:33.447684 master-0 kubenswrapper[38936]: I0216 21:32:33.444373 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7-frr-conf\") pod \"frr-k8s-fw88b\" (UID: \"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7\") " pod="metallb-system/frr-k8s-fw88b"
Feb 16 21:32:33.447684 master-0 kubenswrapper[38936]: I0216 21:32:33.446674 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4fa88970-4939-4e43-8806-58b58525e2f9-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-q2682\" (UID: \"4fa88970-4939-4e43-8806-58b58525e2f9\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q2682"
Feb 16 21:32:33.448581 master-0 kubenswrapper[38936]: I0216 21:32:33.448536 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7-frr-sockets\") pod \"frr-k8s-fw88b\" (UID: \"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7\") " pod="metallb-system/frr-k8s-fw88b"
Feb 16 21:32:33.452667 master-0 kubenswrapper[38936]: I0216 21:32:33.448834 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7-metrics-certs\") pod \"frr-k8s-fw88b\" (UID: \"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7\") " pod="metallb-system/frr-k8s-fw88b"
Feb 16 21:32:33.452667 master-0 kubenswrapper[38936]: I0216 21:32:33.449345 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7-frr-startup\") pod \"frr-k8s-fw88b\" (UID: \"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7\") " pod="metallb-system/frr-k8s-fw88b"
Feb 16 21:32:33.492681 master-0 kubenswrapper[38936]: I0216 21:32:33.487326 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt7sc\" (UniqueName: \"kubernetes.io/projected/e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7-kube-api-access-jt7sc\") pod \"frr-k8s-fw88b\" (UID: \"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7\") " pod="metallb-system/frr-k8s-fw88b"
Feb 16 21:32:33.505678 master-0 kubenswrapper[38936]: I0216 21:32:33.502854 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv482\" (UniqueName: \"kubernetes.io/projected/4fa88970-4939-4e43-8806-58b58525e2f9-kube-api-access-nv482\") pod \"frr-k8s-webhook-server-78b44bf5bb-q2682\" (UID: \"4fa88970-4939-4e43-8806-58b58525e2f9\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q2682"
Feb 16 21:32:33.541490 master-0 kubenswrapper[38936]: I0216 21:32:33.541412 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8v2q\" (UniqueName: \"kubernetes.io/projected/69f77310-8724-4ec0-878f-dcdd5d84a51f-kube-api-access-n8v2q\") pod \"speaker-t6g4d\" (UID: \"69f77310-8724-4ec0-878f-dcdd5d84a51f\") " pod="metallb-system/speaker-t6g4d"
Feb 16 21:32:33.541490 master-0 kubenswrapper[38936]: I0216 21:32:33.541495 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6a20f55-b6e2-4473-8ea0-7b04865962f7-cert\") pod \"controller-69bbfbf88f-r5mh6\" (UID: \"d6a20f55-b6e2-4473-8ea0-7b04865962f7\") " pod="metallb-system/controller-69bbfbf88f-r5mh6"
Feb 16 21:32:33.541961 master-0 kubenswrapper[38936]: I0216 21:32:33.541534 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlt92\" (UniqueName: \"kubernetes.io/projected/d6a20f55-b6e2-4473-8ea0-7b04865962f7-kube-api-access-wlt92\") pod \"controller-69bbfbf88f-r5mh6\" (UID: \"d6a20f55-b6e2-4473-8ea0-7b04865962f7\") " pod="metallb-system/controller-69bbfbf88f-r5mh6"
Feb 16 21:32:33.541961 master-0 kubenswrapper[38936]: I0216 21:32:33.541556 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6a20f55-b6e2-4473-8ea0-7b04865962f7-metrics-certs\") pod \"controller-69bbfbf88f-r5mh6\" (UID: \"d6a20f55-b6e2-4473-8ea0-7b04865962f7\") " pod="metallb-system/controller-69bbfbf88f-r5mh6"
Feb 16 21:32:33.541961 master-0 kubenswrapper[38936]: I0216 21:32:33.541578 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69f77310-8724-4ec0-878f-dcdd5d84a51f-metrics-certs\") pod \"speaker-t6g4d\" (UID: \"69f77310-8724-4ec0-878f-dcdd5d84a51f\") " pod="metallb-system/speaker-t6g4d"
Feb 16 21:32:33.541961 master-0 kubenswrapper[38936]: I0216 21:32:33.541627 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/69f77310-8724-4ec0-878f-dcdd5d84a51f-metallb-excludel2\") pod \"speaker-t6g4d\" (UID: \"69f77310-8724-4ec0-878f-dcdd5d84a51f\") " pod="metallb-system/speaker-t6g4d"
Feb 16 21:32:33.541961 master-0 kubenswrapper[38936]: I0216 21:32:33.541674 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/69f77310-8724-4ec0-878f-dcdd5d84a51f-memberlist\") pod \"speaker-t6g4d\" (UID: \"69f77310-8724-4ec0-878f-dcdd5d84a51f\") " pod="metallb-system/speaker-t6g4d"
Feb 16 21:32:33.541961 master-0 kubenswrapper[38936]: E0216 21:32:33.541800 38936 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 16 21:32:33.541961 master-0 kubenswrapper[38936]: E0216 21:32:33.541853 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69f77310-8724-4ec0-878f-dcdd5d84a51f-memberlist podName:69f77310-8724-4ec0-878f-dcdd5d84a51f nodeName:}" failed. No retries permitted until 2026-02-16 21:32:34.041836238 +0000 UTC m=+584.393839600 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/69f77310-8724-4ec0-878f-dcdd5d84a51f-memberlist") pod "speaker-t6g4d" (UID: "69f77310-8724-4ec0-878f-dcdd5d84a51f") : secret "metallb-memberlist" not found
Feb 16 21:32:33.542202 master-0 kubenswrapper[38936]: E0216 21:32:33.542047 38936 secret.go:189] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found
Feb 16 21:32:33.542202 master-0 kubenswrapper[38936]: E0216 21:32:33.542138 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69f77310-8724-4ec0-878f-dcdd5d84a51f-metrics-certs podName:69f77310-8724-4ec0-878f-dcdd5d84a51f nodeName:}" failed. No retries permitted until 2026-02-16 21:32:34.042115216 +0000 UTC m=+584.394118658 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/69f77310-8724-4ec0-878f-dcdd5d84a51f-metrics-certs") pod "speaker-t6g4d" (UID: "69f77310-8724-4ec0-878f-dcdd5d84a51f") : secret "speaker-certs-secret" not found
Feb 16 21:32:33.542848 master-0 kubenswrapper[38936]: I0216 21:32:33.542819 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/69f77310-8724-4ec0-878f-dcdd5d84a51f-metallb-excludel2\") pod \"speaker-t6g4d\" (UID: \"69f77310-8724-4ec0-878f-dcdd5d84a51f\") " pod="metallb-system/speaker-t6g4d"
Feb 16 21:32:33.546913 master-0 kubenswrapper[38936]: I0216 21:32:33.546861 38936 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 16 21:32:33.548408 master-0 kubenswrapper[38936]: I0216 21:32:33.547303 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6a20f55-b6e2-4473-8ea0-7b04865962f7-metrics-certs\") pod \"controller-69bbfbf88f-r5mh6\" (UID: \"d6a20f55-b6e2-4473-8ea0-7b04865962f7\") " pod="metallb-system/controller-69bbfbf88f-r5mh6"
Feb 16 21:32:33.557127 master-0 kubenswrapper[38936]: I0216 21:32:33.556894 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6a20f55-b6e2-4473-8ea0-7b04865962f7-cert\") pod \"controller-69bbfbf88f-r5mh6\" (UID: \"d6a20f55-b6e2-4473-8ea0-7b04865962f7\") " pod="metallb-system/controller-69bbfbf88f-r5mh6"
Feb 16 21:32:33.564745 master-0 kubenswrapper[38936]: I0216 21:32:33.564432 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlt92\" (UniqueName: \"kubernetes.io/projected/d6a20f55-b6e2-4473-8ea0-7b04865962f7-kube-api-access-wlt92\") pod \"controller-69bbfbf88f-r5mh6\" (UID: \"d6a20f55-b6e2-4473-8ea0-7b04865962f7\") " pod="metallb-system/controller-69bbfbf88f-r5mh6"
Feb 16 21:32:33.566008 master-0 kubenswrapper[38936]: I0216 21:32:33.565976 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8v2q\" (UniqueName: \"kubernetes.io/projected/69f77310-8724-4ec0-878f-dcdd5d84a51f-kube-api-access-n8v2q\") pod \"speaker-t6g4d\" (UID: \"69f77310-8724-4ec0-878f-dcdd5d84a51f\") " pod="metallb-system/speaker-t6g4d"
Feb 16 21:32:33.586299 master-0 kubenswrapper[38936]: I0216 21:32:33.586227 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q2682"
Feb 16 21:32:33.592190 master-0 kubenswrapper[38936]: I0216 21:32:33.592139 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-fw88b"
Feb 16 21:32:33.622710 master-0 kubenswrapper[38936]: I0216 21:32:33.622663 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-r5mh6"
Feb 16 21:32:34.031532 master-0 kubenswrapper[38936]: I0216 21:32:34.029724 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-q2682"]
Feb 16 21:32:34.032324 master-0 kubenswrapper[38936]: W0216 21:32:34.032145 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fa88970_4939_4e43_8806_58b58525e2f9.slice/crio-725e47d9be34e238abb320c23d9d8cf369b6dd7c9a72ae407ed1f2c741da6895 WatchSource:0}: Error finding container 725e47d9be34e238abb320c23d9d8cf369b6dd7c9a72ae407ed1f2c741da6895: Status 404 returned error can't find the container with id 725e47d9be34e238abb320c23d9d8cf369b6dd7c9a72ae407ed1f2c741da6895
Feb 16 21:32:34.053632 master-0 kubenswrapper[38936]: I0216 21:32:34.053533 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69f77310-8724-4ec0-878f-dcdd5d84a51f-metrics-certs\") pod \"speaker-t6g4d\" (UID: \"69f77310-8724-4ec0-878f-dcdd5d84a51f\") " pod="metallb-system/speaker-t6g4d"
Feb 16 21:32:34.053895 master-0 kubenswrapper[38936]: I0216 21:32:34.053847 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/69f77310-8724-4ec0-878f-dcdd5d84a51f-memberlist\") pod \"speaker-t6g4d\" (UID: \"69f77310-8724-4ec0-878f-dcdd5d84a51f\") " pod="metallb-system/speaker-t6g4d"
Feb 16 21:32:34.056884 master-0 kubenswrapper[38936]: E0216 21:32:34.056822 38936 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 16 21:32:34.059395 master-0 kubenswrapper[38936]: E0216 21:32:34.059343 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69f77310-8724-4ec0-878f-dcdd5d84a51f-memberlist podName:69f77310-8724-4ec0-878f-dcdd5d84a51f nodeName:}" failed. No retries permitted until 2026-02-16 21:32:35.059301025 +0000 UTC m=+585.411304437 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/69f77310-8724-4ec0-878f-dcdd5d84a51f-memberlist") pod "speaker-t6g4d" (UID: "69f77310-8724-4ec0-878f-dcdd5d84a51f") : secret "metallb-memberlist" not found
Feb 16 21:32:34.060527 master-0 kubenswrapper[38936]: I0216 21:32:34.060457 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/69f77310-8724-4ec0-878f-dcdd5d84a51f-metrics-certs\") pod \"speaker-t6g4d\" (UID: \"69f77310-8724-4ec0-878f-dcdd5d84a51f\") " pod="metallb-system/speaker-t6g4d"
Feb 16 21:32:34.130505 master-0 kubenswrapper[38936]: I0216 21:32:34.130439 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-r5mh6"]
Feb 16 21:32:34.148685 master-0 kubenswrapper[38936]: W0216 21:32:34.148029 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6a20f55_b6e2_4473_8ea0_7b04865962f7.slice/crio-13da63f858daa6469d41cb937b7efd54d7ae47385f81a89d6d78cac5ee83194e WatchSource:0}: Error finding container 13da63f858daa6469d41cb937b7efd54d7ae47385f81a89d6d78cac5ee83194e: Status 404 returned error can't find the container with id 13da63f858daa6469d41cb937b7efd54d7ae47385f81a89d6d78cac5ee83194e
Feb 16 21:32:34.483776 master-0 kubenswrapper[38936]: I0216 21:32:34.483704 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q2682" event={"ID":"4fa88970-4939-4e43-8806-58b58525e2f9","Type":"ContainerStarted","Data":"725e47d9be34e238abb320c23d9d8cf369b6dd7c9a72ae407ed1f2c741da6895"}
Feb 16 21:32:34.485165 master-0 kubenswrapper[38936]: I0216 21:32:34.485133 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fw88b" event={"ID":"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7","Type":"ContainerStarted","Data":"d81fa4ab314fd6c858792a8420444153bf6af50fc10760816e2ae34bf59a1124"}
Feb 16 21:32:34.487236 master-0 kubenswrapper[38936]: I0216 21:32:34.487202 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-r5mh6" event={"ID":"d6a20f55-b6e2-4473-8ea0-7b04865962f7","Type":"ContainerStarted","Data":"485773e1f4ea35b231199a01e074d549e85b6c807e02f3f638acca6b9295ee30"}
Feb 16 21:32:34.487356 master-0 kubenswrapper[38936]: I0216 21:32:34.487286 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-r5mh6" event={"ID":"d6a20f55-b6e2-4473-8ea0-7b04865962f7","Type":"ContainerStarted","Data":"13da63f858daa6469d41cb937b7efd54d7ae47385f81a89d6d78cac5ee83194e"}
Feb 16 21:32:35.097304 master-0 kubenswrapper[38936]: I0216 21:32:35.097025 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/69f77310-8724-4ec0-878f-dcdd5d84a51f-memberlist\") pod \"speaker-t6g4d\" (UID: \"69f77310-8724-4ec0-878f-dcdd5d84a51f\") " pod="metallb-system/speaker-t6g4d"
Feb 16 21:32:35.101063 master-0 kubenswrapper[38936]: I0216 21:32:35.101022 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/69f77310-8724-4ec0-878f-dcdd5d84a51f-memberlist\") pod \"speaker-t6g4d\" (UID: \"69f77310-8724-4ec0-878f-dcdd5d84a51f\") " pod="metallb-system/speaker-t6g4d"
Feb 16 21:32:35.114201 master-0 kubenswrapper[38936]: I0216 21:32:35.114150 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-t6g4d"
Feb 16 21:32:35.143950 master-0 kubenswrapper[38936]: W0216 21:32:35.143884 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69f77310_8724_4ec0_878f_dcdd5d84a51f.slice/crio-ed81fcb6ea090bfe0d2a68dc2d3bd04a2dfb0afb94d9523aa846cfdc138f2972 WatchSource:0}: Error finding container ed81fcb6ea090bfe0d2a68dc2d3bd04a2dfb0afb94d9523aa846cfdc138f2972: Status 404 returned error can't find the container with id ed81fcb6ea090bfe0d2a68dc2d3bd04a2dfb0afb94d9523aa846cfdc138f2972
Feb 16 21:32:35.241750 master-0 kubenswrapper[38936]: I0216 21:32:35.239310 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-h2l2c"]
Feb 16 21:32:35.241750 master-0 kubenswrapper[38936]: I0216 21:32:35.241063 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-h2l2c"
Feb 16 21:32:35.254779 master-0 kubenswrapper[38936]: I0216 21:32:35.254706 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-7g24b"]
Feb 16 21:32:35.264184 master-0 kubenswrapper[38936]: I0216 21:32:35.264134 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7g24b"
Feb 16 21:32:35.268597 master-0 kubenswrapper[38936]: I0216 21:32:35.268545 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Feb 16 21:32:35.301993 master-0 kubenswrapper[38936]: I0216 21:32:35.301291 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qthvw\" (UniqueName: \"kubernetes.io/projected/489b4ac1-998e-4296-89e6-f23e1ddce5b5-kube-api-access-qthvw\") pod \"nmstate-metrics-58c85c668d-h2l2c\" (UID: \"489b4ac1-998e-4296-89e6-f23e1ddce5b5\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-h2l2c"
Feb 16 21:32:35.301993 master-0 kubenswrapper[38936]: I0216 21:32:35.301412 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/409c7541-d48d-4457-b3de-2cf5e517cb53-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-7g24b\" (UID: \"409c7541-d48d-4457-b3de-2cf5e517cb53\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7g24b"
Feb 16 21:32:35.301993 master-0 kubenswrapper[38936]: I0216 21:32:35.301495 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2x2v\" (UniqueName: \"kubernetes.io/projected/409c7541-d48d-4457-b3de-2cf5e517cb53-kube-api-access-r2x2v\") pod \"nmstate-webhook-866bcb46dc-7g24b\" (UID: \"409c7541-d48d-4457-b3de-2cf5e517cb53\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7g24b"
Feb 16 21:32:35.302310 master-0 kubenswrapper[38936]: I0216 21:32:35.302280 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-h2l2c"]
Feb 16 21:32:35.338441 master-0 kubenswrapper[38936]: I0216 21:32:35.338278 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-7g24b"]
Feb 16 21:32:35.350130 master-0 kubenswrapper[38936]: I0216 21:32:35.349719 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-vzqn2"]
Feb 16 21:32:35.351460 master-0 kubenswrapper[38936]: I0216 21:32:35.351277 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-vzqn2"
Feb 16 21:32:35.405716 master-0 kubenswrapper[38936]: I0216 21:32:35.405655 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2x2v\" (UniqueName: \"kubernetes.io/projected/409c7541-d48d-4457-b3de-2cf5e517cb53-kube-api-access-r2x2v\") pod \"nmstate-webhook-866bcb46dc-7g24b\" (UID: \"409c7541-d48d-4457-b3de-2cf5e517cb53\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7g24b"
Feb 16 21:32:35.405918 master-0 kubenswrapper[38936]: I0216 21:32:35.405741 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qthvw\" (UniqueName: \"kubernetes.io/projected/489b4ac1-998e-4296-89e6-f23e1ddce5b5-kube-api-access-qthvw\") pod \"nmstate-metrics-58c85c668d-h2l2c\" (UID: \"489b4ac1-998e-4296-89e6-f23e1ddce5b5\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-h2l2c"
Feb 16 21:32:35.405918 master-0 kubenswrapper[38936]: I0216 21:32:35.405773 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5cd6df85-0f2d-4608-81b8-238551c647d3-nmstate-lock\") pod \"nmstate-handler-vzqn2\" (UID: \"5cd6df85-0f2d-4608-81b8-238551c647d3\") " pod="openshift-nmstate/nmstate-handler-vzqn2"
Feb 16 21:32:35.405918 master-0 kubenswrapper[38936]: I0216 21:32:35.405811 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f46dv\" (UniqueName: \"kubernetes.io/projected/5cd6df85-0f2d-4608-81b8-238551c647d3-kube-api-access-f46dv\") pod \"nmstate-handler-vzqn2\" (UID: \"5cd6df85-0f2d-4608-81b8-238551c647d3\") " pod="openshift-nmstate/nmstate-handler-vzqn2"
Feb 16 21:32:35.405918 master-0 kubenswrapper[38936]: I0216 21:32:35.405836 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5cd6df85-0f2d-4608-81b8-238551c647d3-dbus-socket\") pod \"nmstate-handler-vzqn2\" (UID: \"5cd6df85-0f2d-4608-81b8-238551c647d3\") " pod="openshift-nmstate/nmstate-handler-vzqn2"
Feb 16 21:32:35.405918 master-0 kubenswrapper[38936]: I0216 21:32:35.405863 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/409c7541-d48d-4457-b3de-2cf5e517cb53-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-7g24b\" (UID: \"409c7541-d48d-4457-b3de-2cf5e517cb53\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7g24b"
Feb 16 21:32:35.405918 master-0 kubenswrapper[38936]: I0216 21:32:35.405883 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5cd6df85-0f2d-4608-81b8-238551c647d3-ovs-socket\") pod \"nmstate-handler-vzqn2\" (UID: \"5cd6df85-0f2d-4608-81b8-238551c647d3\") " pod="openshift-nmstate/nmstate-handler-vzqn2"
Feb 16 21:32:35.411807 master-0 kubenswrapper[38936]: I0216 21:32:35.411557 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/409c7541-d48d-4457-b3de-2cf5e517cb53-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-7g24b\" (UID: \"409c7541-d48d-4457-b3de-2cf5e517cb53\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7g24b"
Feb 16 21:32:35.438279 master-0 kubenswrapper[38936]: I0216 21:32:35.438219 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-cg75j"]
Feb 16 21:32:35.439431 master-0 kubenswrapper[38936]: I0216 21:32:35.439383 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-cg75j"
Feb 16 21:32:35.441339 master-0 kubenswrapper[38936]: I0216 21:32:35.441310 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Feb 16 21:32:35.441595 master-0 kubenswrapper[38936]: I0216 21:32:35.441567 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Feb 16 21:32:35.442290 master-0 kubenswrapper[38936]: I0216 21:32:35.442246 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2x2v\" (UniqueName: \"kubernetes.io/projected/409c7541-d48d-4457-b3de-2cf5e517cb53-kube-api-access-r2x2v\") pod \"nmstate-webhook-866bcb46dc-7g24b\" (UID: \"409c7541-d48d-4457-b3de-2cf5e517cb53\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7g24b"
Feb 16 21:32:35.442579 master-0 kubenswrapper[38936]: I0216 21:32:35.442541 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qthvw\" (UniqueName: \"kubernetes.io/projected/489b4ac1-998e-4296-89e6-f23e1ddce5b5-kube-api-access-qthvw\") pod \"nmstate-metrics-58c85c668d-h2l2c\" (UID: \"489b4ac1-998e-4296-89e6-f23e1ddce5b5\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-h2l2c"
Feb 16 21:32:35.469872 master-0 kubenswrapper[38936]: I0216 21:32:35.469807 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-cg75j"]
Feb 16 21:32:35.498245 master-0 kubenswrapper[38936]: I0216 21:32:35.498190 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-t6g4d" event={"ID":"69f77310-8724-4ec0-878f-dcdd5d84a51f","Type":"ContainerStarted","Data":"ed81fcb6ea090bfe0d2a68dc2d3bd04a2dfb0afb94d9523aa846cfdc138f2972"}
Feb 16 21:32:35.507428 master-0 kubenswrapper[38936]: I0216 21:32:35.507169 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4fbbe701-4f45-4220-ad67-1fb9d5e546b6-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-cg75j\" (UID: \"4fbbe701-4f45-4220-ad67-1fb9d5e546b6\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-cg75j"
Feb 16 21:32:35.507428 master-0 kubenswrapper[38936]: I0216 21:32:35.507232 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f46dv\" (UniqueName: \"kubernetes.io/projected/5cd6df85-0f2d-4608-81b8-238551c647d3-kube-api-access-f46dv\") pod \"nmstate-handler-vzqn2\" (UID: \"5cd6df85-0f2d-4608-81b8-238551c647d3\") " pod="openshift-nmstate/nmstate-handler-vzqn2"
Feb 16 21:32:35.507428 master-0 kubenswrapper[38936]: I0216 21:32:35.507282 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzfq8\" (UniqueName: \"kubernetes.io/projected/4fbbe701-4f45-4220-ad67-1fb9d5e546b6-kube-api-access-nzfq8\") pod \"nmstate-console-plugin-5c78fc5d65-cg75j\" (UID: \"4fbbe701-4f45-4220-ad67-1fb9d5e546b6\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-cg75j"
Feb 16 21:32:35.507428 master-0 kubenswrapper[38936]: I0216 21:32:35.507309 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5cd6df85-0f2d-4608-81b8-238551c647d3-dbus-socket\") pod \"nmstate-handler-vzqn2\" (UID: \"5cd6df85-0f2d-4608-81b8-238551c647d3\") " pod="openshift-nmstate/nmstate-handler-vzqn2"
Feb 16 21:32:35.507428 master-0 kubenswrapper[38936]: I0216 21:32:35.507339 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4fbbe701-4f45-4220-ad67-1fb9d5e546b6-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-cg75j\" (UID: \"4fbbe701-4f45-4220-ad67-1fb9d5e546b6\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-cg75j"
Feb 16 21:32:35.507428 master-0 kubenswrapper[38936]: I0216 21:32:35.507372 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5cd6df85-0f2d-4608-81b8-238551c647d3-ovs-socket\") pod \"nmstate-handler-vzqn2\" (UID: \"5cd6df85-0f2d-4608-81b8-238551c647d3\") " pod="openshift-nmstate/nmstate-handler-vzqn2"
Feb 16 21:32:35.507778 master-0 kubenswrapper[38936]: I0216 21:32:35.507451 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5cd6df85-0f2d-4608-81b8-238551c647d3-nmstate-lock\") pod \"nmstate-handler-vzqn2\" (UID: \"5cd6df85-0f2d-4608-81b8-238551c647d3\") " pod="openshift-nmstate/nmstate-handler-vzqn2"
Feb 16 21:32:35.507778 master-0 kubenswrapper[38936]: I0216 21:32:35.507527 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5cd6df85-0f2d-4608-81b8-238551c647d3-nmstate-lock\") pod \"nmstate-handler-vzqn2\" (UID: \"5cd6df85-0f2d-4608-81b8-238551c647d3\") " pod="openshift-nmstate/nmstate-handler-vzqn2"
Feb 16 21:32:35.507902 master-0 kubenswrapper[38936]: I0216 21:32:35.507876 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5cd6df85-0f2d-4608-81b8-238551c647d3-dbus-socket\") pod \"nmstate-handler-vzqn2\" (UID: \"5cd6df85-0f2d-4608-81b8-238551c647d3\") " pod="openshift-nmstate/nmstate-handler-vzqn2"
Feb 16 21:32:35.508052 master-0 kubenswrapper[38936]: I0216 21:32:35.507914 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5cd6df85-0f2d-4608-81b8-238551c647d3-ovs-socket\") pod \"nmstate-handler-vzqn2\" (UID: \"5cd6df85-0f2d-4608-81b8-238551c647d3\") " pod="openshift-nmstate/nmstate-handler-vzqn2"
Feb 16 21:32:35.555976 master-0 kubenswrapper[38936]: I0216 21:32:35.555941 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f46dv\" (UniqueName: \"kubernetes.io/projected/5cd6df85-0f2d-4608-81b8-238551c647d3-kube-api-access-f46dv\") pod \"nmstate-handler-vzqn2\" (UID: \"5cd6df85-0f2d-4608-81b8-238551c647d3\") " pod="openshift-nmstate/nmstate-handler-vzqn2"
Feb 16 21:32:35.580825 master-0 kubenswrapper[38936]: I0216 21:32:35.580767 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-h2l2c"
Feb 16 21:32:35.601834 master-0 kubenswrapper[38936]: I0216 21:32:35.601693 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7g24b"
Feb 16 21:32:35.609281 master-0 kubenswrapper[38936]: I0216 21:32:35.609182 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4fbbe701-4f45-4220-ad67-1fb9d5e546b6-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-cg75j\" (UID: \"4fbbe701-4f45-4220-ad67-1fb9d5e546b6\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-cg75j"
Feb 16 21:32:35.609281 master-0 kubenswrapper[38936]: I0216 21:32:35.609238 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzfq8\" (UniqueName: \"kubernetes.io/projected/4fbbe701-4f45-4220-ad67-1fb9d5e546b6-kube-api-access-nzfq8\") pod \"nmstate-console-plugin-5c78fc5d65-cg75j\" (UID: \"4fbbe701-4f45-4220-ad67-1fb9d5e546b6\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-cg75j"
Feb 16 21:32:35.609281 master-0 kubenswrapper[38936]: I0216 21:32:35.609277 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4fbbe701-4f45-4220-ad67-1fb9d5e546b6-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-cg75j\" (UID: \"4fbbe701-4f45-4220-ad67-1fb9d5e546b6\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-cg75j"
Feb 16 21:32:35.612611 master-0 kubenswrapper[38936]: I0216 21:32:35.612576 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4fbbe701-4f45-4220-ad67-1fb9d5e546b6-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-cg75j\" (UID: \"4fbbe701-4f45-4220-ad67-1fb9d5e546b6\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-cg75j"
Feb 16 21:32:35.613472 master-0 kubenswrapper[38936]: I0216 21:32:35.613443 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4fbbe701-4f45-4220-ad67-1fb9d5e546b6-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-cg75j\" (UID: \"4fbbe701-4f45-4220-ad67-1fb9d5e546b6\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-cg75j"
Feb 16 21:32:35.630180 master-0 kubenswrapper[38936]: I0216 21:32:35.630130 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7f4ffb8c59-dzhgj"]
Feb 16 21:32:35.631497 master-0 kubenswrapper[38936]: I0216 21:32:35.631456 38936 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:35.640871 master-0 kubenswrapper[38936]: I0216 21:32:35.640682 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzfq8\" (UniqueName: \"kubernetes.io/projected/4fbbe701-4f45-4220-ad67-1fb9d5e546b6-kube-api-access-nzfq8\") pod \"nmstate-console-plugin-5c78fc5d65-cg75j\" (UID: \"4fbbe701-4f45-4220-ad67-1fb9d5e546b6\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-cg75j" Feb 16 21:32:35.658365 master-0 kubenswrapper[38936]: I0216 21:32:35.658317 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7f4ffb8c59-dzhgj"] Feb 16 21:32:35.676469 master-0 kubenswrapper[38936]: I0216 21:32:35.676414 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-vzqn2" Feb 16 21:32:35.710859 master-0 kubenswrapper[38936]: I0216 21:32:35.710758 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4frv\" (UniqueName: \"kubernetes.io/projected/ba37db1c-3406-48b4-b8d3-9c89f03a273f-kube-api-access-p4frv\") pod \"console-7f4ffb8c59-dzhgj\" (UID: \"ba37db1c-3406-48b4-b8d3-9c89f03a273f\") " pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:35.711159 master-0 kubenswrapper[38936]: I0216 21:32:35.710916 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ba37db1c-3406-48b4-b8d3-9c89f03a273f-console-serving-cert\") pod \"console-7f4ffb8c59-dzhgj\" (UID: \"ba37db1c-3406-48b4-b8d3-9c89f03a273f\") " pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:35.711159 master-0 kubenswrapper[38936]: I0216 21:32:35.711093 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/ba37db1c-3406-48b4-b8d3-9c89f03a273f-console-config\") pod \"console-7f4ffb8c59-dzhgj\" (UID: \"ba37db1c-3406-48b4-b8d3-9c89f03a273f\") " pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:35.711243 master-0 kubenswrapper[38936]: I0216 21:32:35.711177 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ba37db1c-3406-48b4-b8d3-9c89f03a273f-oauth-serving-cert\") pod \"console-7f4ffb8c59-dzhgj\" (UID: \"ba37db1c-3406-48b4-b8d3-9c89f03a273f\") " pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:35.711424 master-0 kubenswrapper[38936]: I0216 21:32:35.711366 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37db1c-3406-48b4-b8d3-9c89f03a273f-trusted-ca-bundle\") pod \"console-7f4ffb8c59-dzhgj\" (UID: \"ba37db1c-3406-48b4-b8d3-9c89f03a273f\") " pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:35.711424 master-0 kubenswrapper[38936]: I0216 21:32:35.711423 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ba37db1c-3406-48b4-b8d3-9c89f03a273f-service-ca\") pod \"console-7f4ffb8c59-dzhgj\" (UID: \"ba37db1c-3406-48b4-b8d3-9c89f03a273f\") " pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:35.711514 master-0 kubenswrapper[38936]: I0216 21:32:35.711499 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ba37db1c-3406-48b4-b8d3-9c89f03a273f-console-oauth-config\") pod \"console-7f4ffb8c59-dzhgj\" (UID: \"ba37db1c-3406-48b4-b8d3-9c89f03a273f\") " pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:35.731620 master-0 kubenswrapper[38936]: W0216 21:32:35.731557 38936 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cd6df85_0f2d_4608_81b8_238551c647d3.slice/crio-8765218db25aff072eebef602ccff10d1f664524ada28ac06762571b95a2114f WatchSource:0}: Error finding container 8765218db25aff072eebef602ccff10d1f664524ada28ac06762571b95a2114f: Status 404 returned error can't find the container with id 8765218db25aff072eebef602ccff10d1f664524ada28ac06762571b95a2114f Feb 16 21:32:35.816395 master-0 kubenswrapper[38936]: I0216 21:32:35.816330 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ba37db1c-3406-48b4-b8d3-9c89f03a273f-console-config\") pod \"console-7f4ffb8c59-dzhgj\" (UID: \"ba37db1c-3406-48b4-b8d3-9c89f03a273f\") " pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:35.816569 master-0 kubenswrapper[38936]: I0216 21:32:35.816409 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ba37db1c-3406-48b4-b8d3-9c89f03a273f-oauth-serving-cert\") pod \"console-7f4ffb8c59-dzhgj\" (UID: \"ba37db1c-3406-48b4-b8d3-9c89f03a273f\") " pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:35.816569 master-0 kubenswrapper[38936]: I0216 21:32:35.816533 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37db1c-3406-48b4-b8d3-9c89f03a273f-trusted-ca-bundle\") pod \"console-7f4ffb8c59-dzhgj\" (UID: \"ba37db1c-3406-48b4-b8d3-9c89f03a273f\") " pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:35.816569 master-0 kubenswrapper[38936]: I0216 21:32:35.816555 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ba37db1c-3406-48b4-b8d3-9c89f03a273f-service-ca\") pod \"console-7f4ffb8c59-dzhgj\" (UID: 
\"ba37db1c-3406-48b4-b8d3-9c89f03a273f\") " pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:35.817486 master-0 kubenswrapper[38936]: I0216 21:32:35.817443 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ba37db1c-3406-48b4-b8d3-9c89f03a273f-console-config\") pod \"console-7f4ffb8c59-dzhgj\" (UID: \"ba37db1c-3406-48b4-b8d3-9c89f03a273f\") " pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:35.817867 master-0 kubenswrapper[38936]: I0216 21:32:35.817845 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-cg75j" Feb 16 21:32:35.818278 master-0 kubenswrapper[38936]: I0216 21:32:35.818257 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ba37db1c-3406-48b4-b8d3-9c89f03a273f-console-oauth-config\") pod \"console-7f4ffb8c59-dzhgj\" (UID: \"ba37db1c-3406-48b4-b8d3-9c89f03a273f\") " pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:35.818403 master-0 kubenswrapper[38936]: I0216 21:32:35.818379 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4frv\" (UniqueName: \"kubernetes.io/projected/ba37db1c-3406-48b4-b8d3-9c89f03a273f-kube-api-access-p4frv\") pod \"console-7f4ffb8c59-dzhgj\" (UID: \"ba37db1c-3406-48b4-b8d3-9c89f03a273f\") " pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:35.818547 master-0 kubenswrapper[38936]: I0216 21:32:35.818419 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ba37db1c-3406-48b4-b8d3-9c89f03a273f-console-serving-cert\") pod \"console-7f4ffb8c59-dzhgj\" (UID: \"ba37db1c-3406-48b4-b8d3-9c89f03a273f\") " pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:35.829088 master-0 kubenswrapper[38936]: 
I0216 21:32:35.829025 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ba37db1c-3406-48b4-b8d3-9c89f03a273f-oauth-serving-cert\") pod \"console-7f4ffb8c59-dzhgj\" (UID: \"ba37db1c-3406-48b4-b8d3-9c89f03a273f\") " pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:35.831143 master-0 kubenswrapper[38936]: I0216 21:32:35.831114 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ba37db1c-3406-48b4-b8d3-9c89f03a273f-console-serving-cert\") pod \"console-7f4ffb8c59-dzhgj\" (UID: \"ba37db1c-3406-48b4-b8d3-9c89f03a273f\") " pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:35.831357 master-0 kubenswrapper[38936]: I0216 21:32:35.831321 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37db1c-3406-48b4-b8d3-9c89f03a273f-trusted-ca-bundle\") pod \"console-7f4ffb8c59-dzhgj\" (UID: \"ba37db1c-3406-48b4-b8d3-9c89f03a273f\") " pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:35.831511 master-0 kubenswrapper[38936]: I0216 21:32:35.831474 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ba37db1c-3406-48b4-b8d3-9c89f03a273f-service-ca\") pod \"console-7f4ffb8c59-dzhgj\" (UID: \"ba37db1c-3406-48b4-b8d3-9c89f03a273f\") " pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:35.835526 master-0 kubenswrapper[38936]: I0216 21:32:35.835486 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ba37db1c-3406-48b4-b8d3-9c89f03a273f-console-oauth-config\") pod \"console-7f4ffb8c59-dzhgj\" (UID: \"ba37db1c-3406-48b4-b8d3-9c89f03a273f\") " pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:35.840858 master-0 kubenswrapper[38936]: 
I0216 21:32:35.840820 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4frv\" (UniqueName: \"kubernetes.io/projected/ba37db1c-3406-48b4-b8d3-9c89f03a273f-kube-api-access-p4frv\") pod \"console-7f4ffb8c59-dzhgj\" (UID: \"ba37db1c-3406-48b4-b8d3-9c89f03a273f\") " pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:36.014784 master-0 kubenswrapper[38936]: I0216 21:32:36.011320 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:36.201570 master-0 kubenswrapper[38936]: I0216 21:32:36.201515 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-7g24b"] Feb 16 21:32:36.313409 master-0 kubenswrapper[38936]: I0216 21:32:36.313357 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-h2l2c"] Feb 16 21:32:36.417704 master-0 kubenswrapper[38936]: I0216 21:32:36.417638 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-cg75j"] Feb 16 21:32:36.542096 master-0 kubenswrapper[38936]: I0216 21:32:36.542019 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-cg75j" event={"ID":"4fbbe701-4f45-4220-ad67-1fb9d5e546b6","Type":"ContainerStarted","Data":"8933558c620add98370e80a31e38210a6d0805068468ab44bd8761120bcb924e"} Feb 16 21:32:36.546770 master-0 kubenswrapper[38936]: I0216 21:32:36.546727 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-t6g4d" event={"ID":"69f77310-8724-4ec0-878f-dcdd5d84a51f","Type":"ContainerStarted","Data":"32b8e7551a26c77a3f8159f6b5368087d2e685dde03e87672394ec43bfcf84d5"} Feb 16 21:32:36.563572 master-0 kubenswrapper[38936]: I0216 21:32:36.563494 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7g24b" 
event={"ID":"409c7541-d48d-4457-b3de-2cf5e517cb53","Type":"ContainerStarted","Data":"f49a1e2282ea7a1da21ea51f53262f015521f518acb9aa70811383b2b502485b"} Feb 16 21:32:36.568330 master-0 kubenswrapper[38936]: I0216 21:32:36.568269 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-h2l2c" event={"ID":"489b4ac1-998e-4296-89e6-f23e1ddce5b5","Type":"ContainerStarted","Data":"b2bf4feea1d2f47edd0e84ba4b2f4f5f73f9decc506f19f6fb74fc39e978fc58"} Feb 16 21:32:36.577003 master-0 kubenswrapper[38936]: I0216 21:32:36.576940 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-vzqn2" event={"ID":"5cd6df85-0f2d-4608-81b8-238551c647d3","Type":"ContainerStarted","Data":"8765218db25aff072eebef602ccff10d1f664524ada28ac06762571b95a2114f"} Feb 16 21:32:36.608707 master-0 kubenswrapper[38936]: I0216 21:32:36.608622 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7f4ffb8c59-dzhgj"] Feb 16 21:32:37.586228 master-0 kubenswrapper[38936]: I0216 21:32:37.586164 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f4ffb8c59-dzhgj" event={"ID":"ba37db1c-3406-48b4-b8d3-9c89f03a273f","Type":"ContainerStarted","Data":"968510de6f616974f05b9278edd86b409d70889e24807e949b1504b61d1528cb"} Feb 16 21:32:37.586228 master-0 kubenswrapper[38936]: I0216 21:32:37.586233 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f4ffb8c59-dzhgj" event={"ID":"ba37db1c-3406-48b4-b8d3-9c89f03a273f","Type":"ContainerStarted","Data":"1677139707bc110eaaa0e31079b785f4bda4c9ba15fb4fad8549300053ecc88c"} Feb 16 21:32:37.622110 master-0 kubenswrapper[38936]: I0216 21:32:37.622035 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7f4ffb8c59-dzhgj" podStartSLOduration=2.622009807 podStartE2EDuration="2.622009807s" podCreationTimestamp="2026-02-16 21:32:35 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:32:37.605243996 +0000 UTC m=+587.957247378" watchObservedRunningTime="2026-02-16 21:32:37.622009807 +0000 UTC m=+587.974013169" Feb 16 21:32:38.607928 master-0 kubenswrapper[38936]: I0216 21:32:38.607863 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-r5mh6" event={"ID":"d6a20f55-b6e2-4473-8ea0-7b04865962f7","Type":"ContainerStarted","Data":"41690e3b46ba1dce3e6f0b5c1a3cd231697bb9ae56ad12ae654b9ac245ee91a1"} Feb 16 21:32:38.608557 master-0 kubenswrapper[38936]: I0216 21:32:38.608080 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-r5mh6" Feb 16 21:32:38.612726 master-0 kubenswrapper[38936]: I0216 21:32:38.612672 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-t6g4d" event={"ID":"69f77310-8724-4ec0-878f-dcdd5d84a51f","Type":"ContainerStarted","Data":"3ccead765cc7ba1ce75dfecb9452c2d2407d5364207bf986fc9af27bd9e0dffe"} Feb 16 21:32:38.630688 master-0 kubenswrapper[38936]: I0216 21:32:38.630438 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-r5mh6" podStartSLOduration=2.39919118 podStartE2EDuration="5.630412811s" podCreationTimestamp="2026-02-16 21:32:33 +0000 UTC" firstStartedPulling="2026-02-16 21:32:34.302949144 +0000 UTC m=+584.654952506" lastFinishedPulling="2026-02-16 21:32:37.534170775 +0000 UTC m=+587.886174137" observedRunningTime="2026-02-16 21:32:38.628871759 +0000 UTC m=+588.980875121" watchObservedRunningTime="2026-02-16 21:32:38.630412811 +0000 UTC m=+588.982416173" Feb 16 21:32:38.651985 master-0 kubenswrapper[38936]: I0216 21:32:38.651903 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-t6g4d" podStartSLOduration=3.655499951 
podStartE2EDuration="5.65188373s" podCreationTimestamp="2026-02-16 21:32:33 +0000 UTC" firstStartedPulling="2026-02-16 21:32:35.563538702 +0000 UTC m=+585.915542064" lastFinishedPulling="2026-02-16 21:32:37.559922481 +0000 UTC m=+587.911925843" observedRunningTime="2026-02-16 21:32:38.65004461 +0000 UTC m=+589.002048002" watchObservedRunningTime="2026-02-16 21:32:38.65188373 +0000 UTC m=+589.003887082" Feb 16 21:32:39.630677 master-0 kubenswrapper[38936]: I0216 21:32:39.625232 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-t6g4d" Feb 16 21:32:43.677575 master-0 kubenswrapper[38936]: I0216 21:32:43.677423 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-vzqn2" event={"ID":"5cd6df85-0f2d-4608-81b8-238551c647d3","Type":"ContainerStarted","Data":"8a62035c478f9a5aa9be3df6c9165d29fb6e5a6d9f1b0dd44be28b3727685c13"} Feb 16 21:32:43.678958 master-0 kubenswrapper[38936]: I0216 21:32:43.678907 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-vzqn2" Feb 16 21:32:43.682885 master-0 kubenswrapper[38936]: I0216 21:32:43.682727 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-cg75j" event={"ID":"4fbbe701-4f45-4220-ad67-1fb9d5e546b6","Type":"ContainerStarted","Data":"26b4146a4d1ac0d2813e4da12096142d3a8a85b0b66b740c8b1b2053eb5c811b"} Feb 16 21:32:43.686613 master-0 kubenswrapper[38936]: I0216 21:32:43.686531 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q2682" event={"ID":"4fa88970-4939-4e43-8806-58b58525e2f9","Type":"ContainerStarted","Data":"2dd6b2b755a100dbe97c725c06238ec232bb7177aab8d2f78dd62fb079c565a0"} Feb 16 21:32:43.686746 master-0 kubenswrapper[38936]: I0216 21:32:43.686638 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q2682" Feb 16 21:32:43.688771 master-0 kubenswrapper[38936]: I0216 21:32:43.688700 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7g24b" event={"ID":"409c7541-d48d-4457-b3de-2cf5e517cb53","Type":"ContainerStarted","Data":"bf318d7ab66b28232c5bc2554b4e6ec10c646d28473f5299c1fa4ba951677918"} Feb 16 21:32:43.688845 master-0 kubenswrapper[38936]: I0216 21:32:43.688817 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7g24b" Feb 16 21:32:43.691533 master-0 kubenswrapper[38936]: I0216 21:32:43.691466 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-h2l2c" event={"ID":"489b4ac1-998e-4296-89e6-f23e1ddce5b5","Type":"ContainerStarted","Data":"9249e0bd6ed35deb8cd01ffff6f93dfc30a18f58f5baae3e6e6bd8cbc79f1f14"} Feb 16 21:32:43.691611 master-0 kubenswrapper[38936]: I0216 21:32:43.691540 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-h2l2c" event={"ID":"489b4ac1-998e-4296-89e6-f23e1ddce5b5","Type":"ContainerStarted","Data":"4d868c0bbca7297db1e2a2f49830ecb0806c3ed013b1b0a7df59c9025947d176"} Feb 16 21:32:43.693493 master-0 kubenswrapper[38936]: I0216 21:32:43.693448 38936 generic.go:334] "Generic (PLEG): container finished" podID="e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7" containerID="893046de22933954656152c68efdbe1f2bfae1444c3f0929abfe8a604278e994" exitCode=0 Feb 16 21:32:43.693567 master-0 kubenswrapper[38936]: I0216 21:32:43.693507 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fw88b" event={"ID":"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7","Type":"ContainerDied","Data":"893046de22933954656152c68efdbe1f2bfae1444c3f0929abfe8a604278e994"} Feb 16 21:32:43.718929 master-0 kubenswrapper[38936]: I0216 21:32:43.718835 38936 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-vzqn2" podStartSLOduration=1.3372647149999999 podStartE2EDuration="8.718786829s" podCreationTimestamp="2026-02-16 21:32:35 +0000 UTC" firstStartedPulling="2026-02-16 21:32:35.735996888 +0000 UTC m=+586.088000250" lastFinishedPulling="2026-02-16 21:32:43.117519002 +0000 UTC m=+593.469522364" observedRunningTime="2026-02-16 21:32:43.702727857 +0000 UTC m=+594.054731219" watchObservedRunningTime="2026-02-16 21:32:43.718786829 +0000 UTC m=+594.070790191" Feb 16 21:32:43.740380 master-0 kubenswrapper[38936]: I0216 21:32:43.737708 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-h2l2c" podStartSLOduration=1.931484228 podStartE2EDuration="8.737681277s" podCreationTimestamp="2026-02-16 21:32:35 +0000 UTC" firstStartedPulling="2026-02-16 21:32:36.320626408 +0000 UTC m=+586.672629770" lastFinishedPulling="2026-02-16 21:32:43.126823457 +0000 UTC m=+593.478826819" observedRunningTime="2026-02-16 21:32:43.72794453 +0000 UTC m=+594.079947902" watchObservedRunningTime="2026-02-16 21:32:43.737681277 +0000 UTC m=+594.089684639" Feb 16 21:32:43.755163 master-0 kubenswrapper[38936]: I0216 21:32:43.755078 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q2682" podStartSLOduration=1.6729123609999998 podStartE2EDuration="10.755060355s" podCreationTimestamp="2026-02-16 21:32:33 +0000 UTC" firstStartedPulling="2026-02-16 21:32:34.034639277 +0000 UTC m=+584.386642639" lastFinishedPulling="2026-02-16 21:32:43.116787271 +0000 UTC m=+593.468790633" observedRunningTime="2026-02-16 21:32:43.754509449 +0000 UTC m=+594.106512811" watchObservedRunningTime="2026-02-16 21:32:43.755060355 +0000 UTC m=+594.107063717" Feb 16 21:32:43.827752 master-0 kubenswrapper[38936]: I0216 21:32:43.827659 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7g24b" podStartSLOduration=1.9571090519999998 podStartE2EDuration="8.827628567s" podCreationTimestamp="2026-02-16 21:32:35 +0000 UTC" firstStartedPulling="2026-02-16 21:32:36.246882963 +0000 UTC m=+586.598886325" lastFinishedPulling="2026-02-16 21:32:43.117402478 +0000 UTC m=+593.469405840" observedRunningTime="2026-02-16 21:32:43.821602821 +0000 UTC m=+594.173606183" watchObservedRunningTime="2026-02-16 21:32:43.827628567 +0000 UTC m=+594.179631919" Feb 16 21:32:43.845081 master-0 kubenswrapper[38936]: I0216 21:32:43.843928 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-cg75j" podStartSLOduration=2.15195523 podStartE2EDuration="8.843902534s" podCreationTimestamp="2026-02-16 21:32:35 +0000 UTC" firstStartedPulling="2026-02-16 21:32:36.425818585 +0000 UTC m=+586.777821947" lastFinishedPulling="2026-02-16 21:32:43.117765859 +0000 UTC m=+593.469769251" observedRunningTime="2026-02-16 21:32:43.843120443 +0000 UTC m=+594.195123805" watchObservedRunningTime="2026-02-16 21:32:43.843902534 +0000 UTC m=+594.195905906" Feb 16 21:32:44.712013 master-0 kubenswrapper[38936]: I0216 21:32:44.711914 38936 generic.go:334] "Generic (PLEG): container finished" podID="e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7" containerID="7ed4596106d106e284f9ace5518f28288e13020efacfd43cf9deb1517df768b8" exitCode=0 Feb 16 21:32:44.712587 master-0 kubenswrapper[38936]: I0216 21:32:44.712035 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fw88b" event={"ID":"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7","Type":"ContainerDied","Data":"7ed4596106d106e284f9ace5518f28288e13020efacfd43cf9deb1517df768b8"} Feb 16 21:32:45.117677 master-0 kubenswrapper[38936]: I0216 21:32:45.117570 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-t6g4d" Feb 16 21:32:45.736983 master-0 kubenswrapper[38936]: I0216 
21:32:45.736749 38936 generic.go:334] "Generic (PLEG): container finished" podID="e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7" containerID="1246d9530deddfd0bccac810cd416b93d6796c32bb7c8479d3c5d63e13d7d038" exitCode=0 Feb 16 21:32:45.736983 master-0 kubenswrapper[38936]: I0216 21:32:45.736852 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fw88b" event={"ID":"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7","Type":"ContainerDied","Data":"1246d9530deddfd0bccac810cd416b93d6796c32bb7c8479d3c5d63e13d7d038"} Feb 16 21:32:46.013009 master-0 kubenswrapper[38936]: I0216 21:32:46.012959 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:46.013009 master-0 kubenswrapper[38936]: I0216 21:32:46.013011 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:46.016794 master-0 kubenswrapper[38936]: I0216 21:32:46.016748 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:46.753144 master-0 kubenswrapper[38936]: I0216 21:32:46.752991 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fw88b" event={"ID":"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7","Type":"ContainerStarted","Data":"5bf8add53951d0a5a5fb69d261244abaabbbcaf1e5a266ebb5bfddb3618c1dab"} Feb 16 21:32:46.753505 master-0 kubenswrapper[38936]: I0216 21:32:46.753160 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fw88b" event={"ID":"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7","Type":"ContainerStarted","Data":"dfdf3cb00a69e39d96ef21148e9884af7ff4ff661d155b4c70dd067811133674"} Feb 16 21:32:46.753505 master-0 kubenswrapper[38936]: I0216 21:32:46.753210 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fw88b" 
event={"ID":"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7","Type":"ContainerStarted","Data":"fbafbc36bcb181648ace9381f6f4fe9e745d18f61fd8225610b344e352629251"} Feb 16 21:32:46.753505 master-0 kubenswrapper[38936]: I0216 21:32:46.753227 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fw88b" event={"ID":"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7","Type":"ContainerStarted","Data":"f5d83808b7e014eb6d9a48ac2852bdd71723d4d33bccd59d60174fc05fed08f2"} Feb 16 21:32:46.757106 master-0 kubenswrapper[38936]: I0216 21:32:46.757078 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7f4ffb8c59-dzhgj" Feb 16 21:32:46.863078 master-0 kubenswrapper[38936]: I0216 21:32:46.862673 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-67b7649c44-qv4gx"] Feb 16 21:32:47.777148 master-0 kubenswrapper[38936]: I0216 21:32:47.777067 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fw88b" event={"ID":"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7","Type":"ContainerStarted","Data":"6181296924d7e5e829d2073f5e7a53f9f81cc9b9187402008876ab625a42bcb7"} Feb 16 21:32:47.777148 master-0 kubenswrapper[38936]: I0216 21:32:47.777160 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fw88b" event={"ID":"e26156a7-6f5e-4a2e-b11b-9e2d1ba6f9f7","Type":"ContainerStarted","Data":"6211fdea95322810bb60bd1af65edd4673cc75e48c741c2b0d04cf09a86944ad"} Feb 16 21:32:47.778099 master-0 kubenswrapper[38936]: I0216 21:32:47.777426 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-fw88b" Feb 16 21:32:47.818550 master-0 kubenswrapper[38936]: I0216 21:32:47.818409 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-fw88b" podStartSLOduration=5.501485752 podStartE2EDuration="14.81838412s" podCreationTimestamp="2026-02-16 21:32:33 +0000 UTC" 
firstStartedPulling="2026-02-16 21:32:33.770486156 +0000 UTC m=+584.122489518" lastFinishedPulling="2026-02-16 21:32:43.087384524 +0000 UTC m=+593.439387886" observedRunningTime="2026-02-16 21:32:47.811370958 +0000 UTC m=+598.163374360" watchObservedRunningTime="2026-02-16 21:32:47.81838412 +0000 UTC m=+598.170387482" Feb 16 21:32:48.592810 master-0 kubenswrapper[38936]: I0216 21:32:48.592694 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-fw88b" Feb 16 21:32:48.632736 master-0 kubenswrapper[38936]: I0216 21:32:48.632675 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-fw88b" Feb 16 21:32:50.699885 master-0 kubenswrapper[38936]: I0216 21:32:50.699817 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-vzqn2" Feb 16 21:32:53.597387 master-0 kubenswrapper[38936]: I0216 21:32:53.597294 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-q2682" Feb 16 21:32:53.630572 master-0 kubenswrapper[38936]: I0216 21:32:53.630432 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-r5mh6" Feb 16 21:32:55.611519 master-0 kubenswrapper[38936]: I0216 21:32:55.611430 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-7g24b" Feb 16 21:33:00.308861 master-0 kubenswrapper[38936]: I0216 21:33:00.308791 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-8mz98"] Feb 16 21:33:00.310586 master-0 kubenswrapper[38936]: I0216 21:33:00.310539 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.315346 master-0 kubenswrapper[38936]: I0216 21:33:00.315281 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert" Feb 16 21:33:00.323358 master-0 kubenswrapper[38936]: I0216 21:33:00.323310 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-8mz98"] Feb 16 21:33:00.403098 master-0 kubenswrapper[38936]: I0216 21:33:00.403013 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-pod-volumes-dir\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.403381 master-0 kubenswrapper[38936]: I0216 21:33:00.403316 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/30126f2f-1695-4fff-8c49-4503507cbf23-metrics-cert\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.403524 master-0 kubenswrapper[38936]: I0216 21:33:00.403495 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-csi-plugin-dir\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.403562 master-0 kubenswrapper[38936]: I0216 21:33:00.403533 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-lvmd-config\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") 
" pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.403830 master-0 kubenswrapper[38936]: I0216 21:33:00.403709 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-registration-dir\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.404010 master-0 kubenswrapper[38936]: I0216 21:33:00.403962 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-device-dir\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.404134 master-0 kubenswrapper[38936]: I0216 21:33:00.404098 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-sys\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.404171 master-0 kubenswrapper[38936]: I0216 21:33:00.404152 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-run-udev\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.404463 master-0 kubenswrapper[38936]: I0216 21:33:00.404372 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvcqn\" (UniqueName: \"kubernetes.io/projected/30126f2f-1695-4fff-8c49-4503507cbf23-kube-api-access-kvcqn\") pod \"vg-manager-8mz98\" (UID: 
\"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.404601 master-0 kubenswrapper[38936]: I0216 21:33:00.404571 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-node-plugin-dir\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.404778 master-0 kubenswrapper[38936]: I0216 21:33:00.404748 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-file-lock-dir\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.506844 master-0 kubenswrapper[38936]: I0216 21:33:00.506738 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-pod-volumes-dir\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.506844 master-0 kubenswrapper[38936]: I0216 21:33:00.506853 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/30126f2f-1695-4fff-8c49-4503507cbf23-metrics-cert\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.507191 master-0 kubenswrapper[38936]: I0216 21:33:00.506889 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-csi-plugin-dir\") pod \"vg-manager-8mz98\" (UID: 
\"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.507191 master-0 kubenswrapper[38936]: I0216 21:33:00.506909 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-lvmd-config\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.507191 master-0 kubenswrapper[38936]: I0216 21:33:00.506940 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-registration-dir\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.507191 master-0 kubenswrapper[38936]: I0216 21:33:00.506949 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-pod-volumes-dir\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.507191 master-0 kubenswrapper[38936]: I0216 21:33:00.506968 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-device-dir\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.507191 master-0 kubenswrapper[38936]: I0216 21:33:00.507067 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-sys\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 
21:33:00.507191 master-0 kubenswrapper[38936]: I0216 21:33:00.507100 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-run-udev\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.507191 master-0 kubenswrapper[38936]: I0216 21:33:00.507200 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvcqn\" (UniqueName: \"kubernetes.io/projected/30126f2f-1695-4fff-8c49-4503507cbf23-kube-api-access-kvcqn\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.507746 master-0 kubenswrapper[38936]: I0216 21:33:00.507211 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-device-dir\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.507746 master-0 kubenswrapper[38936]: I0216 21:33:00.507240 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-run-udev\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.507746 master-0 kubenswrapper[38936]: I0216 21:33:00.507286 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-node-plugin-dir\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.507746 master-0 kubenswrapper[38936]: I0216 21:33:00.507379 38936 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-file-lock-dir\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.507746 master-0 kubenswrapper[38936]: I0216 21:33:00.507480 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-lvmd-config\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.507746 master-0 kubenswrapper[38936]: I0216 21:33:00.507566 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-registration-dir\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.507746 master-0 kubenswrapper[38936]: I0216 21:33:00.507201 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-sys\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.507746 master-0 kubenswrapper[38936]: I0216 21:33:00.507695 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-csi-plugin-dir\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.508192 master-0 kubenswrapper[38936]: I0216 21:33:00.507998 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: 
\"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-file-lock-dir\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.508192 master-0 kubenswrapper[38936]: I0216 21:33:00.508141 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/30126f2f-1695-4fff-8c49-4503507cbf23-node-plugin-dir\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.510882 master-0 kubenswrapper[38936]: I0216 21:33:00.510800 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/30126f2f-1695-4fff-8c49-4503507cbf23-metrics-cert\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.529557 master-0 kubenswrapper[38936]: I0216 21:33:00.529480 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvcqn\" (UniqueName: \"kubernetes.io/projected/30126f2f-1695-4fff-8c49-4503507cbf23-kube-api-access-kvcqn\") pod \"vg-manager-8mz98\" (UID: \"30126f2f-1695-4fff-8c49-4503507cbf23\") " pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:00.653269 master-0 kubenswrapper[38936]: I0216 21:33:00.653087 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:01.326210 master-0 kubenswrapper[38936]: I0216 21:33:01.326140 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-8mz98"] Feb 16 21:33:01.937186 master-0 kubenswrapper[38936]: I0216 21:33:01.935296 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-8mz98" event={"ID":"30126f2f-1695-4fff-8c49-4503507cbf23","Type":"ContainerStarted","Data":"8d09816e57f2b81d20920b29d843dabce4cf91243e868cb58b30714642784e06"} Feb 16 21:33:01.937186 master-0 kubenswrapper[38936]: I0216 21:33:01.935373 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-8mz98" event={"ID":"30126f2f-1695-4fff-8c49-4503507cbf23","Type":"ContainerStarted","Data":"382a1e2a2c5506c81453163f622d712e3edb4279c1d4788676023747f62d5e39"} Feb 16 21:33:03.596967 master-0 kubenswrapper[38936]: I0216 21:33:03.596886 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-fw88b" Feb 16 21:33:03.642511 master-0 kubenswrapper[38936]: I0216 21:33:03.639571 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-8mz98" podStartSLOduration=3.639550115 podStartE2EDuration="3.639550115s" podCreationTimestamp="2026-02-16 21:33:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:33:01.997156304 +0000 UTC m=+612.349159666" watchObservedRunningTime="2026-02-16 21:33:03.639550115 +0000 UTC m=+613.991553477" Feb 16 21:33:03.962729 master-0 kubenswrapper[38936]: I0216 21:33:03.962617 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-8mz98_30126f2f-1695-4fff-8c49-4503507cbf23/vg-manager/0.log" Feb 16 21:33:03.962729 master-0 kubenswrapper[38936]: I0216 21:33:03.962724 38936 generic.go:334] "Generic (PLEG): 
container finished" podID="30126f2f-1695-4fff-8c49-4503507cbf23" containerID="8d09816e57f2b81d20920b29d843dabce4cf91243e868cb58b30714642784e06" exitCode=1 Feb 16 21:33:03.963439 master-0 kubenswrapper[38936]: I0216 21:33:03.962782 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-8mz98" event={"ID":"30126f2f-1695-4fff-8c49-4503507cbf23","Type":"ContainerDied","Data":"8d09816e57f2b81d20920b29d843dabce4cf91243e868cb58b30714642784e06"} Feb 16 21:33:03.968909 master-0 kubenswrapper[38936]: I0216 21:33:03.965785 38936 scope.go:117] "RemoveContainer" containerID="8d09816e57f2b81d20920b29d843dabce4cf91243e868cb58b30714642784e06" Feb 16 21:33:04.348857 master-0 kubenswrapper[38936]: I0216 21:33:04.348800 38936 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock" Feb 16 21:33:04.973361 master-0 kubenswrapper[38936]: I0216 21:33:04.972957 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-8mz98_30126f2f-1695-4fff-8c49-4503507cbf23/vg-manager/0.log" Feb 16 21:33:04.973361 master-0 kubenswrapper[38936]: I0216 21:33:04.973021 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-8mz98" event={"ID":"30126f2f-1695-4fff-8c49-4503507cbf23","Type":"ContainerStarted","Data":"3e011d6d7533bd83a496d4c3e26d8cf4414158d6660f137b1d32f96c63e95be7"} Feb 16 21:33:05.263280 master-0 kubenswrapper[38936]: I0216 21:33:05.263131 38936 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-02-16T21:33:04.348835687Z","Handler":null,"Name":""} Feb 16 21:33:05.266125 master-0 kubenswrapper[38936]: I0216 21:33:05.266089 38936 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock 
versions: 1.0.0 Feb 16 21:33:05.266202 master-0 kubenswrapper[38936]: I0216 21:33:05.266193 38936 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock Feb 16 21:33:10.130034 master-0 kubenswrapper[38936]: I0216 21:33:10.129963 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vmzf6"] Feb 16 21:33:10.131857 master-0 kubenswrapper[38936]: I0216 21:33:10.131820 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vmzf6" Feb 16 21:33:10.135189 master-0 kubenswrapper[38936]: I0216 21:33:10.134671 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 16 21:33:10.137045 master-0 kubenswrapper[38936]: I0216 21:33:10.136950 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 16 21:33:10.146500 master-0 kubenswrapper[38936]: I0216 21:33:10.146358 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vmzf6"] Feb 16 21:33:10.212500 master-0 kubenswrapper[38936]: I0216 21:33:10.212417 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcs2s\" (UniqueName: \"kubernetes.io/projected/54adbc23-3ef2-4a3d-8db3-7e50f1d1ea3c-kube-api-access-hcs2s\") pod \"openstack-operator-index-vmzf6\" (UID: \"54adbc23-3ef2-4a3d-8db3-7e50f1d1ea3c\") " pod="openstack-operators/openstack-operator-index-vmzf6" Feb 16 21:33:10.329738 master-0 kubenswrapper[38936]: I0216 21:33:10.329670 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcs2s\" (UniqueName: \"kubernetes.io/projected/54adbc23-3ef2-4a3d-8db3-7e50f1d1ea3c-kube-api-access-hcs2s\") pod \"openstack-operator-index-vmzf6\" (UID: 
\"54adbc23-3ef2-4a3d-8db3-7e50f1d1ea3c\") " pod="openstack-operators/openstack-operator-index-vmzf6" Feb 16 21:33:10.347060 master-0 kubenswrapper[38936]: I0216 21:33:10.347000 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcs2s\" (UniqueName: \"kubernetes.io/projected/54adbc23-3ef2-4a3d-8db3-7e50f1d1ea3c-kube-api-access-hcs2s\") pod \"openstack-operator-index-vmzf6\" (UID: \"54adbc23-3ef2-4a3d-8db3-7e50f1d1ea3c\") " pod="openstack-operators/openstack-operator-index-vmzf6" Feb 16 21:33:10.474887 master-0 kubenswrapper[38936]: I0216 21:33:10.474737 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vmzf6" Feb 16 21:33:10.653566 master-0 kubenswrapper[38936]: I0216 21:33:10.653467 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:10.656659 master-0 kubenswrapper[38936]: I0216 21:33:10.656566 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:10.909309 master-0 kubenswrapper[38936]: I0216 21:33:10.907786 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vmzf6"] Feb 16 21:33:10.913737 master-0 kubenswrapper[38936]: W0216 21:33:10.913672 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54adbc23_3ef2_4a3d_8db3_7e50f1d1ea3c.slice/crio-79b8dcfcea9fbaa8654c3c6c775164473819b9c5ddabe6c3192fa750decb1a25 WatchSource:0}: Error finding container 79b8dcfcea9fbaa8654c3c6c775164473819b9c5ddabe6c3192fa750decb1a25: Status 404 returned error can't find the container with id 79b8dcfcea9fbaa8654c3c6c775164473819b9c5ddabe6c3192fa750decb1a25 Feb 16 21:33:11.048182 master-0 kubenswrapper[38936]: I0216 21:33:11.048124 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-index-vmzf6" event={"ID":"54adbc23-3ef2-4a3d-8db3-7e50f1d1ea3c","Type":"ContainerStarted","Data":"79b8dcfcea9fbaa8654c3c6c775164473819b9c5ddabe6c3192fa750decb1a25"} Feb 16 21:33:11.048509 master-0 kubenswrapper[38936]: I0216 21:33:11.048485 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:11.050801 master-0 kubenswrapper[38936]: I0216 21:33:11.050602 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-8mz98" Feb 16 21:33:11.902388 master-0 kubenswrapper[38936]: I0216 21:33:11.902269 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-67b7649c44-qv4gx" podUID="4a5c39e0-b7fc-49c3-b662-451027f68ab8" containerName="console" containerID="cri-o://cdbc65c1c9d28a230556f90a8929e6e290f5eba7a4cf63fbc814e7b3fc06b31a" gracePeriod=15 Feb 16 21:33:12.057152 master-0 kubenswrapper[38936]: I0216 21:33:12.057102 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-67b7649c44-qv4gx_4a5c39e0-b7fc-49c3-b662-451027f68ab8/console/0.log" Feb 16 21:33:12.057368 master-0 kubenswrapper[38936]: I0216 21:33:12.057166 38936 generic.go:334] "Generic (PLEG): container finished" podID="4a5c39e0-b7fc-49c3-b662-451027f68ab8" containerID="cdbc65c1c9d28a230556f90a8929e6e290f5eba7a4cf63fbc814e7b3fc06b31a" exitCode=2 Feb 16 21:33:12.058684 master-0 kubenswrapper[38936]: I0216 21:33:12.058201 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67b7649c44-qv4gx" event={"ID":"4a5c39e0-b7fc-49c3-b662-451027f68ab8","Type":"ContainerDied","Data":"cdbc65c1c9d28a230556f90a8929e6e290f5eba7a4cf63fbc814e7b3fc06b31a"} Feb 16 21:33:12.398036 master-0 kubenswrapper[38936]: I0216 21:33:12.397924 38936 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-67b7649c44-qv4gx_4a5c39e0-b7fc-49c3-b662-451027f68ab8/console/0.log" Feb 16 21:33:12.398036 master-0 kubenswrapper[38936]: I0216 21:33:12.398014 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:33:12.471925 master-0 kubenswrapper[38936]: I0216 21:33:12.471853 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4a5c39e0-b7fc-49c3-b662-451027f68ab8-oauth-serving-cert\") pod \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " Feb 16 21:33:12.471925 master-0 kubenswrapper[38936]: I0216 21:33:12.471925 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4a5c39e0-b7fc-49c3-b662-451027f68ab8-service-ca\") pod \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " Feb 16 21:33:12.472178 master-0 kubenswrapper[38936]: I0216 21:33:12.471946 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4a5c39e0-b7fc-49c3-b662-451027f68ab8-console-oauth-config\") pod \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " Feb 16 21:33:12.472178 master-0 kubenswrapper[38936]: I0216 21:33:12.472007 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4a5c39e0-b7fc-49c3-b662-451027f68ab8-console-config\") pod \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " Feb 16 21:33:12.472178 master-0 kubenswrapper[38936]: I0216 21:33:12.472073 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/4a5c39e0-b7fc-49c3-b662-451027f68ab8-console-serving-cert\") pod \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " Feb 16 21:33:12.472343 master-0 kubenswrapper[38936]: I0216 21:33:12.472183 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a5c39e0-b7fc-49c3-b662-451027f68ab8-trusted-ca-bundle\") pod \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " Feb 16 21:33:12.472615 master-0 kubenswrapper[38936]: I0216 21:33:12.472218 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc5kv\" (UniqueName: \"kubernetes.io/projected/4a5c39e0-b7fc-49c3-b662-451027f68ab8-kube-api-access-gc5kv\") pod \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\" (UID: \"4a5c39e0-b7fc-49c3-b662-451027f68ab8\") " Feb 16 21:33:12.472700 master-0 kubenswrapper[38936]: I0216 21:33:12.472665 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a5c39e0-b7fc-49c3-b662-451027f68ab8-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "4a5c39e0-b7fc-49c3-b662-451027f68ab8" (UID: "4a5c39e0-b7fc-49c3-b662-451027f68ab8"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:33:12.472752 master-0 kubenswrapper[38936]: I0216 21:33:12.472725 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a5c39e0-b7fc-49c3-b662-451027f68ab8-console-config" (OuterVolumeSpecName: "console-config") pod "4a5c39e0-b7fc-49c3-b662-451027f68ab8" (UID: "4a5c39e0-b7fc-49c3-b662-451027f68ab8"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:33:12.472827 master-0 kubenswrapper[38936]: I0216 21:33:12.472774 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a5c39e0-b7fc-49c3-b662-451027f68ab8-service-ca" (OuterVolumeSpecName: "service-ca") pod "4a5c39e0-b7fc-49c3-b662-451027f68ab8" (UID: "4a5c39e0-b7fc-49c3-b662-451027f68ab8"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:33:12.472908 master-0 kubenswrapper[38936]: I0216 21:33:12.472879 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a5c39e0-b7fc-49c3-b662-451027f68ab8-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "4a5c39e0-b7fc-49c3-b662-451027f68ab8" (UID: "4a5c39e0-b7fc-49c3-b662-451027f68ab8"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:33:12.473281 master-0 kubenswrapper[38936]: I0216 21:33:12.473223 38936 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4a5c39e0-b7fc-49c3-b662-451027f68ab8-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 21:33:12.473281 master-0 kubenswrapper[38936]: I0216 21:33:12.473273 38936 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4a5c39e0-b7fc-49c3-b662-451027f68ab8-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 21:33:12.473281 master-0 kubenswrapper[38936]: I0216 21:33:12.473286 38936 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4a5c39e0-b7fc-49c3-b662-451027f68ab8-console-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:33:12.473607 master-0 kubenswrapper[38936]: I0216 21:33:12.473298 38936 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/4a5c39e0-b7fc-49c3-b662-451027f68ab8-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:33:12.475524 master-0 kubenswrapper[38936]: I0216 21:33:12.475458 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a5c39e0-b7fc-49c3-b662-451027f68ab8-kube-api-access-gc5kv" (OuterVolumeSpecName: "kube-api-access-gc5kv") pod "4a5c39e0-b7fc-49c3-b662-451027f68ab8" (UID: "4a5c39e0-b7fc-49c3-b662-451027f68ab8"). InnerVolumeSpecName "kube-api-access-gc5kv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:33:12.475726 master-0 kubenswrapper[38936]: I0216 21:33:12.475682 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a5c39e0-b7fc-49c3-b662-451027f68ab8-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "4a5c39e0-b7fc-49c3-b662-451027f68ab8" (UID: "4a5c39e0-b7fc-49c3-b662-451027f68ab8"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:33:12.476595 master-0 kubenswrapper[38936]: I0216 21:33:12.476515 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a5c39e0-b7fc-49c3-b662-451027f68ab8-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "4a5c39e0-b7fc-49c3-b662-451027f68ab8" (UID: "4a5c39e0-b7fc-49c3-b662-451027f68ab8"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:33:12.575295 master-0 kubenswrapper[38936]: I0216 21:33:12.575230 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc5kv\" (UniqueName: \"kubernetes.io/projected/4a5c39e0-b7fc-49c3-b662-451027f68ab8-kube-api-access-gc5kv\") on node \"master-0\" DevicePath \"\"" Feb 16 21:33:12.575295 master-0 kubenswrapper[38936]: I0216 21:33:12.575283 38936 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4a5c39e0-b7fc-49c3-b662-451027f68ab8-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:33:12.575295 master-0 kubenswrapper[38936]: I0216 21:33:12.575295 38936 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a5c39e0-b7fc-49c3-b662-451027f68ab8-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 21:33:13.068934 master-0 kubenswrapper[38936]: I0216 21:33:13.068883 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-67b7649c44-qv4gx_4a5c39e0-b7fc-49c3-b662-451027f68ab8/console/0.log" Feb 16 21:33:13.069456 master-0 kubenswrapper[38936]: I0216 21:33:13.069053 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-67b7649c44-qv4gx" Feb 16 21:33:13.069456 master-0 kubenswrapper[38936]: I0216 21:33:13.069206 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67b7649c44-qv4gx" event={"ID":"4a5c39e0-b7fc-49c3-b662-451027f68ab8","Type":"ContainerDied","Data":"e9a3370419775c754ea2ac9716b520ed89a4438e3ca569cb6ef90e5e185628c5"} Feb 16 21:33:13.069456 master-0 kubenswrapper[38936]: I0216 21:33:13.069441 38936 scope.go:117] "RemoveContainer" containerID="cdbc65c1c9d28a230556f90a8929e6e290f5eba7a4cf63fbc814e7b3fc06b31a" Feb 16 21:33:13.072459 master-0 kubenswrapper[38936]: I0216 21:33:13.072356 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vmzf6" event={"ID":"54adbc23-3ef2-4a3d-8db3-7e50f1d1ea3c","Type":"ContainerStarted","Data":"f00287365995d572f4427ad2a3cd1dcbe0d7c7b3b662bb69579edd7ae627c835"} Feb 16 21:33:13.116760 master-0 kubenswrapper[38936]: I0216 21:33:13.116374 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vmzf6" podStartSLOduration=2.206739142 podStartE2EDuration="3.116357464s" podCreationTimestamp="2026-02-16 21:33:10 +0000 UTC" firstStartedPulling="2026-02-16 21:33:10.916053826 +0000 UTC m=+621.268057188" lastFinishedPulling="2026-02-16 21:33:11.825672148 +0000 UTC m=+622.177675510" observedRunningTime="2026-02-16 21:33:13.114282587 +0000 UTC m=+623.466285969" watchObservedRunningTime="2026-02-16 21:33:13.116357464 +0000 UTC m=+623.468360826" Feb 16 21:33:13.139832 master-0 kubenswrapper[38936]: I0216 21:33:13.139763 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-67b7649c44-qv4gx"] Feb 16 21:33:13.148051 master-0 kubenswrapper[38936]: I0216 21:33:13.147981 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-67b7649c44-qv4gx"] Feb 16 21:33:13.240131 master-0 
kubenswrapper[38936]: E0216 21:33:13.240045 38936 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a5c39e0_b7fc_49c3_b662_451027f68ab8.slice/crio-e9a3370419775c754ea2ac9716b520ed89a4438e3ca569cb6ef90e5e185628c5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a5c39e0_b7fc_49c3_b662_451027f68ab8.slice\": RecentStats: unable to find data in memory cache]" Feb 16 21:33:13.886353 master-0 kubenswrapper[38936]: I0216 21:33:13.886292 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a5c39e0-b7fc-49c3-b662-451027f68ab8" path="/var/lib/kubelet/pods/4a5c39e0-b7fc-49c3-b662-451027f68ab8/volumes" Feb 16 21:33:14.078023 master-0 kubenswrapper[38936]: I0216 21:33:14.077924 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vmzf6"] Feb 16 21:33:14.679235 master-0 kubenswrapper[38936]: I0216 21:33:14.679175 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-rmjhw"] Feb 16 21:33:14.679606 master-0 kubenswrapper[38936]: E0216 21:33:14.679578 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a5c39e0-b7fc-49c3-b662-451027f68ab8" containerName="console" Feb 16 21:33:14.679606 master-0 kubenswrapper[38936]: I0216 21:33:14.679595 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a5c39e0-b7fc-49c3-b662-451027f68ab8" containerName="console" Feb 16 21:33:14.679857 master-0 kubenswrapper[38936]: I0216 21:33:14.679831 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a5c39e0-b7fc-49c3-b662-451027f68ab8" containerName="console" Feb 16 21:33:14.680640 master-0 kubenswrapper[38936]: I0216 21:33:14.680619 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rmjhw" Feb 16 21:33:14.693040 master-0 kubenswrapper[38936]: I0216 21:33:14.692978 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rmjhw"] Feb 16 21:33:14.813697 master-0 kubenswrapper[38936]: I0216 21:33:14.813628 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7psjc\" (UniqueName: \"kubernetes.io/projected/56ea025c-b007-4d63-9cc7-9c754a496fb9-kube-api-access-7psjc\") pod \"openstack-operator-index-rmjhw\" (UID: \"56ea025c-b007-4d63-9cc7-9c754a496fb9\") " pod="openstack-operators/openstack-operator-index-rmjhw" Feb 16 21:33:14.915530 master-0 kubenswrapper[38936]: I0216 21:33:14.915472 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7psjc\" (UniqueName: \"kubernetes.io/projected/56ea025c-b007-4d63-9cc7-9c754a496fb9-kube-api-access-7psjc\") pod \"openstack-operator-index-rmjhw\" (UID: \"56ea025c-b007-4d63-9cc7-9c754a496fb9\") " pod="openstack-operators/openstack-operator-index-rmjhw" Feb 16 21:33:14.932197 master-0 kubenswrapper[38936]: I0216 21:33:14.932087 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7psjc\" (UniqueName: \"kubernetes.io/projected/56ea025c-b007-4d63-9cc7-9c754a496fb9-kube-api-access-7psjc\") pod \"openstack-operator-index-rmjhw\" (UID: \"56ea025c-b007-4d63-9cc7-9c754a496fb9\") " pod="openstack-operators/openstack-operator-index-rmjhw" Feb 16 21:33:14.999614 master-0 kubenswrapper[38936]: I0216 21:33:14.999535 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rmjhw" Feb 16 21:33:15.097192 master-0 kubenswrapper[38936]: I0216 21:33:15.095271 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-vmzf6" podUID="54adbc23-3ef2-4a3d-8db3-7e50f1d1ea3c" containerName="registry-server" containerID="cri-o://f00287365995d572f4427ad2a3cd1dcbe0d7c7b3b662bb69579edd7ae627c835" gracePeriod=2 Feb 16 21:33:15.407833 master-0 kubenswrapper[38936]: I0216 21:33:15.407343 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rmjhw"] Feb 16 21:33:15.511265 master-0 kubenswrapper[38936]: I0216 21:33:15.511224 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vmzf6" Feb 16 21:33:15.630179 master-0 kubenswrapper[38936]: I0216 21:33:15.630108 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcs2s\" (UniqueName: \"kubernetes.io/projected/54adbc23-3ef2-4a3d-8db3-7e50f1d1ea3c-kube-api-access-hcs2s\") pod \"54adbc23-3ef2-4a3d-8db3-7e50f1d1ea3c\" (UID: \"54adbc23-3ef2-4a3d-8db3-7e50f1d1ea3c\") " Feb 16 21:33:15.633194 master-0 kubenswrapper[38936]: I0216 21:33:15.633131 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54adbc23-3ef2-4a3d-8db3-7e50f1d1ea3c-kube-api-access-hcs2s" (OuterVolumeSpecName: "kube-api-access-hcs2s") pod "54adbc23-3ef2-4a3d-8db3-7e50f1d1ea3c" (UID: "54adbc23-3ef2-4a3d-8db3-7e50f1d1ea3c"). InnerVolumeSpecName "kube-api-access-hcs2s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:33:15.732613 master-0 kubenswrapper[38936]: I0216 21:33:15.732133 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcs2s\" (UniqueName: \"kubernetes.io/projected/54adbc23-3ef2-4a3d-8db3-7e50f1d1ea3c-kube-api-access-hcs2s\") on node \"master-0\" DevicePath \"\"" Feb 16 21:33:16.109759 master-0 kubenswrapper[38936]: I0216 21:33:16.109680 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rmjhw" event={"ID":"56ea025c-b007-4d63-9cc7-9c754a496fb9","Type":"ContainerStarted","Data":"49360bdd3d611fab83c69565cbb4a1d232382e60765c759921624d5b272e1a55"} Feb 16 21:33:16.119171 master-0 kubenswrapper[38936]: I0216 21:33:16.111529 38936 generic.go:334] "Generic (PLEG): container finished" podID="54adbc23-3ef2-4a3d-8db3-7e50f1d1ea3c" containerID="f00287365995d572f4427ad2a3cd1dcbe0d7c7b3b662bb69579edd7ae627c835" exitCode=0 Feb 16 21:33:16.119171 master-0 kubenswrapper[38936]: I0216 21:33:16.111602 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vmzf6" event={"ID":"54adbc23-3ef2-4a3d-8db3-7e50f1d1ea3c","Type":"ContainerDied","Data":"f00287365995d572f4427ad2a3cd1dcbe0d7c7b3b662bb69579edd7ae627c835"} Feb 16 21:33:16.119171 master-0 kubenswrapper[38936]: I0216 21:33:16.111663 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vmzf6" Feb 16 21:33:16.119171 master-0 kubenswrapper[38936]: I0216 21:33:16.111703 38936 scope.go:117] "RemoveContainer" containerID="f00287365995d572f4427ad2a3cd1dcbe0d7c7b3b662bb69579edd7ae627c835" Feb 16 21:33:16.119171 master-0 kubenswrapper[38936]: I0216 21:33:16.111685 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vmzf6" event={"ID":"54adbc23-3ef2-4a3d-8db3-7e50f1d1ea3c","Type":"ContainerDied","Data":"79b8dcfcea9fbaa8654c3c6c775164473819b9c5ddabe6c3192fa750decb1a25"} Feb 16 21:33:16.139159 master-0 kubenswrapper[38936]: I0216 21:33:16.139090 38936 scope.go:117] "RemoveContainer" containerID="f00287365995d572f4427ad2a3cd1dcbe0d7c7b3b662bb69579edd7ae627c835" Feb 16 21:33:16.139710 master-0 kubenswrapper[38936]: E0216 21:33:16.139659 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f00287365995d572f4427ad2a3cd1dcbe0d7c7b3b662bb69579edd7ae627c835\": container with ID starting with f00287365995d572f4427ad2a3cd1dcbe0d7c7b3b662bb69579edd7ae627c835 not found: ID does not exist" containerID="f00287365995d572f4427ad2a3cd1dcbe0d7c7b3b662bb69579edd7ae627c835" Feb 16 21:33:16.140027 master-0 kubenswrapper[38936]: I0216 21:33:16.139732 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f00287365995d572f4427ad2a3cd1dcbe0d7c7b3b662bb69579edd7ae627c835"} err="failed to get container status \"f00287365995d572f4427ad2a3cd1dcbe0d7c7b3b662bb69579edd7ae627c835\": rpc error: code = NotFound desc = could not find container \"f00287365995d572f4427ad2a3cd1dcbe0d7c7b3b662bb69579edd7ae627c835\": container with ID starting with f00287365995d572f4427ad2a3cd1dcbe0d7c7b3b662bb69579edd7ae627c835 not found: ID does not exist" Feb 16 21:33:16.161330 master-0 kubenswrapper[38936]: I0216 21:33:16.161075 38936 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vmzf6"] Feb 16 21:33:16.171975 master-0 kubenswrapper[38936]: I0216 21:33:16.171875 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-vmzf6"] Feb 16 21:33:17.127513 master-0 kubenswrapper[38936]: I0216 21:33:17.127423 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rmjhw" event={"ID":"56ea025c-b007-4d63-9cc7-9c754a496fb9","Type":"ContainerStarted","Data":"0fe60ff3a36a2a41e491093761faaf139da8468ae79c51cd9ad6b9f652674d7c"} Feb 16 21:33:17.165276 master-0 kubenswrapper[38936]: I0216 21:33:17.164873 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-rmjhw" podStartSLOduration=2.609909637 podStartE2EDuration="3.164439271s" podCreationTimestamp="2026-02-16 21:33:14 +0000 UTC" firstStartedPulling="2026-02-16 21:33:15.420352579 +0000 UTC m=+625.772355941" lastFinishedPulling="2026-02-16 21:33:15.974882193 +0000 UTC m=+626.326885575" observedRunningTime="2026-02-16 21:33:17.149162181 +0000 UTC m=+627.501165553" watchObservedRunningTime="2026-02-16 21:33:17.164439271 +0000 UTC m=+627.516442723" Feb 16 21:33:17.886741 master-0 kubenswrapper[38936]: I0216 21:33:17.886637 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54adbc23-3ef2-4a3d-8db3-7e50f1d1ea3c" path="/var/lib/kubelet/pods/54adbc23-3ef2-4a3d-8db3-7e50f1d1ea3c/volumes" Feb 16 21:33:24.999907 master-0 kubenswrapper[38936]: I0216 21:33:24.999850 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-rmjhw" Feb 16 21:33:25.000520 master-0 kubenswrapper[38936]: I0216 21:33:25.000110 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-rmjhw" Feb 16 21:33:25.033218 master-0 
kubenswrapper[38936]: I0216 21:33:25.033137 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-rmjhw" Feb 16 21:33:25.234188 master-0 kubenswrapper[38936]: I0216 21:33:25.234129 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-rmjhw" Feb 16 21:33:26.927929 master-0 kubenswrapper[38936]: I0216 21:33:26.927807 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc"] Feb 16 21:33:26.928686 master-0 kubenswrapper[38936]: E0216 21:33:26.928370 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54adbc23-3ef2-4a3d-8db3-7e50f1d1ea3c" containerName="registry-server" Feb 16 21:33:26.928686 master-0 kubenswrapper[38936]: I0216 21:33:26.928390 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="54adbc23-3ef2-4a3d-8db3-7e50f1d1ea3c" containerName="registry-server" Feb 16 21:33:26.928771 master-0 kubenswrapper[38936]: I0216 21:33:26.928705 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="54adbc23-3ef2-4a3d-8db3-7e50f1d1ea3c" containerName="registry-server" Feb 16 21:33:26.930358 master-0 kubenswrapper[38936]: I0216 21:33:26.930322 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc" Feb 16 21:33:26.947133 master-0 kubenswrapper[38936]: I0216 21:33:26.947073 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc"] Feb 16 21:33:27.074797 master-0 kubenswrapper[38936]: I0216 21:33:27.074713 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2652\" (UniqueName: \"kubernetes.io/projected/3356068a-3a22-4b43-a706-8801f913ea20-kube-api-access-m2652\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc\" (UID: \"3356068a-3a22-4b43-a706-8801f913ea20\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc" Feb 16 21:33:27.075097 master-0 kubenswrapper[38936]: I0216 21:33:27.074818 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3356068a-3a22-4b43-a706-8801f913ea20-bundle\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc\" (UID: \"3356068a-3a22-4b43-a706-8801f913ea20\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc" Feb 16 21:33:27.075097 master-0 kubenswrapper[38936]: I0216 21:33:27.075021 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3356068a-3a22-4b43-a706-8801f913ea20-util\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc\" (UID: \"3356068a-3a22-4b43-a706-8801f913ea20\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc" Feb 16 21:33:27.176335 master-0 kubenswrapper[38936]: I0216 21:33:27.176273 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-m2652\" (UniqueName: \"kubernetes.io/projected/3356068a-3a22-4b43-a706-8801f913ea20-kube-api-access-m2652\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc\" (UID: \"3356068a-3a22-4b43-a706-8801f913ea20\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc" Feb 16 21:33:27.176335 master-0 kubenswrapper[38936]: I0216 21:33:27.176340 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3356068a-3a22-4b43-a706-8801f913ea20-bundle\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc\" (UID: \"3356068a-3a22-4b43-a706-8801f913ea20\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc" Feb 16 21:33:27.176629 master-0 kubenswrapper[38936]: I0216 21:33:27.176431 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3356068a-3a22-4b43-a706-8801f913ea20-util\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc\" (UID: \"3356068a-3a22-4b43-a706-8801f913ea20\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc" Feb 16 21:33:27.176899 master-0 kubenswrapper[38936]: I0216 21:33:27.176875 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3356068a-3a22-4b43-a706-8801f913ea20-util\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc\" (UID: \"3356068a-3a22-4b43-a706-8801f913ea20\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc" Feb 16 21:33:27.176949 master-0 kubenswrapper[38936]: I0216 21:33:27.176918 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3356068a-3a22-4b43-a706-8801f913ea20-bundle\") pod 
\"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc\" (UID: \"3356068a-3a22-4b43-a706-8801f913ea20\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc" Feb 16 21:33:27.197993 master-0 kubenswrapper[38936]: I0216 21:33:27.197874 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2652\" (UniqueName: \"kubernetes.io/projected/3356068a-3a22-4b43-a706-8801f913ea20-kube-api-access-m2652\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc\" (UID: \"3356068a-3a22-4b43-a706-8801f913ea20\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc" Feb 16 21:33:27.249828 master-0 kubenswrapper[38936]: I0216 21:33:27.249762 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc" Feb 16 21:33:27.792582 master-0 kubenswrapper[38936]: I0216 21:33:27.792524 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc"] Feb 16 21:33:27.794075 master-0 kubenswrapper[38936]: W0216 21:33:27.794012 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3356068a_3a22_4b43_a706_8801f913ea20.slice/crio-32f5571c86212a619c51ee25893fb1cf4d4245331ab9d15876f00f01812e6aad WatchSource:0}: Error finding container 32f5571c86212a619c51ee25893fb1cf4d4245331ab9d15876f00f01812e6aad: Status 404 returned error can't find the container with id 32f5571c86212a619c51ee25893fb1cf4d4245331ab9d15876f00f01812e6aad Feb 16 21:33:28.234012 master-0 kubenswrapper[38936]: I0216 21:33:28.233860 38936 generic.go:334] "Generic (PLEG): container finished" podID="3356068a-3a22-4b43-a706-8801f913ea20" containerID="8c80a7dcf87c45a7cf1c0fc27f559a7b89bfd6e3f0c70043e9e4542f342a48b6" exitCode=0 Feb 16 
21:33:28.234012 master-0 kubenswrapper[38936]: I0216 21:33:28.233919 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc" event={"ID":"3356068a-3a22-4b43-a706-8801f913ea20","Type":"ContainerDied","Data":"8c80a7dcf87c45a7cf1c0fc27f559a7b89bfd6e3f0c70043e9e4542f342a48b6"} Feb 16 21:33:28.234012 master-0 kubenswrapper[38936]: I0216 21:33:28.233947 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc" event={"ID":"3356068a-3a22-4b43-a706-8801f913ea20","Type":"ContainerStarted","Data":"32f5571c86212a619c51ee25893fb1cf4d4245331ab9d15876f00f01812e6aad"} Feb 16 21:33:29.244751 master-0 kubenswrapper[38936]: I0216 21:33:29.244580 38936 generic.go:334] "Generic (PLEG): container finished" podID="3356068a-3a22-4b43-a706-8801f913ea20" containerID="a5808717b33882c868e7fdbf57a98b1132228d599986b540613583b8c8523096" exitCode=0 Feb 16 21:33:29.244751 master-0 kubenswrapper[38936]: I0216 21:33:29.244668 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc" event={"ID":"3356068a-3a22-4b43-a706-8801f913ea20","Type":"ContainerDied","Data":"a5808717b33882c868e7fdbf57a98b1132228d599986b540613583b8c8523096"} Feb 16 21:33:30.255849 master-0 kubenswrapper[38936]: I0216 21:33:30.255769 38936 generic.go:334] "Generic (PLEG): container finished" podID="3356068a-3a22-4b43-a706-8801f913ea20" containerID="4670b14e8ba1e954ff65d158dc2c67c259820c1b4f01aa74e38ccebf1b318278" exitCode=0 Feb 16 21:33:30.255849 master-0 kubenswrapper[38936]: I0216 21:33:30.255830 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc" 
event={"ID":"3356068a-3a22-4b43-a706-8801f913ea20","Type":"ContainerDied","Data":"4670b14e8ba1e954ff65d158dc2c67c259820c1b4f01aa74e38ccebf1b318278"} Feb 16 21:33:31.645627 master-0 kubenswrapper[38936]: I0216 21:33:31.645563 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc" Feb 16 21:33:31.764134 master-0 kubenswrapper[38936]: I0216 21:33:31.764066 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2652\" (UniqueName: \"kubernetes.io/projected/3356068a-3a22-4b43-a706-8801f913ea20-kube-api-access-m2652\") pod \"3356068a-3a22-4b43-a706-8801f913ea20\" (UID: \"3356068a-3a22-4b43-a706-8801f913ea20\") " Feb 16 21:33:31.764392 master-0 kubenswrapper[38936]: I0216 21:33:31.764180 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3356068a-3a22-4b43-a706-8801f913ea20-bundle\") pod \"3356068a-3a22-4b43-a706-8801f913ea20\" (UID: \"3356068a-3a22-4b43-a706-8801f913ea20\") " Feb 16 21:33:31.764392 master-0 kubenswrapper[38936]: I0216 21:33:31.764229 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3356068a-3a22-4b43-a706-8801f913ea20-util\") pod \"3356068a-3a22-4b43-a706-8801f913ea20\" (UID: \"3356068a-3a22-4b43-a706-8801f913ea20\") " Feb 16 21:33:31.765635 master-0 kubenswrapper[38936]: I0216 21:33:31.765381 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3356068a-3a22-4b43-a706-8801f913ea20-bundle" (OuterVolumeSpecName: "bundle") pod "3356068a-3a22-4b43-a706-8801f913ea20" (UID: "3356068a-3a22-4b43-a706-8801f913ea20"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:33:31.767322 master-0 kubenswrapper[38936]: I0216 21:33:31.767239 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3356068a-3a22-4b43-a706-8801f913ea20-kube-api-access-m2652" (OuterVolumeSpecName: "kube-api-access-m2652") pod "3356068a-3a22-4b43-a706-8801f913ea20" (UID: "3356068a-3a22-4b43-a706-8801f913ea20"). InnerVolumeSpecName "kube-api-access-m2652". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:33:31.781853 master-0 kubenswrapper[38936]: I0216 21:33:31.781790 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3356068a-3a22-4b43-a706-8801f913ea20-util" (OuterVolumeSpecName: "util") pod "3356068a-3a22-4b43-a706-8801f913ea20" (UID: "3356068a-3a22-4b43-a706-8801f913ea20"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:33:31.866359 master-0 kubenswrapper[38936]: I0216 21:33:31.866215 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2652\" (UniqueName: \"kubernetes.io/projected/3356068a-3a22-4b43-a706-8801f913ea20-kube-api-access-m2652\") on node \"master-0\" DevicePath \"\"" Feb 16 21:33:31.866359 master-0 kubenswrapper[38936]: I0216 21:33:31.866280 38936 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3356068a-3a22-4b43-a706-8801f913ea20-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:33:31.866359 master-0 kubenswrapper[38936]: I0216 21:33:31.866291 38936 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3356068a-3a22-4b43-a706-8801f913ea20-util\") on node \"master-0\" DevicePath \"\"" Feb 16 21:33:32.276211 master-0 kubenswrapper[38936]: I0216 21:33:32.276143 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc" event={"ID":"3356068a-3a22-4b43-a706-8801f913ea20","Type":"ContainerDied","Data":"32f5571c86212a619c51ee25893fb1cf4d4245331ab9d15876f00f01812e6aad"} Feb 16 21:33:32.276211 master-0 kubenswrapper[38936]: I0216 21:33:32.276198 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32f5571c86212a619c51ee25893fb1cf4d4245331ab9d15876f00f01812e6aad" Feb 16 21:33:32.276498 master-0 kubenswrapper[38936]: I0216 21:33:32.276268 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc" Feb 16 21:33:40.145136 master-0 kubenswrapper[38936]: I0216 21:33:40.144976 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f8db498b4-xs9l4"] Feb 16 21:33:40.146174 master-0 kubenswrapper[38936]: E0216 21:33:40.145447 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3356068a-3a22-4b43-a706-8801f913ea20" containerName="util" Feb 16 21:33:40.146174 master-0 kubenswrapper[38936]: I0216 21:33:40.145467 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="3356068a-3a22-4b43-a706-8801f913ea20" containerName="util" Feb 16 21:33:40.146174 master-0 kubenswrapper[38936]: E0216 21:33:40.145488 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3356068a-3a22-4b43-a706-8801f913ea20" containerName="extract" Feb 16 21:33:40.146174 master-0 kubenswrapper[38936]: I0216 21:33:40.145496 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="3356068a-3a22-4b43-a706-8801f913ea20" containerName="extract" Feb 16 21:33:40.146174 master-0 kubenswrapper[38936]: E0216 21:33:40.145540 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3356068a-3a22-4b43-a706-8801f913ea20" containerName="pull" Feb 16 21:33:40.146174 master-0 
kubenswrapper[38936]: I0216 21:33:40.145550 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="3356068a-3a22-4b43-a706-8801f913ea20" containerName="pull" Feb 16 21:33:40.146859 master-0 kubenswrapper[38936]: I0216 21:33:40.146825 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="3356068a-3a22-4b43-a706-8801f913ea20" containerName="extract" Feb 16 21:33:40.147610 master-0 kubenswrapper[38936]: I0216 21:33:40.147581 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-xs9l4" Feb 16 21:33:40.181752 master-0 kubenswrapper[38936]: I0216 21:33:40.181672 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f8db498b4-xs9l4"] Feb 16 21:33:40.222542 master-0 kubenswrapper[38936]: I0216 21:33:40.219730 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmpcx\" (UniqueName: \"kubernetes.io/projected/5b590e5f-8fe9-4121-805c-8ed9ab5d8686-kube-api-access-tmpcx\") pod \"openstack-operator-controller-init-7f8db498b4-xs9l4\" (UID: \"5b590e5f-8fe9-4121-805c-8ed9ab5d8686\") " pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-xs9l4" Feb 16 21:33:40.321695 master-0 kubenswrapper[38936]: I0216 21:33:40.321599 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmpcx\" (UniqueName: \"kubernetes.io/projected/5b590e5f-8fe9-4121-805c-8ed9ab5d8686-kube-api-access-tmpcx\") pod \"openstack-operator-controller-init-7f8db498b4-xs9l4\" (UID: \"5b590e5f-8fe9-4121-805c-8ed9ab5d8686\") " pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-xs9l4" Feb 16 21:33:40.345121 master-0 kubenswrapper[38936]: I0216 21:33:40.345065 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmpcx\" (UniqueName: 
\"kubernetes.io/projected/5b590e5f-8fe9-4121-805c-8ed9ab5d8686-kube-api-access-tmpcx\") pod \"openstack-operator-controller-init-7f8db498b4-xs9l4\" (UID: \"5b590e5f-8fe9-4121-805c-8ed9ab5d8686\") " pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-xs9l4" Feb 16 21:33:40.481373 master-0 kubenswrapper[38936]: I0216 21:33:40.481145 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-xs9l4" Feb 16 21:33:41.063539 master-0 kubenswrapper[38936]: I0216 21:33:41.063476 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f8db498b4-xs9l4"] Feb 16 21:33:41.359740 master-0 kubenswrapper[38936]: I0216 21:33:41.358238 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-xs9l4" event={"ID":"5b590e5f-8fe9-4121-805c-8ed9ab5d8686","Type":"ContainerStarted","Data":"add31afe6b5e4dec1c23e7670955186818094cf626756235598bd8d28a063a5f"} Feb 16 21:33:47.456597 master-0 kubenswrapper[38936]: I0216 21:33:47.456516 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-xs9l4" event={"ID":"5b590e5f-8fe9-4121-805c-8ed9ab5d8686","Type":"ContainerStarted","Data":"27d2595021ed1c3fc21d7714e5ac944b88b33476f796cb5a91d6ac66978bb090"} Feb 16 21:33:47.457152 master-0 kubenswrapper[38936]: I0216 21:33:47.456725 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-xs9l4" Feb 16 21:33:47.511339 master-0 kubenswrapper[38936]: I0216 21:33:47.511248 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-xs9l4" podStartSLOduration=2.24023772 podStartE2EDuration="7.51122736s" podCreationTimestamp="2026-02-16 
21:33:40 +0000 UTC" firstStartedPulling="2026-02-16 21:33:41.071361339 +0000 UTC m=+651.423364701" lastFinishedPulling="2026-02-16 21:33:46.342350979 +0000 UTC m=+656.694354341" observedRunningTime="2026-02-16 21:33:47.492864066 +0000 UTC m=+657.844867458" watchObservedRunningTime="2026-02-16 21:33:47.51122736 +0000 UTC m=+657.863230722"
Feb 16 21:34:00.484475 master-0 kubenswrapper[38936]: I0216 21:34:00.484394 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-xs9l4"
Feb 16 21:34:21.019673 master-0 kubenswrapper[38936]: I0216 21:34:21.018720 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-cl9fr"]
Feb 16 21:34:21.020258 master-0 kubenswrapper[38936]: I0216 21:34:21.020057 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cl9fr"
Feb 16 21:34:21.030955 master-0 kubenswrapper[38936]: I0216 21:34:21.029926 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-vcvgb"]
Feb 16 21:34:21.031221 master-0 kubenswrapper[38936]: I0216 21:34:21.031210 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-vcvgb"
Feb 16 21:34:21.057090 master-0 kubenswrapper[38936]: I0216 21:34:21.056976 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-cl9fr"]
Feb 16 21:34:21.062728 master-0 kubenswrapper[38936]: I0216 21:34:21.061167 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dcvp\" (UniqueName: \"kubernetes.io/projected/18add596-a9c3-4e94-ad01-39d363025d52-kube-api-access-5dcvp\") pod \"barbican-operator-controller-manager-868647ff47-cl9fr\" (UID: \"18add596-a9c3-4e94-ad01-39d363025d52\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cl9fr"
Feb 16 21:34:21.062728 master-0 kubenswrapper[38936]: I0216 21:34:21.061403 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl5b5\" (UniqueName: \"kubernetes.io/projected/891bb5c8-0e45-4e99-8384-6c24700f5251-kube-api-access-vl5b5\") pod \"cinder-operator-controller-manager-5d946d989d-vcvgb\" (UID: \"891bb5c8-0e45-4e99-8384-6c24700f5251\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-vcvgb"
Feb 16 21:34:21.063905 master-0 kubenswrapper[38936]: I0216 21:34:21.063677 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-vcvgb"]
Feb 16 21:34:21.142448 master-0 kubenswrapper[38936]: I0216 21:34:21.134174 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-7q6jk"]
Feb 16 21:34:21.142448 master-0 kubenswrapper[38936]: I0216 21:34:21.135964 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7q6jk"
Feb 16 21:34:21.170218 master-0 kubenswrapper[38936]: I0216 21:34:21.169272 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dcvp\" (UniqueName: \"kubernetes.io/projected/18add596-a9c3-4e94-ad01-39d363025d52-kube-api-access-5dcvp\") pod \"barbican-operator-controller-manager-868647ff47-cl9fr\" (UID: \"18add596-a9c3-4e94-ad01-39d363025d52\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cl9fr"
Feb 16 21:34:21.170218 master-0 kubenswrapper[38936]: I0216 21:34:21.169395 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dh9m\" (UniqueName: \"kubernetes.io/projected/357c2b7e-6999-44d6-b5dc-57fe20c0ae75-kube-api-access-6dh9m\") pod \"designate-operator-controller-manager-6d8bf5c495-7q6jk\" (UID: \"357c2b7e-6999-44d6-b5dc-57fe20c0ae75\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7q6jk"
Feb 16 21:34:21.170218 master-0 kubenswrapper[38936]: I0216 21:34:21.169477 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl5b5\" (UniqueName: \"kubernetes.io/projected/891bb5c8-0e45-4e99-8384-6c24700f5251-kube-api-access-vl5b5\") pod \"cinder-operator-controller-manager-5d946d989d-vcvgb\" (UID: \"891bb5c8-0e45-4e99-8384-6c24700f5251\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-vcvgb"
Feb 16 21:34:21.176853 master-0 kubenswrapper[38936]: I0216 21:34:21.176781 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-7q6jk"]
Feb 16 21:34:21.193424 master-0 kubenswrapper[38936]: I0216 21:34:21.193354 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-qbf42"]
Feb 16 21:34:21.194828 master-0 kubenswrapper[38936]: I0216 21:34:21.194794 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qbf42"
Feb 16 21:34:21.199187 master-0 kubenswrapper[38936]: I0216 21:34:21.199140 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl5b5\" (UniqueName: \"kubernetes.io/projected/891bb5c8-0e45-4e99-8384-6c24700f5251-kube-api-access-vl5b5\") pod \"cinder-operator-controller-manager-5d946d989d-vcvgb\" (UID: \"891bb5c8-0e45-4e99-8384-6c24700f5251\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-vcvgb"
Feb 16 21:34:21.221850 master-0 kubenswrapper[38936]: I0216 21:34:21.221114 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dcvp\" (UniqueName: \"kubernetes.io/projected/18add596-a9c3-4e94-ad01-39d363025d52-kube-api-access-5dcvp\") pod \"barbican-operator-controller-manager-868647ff47-cl9fr\" (UID: \"18add596-a9c3-4e94-ad01-39d363025d52\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cl9fr"
Feb 16 21:34:21.222061 master-0 kubenswrapper[38936]: I0216 21:34:21.221894 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-qbf42"]
Feb 16 21:34:21.231537 master-0 kubenswrapper[38936]: I0216 21:34:21.229193 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-jgb9x"]
Feb 16 21:34:21.231868 master-0 kubenswrapper[38936]: I0216 21:34:21.231772 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jgb9x"
Feb 16 21:34:21.252164 master-0 kubenswrapper[38936]: I0216 21:34:21.252106 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-jgb9x"]
Feb 16 21:34:21.276355 master-0 kubenswrapper[38936]: I0216 21:34:21.273137 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dh9m\" (UniqueName: \"kubernetes.io/projected/357c2b7e-6999-44d6-b5dc-57fe20c0ae75-kube-api-access-6dh9m\") pod \"designate-operator-controller-manager-6d8bf5c495-7q6jk\" (UID: \"357c2b7e-6999-44d6-b5dc-57fe20c0ae75\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7q6jk"
Feb 16 21:34:21.276355 master-0 kubenswrapper[38936]: I0216 21:34:21.273258 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6k5l\" (UniqueName: \"kubernetes.io/projected/12bd49dd-67b9-467b-859b-1388b9882681-kube-api-access-w6k5l\") pod \"heat-operator-controller-manager-69f49c598c-jgb9x\" (UID: \"12bd49dd-67b9-467b-859b-1388b9882681\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jgb9x"
Feb 16 21:34:21.276355 master-0 kubenswrapper[38936]: I0216 21:34:21.273326 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k2kl\" (UniqueName: \"kubernetes.io/projected/32cafbe9-c7a9-4737-9b4b-3d5e46779d3d-kube-api-access-7k2kl\") pod \"glance-operator-controller-manager-77987464f4-qbf42\" (UID: \"32cafbe9-c7a9-4737-9b4b-3d5e46779d3d\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-qbf42"
Feb 16 21:34:21.276697 master-0 kubenswrapper[38936]: I0216 21:34:21.276483 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5vhws"]
Feb 16 21:34:21.289120 master-0 kubenswrapper[38936]: I0216 21:34:21.279308 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5vhws"
Feb 16 21:34:21.312704 master-0 kubenswrapper[38936]: I0216 21:34:21.311698 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dh9m\" (UniqueName: \"kubernetes.io/projected/357c2b7e-6999-44d6-b5dc-57fe20c0ae75-kube-api-access-6dh9m\") pod \"designate-operator-controller-manager-6d8bf5c495-7q6jk\" (UID: \"357c2b7e-6999-44d6-b5dc-57fe20c0ae75\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7q6jk"
Feb 16 21:34:21.313707 master-0 kubenswrapper[38936]: I0216 21:34:21.313256 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-ns6pz"]
Feb 16 21:34:21.315676 master-0 kubenswrapper[38936]: I0216 21:34:21.315633 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ns6pz"
Feb 16 21:34:21.323608 master-0 kubenswrapper[38936]: I0216 21:34:21.319713 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Feb 16 21:34:21.338726 master-0 kubenswrapper[38936]: I0216 21:34:21.338668 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5vhws"]
Feb 16 21:34:21.364859 master-0 kubenswrapper[38936]: I0216 21:34:21.364800 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-ns6pz"]
Feb 16 21:34:21.375211 master-0 kubenswrapper[38936]: I0216 21:34:21.375132 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ln9x\" (UniqueName: \"kubernetes.io/projected/911d5a9a-3dfb-4345-a53a-901075360f91-kube-api-access-8ln9x\") pod \"infra-operator-controller-manager-5f879c76b6-ns6pz\" (UID: \"911d5a9a-3dfb-4345-a53a-901075360f91\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ns6pz"
Feb 16 21:34:21.375483 master-0 kubenswrapper[38936]: I0216 21:34:21.375214 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6k5l\" (UniqueName: \"kubernetes.io/projected/12bd49dd-67b9-467b-859b-1388b9882681-kube-api-access-w6k5l\") pod \"heat-operator-controller-manager-69f49c598c-jgb9x\" (UID: \"12bd49dd-67b9-467b-859b-1388b9882681\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jgb9x"
Feb 16 21:34:21.375483 master-0 kubenswrapper[38936]: I0216 21:34:21.375412 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k2kl\" (UniqueName: \"kubernetes.io/projected/32cafbe9-c7a9-4737-9b4b-3d5e46779d3d-kube-api-access-7k2kl\") pod \"glance-operator-controller-manager-77987464f4-qbf42\" (UID: \"32cafbe9-c7a9-4737-9b4b-3d5e46779d3d\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-qbf42"
Feb 16 21:34:21.375630 master-0 kubenswrapper[38936]: I0216 21:34:21.375540 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snn2p\" (UniqueName: \"kubernetes.io/projected/57d3af01-3b13-4d0d-aebf-43d07aea3461-kube-api-access-snn2p\") pod \"horizon-operator-controller-manager-5b9b8895d5-5vhws\" (UID: \"57d3af01-3b13-4d0d-aebf-43d07aea3461\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5vhws"
Feb 16 21:34:21.375713 master-0 kubenswrapper[38936]: I0216 21:34:21.375625 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/911d5a9a-3dfb-4345-a53a-901075360f91-cert\") pod \"infra-operator-controller-manager-5f879c76b6-ns6pz\" (UID: \"911d5a9a-3dfb-4345-a53a-901075360f91\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ns6pz"
Feb 16 21:34:21.397667 master-0 kubenswrapper[38936]: I0216 21:34:21.393227 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cl9fr"
Feb 16 21:34:21.408617 master-0 kubenswrapper[38936]: I0216 21:34:21.406150 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-2bvnq"]
Feb 16 21:34:21.408617 master-0 kubenswrapper[38936]: I0216 21:34:21.407457 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-2bvnq"
Feb 16 21:34:21.414236 master-0 kubenswrapper[38936]: I0216 21:34:21.414186 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k2kl\" (UniqueName: \"kubernetes.io/projected/32cafbe9-c7a9-4737-9b4b-3d5e46779d3d-kube-api-access-7k2kl\") pod \"glance-operator-controller-manager-77987464f4-qbf42\" (UID: \"32cafbe9-c7a9-4737-9b4b-3d5e46779d3d\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-qbf42"
Feb 16 21:34:21.417787 master-0 kubenswrapper[38936]: I0216 21:34:21.417746 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6k5l\" (UniqueName: \"kubernetes.io/projected/12bd49dd-67b9-467b-859b-1388b9882681-kube-api-access-w6k5l\") pod \"heat-operator-controller-manager-69f49c598c-jgb9x\" (UID: \"12bd49dd-67b9-467b-859b-1388b9882681\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jgb9x"
Feb 16 21:34:21.429625 master-0 kubenswrapper[38936]: I0216 21:34:21.428579 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-vcvgb"
Feb 16 21:34:21.451453 master-0 kubenswrapper[38936]: I0216 21:34:21.445281 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-wrhn6"]
Feb 16 21:34:21.451453 master-0 kubenswrapper[38936]: I0216 21:34:21.447229 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wrhn6"
Feb 16 21:34:21.475111 master-0 kubenswrapper[38936]: I0216 21:34:21.474967 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-2bvnq"]
Feb 16 21:34:21.477685 master-0 kubenswrapper[38936]: I0216 21:34:21.477550 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/911d5a9a-3dfb-4345-a53a-901075360f91-cert\") pod \"infra-operator-controller-manager-5f879c76b6-ns6pz\" (UID: \"911d5a9a-3dfb-4345-a53a-901075360f91\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ns6pz"
Feb 16 21:34:21.477685 master-0 kubenswrapper[38936]: I0216 21:34:21.477668 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ln9x\" (UniqueName: \"kubernetes.io/projected/911d5a9a-3dfb-4345-a53a-901075360f91-kube-api-access-8ln9x\") pod \"infra-operator-controller-manager-5f879c76b6-ns6pz\" (UID: \"911d5a9a-3dfb-4345-a53a-901075360f91\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ns6pz"
Feb 16 21:34:21.478232 master-0 kubenswrapper[38936]: E0216 21:34:21.478172 38936 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 16 21:34:21.478307 master-0 kubenswrapper[38936]: E0216 21:34:21.478290 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/911d5a9a-3dfb-4345-a53a-901075360f91-cert podName:911d5a9a-3dfb-4345-a53a-901075360f91 nodeName:}" failed. No retries permitted until 2026-02-16 21:34:21.978263152 +0000 UTC m=+692.330266514 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/911d5a9a-3dfb-4345-a53a-901075360f91-cert") pod "infra-operator-controller-manager-5f879c76b6-ns6pz" (UID: "911d5a9a-3dfb-4345-a53a-901075360f91") : secret "infra-operator-webhook-server-cert" not found
Feb 16 21:34:21.491213 master-0 kubenswrapper[38936]: I0216 21:34:21.491145 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2jj6\" (UniqueName: \"kubernetes.io/projected/cac792f3-e1bd-496f-9fa0-709907e97b0b-kube-api-access-f2jj6\") pod \"keystone-operator-controller-manager-b4d948c87-wrhn6\" (UID: \"cac792f3-e1bd-496f-9fa0-709907e97b0b\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wrhn6"
Feb 16 21:34:21.491393 master-0 kubenswrapper[38936]: I0216 21:34:21.491349 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snn2p\" (UniqueName: \"kubernetes.io/projected/57d3af01-3b13-4d0d-aebf-43d07aea3461-kube-api-access-snn2p\") pod \"horizon-operator-controller-manager-5b9b8895d5-5vhws\" (UID: \"57d3af01-3b13-4d0d-aebf-43d07aea3461\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5vhws"
Feb 16 21:34:21.491438 master-0 kubenswrapper[38936]: I0216 21:34:21.491390 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75w7q\" (UniqueName: \"kubernetes.io/projected/b6fbc679-8e3b-48e1-85f0-87f35b7dc9e2-kube-api-access-75w7q\") pod \"ironic-operator-controller-manager-554564d7fc-2bvnq\" (UID: \"b6fbc679-8e3b-48e1-85f0-87f35b7dc9e2\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-2bvnq"
Feb 16 21:34:21.525947 master-0 kubenswrapper[38936]: I0216 21:34:21.525139 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ln9x\" (UniqueName: \"kubernetes.io/projected/911d5a9a-3dfb-4345-a53a-901075360f91-kube-api-access-8ln9x\") pod \"infra-operator-controller-manager-5f879c76b6-ns6pz\" (UID: \"911d5a9a-3dfb-4345-a53a-901075360f91\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ns6pz"
Feb 16 21:34:21.530237 master-0 kubenswrapper[38936]: I0216 21:34:21.530035 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snn2p\" (UniqueName: \"kubernetes.io/projected/57d3af01-3b13-4d0d-aebf-43d07aea3461-kube-api-access-snn2p\") pod \"horizon-operator-controller-manager-5b9b8895d5-5vhws\" (UID: \"57d3af01-3b13-4d0d-aebf-43d07aea3461\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5vhws"
Feb 16 21:34:21.576783 master-0 kubenswrapper[38936]: I0216 21:34:21.574743 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-wrhn6"]
Feb 16 21:34:21.587189 master-0 kubenswrapper[38936]: I0216 21:34:21.586617 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-54t98"]
Feb 16 21:34:21.590535 master-0 kubenswrapper[38936]: I0216 21:34:21.588256 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7q6jk"
Feb 16 21:34:21.603769 master-0 kubenswrapper[38936]: I0216 21:34:21.598879 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2jj6\" (UniqueName: \"kubernetes.io/projected/cac792f3-e1bd-496f-9fa0-709907e97b0b-kube-api-access-f2jj6\") pod \"keystone-operator-controller-manager-b4d948c87-wrhn6\" (UID: \"cac792f3-e1bd-496f-9fa0-709907e97b0b\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wrhn6"
Feb 16 21:34:21.603769 master-0 kubenswrapper[38936]: I0216 21:34:21.599106 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75w7q\" (UniqueName: \"kubernetes.io/projected/b6fbc679-8e3b-48e1-85f0-87f35b7dc9e2-kube-api-access-75w7q\") pod \"ironic-operator-controller-manager-554564d7fc-2bvnq\" (UID: \"b6fbc679-8e3b-48e1-85f0-87f35b7dc9e2\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-2bvnq"
Feb 16 21:34:21.610527 master-0 kubenswrapper[38936]: I0216 21:34:21.609800 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-54t98"
Feb 16 21:34:21.652396 master-0 kubenswrapper[38936]: I0216 21:34:21.652343 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75w7q\" (UniqueName: \"kubernetes.io/projected/b6fbc679-8e3b-48e1-85f0-87f35b7dc9e2-kube-api-access-75w7q\") pod \"ironic-operator-controller-manager-554564d7fc-2bvnq\" (UID: \"b6fbc679-8e3b-48e1-85f0-87f35b7dc9e2\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-2bvnq"
Feb 16 21:34:21.657447 master-0 kubenswrapper[38936]: I0216 21:34:21.657365 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-54t98"]
Feb 16 21:34:21.660367 master-0 kubenswrapper[38936]: I0216 21:34:21.659698 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qbf42"
Feb 16 21:34:21.664539 master-0 kubenswrapper[38936]: I0216 21:34:21.664458 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2jj6\" (UniqueName: \"kubernetes.io/projected/cac792f3-e1bd-496f-9fa0-709907e97b0b-kube-api-access-f2jj6\") pod \"keystone-operator-controller-manager-b4d948c87-wrhn6\" (UID: \"cac792f3-e1bd-496f-9fa0-709907e97b0b\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wrhn6"
Feb 16 21:34:21.681696 master-0 kubenswrapper[38936]: I0216 21:34:21.681555 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jgb9x"
Feb 16 21:34:21.692823 master-0 kubenswrapper[38936]: I0216 21:34:21.692778 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-mpvvp"]
Feb 16 21:34:21.705405 master-0 kubenswrapper[38936]: I0216 21:34:21.703014 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5vhws"
Feb 16 21:34:21.706320 master-0 kubenswrapper[38936]: I0216 21:34:21.706264 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mpvvp"
Feb 16 21:34:21.707989 master-0 kubenswrapper[38936]: I0216 21:34:21.707971 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6nnr"]
Feb 16 21:34:21.709325 master-0 kubenswrapper[38936]: I0216 21:34:21.709309 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6nnr"
Feb 16 21:34:21.714544 master-0 kubenswrapper[38936]: I0216 21:34:21.714504 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7tfm\" (UniqueName: \"kubernetes.io/projected/a8bf004e-2095-4e74-943b-1c724e78a4aa-kube-api-access-k7tfm\") pod \"manila-operator-controller-manager-54f6768c69-54t98\" (UID: \"a8bf004e-2095-4e74-943b-1c724e78a4aa\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-54t98"
Feb 16 21:34:21.726572 master-0 kubenswrapper[38936]: I0216 21:34:21.717412 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-mpvvp"]
Feb 16 21:34:21.747747 master-0 kubenswrapper[38936]: I0216 21:34:21.747624 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6nnr"]
Feb 16 21:34:21.777129 master-0 kubenswrapper[38936]: I0216 21:34:21.776368 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-xp4kx"]
Feb 16 21:34:21.778317 master-0 kubenswrapper[38936]: I0216 21:34:21.778276 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xp4kx"
Feb 16 21:34:21.824280 master-0 kubenswrapper[38936]: I0216 21:34:21.819048 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-2bvnq"
Feb 16 21:34:21.824280 master-0 kubenswrapper[38936]: I0216 21:34:21.820663 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbq2g\" (UniqueName: \"kubernetes.io/projected/e2287eff-31d9-4737-b884-f369344e2b02-kube-api-access-dbq2g\") pod \"mariadb-operator-controller-manager-6994f66f48-mpvvp\" (UID: \"e2287eff-31d9-4737-b884-f369344e2b02\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mpvvp"
Feb 16 21:34:21.824280 master-0 kubenswrapper[38936]: I0216 21:34:21.820710 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7tfm\" (UniqueName: \"kubernetes.io/projected/a8bf004e-2095-4e74-943b-1c724e78a4aa-kube-api-access-k7tfm\") pod \"manila-operator-controller-manager-54f6768c69-54t98\" (UID: \"a8bf004e-2095-4e74-943b-1c724e78a4aa\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-54t98"
Feb 16 21:34:21.824280 master-0 kubenswrapper[38936]: I0216 21:34:21.820747 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln2zx\" (UniqueName: \"kubernetes.io/projected/d7baf5cf-175c-457a-95d4-530fc8679f0d-kube-api-access-ln2zx\") pod \"nova-operator-controller-manager-567668f5cf-xp4kx\" (UID: \"d7baf5cf-175c-457a-95d4-530fc8679f0d\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xp4kx"
Feb 16 21:34:21.824280 master-0 kubenswrapper[38936]: I0216 21:34:21.820774 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkr6c\" (UniqueName: \"kubernetes.io/projected/721853c8-a888-4e6d-8647-31bf57a0b9cb-kube-api-access-zkr6c\") pod \"neutron-operator-controller-manager-64ddbf8bb-c6nnr\" (UID: \"721853c8-a888-4e6d-8647-31bf57a0b9cb\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6nnr"
Feb 16 21:34:21.839284 master-0 kubenswrapper[38936]: I0216 21:34:21.835118 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wrhn6"
Feb 16 21:34:21.847456 master-0 kubenswrapper[38936]: I0216 21:34:21.847314 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-xp4kx"]
Feb 16 21:34:21.911672 master-0 kubenswrapper[38936]: I0216 21:34:21.890111 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7tfm\" (UniqueName: \"kubernetes.io/projected/a8bf004e-2095-4e74-943b-1c724e78a4aa-kube-api-access-k7tfm\") pod \"manila-operator-controller-manager-54f6768c69-54t98\" (UID: \"a8bf004e-2095-4e74-943b-1c724e78a4aa\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-54t98"
Feb 16 21:34:21.990372 master-0 kubenswrapper[38936]: I0216 21:34:21.971265 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbq2g\" (UniqueName: \"kubernetes.io/projected/e2287eff-31d9-4737-b884-f369344e2b02-kube-api-access-dbq2g\") pod \"mariadb-operator-controller-manager-6994f66f48-mpvvp\" (UID: \"e2287eff-31d9-4737-b884-f369344e2b02\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mpvvp"
Feb 16 21:34:21.990372 master-0 kubenswrapper[38936]: I0216 21:34:21.971343 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln2zx\" (UniqueName: \"kubernetes.io/projected/d7baf5cf-175c-457a-95d4-530fc8679f0d-kube-api-access-ln2zx\") pod \"nova-operator-controller-manager-567668f5cf-xp4kx\" (UID: \"d7baf5cf-175c-457a-95d4-530fc8679f0d\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xp4kx"
Feb 16 21:34:21.990372 master-0 kubenswrapper[38936]: I0216 21:34:21.971375 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkr6c\" (UniqueName: \"kubernetes.io/projected/721853c8-a888-4e6d-8647-31bf57a0b9cb-kube-api-access-zkr6c\") pod \"neutron-operator-controller-manager-64ddbf8bb-c6nnr\" (UID: \"721853c8-a888-4e6d-8647-31bf57a0b9cb\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6nnr"
Feb 16 21:34:21.990372 master-0 kubenswrapper[38936]: I0216 21:34:21.972372 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-54t98"
Feb 16 21:34:22.028670 master-0 kubenswrapper[38936]: I0216 21:34:22.011357 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbq2g\" (UniqueName: \"kubernetes.io/projected/e2287eff-31d9-4737-b884-f369344e2b02-kube-api-access-dbq2g\") pod \"mariadb-operator-controller-manager-6994f66f48-mpvvp\" (UID: \"e2287eff-31d9-4737-b884-f369344e2b02\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mpvvp"
Feb 16 21:34:22.067674 master-0 kubenswrapper[38936]: I0216 21:34:22.038370 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-fgq6l"]
Feb 16 21:34:22.067674 master-0 kubenswrapper[38936]: I0216 21:34:22.039410 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-fgq6l"]
Feb 16 21:34:22.067674 master-0 kubenswrapper[38936]: I0216 21:34:22.039487 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-fgq6l"
Feb 16 21:34:22.083825 master-0 kubenswrapper[38936]: I0216 21:34:22.077750 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkr6c\" (UniqueName: \"kubernetes.io/projected/721853c8-a888-4e6d-8647-31bf57a0b9cb-kube-api-access-zkr6c\") pod \"neutron-operator-controller-manager-64ddbf8bb-c6nnr\" (UID: \"721853c8-a888-4e6d-8647-31bf57a0b9cb\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6nnr"
Feb 16 21:34:22.083825 master-0 kubenswrapper[38936]: I0216 21:34:22.078163 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/911d5a9a-3dfb-4345-a53a-901075360f91-cert\") pod \"infra-operator-controller-manager-5f879c76b6-ns6pz\" (UID: \"911d5a9a-3dfb-4345-a53a-901075360f91\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ns6pz"
Feb 16 21:34:22.083825 master-0 kubenswrapper[38936]: E0216 21:34:22.078631 38936 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 16 21:34:22.083825 master-0 kubenswrapper[38936]: E0216 21:34:22.078693 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/911d5a9a-3dfb-4345-a53a-901075360f91-cert podName:911d5a9a-3dfb-4345-a53a-901075360f91 nodeName:}" failed. No retries permitted until 2026-02-16 21:34:23.078680116 +0000 UTC m=+693.430683478 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/911d5a9a-3dfb-4345-a53a-901075360f91-cert") pod "infra-operator-controller-manager-5f879c76b6-ns6pz" (UID: "911d5a9a-3dfb-4345-a53a-901075360f91") : secret "infra-operator-webhook-server-cert" not found
Feb 16 21:34:22.124674 master-0 kubenswrapper[38936]: I0216 21:34:22.112378 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln2zx\" (UniqueName: \"kubernetes.io/projected/d7baf5cf-175c-457a-95d4-530fc8679f0d-kube-api-access-ln2zx\") pod \"nova-operator-controller-manager-567668f5cf-xp4kx\" (UID: \"d7baf5cf-175c-457a-95d4-530fc8679f0d\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xp4kx"
Feb 16 21:34:22.151036 master-0 kubenswrapper[38936]: I0216 21:34:22.144407 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c"]
Feb 16 21:34:22.151036 master-0 kubenswrapper[38936]: I0216 21:34:22.144786 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mpvvp"
Feb 16 21:34:22.151036 master-0 kubenswrapper[38936]: I0216 21:34:22.146320 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c"
Feb 16 21:34:22.151331 master-0 kubenswrapper[38936]: I0216 21:34:22.151285 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Feb 16 21:34:22.179847 master-0 kubenswrapper[38936]: I0216 21:34:22.179782 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwrzv\" (UniqueName: \"kubernetes.io/projected/5e7149a7-9bde-4512-8b3f-008108c493a4-kube-api-access-rwrzv\") pod \"octavia-operator-controller-manager-69f8888797-fgq6l\" (UID: \"5e7149a7-9bde-4512-8b3f-008108c493a4\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-fgq6l"
Feb 16 21:34:22.180754 master-0 kubenswrapper[38936]: I0216 21:34:22.180727 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c"]
Feb 16 21:34:22.190160 master-0 kubenswrapper[38936]: I0216 21:34:22.189086 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-f8x8g"]
Feb 16 21:34:22.190568 master-0 kubenswrapper[38936]: I0216 21:34:22.190513 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-f8x8g"
Feb 16 21:34:22.217165 master-0 kubenswrapper[38936]: I0216 21:34:22.217035 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-mfnnp"]
Feb 16 21:34:22.219582 master-0 kubenswrapper[38936]: I0216 21:34:22.219390 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-mfnnp"
Feb 16 21:34:22.240834 master-0 kubenswrapper[38936]: I0216 21:34:22.233764 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-f8x8g"]
Feb 16 21:34:22.277246 master-0 kubenswrapper[38936]: I0216 21:34:22.262889 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-mfnnp"]
Feb 16 21:34:22.277989 master-0 kubenswrapper[38936]: W0216 21:34:22.277942 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18add596_a9c3_4e94_ad01_39d363025d52.slice/crio-ae4e1be382e9988a19bc5cc3921adc0c13234467e24d3d37307b975eb03f160d WatchSource:0}: Error finding container ae4e1be382e9988a19bc5cc3921adc0c13234467e24d3d37307b975eb03f160d: Status 404 returned error can't find the container with id ae4e1be382e9988a19bc5cc3921adc0c13234467e24d3d37307b975eb03f160d
Feb 16 21:34:22.281938 master-0 kubenswrapper[38936]: I0216 21:34:22.281880 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gw9s\" (UniqueName: \"kubernetes.io/projected/be794dfc-6441-4acf-b84c-2f0a2ed8d090-kube-api-access-5gw9s\") pod \"ovn-operator-controller-manager-d44cf6b75-f8x8g\" (UID: \"be794dfc-6441-4acf-b84c-2f0a2ed8d090\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-f8x8g"
Feb 16 21:34:22.282025 master-0 kubenswrapper[38936]: I0216 21:34:22.281960 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/89a5a16e-f92a-4878-ac85-9f4ca6b13354-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c\" (UID: \"89a5a16e-f92a-4878-ac85-9f4ca6b13354\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c"
Feb 16 21:34:22.282061 master-0 kubenswrapper[38936]: I0216 21:34:22.282052 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwrzv\" (UniqueName: \"kubernetes.io/projected/5e7149a7-9bde-4512-8b3f-008108c493a4-kube-api-access-rwrzv\") pod \"octavia-operator-controller-manager-69f8888797-fgq6l\" (UID: \"5e7149a7-9bde-4512-8b3f-008108c493a4\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-fgq6l"
Feb 16 21:34:22.282117 master-0 kubenswrapper[38936]: I0216 21:34:22.282096 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h5gl\" (UniqueName: \"kubernetes.io/projected/89a5a16e-f92a-4878-ac85-9f4ca6b13354-kube-api-access-7h5gl\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c\" (UID: \"89a5a16e-f92a-4878-ac85-9f4ca6b13354\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c"
Feb 16 21:34:22.288897 master-0 kubenswrapper[38936]: I0216 21:34:22.288809 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-zt9nz"]
Feb 16 21:34:22.290749 master-0 kubenswrapper[38936]: I0216 21:34:22.290733 38936 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-zt9nz" Feb 16 21:34:22.298679 master-0 kubenswrapper[38936]: I0216 21:34:22.298603 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-zrssz"] Feb 16 21:34:22.308080 master-0 kubenswrapper[38936]: I0216 21:34:22.307998 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwrzv\" (UniqueName: \"kubernetes.io/projected/5e7149a7-9bde-4512-8b3f-008108c493a4-kube-api-access-rwrzv\") pod \"octavia-operator-controller-manager-69f8888797-fgq6l\" (UID: \"5e7149a7-9bde-4512-8b3f-008108c493a4\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-fgq6l" Feb 16 21:34:22.313999 master-0 kubenswrapper[38936]: I0216 21:34:22.312858 38936 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:34:22.322126 master-0 kubenswrapper[38936]: I0216 21:34:22.320492 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-zrssz" Feb 16 21:34:22.329260 master-0 kubenswrapper[38936]: I0216 21:34:22.329189 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-zt9nz"] Feb 16 21:34:22.342733 master-0 kubenswrapper[38936]: I0216 21:34:22.341872 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-zrssz"] Feb 16 21:34:22.342923 master-0 kubenswrapper[38936]: I0216 21:34:22.342863 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6nnr" Feb 16 21:34:22.349896 master-0 kubenswrapper[38936]: I0216 21:34:22.349851 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xp4kx" Feb 16 21:34:22.350710 master-0 kubenswrapper[38936]: I0216 21:34:22.349998 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-snzb8"] Feb 16 21:34:22.351739 master-0 kubenswrapper[38936]: I0216 21:34:22.351693 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-snzb8" Feb 16 21:34:22.358880 master-0 kubenswrapper[38936]: I0216 21:34:22.358819 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-snzb8"] Feb 16 21:34:22.366031 master-0 kubenswrapper[38936]: I0216 21:34:22.365990 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-79sbw"] Feb 16 21:34:22.368102 master-0 kubenswrapper[38936]: I0216 21:34:22.368056 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-79sbw" Feb 16 21:34:22.378458 master-0 kubenswrapper[38936]: I0216 21:34:22.378404 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-79sbw"] Feb 16 21:34:22.386240 master-0 kubenswrapper[38936]: I0216 21:34:22.384710 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gw9s\" (UniqueName: \"kubernetes.io/projected/be794dfc-6441-4acf-b84c-2f0a2ed8d090-kube-api-access-5gw9s\") pod \"ovn-operator-controller-manager-d44cf6b75-f8x8g\" (UID: \"be794dfc-6441-4acf-b84c-2f0a2ed8d090\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-f8x8g" Feb 16 21:34:22.386240 master-0 kubenswrapper[38936]: I0216 21:34:22.384798 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/89a5a16e-f92a-4878-ac85-9f4ca6b13354-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c\" (UID: \"89a5a16e-f92a-4878-ac85-9f4ca6b13354\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c" Feb 16 21:34:22.386240 master-0 kubenswrapper[38936]: I0216 21:34:22.384920 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn45m\" (UniqueName: \"kubernetes.io/projected/a35ee94a-66a1-404d-981b-4e4426e9929d-kube-api-access-xn45m\") pod \"swift-operator-controller-manager-68f46476f-zt9nz\" (UID: \"a35ee94a-66a1-404d-981b-4e4426e9929d\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-zt9nz" Feb 16 21:34:22.386240 master-0 kubenswrapper[38936]: I0216 21:34:22.384943 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb4xp\" (UniqueName: 
\"kubernetes.io/projected/f4fd52b3-5d16-4bac-aa01-cb65615df27d-kube-api-access-gb4xp\") pod \"placement-operator-controller-manager-8497b45c89-mfnnp\" (UID: \"f4fd52b3-5d16-4bac-aa01-cb65615df27d\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-mfnnp" Feb 16 21:34:22.386240 master-0 kubenswrapper[38936]: I0216 21:34:22.384976 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h5gl\" (UniqueName: \"kubernetes.io/projected/89a5a16e-f92a-4878-ac85-9f4ca6b13354-kube-api-access-7h5gl\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c\" (UID: \"89a5a16e-f92a-4878-ac85-9f4ca6b13354\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c" Feb 16 21:34:22.386240 master-0 kubenswrapper[38936]: E0216 21:34:22.384986 38936 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:34:22.386240 master-0 kubenswrapper[38936]: E0216 21:34:22.385063 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89a5a16e-f92a-4878-ac85-9f4ca6b13354-cert podName:89a5a16e-f92a-4878-ac85-9f4ca6b13354 nodeName:}" failed. No retries permitted until 2026-02-16 21:34:22.885042137 +0000 UTC m=+693.237045499 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/89a5a16e-f92a-4878-ac85-9f4ca6b13354-cert") pod "openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c" (UID: "89a5a16e-f92a-4878-ac85-9f4ca6b13354") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:34:22.424606 master-0 kubenswrapper[38936]: I0216 21:34:22.424565 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h5gl\" (UniqueName: \"kubernetes.io/projected/89a5a16e-f92a-4878-ac85-9f4ca6b13354-kube-api-access-7h5gl\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c\" (UID: \"89a5a16e-f92a-4878-ac85-9f4ca6b13354\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c" Feb 16 21:34:22.425140 master-0 kubenswrapper[38936]: I0216 21:34:22.425088 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gw9s\" (UniqueName: \"kubernetes.io/projected/be794dfc-6441-4acf-b84c-2f0a2ed8d090-kube-api-access-5gw9s\") pod \"ovn-operator-controller-manager-d44cf6b75-f8x8g\" (UID: \"be794dfc-6441-4acf-b84c-2f0a2ed8d090\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-f8x8g" Feb 16 21:34:22.435049 master-0 kubenswrapper[38936]: I0216 21:34:22.435012 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd"] Feb 16 21:34:22.436817 master-0 kubenswrapper[38936]: I0216 21:34:22.436796 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd" Feb 16 21:34:22.439805 master-0 kubenswrapper[38936]: I0216 21:34:22.439271 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 16 21:34:22.452861 master-0 kubenswrapper[38936]: I0216 21:34:22.440198 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 16 21:34:22.460001 master-0 kubenswrapper[38936]: I0216 21:34:22.459226 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd"] Feb 16 21:34:22.478140 master-0 kubenswrapper[38936]: I0216 21:34:22.478094 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hdlb7"] Feb 16 21:34:22.479682 master-0 kubenswrapper[38936]: I0216 21:34:22.479636 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hdlb7" Feb 16 21:34:22.480400 master-0 kubenswrapper[38936]: I0216 21:34:22.480352 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hdlb7"] Feb 16 21:34:22.498531 master-0 kubenswrapper[38936]: I0216 21:34:22.496326 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn45m\" (UniqueName: \"kubernetes.io/projected/a35ee94a-66a1-404d-981b-4e4426e9929d-kube-api-access-xn45m\") pod \"swift-operator-controller-manager-68f46476f-zt9nz\" (UID: \"a35ee94a-66a1-404d-981b-4e4426e9929d\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-zt9nz" Feb 16 21:34:22.498531 master-0 kubenswrapper[38936]: I0216 21:34:22.496376 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb4xp\" (UniqueName: \"kubernetes.io/projected/f4fd52b3-5d16-4bac-aa01-cb65615df27d-kube-api-access-gb4xp\") pod \"placement-operator-controller-manager-8497b45c89-mfnnp\" (UID: \"f4fd52b3-5d16-4bac-aa01-cb65615df27d\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-mfnnp" Feb 16 21:34:22.498531 master-0 kubenswrapper[38936]: I0216 21:34:22.496433 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgrl8\" (UniqueName: \"kubernetes.io/projected/77211113-7f6f-40ab-aaa9-71a3073d82c8-kube-api-access-wgrl8\") pod \"test-operator-controller-manager-7866795846-snzb8\" (UID: \"77211113-7f6f-40ab-aaa9-71a3073d82c8\") " pod="openstack-operators/test-operator-controller-manager-7866795846-snzb8" Feb 16 21:34:22.498531 master-0 kubenswrapper[38936]: I0216 21:34:22.496465 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm9v8\" (UniqueName: 
\"kubernetes.io/projected/f01c68d6-8f2c-4b63-9eca-74d79b1d1ef6-kube-api-access-fm9v8\") pod \"telemetry-operator-controller-manager-7f45b4ff68-zrssz\" (UID: \"f01c68d6-8f2c-4b63-9eca-74d79b1d1ef6\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-zrssz" Feb 16 21:34:22.498531 master-0 kubenswrapper[38936]: I0216 21:34:22.496533 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcl6m\" (UniqueName: \"kubernetes.io/projected/a17c0204-8569-4821-88e4-c4fad31fdf6f-kube-api-access-jcl6m\") pod \"watcher-operator-controller-manager-5db88f68c-79sbw\" (UID: \"a17c0204-8569-4821-88e4-c4fad31fdf6f\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-79sbw" Feb 16 21:34:22.526313 master-0 kubenswrapper[38936]: I0216 21:34:22.526233 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-cl9fr"] Feb 16 21:34:22.548109 master-0 kubenswrapper[38936]: I0216 21:34:22.543895 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb4xp\" (UniqueName: \"kubernetes.io/projected/f4fd52b3-5d16-4bac-aa01-cb65615df27d-kube-api-access-gb4xp\") pod \"placement-operator-controller-manager-8497b45c89-mfnnp\" (UID: \"f4fd52b3-5d16-4bac-aa01-cb65615df27d\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-mfnnp" Feb 16 21:34:22.548109 master-0 kubenswrapper[38936]: I0216 21:34:22.544006 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn45m\" (UniqueName: \"kubernetes.io/projected/a35ee94a-66a1-404d-981b-4e4426e9929d-kube-api-access-xn45m\") pod \"swift-operator-controller-manager-68f46476f-zt9nz\" (UID: \"a35ee94a-66a1-404d-981b-4e4426e9929d\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-zt9nz" Feb 16 21:34:22.554057 master-0 kubenswrapper[38936]: I0216 21:34:22.554009 
38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-vcvgb"] Feb 16 21:34:22.594432 master-0 kubenswrapper[38936]: I0216 21:34:22.594378 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-fgq6l" Feb 16 21:34:22.597858 master-0 kubenswrapper[38936]: I0216 21:34:22.597808 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v26pw\" (UniqueName: \"kubernetes.io/projected/07f73506-a58b-4c0f-af8e-319e69827880-kube-api-access-v26pw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hdlb7\" (UID: \"07f73506-a58b-4c0f-af8e-319e69827880\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hdlb7" Feb 16 21:34:22.597926 master-0 kubenswrapper[38936]: I0216 21:34:22.597857 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mnfgd\" (UID: \"ed428069-b441-461f-8a06-ce8958277227\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd" Feb 16 21:34:22.597926 master-0 kubenswrapper[38936]: I0216 21:34:22.597886 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcl6m\" (UniqueName: \"kubernetes.io/projected/a17c0204-8569-4821-88e4-c4fad31fdf6f-kube-api-access-jcl6m\") pod \"watcher-operator-controller-manager-5db88f68c-79sbw\" (UID: \"a17c0204-8569-4821-88e4-c4fad31fdf6f\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-79sbw" Feb 16 21:34:22.598220 master-0 kubenswrapper[38936]: I0216 21:34:22.598165 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8z4k5\" (UniqueName: \"kubernetes.io/projected/ed428069-b441-461f-8a06-ce8958277227-kube-api-access-8z4k5\") pod \"openstack-operator-controller-manager-74d597bfd6-mnfgd\" (UID: \"ed428069-b441-461f-8a06-ce8958277227\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd" Feb 16 21:34:22.598492 master-0 kubenswrapper[38936]: I0216 21:34:22.598450 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgrl8\" (UniqueName: \"kubernetes.io/projected/77211113-7f6f-40ab-aaa9-71a3073d82c8-kube-api-access-wgrl8\") pod \"test-operator-controller-manager-7866795846-snzb8\" (UID: \"77211113-7f6f-40ab-aaa9-71a3073d82c8\") " pod="openstack-operators/test-operator-controller-manager-7866795846-snzb8" Feb 16 21:34:22.598553 master-0 kubenswrapper[38936]: I0216 21:34:22.598528 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mnfgd\" (UID: \"ed428069-b441-461f-8a06-ce8958277227\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd" Feb 16 21:34:22.598681 master-0 kubenswrapper[38936]: I0216 21:34:22.598564 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm9v8\" (UniqueName: \"kubernetes.io/projected/f01c68d6-8f2c-4b63-9eca-74d79b1d1ef6-kube-api-access-fm9v8\") pod \"telemetry-operator-controller-manager-7f45b4ff68-zrssz\" (UID: \"f01c68d6-8f2c-4b63-9eca-74d79b1d1ef6\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-zrssz" Feb 16 21:34:22.628872 master-0 kubenswrapper[38936]: I0216 21:34:22.628790 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgrl8\" (UniqueName: 
\"kubernetes.io/projected/77211113-7f6f-40ab-aaa9-71a3073d82c8-kube-api-access-wgrl8\") pod \"test-operator-controller-manager-7866795846-snzb8\" (UID: \"77211113-7f6f-40ab-aaa9-71a3073d82c8\") " pod="openstack-operators/test-operator-controller-manager-7866795846-snzb8" Feb 16 21:34:22.629598 master-0 kubenswrapper[38936]: I0216 21:34:22.629323 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcl6m\" (UniqueName: \"kubernetes.io/projected/a17c0204-8569-4821-88e4-c4fad31fdf6f-kube-api-access-jcl6m\") pod \"watcher-operator-controller-manager-5db88f68c-79sbw\" (UID: \"a17c0204-8569-4821-88e4-c4fad31fdf6f\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-79sbw" Feb 16 21:34:22.637054 master-0 kubenswrapper[38936]: I0216 21:34:22.636855 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm9v8\" (UniqueName: \"kubernetes.io/projected/f01c68d6-8f2c-4b63-9eca-74d79b1d1ef6-kube-api-access-fm9v8\") pod \"telemetry-operator-controller-manager-7f45b4ff68-zrssz\" (UID: \"f01c68d6-8f2c-4b63-9eca-74d79b1d1ef6\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-zrssz" Feb 16 21:34:22.692405 master-0 kubenswrapper[38936]: I0216 21:34:22.692339 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-f8x8g" Feb 16 21:34:22.700541 master-0 kubenswrapper[38936]: I0216 21:34:22.700474 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z4k5\" (UniqueName: \"kubernetes.io/projected/ed428069-b441-461f-8a06-ce8958277227-kube-api-access-8z4k5\") pod \"openstack-operator-controller-manager-74d597bfd6-mnfgd\" (UID: \"ed428069-b441-461f-8a06-ce8958277227\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd" Feb 16 21:34:22.700717 master-0 kubenswrapper[38936]: I0216 21:34:22.700594 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mnfgd\" (UID: \"ed428069-b441-461f-8a06-ce8958277227\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd" Feb 16 21:34:22.701077 master-0 kubenswrapper[38936]: I0216 21:34:22.701007 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v26pw\" (UniqueName: \"kubernetes.io/projected/07f73506-a58b-4c0f-af8e-319e69827880-kube-api-access-v26pw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hdlb7\" (UID: \"07f73506-a58b-4c0f-af8e-319e69827880\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hdlb7" Feb 16 21:34:22.701077 master-0 kubenswrapper[38936]: I0216 21:34:22.701057 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mnfgd\" (UID: \"ed428069-b441-461f-8a06-ce8958277227\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd" Feb 16 21:34:22.702729 
master-0 kubenswrapper[38936]: E0216 21:34:22.701307 38936 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 21:34:22.702729 master-0 kubenswrapper[38936]: E0216 21:34:22.701363 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-metrics-certs podName:ed428069-b441-461f-8a06-ce8958277227 nodeName:}" failed. No retries permitted until 2026-02-16 21:34:23.20134627 +0000 UTC m=+693.553349632 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-metrics-certs") pod "openstack-operator-controller-manager-74d597bfd6-mnfgd" (UID: "ed428069-b441-461f-8a06-ce8958277227") : secret "metrics-server-cert" not found Feb 16 21:34:22.702729 master-0 kubenswrapper[38936]: E0216 21:34:22.701514 38936 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 21:34:22.702729 master-0 kubenswrapper[38936]: E0216 21:34:22.701601 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-webhook-certs podName:ed428069-b441-461f-8a06-ce8958277227 nodeName:}" failed. No retries permitted until 2026-02-16 21:34:23.201579617 +0000 UTC m=+693.553582979 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-webhook-certs") pod "openstack-operator-controller-manager-74d597bfd6-mnfgd" (UID: "ed428069-b441-461f-8a06-ce8958277227") : secret "webhook-server-cert" not found Feb 16 21:34:22.716061 master-0 kubenswrapper[38936]: I0216 21:34:22.715569 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-mfnnp" Feb 16 21:34:22.734915 master-0 kubenswrapper[38936]: I0216 21:34:22.729047 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v26pw\" (UniqueName: \"kubernetes.io/projected/07f73506-a58b-4c0f-af8e-319e69827880-kube-api-access-v26pw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hdlb7\" (UID: \"07f73506-a58b-4c0f-af8e-319e69827880\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hdlb7" Feb 16 21:34:22.734915 master-0 kubenswrapper[38936]: I0216 21:34:22.734667 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z4k5\" (UniqueName: \"kubernetes.io/projected/ed428069-b441-461f-8a06-ce8958277227-kube-api-access-8z4k5\") pod \"openstack-operator-controller-manager-74d597bfd6-mnfgd\" (UID: \"ed428069-b441-461f-8a06-ce8958277227\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd" Feb 16 21:34:22.743349 master-0 kubenswrapper[38936]: I0216 21:34:22.743288 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-zt9nz" Feb 16 21:34:22.759208 master-0 kubenswrapper[38936]: I0216 21:34:22.759136 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-7q6jk"] Feb 16 21:34:22.768099 master-0 kubenswrapper[38936]: I0216 21:34:22.768063 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-zrssz" Feb 16 21:34:22.791961 master-0 kubenswrapper[38936]: I0216 21:34:22.791908 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-snzb8" Feb 16 21:34:22.811883 master-0 kubenswrapper[38936]: I0216 21:34:22.811843 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-79sbw" Feb 16 21:34:22.839965 master-0 kubenswrapper[38936]: I0216 21:34:22.839559 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hdlb7" Feb 16 21:34:22.907470 master-0 kubenswrapper[38936]: I0216 21:34:22.906021 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/89a5a16e-f92a-4878-ac85-9f4ca6b13354-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c\" (UID: \"89a5a16e-f92a-4878-ac85-9f4ca6b13354\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c" Feb 16 21:34:22.908478 master-0 kubenswrapper[38936]: E0216 21:34:22.908046 38936 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:34:22.908478 master-0 kubenswrapper[38936]: E0216 21:34:22.908351 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89a5a16e-f92a-4878-ac85-9f4ca6b13354-cert podName:89a5a16e-f92a-4878-ac85-9f4ca6b13354 nodeName:}" failed. No retries permitted until 2026-02-16 21:34:23.908321993 +0000 UTC m=+694.260325355 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/89a5a16e-f92a-4878-ac85-9f4ca6b13354-cert") pod "openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c" (UID: "89a5a16e-f92a-4878-ac85-9f4ca6b13354") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:34:22.952209 master-0 kubenswrapper[38936]: I0216 21:34:22.950317 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-vcvgb" event={"ID":"891bb5c8-0e45-4e99-8384-6c24700f5251","Type":"ContainerStarted","Data":"860fefa7a5234be60d8efddf2058f940cc842a9d9b4a2a7646153f6cdaea5312"} Feb 16 21:34:22.975042 master-0 kubenswrapper[38936]: I0216 21:34:22.974918 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7q6jk" event={"ID":"357c2b7e-6999-44d6-b5dc-57fe20c0ae75","Type":"ContainerStarted","Data":"863d5a2eedf622b1a83198173cb031698b268b029ca00f171f5df4e4d09e144a"} Feb 16 21:34:22.979938 master-0 kubenswrapper[38936]: I0216 21:34:22.979867 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cl9fr" event={"ID":"18add596-a9c3-4e94-ad01-39d363025d52","Type":"ContainerStarted","Data":"ae4e1be382e9988a19bc5cc3921adc0c13234467e24d3d37307b975eb03f160d"} Feb 16 21:34:23.115434 master-0 kubenswrapper[38936]: I0216 21:34:23.113774 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/911d5a9a-3dfb-4345-a53a-901075360f91-cert\") pod \"infra-operator-controller-manager-5f879c76b6-ns6pz\" (UID: \"911d5a9a-3dfb-4345-a53a-901075360f91\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ns6pz" Feb 16 21:34:23.115434 master-0 kubenswrapper[38936]: E0216 21:34:23.113994 38936 secret.go:189] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 21:34:23.115434 master-0 kubenswrapper[38936]: E0216 21:34:23.114048 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/911d5a9a-3dfb-4345-a53a-901075360f91-cert podName:911d5a9a-3dfb-4345-a53a-901075360f91 nodeName:}" failed. No retries permitted until 2026-02-16 21:34:25.114031511 +0000 UTC m=+695.466034873 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/911d5a9a-3dfb-4345-a53a-901075360f91-cert") pod "infra-operator-controller-manager-5f879c76b6-ns6pz" (UID: "911d5a9a-3dfb-4345-a53a-901075360f91") : secret "infra-operator-webhook-server-cert" not found Feb 16 21:34:23.216055 master-0 kubenswrapper[38936]: I0216 21:34:23.215518 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mnfgd\" (UID: \"ed428069-b441-461f-8a06-ce8958277227\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd" Feb 16 21:34:23.216055 master-0 kubenswrapper[38936]: I0216 21:34:23.215702 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mnfgd\" (UID: \"ed428069-b441-461f-8a06-ce8958277227\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd" Feb 16 21:34:23.216055 master-0 kubenswrapper[38936]: E0216 21:34:23.215873 38936 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 21:34:23.216055 master-0 kubenswrapper[38936]: E0216 21:34:23.215923 38936 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-webhook-certs podName:ed428069-b441-461f-8a06-ce8958277227 nodeName:}" failed. No retries permitted until 2026-02-16 21:34:24.215908148 +0000 UTC m=+694.567911510 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-webhook-certs") pod "openstack-operator-controller-manager-74d597bfd6-mnfgd" (UID: "ed428069-b441-461f-8a06-ce8958277227") : secret "webhook-server-cert" not found Feb 16 21:34:23.216318 master-0 kubenswrapper[38936]: E0216 21:34:23.216247 38936 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 21:34:23.218093 master-0 kubenswrapper[38936]: E0216 21:34:23.216364 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-metrics-certs podName:ed428069-b441-461f-8a06-ce8958277227 nodeName:}" failed. No retries permitted until 2026-02-16 21:34:24.216340339 +0000 UTC m=+694.568343731 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-metrics-certs") pod "openstack-operator-controller-manager-74d597bfd6-mnfgd" (UID: "ed428069-b441-461f-8a06-ce8958277227") : secret "metrics-server-cert" not found Feb 16 21:34:23.452835 master-0 kubenswrapper[38936]: I0216 21:34:23.452765 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-2bvnq"] Feb 16 21:34:23.471822 master-0 kubenswrapper[38936]: W0216 21:34:23.471763 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12bd49dd_67b9_467b_859b_1388b9882681.slice/crio-d13696b504d88c584cca2e1b2a3d8d9503028c245d2a88b7da1ae979bc6a8d64 WatchSource:0}: Error finding container d13696b504d88c584cca2e1b2a3d8d9503028c245d2a88b7da1ae979bc6a8d64: Status 404 returned error can't find the container with id d13696b504d88c584cca2e1b2a3d8d9503028c245d2a88b7da1ae979bc6a8d64 Feb 16 21:34:23.474087 master-0 kubenswrapper[38936]: I0216 21:34:23.473276 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5vhws"] Feb 16 21:34:23.487279 master-0 kubenswrapper[38936]: I0216 21:34:23.486903 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-jgb9x"] Feb 16 21:34:23.498069 master-0 kubenswrapper[38936]: I0216 21:34:23.497990 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-qbf42"] Feb 16 21:34:23.684426 master-0 kubenswrapper[38936]: I0216 21:34:23.684161 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-54t98"] Feb 16 21:34:23.694634 master-0 kubenswrapper[38936]: I0216 21:34:23.692078 38936 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-mpvvp"] Feb 16 21:34:23.704956 master-0 kubenswrapper[38936]: I0216 21:34:23.704896 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-wrhn6"] Feb 16 21:34:23.707172 master-0 kubenswrapper[38936]: W0216 21:34:23.707103 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8bf004e_2095_4e74_943b_1c724e78a4aa.slice/crio-e963ebf65fcf9047e58ee2dfe906176691812d4ce0ba55cfed0818925e1c864a WatchSource:0}: Error finding container e963ebf65fcf9047e58ee2dfe906176691812d4ce0ba55cfed0818925e1c864a: Status 404 returned error can't find the container with id e963ebf65fcf9047e58ee2dfe906176691812d4ce0ba55cfed0818925e1c864a Feb 16 21:34:23.725958 master-0 kubenswrapper[38936]: W0216 21:34:23.725850 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2287eff_31d9_4737_b884_f369344e2b02.slice/crio-ca198845bbe782d84d8753842c0467766161825c59b04932b7d61372c5ea5517 WatchSource:0}: Error finding container ca198845bbe782d84d8753842c0467766161825c59b04932b7d61372c5ea5517: Status 404 returned error can't find the container with id ca198845bbe782d84d8753842c0467766161825c59b04932b7d61372c5ea5517 Feb 16 21:34:23.741466 master-0 kubenswrapper[38936]: W0216 21:34:23.741392 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod721853c8_a888_4e6d_8647_31bf57a0b9cb.slice/crio-6928d96be753bd4f007266d00e345dc55fc4f88b61265cd9de681d4f3b6d5293 WatchSource:0}: Error finding container 6928d96be753bd4f007266d00e345dc55fc4f88b61265cd9de681d4f3b6d5293: Status 404 returned error can't find the container with id 
6928d96be753bd4f007266d00e345dc55fc4f88b61265cd9de681d4f3b6d5293 Feb 16 21:34:23.745409 master-0 kubenswrapper[38936]: I0216 21:34:23.745343 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6nnr"] Feb 16 21:34:23.944769 master-0 kubenswrapper[38936]: I0216 21:34:23.940663 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/89a5a16e-f92a-4878-ac85-9f4ca6b13354-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c\" (UID: \"89a5a16e-f92a-4878-ac85-9f4ca6b13354\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c" Feb 16 21:34:23.944769 master-0 kubenswrapper[38936]: E0216 21:34:23.940988 38936 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:34:23.944769 master-0 kubenswrapper[38936]: E0216 21:34:23.941065 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89a5a16e-f92a-4878-ac85-9f4ca6b13354-cert podName:89a5a16e-f92a-4878-ac85-9f4ca6b13354 nodeName:}" failed. No retries permitted until 2026-02-16 21:34:25.941040386 +0000 UTC m=+696.293043748 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/89a5a16e-f92a-4878-ac85-9f4ca6b13354-cert") pod "openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c" (UID: "89a5a16e-f92a-4878-ac85-9f4ca6b13354") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:34:23.993950 master-0 kubenswrapper[38936]: I0216 21:34:23.993429 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jgb9x" event={"ID":"12bd49dd-67b9-467b-859b-1388b9882681","Type":"ContainerStarted","Data":"d13696b504d88c584cca2e1b2a3d8d9503028c245d2a88b7da1ae979bc6a8d64"} Feb 16 21:34:23.996002 master-0 kubenswrapper[38936]: I0216 21:34:23.995930 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qbf42" event={"ID":"32cafbe9-c7a9-4737-9b4b-3d5e46779d3d","Type":"ContainerStarted","Data":"b9220a19b95e9eb9de705430dbeb01bbcd910589be472d50de46fabadf75ab8f"} Feb 16 21:34:23.998579 master-0 kubenswrapper[38936]: I0216 21:34:23.998509 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wrhn6" event={"ID":"cac792f3-e1bd-496f-9fa0-709907e97b0b","Type":"ContainerStarted","Data":"085167fcc273a078dccb3049f39f809da62c8bf7d90c7d3853d094a025e23255"} Feb 16 21:34:24.000030 master-0 kubenswrapper[38936]: I0216 21:34:23.999971 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-2bvnq" event={"ID":"b6fbc679-8e3b-48e1-85f0-87f35b7dc9e2","Type":"ContainerStarted","Data":"c4c16995e6d5ac56c0081444c42a9e61389e9033f91b5aa3e34d06ae574f62fa"} Feb 16 21:34:24.004919 master-0 kubenswrapper[38936]: I0216 21:34:24.004094 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mpvvp" 
event={"ID":"e2287eff-31d9-4737-b884-f369344e2b02","Type":"ContainerStarted","Data":"ca198845bbe782d84d8753842c0467766161825c59b04932b7d61372c5ea5517"} Feb 16 21:34:24.006414 master-0 kubenswrapper[38936]: I0216 21:34:24.006272 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-54t98" event={"ID":"a8bf004e-2095-4e74-943b-1c724e78a4aa","Type":"ContainerStarted","Data":"e963ebf65fcf9047e58ee2dfe906176691812d4ce0ba55cfed0818925e1c864a"} Feb 16 21:34:24.016608 master-0 kubenswrapper[38936]: I0216 21:34:24.016271 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6nnr" event={"ID":"721853c8-a888-4e6d-8647-31bf57a0b9cb","Type":"ContainerStarted","Data":"6928d96be753bd4f007266d00e345dc55fc4f88b61265cd9de681d4f3b6d5293"} Feb 16 21:34:24.024019 master-0 kubenswrapper[38936]: I0216 21:34:24.023916 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5vhws" event={"ID":"57d3af01-3b13-4d0d-aebf-43d07aea3461","Type":"ContainerStarted","Data":"fb85975af68817580045065410da031d013e7c12df57fcc0b93e902bddf97dc5"} Feb 16 21:34:24.058299 master-0 kubenswrapper[38936]: I0216 21:34:24.058184 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-mfnnp"] Feb 16 21:34:24.061157 master-0 kubenswrapper[38936]: W0216 21:34:24.060858 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e7149a7_9bde_4512_8b3f_008108c493a4.slice/crio-b04f4077f5a927b9a2420d8a68506a5822ec7ca839f4099665f7fb4357ce777f WatchSource:0}: Error finding container b04f4077f5a927b9a2420d8a68506a5822ec7ca839f4099665f7fb4357ce777f: Status 404 returned error can't find the container with id 
b04f4077f5a927b9a2420d8a68506a5822ec7ca839f4099665f7fb4357ce777f Feb 16 21:34:24.068022 master-0 kubenswrapper[38936]: I0216 21:34:24.067759 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-fgq6l"] Feb 16 21:34:24.077957 master-0 kubenswrapper[38936]: W0216 21:34:24.077867 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7baf5cf_175c_457a_95d4_530fc8679f0d.slice/crio-1cad494dcad17215478e186670a8e8243723da0e72a39013d23d329a4e5fd356 WatchSource:0}: Error finding container 1cad494dcad17215478e186670a8e8243723da0e72a39013d23d329a4e5fd356: Status 404 returned error can't find the container with id 1cad494dcad17215478e186670a8e8243723da0e72a39013d23d329a4e5fd356 Feb 16 21:34:24.080370 master-0 kubenswrapper[38936]: W0216 21:34:24.080248 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe794dfc_6441_4acf_b84c_2f0a2ed8d090.slice/crio-5b3978be18c669d64dfb652bea0689d1a9a3bdd776e01781f794b5585e56f1e8 WatchSource:0}: Error finding container 5b3978be18c669d64dfb652bea0689d1a9a3bdd776e01781f794b5585e56f1e8: Status 404 returned error can't find the container with id 5b3978be18c669d64dfb652bea0689d1a9a3bdd776e01781f794b5585e56f1e8 Feb 16 21:34:24.083068 master-0 kubenswrapper[38936]: I0216 21:34:24.083022 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-f8x8g"] Feb 16 21:34:24.106231 master-0 kubenswrapper[38936]: I0216 21:34:24.106046 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-xp4kx"] Feb 16 21:34:24.248175 master-0 kubenswrapper[38936]: I0216 21:34:24.247884 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mnfgd\" (UID: \"ed428069-b441-461f-8a06-ce8958277227\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd" Feb 16 21:34:24.248929 master-0 kubenswrapper[38936]: I0216 21:34:24.248276 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mnfgd\" (UID: \"ed428069-b441-461f-8a06-ce8958277227\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd" Feb 16 21:34:24.248929 master-0 kubenswrapper[38936]: E0216 21:34:24.248489 38936 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 21:34:24.248929 master-0 kubenswrapper[38936]: E0216 21:34:24.248589 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-metrics-certs podName:ed428069-b441-461f-8a06-ce8958277227 nodeName:}" failed. No retries permitted until 2026-02-16 21:34:26.248567538 +0000 UTC m=+696.600570900 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-metrics-certs") pod "openstack-operator-controller-manager-74d597bfd6-mnfgd" (UID: "ed428069-b441-461f-8a06-ce8958277227") : secret "metrics-server-cert" not found Feb 16 21:34:24.248929 master-0 kubenswrapper[38936]: E0216 21:34:24.248677 38936 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 21:34:24.248929 master-0 kubenswrapper[38936]: E0216 21:34:24.248872 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-webhook-certs podName:ed428069-b441-461f-8a06-ce8958277227 nodeName:}" failed. No retries permitted until 2026-02-16 21:34:26.248833726 +0000 UTC m=+696.600837088 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-webhook-certs") pod "openstack-operator-controller-manager-74d597bfd6-mnfgd" (UID: "ed428069-b441-461f-8a06-ce8958277227") : secret "webhook-server-cert" not found Feb 16 21:34:24.398536 master-0 kubenswrapper[38936]: I0216 21:34:24.398447 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-79sbw"] Feb 16 21:34:24.417578 master-0 kubenswrapper[38936]: W0216 21:34:24.417530 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda17c0204_8569_4821_88e4_c4fad31fdf6f.slice/crio-248f460c146c4d5d75a1663c899fa58a947361867c8ef2f48caf4c40d198dbda WatchSource:0}: Error finding container 248f460c146c4d5d75a1663c899fa58a947361867c8ef2f48caf4c40d198dbda: Status 404 returned error can't find the container with id 248f460c146c4d5d75a1663c899fa58a947361867c8ef2f48caf4c40d198dbda Feb 16 21:34:24.418512 master-0 kubenswrapper[38936]: I0216 
21:34:24.418437 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-zt9nz"] Feb 16 21:34:24.464702 master-0 kubenswrapper[38936]: W0216 21:34:24.463958 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda35ee94a_66a1_404d_981b_4e4426e9929d.slice/crio-9ba8959e397043bdd35335357e115379bd79f38609bd12741276931bfd09d533 WatchSource:0}: Error finding container 9ba8959e397043bdd35335357e115379bd79f38609bd12741276931bfd09d533: Status 404 returned error can't find the container with id 9ba8959e397043bdd35335357e115379bd79f38609bd12741276931bfd09d533 Feb 16 21:34:24.470204 master-0 kubenswrapper[38936]: I0216 21:34:24.470127 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-zrssz"] Feb 16 21:34:24.471519 master-0 kubenswrapper[38936]: W0216 21:34:24.471470 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77211113_7f6f_40ab_aaa9_71a3073d82c8.slice/crio-b2950710935239045d4c07072223f4af44066d92147f3c0dc13eab7ce8fdfca5 WatchSource:0}: Error finding container b2950710935239045d4c07072223f4af44066d92147f3c0dc13eab7ce8fdfca5: Status 404 returned error can't find the container with id b2950710935239045d4c07072223f4af44066d92147f3c0dc13eab7ce8fdfca5 Feb 16 21:34:24.481200 master-0 kubenswrapper[38936]: E0216 21:34:24.481105 38936 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fm9v8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7f45b4ff68-zrssz_openstack-operators(f01c68d6-8f2c-4b63-9eca-74d79b1d1ef6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 21:34:24.482576 master-0 kubenswrapper[38936]: E0216 21:34:24.482503 38936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-zrssz" podUID="f01c68d6-8f2c-4b63-9eca-74d79b1d1ef6" Feb 16 21:34:24.498341 master-0 kubenswrapper[38936]: I0216 21:34:24.498229 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-snzb8"] Feb 16 21:34:24.617212 master-0 kubenswrapper[38936]: I0216 21:34:24.617100 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hdlb7"] Feb 16 21:34:25.047423 master-0 kubenswrapper[38936]: I0216 21:34:25.047338 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-snzb8" 
event={"ID":"77211113-7f6f-40ab-aaa9-71a3073d82c8","Type":"ContainerStarted","Data":"b2950710935239045d4c07072223f4af44066d92147f3c0dc13eab7ce8fdfca5"} Feb 16 21:34:25.049894 master-0 kubenswrapper[38936]: I0216 21:34:25.049594 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-zt9nz" event={"ID":"a35ee94a-66a1-404d-981b-4e4426e9929d","Type":"ContainerStarted","Data":"9ba8959e397043bdd35335357e115379bd79f38609bd12741276931bfd09d533"} Feb 16 21:34:25.053372 master-0 kubenswrapper[38936]: I0216 21:34:25.053324 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-mfnnp" event={"ID":"f4fd52b3-5d16-4bac-aa01-cb65615df27d","Type":"ContainerStarted","Data":"871993c8b74051005bc114d9fe35782d625a454aadb4bbf2012874ac8190c7cd"} Feb 16 21:34:25.055300 master-0 kubenswrapper[38936]: I0216 21:34:25.055244 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xp4kx" event={"ID":"d7baf5cf-175c-457a-95d4-530fc8679f0d","Type":"ContainerStarted","Data":"1cad494dcad17215478e186670a8e8243723da0e72a39013d23d329a4e5fd356"} Feb 16 21:34:25.057307 master-0 kubenswrapper[38936]: I0216 21:34:25.057235 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-79sbw" event={"ID":"a17c0204-8569-4821-88e4-c4fad31fdf6f","Type":"ContainerStarted","Data":"248f460c146c4d5d75a1663c899fa58a947361867c8ef2f48caf4c40d198dbda"} Feb 16 21:34:25.060039 master-0 kubenswrapper[38936]: I0216 21:34:25.059987 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-f8x8g" event={"ID":"be794dfc-6441-4acf-b84c-2f0a2ed8d090","Type":"ContainerStarted","Data":"5b3978be18c669d64dfb652bea0689d1a9a3bdd776e01781f794b5585e56f1e8"} Feb 16 21:34:25.063440 master-0 
kubenswrapper[38936]: I0216 21:34:25.063392 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-fgq6l" event={"ID":"5e7149a7-9bde-4512-8b3f-008108c493a4","Type":"ContainerStarted","Data":"b04f4077f5a927b9a2420d8a68506a5822ec7ca839f4099665f7fb4357ce777f"} Feb 16 21:34:25.065183 master-0 kubenswrapper[38936]: I0216 21:34:25.065125 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-zrssz" event={"ID":"f01c68d6-8f2c-4b63-9eca-74d79b1d1ef6","Type":"ContainerStarted","Data":"38a956fa1196e61a584bd760d82ee00f571ba782584d4055360c5a4ca91fac1c"} Feb 16 21:34:25.067560 master-0 kubenswrapper[38936]: E0216 21:34:25.067490 38936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-zrssz" podUID="f01c68d6-8f2c-4b63-9eca-74d79b1d1ef6" Feb 16 21:34:25.175527 master-0 kubenswrapper[38936]: I0216 21:34:25.175458 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/911d5a9a-3dfb-4345-a53a-901075360f91-cert\") pod \"infra-operator-controller-manager-5f879c76b6-ns6pz\" (UID: \"911d5a9a-3dfb-4345-a53a-901075360f91\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ns6pz" Feb 16 21:34:25.175732 master-0 kubenswrapper[38936]: E0216 21:34:25.175620 38936 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 21:34:25.175732 master-0 kubenswrapper[38936]: E0216 21:34:25.175712 38936 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/911d5a9a-3dfb-4345-a53a-901075360f91-cert podName:911d5a9a-3dfb-4345-a53a-901075360f91 nodeName:}" failed. No retries permitted until 2026-02-16 21:34:29.175690362 +0000 UTC m=+699.527693724 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/911d5a9a-3dfb-4345-a53a-901075360f91-cert") pod "infra-operator-controller-manager-5f879c76b6-ns6pz" (UID: "911d5a9a-3dfb-4345-a53a-901075360f91") : secret "infra-operator-webhook-server-cert" not found Feb 16 21:34:25.525988 master-0 kubenswrapper[38936]: W0216 21:34:25.525889 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07f73506_a58b_4c0f_af8e_319e69827880.slice/crio-9d67690d9222a0a97f9c67145c271de9d13fea4407240323fe3fbd2ed7dc8fe7 WatchSource:0}: Error finding container 9d67690d9222a0a97f9c67145c271de9d13fea4407240323fe3fbd2ed7dc8fe7: Status 404 returned error can't find the container with id 9d67690d9222a0a97f9c67145c271de9d13fea4407240323fe3fbd2ed7dc8fe7 Feb 16 21:34:25.998188 master-0 kubenswrapper[38936]: I0216 21:34:25.998109 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/89a5a16e-f92a-4878-ac85-9f4ca6b13354-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c\" (UID: \"89a5a16e-f92a-4878-ac85-9f4ca6b13354\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c" Feb 16 21:34:25.998873 master-0 kubenswrapper[38936]: E0216 21:34:25.998596 38936 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:34:25.998873 master-0 kubenswrapper[38936]: E0216 21:34:25.998670 38936 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/89a5a16e-f92a-4878-ac85-9f4ca6b13354-cert podName:89a5a16e-f92a-4878-ac85-9f4ca6b13354 nodeName:}" failed. No retries permitted until 2026-02-16 21:34:29.998641775 +0000 UTC m=+700.350645137 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/89a5a16e-f92a-4878-ac85-9f4ca6b13354-cert") pod "openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c" (UID: "89a5a16e-f92a-4878-ac85-9f4ca6b13354") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:34:26.079982 master-0 kubenswrapper[38936]: I0216 21:34:26.079925 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hdlb7" event={"ID":"07f73506-a58b-4c0f-af8e-319e69827880","Type":"ContainerStarted","Data":"9d67690d9222a0a97f9c67145c271de9d13fea4407240323fe3fbd2ed7dc8fe7"} Feb 16 21:34:26.082422 master-0 kubenswrapper[38936]: E0216 21:34:26.082383 38936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-zrssz" podUID="f01c68d6-8f2c-4b63-9eca-74d79b1d1ef6" Feb 16 21:34:26.305140 master-0 kubenswrapper[38936]: I0216 21:34:26.305039 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mnfgd\" (UID: \"ed428069-b441-461f-8a06-ce8958277227\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd" Feb 16 21:34:26.305404 master-0 kubenswrapper[38936]: I0216 21:34:26.305230 38936 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mnfgd\" (UID: \"ed428069-b441-461f-8a06-ce8958277227\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd" Feb 16 21:34:26.305404 master-0 kubenswrapper[38936]: E0216 21:34:26.305333 38936 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 21:34:26.305521 master-0 kubenswrapper[38936]: E0216 21:34:26.305475 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-webhook-certs podName:ed428069-b441-461f-8a06-ce8958277227 nodeName:}" failed. No retries permitted until 2026-02-16 21:34:30.305444499 +0000 UTC m=+700.657448071 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-webhook-certs") pod "openstack-operator-controller-manager-74d597bfd6-mnfgd" (UID: "ed428069-b441-461f-8a06-ce8958277227") : secret "webhook-server-cert" not found Feb 16 21:34:26.305683 master-0 kubenswrapper[38936]: E0216 21:34:26.305626 38936 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 21:34:26.305809 master-0 kubenswrapper[38936]: E0216 21:34:26.305743 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-metrics-certs podName:ed428069-b441-461f-8a06-ce8958277227 nodeName:}" failed. No retries permitted until 2026-02-16 21:34:30.305713906 +0000 UTC m=+700.657717488 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-metrics-certs") pod "openstack-operator-controller-manager-74d597bfd6-mnfgd" (UID: "ed428069-b441-461f-8a06-ce8958277227") : secret "metrics-server-cert" not found Feb 16 21:34:29.273396 master-0 kubenswrapper[38936]: I0216 21:34:29.273322 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/911d5a9a-3dfb-4345-a53a-901075360f91-cert\") pod \"infra-operator-controller-manager-5f879c76b6-ns6pz\" (UID: \"911d5a9a-3dfb-4345-a53a-901075360f91\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ns6pz" Feb 16 21:34:29.274004 master-0 kubenswrapper[38936]: E0216 21:34:29.273531 38936 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 21:34:29.274004 master-0 kubenswrapper[38936]: E0216 21:34:29.273638 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/911d5a9a-3dfb-4345-a53a-901075360f91-cert podName:911d5a9a-3dfb-4345-a53a-901075360f91 nodeName:}" failed. No retries permitted until 2026-02-16 21:34:37.273613337 +0000 UTC m=+707.625616699 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/911d5a9a-3dfb-4345-a53a-901075360f91-cert") pod "infra-operator-controller-manager-5f879c76b6-ns6pz" (UID: "911d5a9a-3dfb-4345-a53a-901075360f91") : secret "infra-operator-webhook-server-cert" not found Feb 16 21:34:30.094432 master-0 kubenswrapper[38936]: I0216 21:34:30.094365 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/89a5a16e-f92a-4878-ac85-9f4ca6b13354-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c\" (UID: \"89a5a16e-f92a-4878-ac85-9f4ca6b13354\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c" Feb 16 21:34:30.094703 master-0 kubenswrapper[38936]: E0216 21:34:30.094613 38936 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:34:30.094703 master-0 kubenswrapper[38936]: E0216 21:34:30.094677 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89a5a16e-f92a-4878-ac85-9f4ca6b13354-cert podName:89a5a16e-f92a-4878-ac85-9f4ca6b13354 nodeName:}" failed. No retries permitted until 2026-02-16 21:34:38.094644679 +0000 UTC m=+708.446648041 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/89a5a16e-f92a-4878-ac85-9f4ca6b13354-cert") pod "openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c" (UID: "89a5a16e-f92a-4878-ac85-9f4ca6b13354") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:34:30.402235 master-0 kubenswrapper[38936]: I0216 21:34:30.402107 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mnfgd\" (UID: \"ed428069-b441-461f-8a06-ce8958277227\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd" Feb 16 21:34:30.402235 master-0 kubenswrapper[38936]: I0216 21:34:30.402210 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mnfgd\" (UID: \"ed428069-b441-461f-8a06-ce8958277227\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd" Feb 16 21:34:30.402810 master-0 kubenswrapper[38936]: E0216 21:34:30.402267 38936 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 21:34:30.402810 master-0 kubenswrapper[38936]: E0216 21:34:30.402541 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-webhook-certs podName:ed428069-b441-461f-8a06-ce8958277227 nodeName:}" failed. No retries permitted until 2026-02-16 21:34:38.402524761 +0000 UTC m=+708.754528123 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-webhook-certs") pod "openstack-operator-controller-manager-74d597bfd6-mnfgd" (UID: "ed428069-b441-461f-8a06-ce8958277227") : secret "webhook-server-cert" not found Feb 16 21:34:30.403226 master-0 kubenswrapper[38936]: E0216 21:34:30.402566 38936 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 21:34:30.404410 master-0 kubenswrapper[38936]: E0216 21:34:30.404386 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-metrics-certs podName:ed428069-b441-461f-8a06-ce8958277227 nodeName:}" failed. No retries permitted until 2026-02-16 21:34:38.404359381 +0000 UTC m=+708.756362793 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-metrics-certs") pod "openstack-operator-controller-manager-74d597bfd6-mnfgd" (UID: "ed428069-b441-461f-8a06-ce8958277227") : secret "metrics-server-cert" not found Feb 16 21:34:37.367881 master-0 kubenswrapper[38936]: I0216 21:34:37.367826 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/911d5a9a-3dfb-4345-a53a-901075360f91-cert\") pod \"infra-operator-controller-manager-5f879c76b6-ns6pz\" (UID: \"911d5a9a-3dfb-4345-a53a-901075360f91\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ns6pz" Feb 16 21:34:37.368681 master-0 kubenswrapper[38936]: E0216 21:34:37.368040 38936 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 21:34:37.368681 master-0 kubenswrapper[38936]: E0216 21:34:37.368136 38936 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/911d5a9a-3dfb-4345-a53a-901075360f91-cert podName:911d5a9a-3dfb-4345-a53a-901075360f91 nodeName:}" failed. No retries permitted until 2026-02-16 21:34:53.368112727 +0000 UTC m=+723.720116089 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/911d5a9a-3dfb-4345-a53a-901075360f91-cert") pod "infra-operator-controller-manager-5f879c76b6-ns6pz" (UID: "911d5a9a-3dfb-4345-a53a-901075360f91") : secret "infra-operator-webhook-server-cert" not found Feb 16 21:34:38.188118 master-0 kubenswrapper[38936]: I0216 21:34:38.188052 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/89a5a16e-f92a-4878-ac85-9f4ca6b13354-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c\" (UID: \"89a5a16e-f92a-4878-ac85-9f4ca6b13354\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c" Feb 16 21:34:38.188750 master-0 kubenswrapper[38936]: E0216 21:34:38.188688 38936 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:34:38.188814 master-0 kubenswrapper[38936]: E0216 21:34:38.188800 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89a5a16e-f92a-4878-ac85-9f4ca6b13354-cert podName:89a5a16e-f92a-4878-ac85-9f4ca6b13354 nodeName:}" failed. No retries permitted until 2026-02-16 21:34:54.188779837 +0000 UTC m=+724.540783199 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/89a5a16e-f92a-4878-ac85-9f4ca6b13354-cert") pod "openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c" (UID: "89a5a16e-f92a-4878-ac85-9f4ca6b13354") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 21:34:38.494325 master-0 kubenswrapper[38936]: I0216 21:34:38.493935 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mnfgd\" (UID: \"ed428069-b441-461f-8a06-ce8958277227\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd" Feb 16 21:34:38.494325 master-0 kubenswrapper[38936]: E0216 21:34:38.494110 38936 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 21:34:38.494325 master-0 kubenswrapper[38936]: I0216 21:34:38.494152 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mnfgd\" (UID: \"ed428069-b441-461f-8a06-ce8958277227\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd" Feb 16 21:34:38.494325 master-0 kubenswrapper[38936]: E0216 21:34:38.494178 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-metrics-certs podName:ed428069-b441-461f-8a06-ce8958277227 nodeName:}" failed. No retries permitted until 2026-02-16 21:34:54.494157511 +0000 UTC m=+724.846160873 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-metrics-certs") pod "openstack-operator-controller-manager-74d597bfd6-mnfgd" (UID: "ed428069-b441-461f-8a06-ce8958277227") : secret "metrics-server-cert" not found Feb 16 21:34:38.494325 master-0 kubenswrapper[38936]: E0216 21:34:38.494236 38936 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 21:34:38.494325 master-0 kubenswrapper[38936]: E0216 21:34:38.494280 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-webhook-certs podName:ed428069-b441-461f-8a06-ce8958277227 nodeName:}" failed. No retries permitted until 2026-02-16 21:34:54.494268694 +0000 UTC m=+724.846272056 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-webhook-certs") pod "openstack-operator-controller-manager-74d597bfd6-mnfgd" (UID: "ed428069-b441-461f-8a06-ce8958277227") : secret "webhook-server-cert" not found Feb 16 21:34:45.364795 master-0 kubenswrapper[38936]: I0216 21:34:45.364481 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5vhws" event={"ID":"57d3af01-3b13-4d0d-aebf-43d07aea3461","Type":"ContainerStarted","Data":"9b3d6f8c2714139e7a0767a323f2f7bdd0705f6ab64f2d3af8a2103f92479bf1"} Feb 16 21:34:45.365507 master-0 kubenswrapper[38936]: I0216 21:34:45.365487 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5vhws" Feb 16 21:34:45.375269 master-0 kubenswrapper[38936]: I0216 21:34:45.375210 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-vcvgb" 
event={"ID":"891bb5c8-0e45-4e99-8384-6c24700f5251","Type":"ContainerStarted","Data":"6fca0cafab8883888570989eaae30a92d7d352658dbb8e896c4a86a8ab1cb75f"} Feb 16 21:34:45.376977 master-0 kubenswrapper[38936]: I0216 21:34:45.376949 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-vcvgb" Feb 16 21:34:45.378569 master-0 kubenswrapper[38936]: I0216 21:34:45.378537 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cl9fr" event={"ID":"18add596-a9c3-4e94-ad01-39d363025d52","Type":"ContainerStarted","Data":"6ca52b268689e7dbd240923b02411b6e80980725c2e5fa9d07ec1dadf108b944"} Feb 16 21:34:45.379241 master-0 kubenswrapper[38936]: I0216 21:34:45.379222 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cl9fr" Feb 16 21:34:45.382678 master-0 kubenswrapper[38936]: I0216 21:34:45.382334 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-79sbw" event={"ID":"a17c0204-8569-4821-88e4-c4fad31fdf6f","Type":"ContainerStarted","Data":"5b4bcc4bfed327bf851ad8f2eb7a272a7af5fccee29e57a543eb8ec3b333261e"} Feb 16 21:34:45.382956 master-0 kubenswrapper[38936]: I0216 21:34:45.382911 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-79sbw" Feb 16 21:34:45.388806 master-0 kubenswrapper[38936]: I0216 21:34:45.388755 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-2bvnq" event={"ID":"b6fbc679-8e3b-48e1-85f0-87f35b7dc9e2","Type":"ContainerStarted","Data":"e83644cef0614f8bd05be8e626b24350d346b23d1a0b40df07322067e14f6285"} Feb 16 21:34:45.389179 master-0 kubenswrapper[38936]: I0216 21:34:45.389153 38936 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-2bvnq" Feb 16 21:34:45.413186 master-0 kubenswrapper[38936]: I0216 21:34:45.413109 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5vhws" podStartSLOduration=7.970881897 podStartE2EDuration="24.412962572s" podCreationTimestamp="2026-02-16 21:34:21 +0000 UTC" firstStartedPulling="2026-02-16 21:34:23.466733744 +0000 UTC m=+693.818737106" lastFinishedPulling="2026-02-16 21:34:39.908814429 +0000 UTC m=+710.260817781" observedRunningTime="2026-02-16 21:34:45.400337025 +0000 UTC m=+715.752340387" watchObservedRunningTime="2026-02-16 21:34:45.412962572 +0000 UTC m=+715.764965924" Feb 16 21:34:45.419918 master-0 kubenswrapper[38936]: I0216 21:34:45.419863 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jgb9x" event={"ID":"12bd49dd-67b9-467b-859b-1388b9882681","Type":"ContainerStarted","Data":"4f0249fa960f66ba934f2a04bc18b55826c9f40473cab3bd09ca4221a4cefdb7"} Feb 16 21:34:45.421317 master-0 kubenswrapper[38936]: I0216 21:34:45.420503 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jgb9x" Feb 16 21:34:45.471106 master-0 kubenswrapper[38936]: I0216 21:34:45.471050 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-snzb8" event={"ID":"77211113-7f6f-40ab-aaa9-71a3073d82c8","Type":"ContainerStarted","Data":"0e291262a5f80c9197db59fa92ac1f66a89ad509091cd4162c23868fe9d48462"} Feb 16 21:34:45.471385 master-0 kubenswrapper[38936]: I0216 21:34:45.471367 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-snzb8" Feb 16 21:34:45.482812 
master-0 kubenswrapper[38936]: I0216 21:34:45.475711 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-2bvnq" podStartSLOduration=5.651212671 podStartE2EDuration="24.475683374s" podCreationTimestamp="2026-02-16 21:34:21 +0000 UTC" firstStartedPulling="2026-02-16 21:34:23.455536856 +0000 UTC m=+693.807540218" lastFinishedPulling="2026-02-16 21:34:42.280007539 +0000 UTC m=+712.632010921" observedRunningTime="2026-02-16 21:34:45.468042245 +0000 UTC m=+715.820045607" watchObservedRunningTime="2026-02-16 21:34:45.475683374 +0000 UTC m=+715.827686736" Feb 16 21:34:45.524675 master-0 kubenswrapper[38936]: I0216 21:34:45.520586 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-zt9nz" event={"ID":"a35ee94a-66a1-404d-981b-4e4426e9929d","Type":"ContainerStarted","Data":"4ee34f087e128672e0c943e0a8b9f6c9e5f638dd0d72f436513599252d2c7965"} Feb 16 21:34:45.524675 master-0 kubenswrapper[38936]: I0216 21:34:45.521004 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cl9fr" podStartSLOduration=9.478706838 podStartE2EDuration="25.520974728s" podCreationTimestamp="2026-02-16 21:34:20 +0000 UTC" firstStartedPulling="2026-02-16 21:34:22.312801953 +0000 UTC m=+692.664805315" lastFinishedPulling="2026-02-16 21:34:38.355069843 +0000 UTC m=+708.707073205" observedRunningTime="2026-02-16 21:34:45.520557926 +0000 UTC m=+715.872561278" watchObservedRunningTime="2026-02-16 21:34:45.520974728 +0000 UTC m=+715.872978090" Feb 16 21:34:45.524675 master-0 kubenswrapper[38936]: I0216 21:34:45.521754 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-zt9nz" Feb 16 21:34:45.549677 master-0 kubenswrapper[38936]: I0216 21:34:45.547144 38936 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-fgq6l" event={"ID":"5e7149a7-9bde-4512-8b3f-008108c493a4","Type":"ContainerStarted","Data":"d291f047a8b73e59a690362877df2391a0e8732a9c62907efc0be37e0a820279"} Feb 16 21:34:45.549677 master-0 kubenswrapper[38936]: I0216 21:34:45.547745 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-fgq6l" Feb 16 21:34:45.556309 master-0 kubenswrapper[38936]: I0216 21:34:45.556253 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-zrssz" event={"ID":"f01c68d6-8f2c-4b63-9eca-74d79b1d1ef6","Type":"ContainerStarted","Data":"a79b87e8a56afd2976382dc851664175afe552234eb666d74ec1cea86dc9586f"} Feb 16 21:34:45.558848 master-0 kubenswrapper[38936]: I0216 21:34:45.557801 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-zrssz" Feb 16 21:34:45.563705 master-0 kubenswrapper[38936]: I0216 21:34:45.562643 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mpvvp" event={"ID":"e2287eff-31d9-4737-b884-f369344e2b02","Type":"ContainerStarted","Data":"8519de0a7bc77c5a01efb7baf09487cbd2a5eb37ef908ba68a3a41f600eb0dfb"} Feb 16 21:34:45.563705 master-0 kubenswrapper[38936]: I0216 21:34:45.562827 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mpvvp" Feb 16 21:34:45.579236 master-0 kubenswrapper[38936]: I0216 21:34:45.575332 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6nnr" 
event={"ID":"721853c8-a888-4e6d-8647-31bf57a0b9cb","Type":"ContainerStarted","Data":"676db4d1f431856d05f356afd3e4f1dc7698836da36c38f805055b7862518e8d"} Feb 16 21:34:45.582460 master-0 kubenswrapper[38936]: I0216 21:34:45.580089 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6nnr" Feb 16 21:34:45.600174 master-0 kubenswrapper[38936]: I0216 21:34:45.597195 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-79sbw" podStartSLOduration=4.641062119 podStartE2EDuration="24.597161109s" podCreationTimestamp="2026-02-16 21:34:21 +0000 UTC" firstStartedPulling="2026-02-16 21:34:24.4201877 +0000 UTC m=+694.772191062" lastFinishedPulling="2026-02-16 21:34:44.37628669 +0000 UTC m=+714.728290052" observedRunningTime="2026-02-16 21:34:45.579895226 +0000 UTC m=+715.931898588" watchObservedRunningTime="2026-02-16 21:34:45.597161109 +0000 UTC m=+715.949164471" Feb 16 21:34:45.600174 master-0 kubenswrapper[38936]: I0216 21:34:45.597240 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qbf42" event={"ID":"32cafbe9-c7a9-4737-9b4b-3d5e46779d3d","Type":"ContainerStarted","Data":"429a91441ce26da754e08e075b382fe7d270a5393259888706bd426c16b09a6c"} Feb 16 21:34:45.600174 master-0 kubenswrapper[38936]: I0216 21:34:45.597443 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qbf42" Feb 16 21:34:45.623005 master-0 kubenswrapper[38936]: I0216 21:34:45.618171 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7q6jk" event={"ID":"357c2b7e-6999-44d6-b5dc-57fe20c0ae75","Type":"ContainerStarted","Data":"4064115ea0ecde8bee605537bd36f0a2fde2161d560f754e7298f5d644dacc7a"} Feb 
16 21:34:45.623005 master-0 kubenswrapper[38936]: I0216 21:34:45.619148 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7q6jk" Feb 16 21:34:45.640508 master-0 kubenswrapper[38936]: I0216 21:34:45.637204 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-vcvgb" podStartSLOduration=4.73998796 podStartE2EDuration="25.637165137s" podCreationTimestamp="2026-02-16 21:34:20 +0000 UTC" firstStartedPulling="2026-02-16 21:34:22.378446216 +0000 UTC m=+692.730449568" lastFinishedPulling="2026-02-16 21:34:43.275623393 +0000 UTC m=+713.627626745" observedRunningTime="2026-02-16 21:34:45.630881145 +0000 UTC m=+715.982884497" watchObservedRunningTime="2026-02-16 21:34:45.637165137 +0000 UTC m=+715.989168499" Feb 16 21:34:45.728678 master-0 kubenswrapper[38936]: I0216 21:34:45.722034 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-zt9nz" podStartSLOduration=4.818189402 podStartE2EDuration="24.721997166s" podCreationTimestamp="2026-02-16 21:34:21 +0000 UTC" firstStartedPulling="2026-02-16 21:34:24.474420479 +0000 UTC m=+694.826423841" lastFinishedPulling="2026-02-16 21:34:44.378228243 +0000 UTC m=+714.730231605" observedRunningTime="2026-02-16 21:34:45.70901202 +0000 UTC m=+716.061015382" watchObservedRunningTime="2026-02-16 21:34:45.721997166 +0000 UTC m=+716.074000528" Feb 16 21:34:45.796697 master-0 kubenswrapper[38936]: I0216 21:34:45.794667 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7q6jk" podStartSLOduration=9.253039567 podStartE2EDuration="24.79462088s" podCreationTimestamp="2026-02-16 21:34:21 +0000 UTC" firstStartedPulling="2026-02-16 21:34:22.81349055 +0000 UTC m=+693.165493912" 
lastFinishedPulling="2026-02-16 21:34:38.355071863 +0000 UTC m=+708.707075225" observedRunningTime="2026-02-16 21:34:45.761038688 +0000 UTC m=+716.113042050" watchObservedRunningTime="2026-02-16 21:34:45.79462088 +0000 UTC m=+716.146624242" Feb 16 21:34:45.829016 master-0 kubenswrapper[38936]: I0216 21:34:45.823297 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-zrssz" podStartSLOduration=4.864442851 podStartE2EDuration="24.823272186s" podCreationTimestamp="2026-02-16 21:34:21 +0000 UTC" firstStartedPulling="2026-02-16 21:34:24.480890626 +0000 UTC m=+694.832893988" lastFinishedPulling="2026-02-16 21:34:44.439719961 +0000 UTC m=+714.791723323" observedRunningTime="2026-02-16 21:34:45.791738531 +0000 UTC m=+716.143741893" watchObservedRunningTime="2026-02-16 21:34:45.823272186 +0000 UTC m=+716.175275548" Feb 16 21:34:45.856294 master-0 kubenswrapper[38936]: I0216 21:34:45.852636 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6nnr" podStartSLOduration=4.222015194 podStartE2EDuration="24.852611452s" podCreationTimestamp="2026-02-16 21:34:21 +0000 UTC" firstStartedPulling="2026-02-16 21:34:23.745722613 +0000 UTC m=+694.097725975" lastFinishedPulling="2026-02-16 21:34:44.376318871 +0000 UTC m=+714.728322233" observedRunningTime="2026-02-16 21:34:45.836185892 +0000 UTC m=+716.188189254" watchObservedRunningTime="2026-02-16 21:34:45.852611452 +0000 UTC m=+716.204614804" Feb 16 21:34:45.888851 master-0 kubenswrapper[38936]: I0216 21:34:45.887308 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-snzb8" podStartSLOduration=4.991288635 podStartE2EDuration="24.887280714s" podCreationTimestamp="2026-02-16 21:34:21 +0000 UTC" firstStartedPulling="2026-02-16 21:34:24.48063589 +0000 UTC 
m=+694.832639252" lastFinishedPulling="2026-02-16 21:34:44.376627969 +0000 UTC m=+714.728631331" observedRunningTime="2026-02-16 21:34:45.8860392 +0000 UTC m=+716.238042562" watchObservedRunningTime="2026-02-16 21:34:45.887280714 +0000 UTC m=+716.239284076" Feb 16 21:34:45.943627 master-0 kubenswrapper[38936]: I0216 21:34:45.943534 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mpvvp" podStartSLOduration=4.329929557 podStartE2EDuration="24.943504707s" podCreationTimestamp="2026-02-16 21:34:21 +0000 UTC" firstStartedPulling="2026-02-16 21:34:23.729297983 +0000 UTC m=+694.081301345" lastFinishedPulling="2026-02-16 21:34:44.342873133 +0000 UTC m=+714.694876495" observedRunningTime="2026-02-16 21:34:45.942358166 +0000 UTC m=+716.294361528" watchObservedRunningTime="2026-02-16 21:34:45.943504707 +0000 UTC m=+716.295508069" Feb 16 21:34:45.997510 master-0 kubenswrapper[38936]: I0216 21:34:45.997395 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jgb9x" podStartSLOduration=4.617610305 podStartE2EDuration="24.997365766s" podCreationTimestamp="2026-02-16 21:34:21 +0000 UTC" firstStartedPulling="2026-02-16 21:34:23.474460956 +0000 UTC m=+693.826464318" lastFinishedPulling="2026-02-16 21:34:43.854216417 +0000 UTC m=+714.206219779" observedRunningTime="2026-02-16 21:34:45.98874387 +0000 UTC m=+716.340747242" watchObservedRunningTime="2026-02-16 21:34:45.997365766 +0000 UTC m=+716.349369128" Feb 16 21:34:46.081669 master-0 kubenswrapper[38936]: I0216 21:34:46.080085 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qbf42" podStartSLOduration=5.283974219 podStartE2EDuration="25.080062067s" podCreationTimestamp="2026-02-16 21:34:21 +0000 UTC" firstStartedPulling="2026-02-16 21:34:23.479687149 
+0000 UTC m=+693.831690511" lastFinishedPulling="2026-02-16 21:34:43.275774957 +0000 UTC m=+713.627778359" observedRunningTime="2026-02-16 21:34:46.075723198 +0000 UTC m=+716.427726560" watchObservedRunningTime="2026-02-16 21:34:46.080062067 +0000 UTC m=+716.432065429" Feb 16 21:34:46.134766 master-0 kubenswrapper[38936]: I0216 21:34:46.130508 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-fgq6l" podStartSLOduration=6.4414299360000005 podStartE2EDuration="25.130480811s" podCreationTimestamp="2026-02-16 21:34:21 +0000 UTC" firstStartedPulling="2026-02-16 21:34:24.070259583 +0000 UTC m=+694.422262955" lastFinishedPulling="2026-02-16 21:34:42.759310468 +0000 UTC m=+713.111313830" observedRunningTime="2026-02-16 21:34:46.106026969 +0000 UTC m=+716.458030341" watchObservedRunningTime="2026-02-16 21:34:46.130480811 +0000 UTC m=+716.482484173" Feb 16 21:34:46.643495 master-0 kubenswrapper[38936]: I0216 21:34:46.643377 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xp4kx" event={"ID":"d7baf5cf-175c-457a-95d4-530fc8679f0d","Type":"ContainerStarted","Data":"abac3589d96a530f4bf84c674f5227cb44c2fa6a736d3b1c4050c63b278bc075"} Feb 16 21:34:46.644054 master-0 kubenswrapper[38936]: I0216 21:34:46.643531 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xp4kx" Feb 16 21:34:46.645473 master-0 kubenswrapper[38936]: I0216 21:34:46.645411 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hdlb7" event={"ID":"07f73506-a58b-4c0f-af8e-319e69827880","Type":"ContainerStarted","Data":"c93488bd094ca54ace255df87fd0f013a7de5619f52f9021d33de177e8ffe736"} Feb 16 21:34:46.647530 master-0 kubenswrapper[38936]: I0216 21:34:46.647499 38936 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-54t98" event={"ID":"a8bf004e-2095-4e74-943b-1c724e78a4aa","Type":"ContainerStarted","Data":"f9906fce7f4ffc2a8f7bd06932e7e524778f03f0e4704a0ba23fc3c380408f62"} Feb 16 21:34:46.647971 master-0 kubenswrapper[38936]: I0216 21:34:46.647948 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-54t98" Feb 16 21:34:46.649643 master-0 kubenswrapper[38936]: I0216 21:34:46.649574 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-f8x8g" event={"ID":"be794dfc-6441-4acf-b84c-2f0a2ed8d090","Type":"ContainerStarted","Data":"ac6a57d016d5147e822e35afa8a00fa884fb78b5446f9d6483ef0de86fae98a4"} Feb 16 21:34:46.649778 master-0 kubenswrapper[38936]: I0216 21:34:46.649740 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-f8x8g" Feb 16 21:34:46.651410 master-0 kubenswrapper[38936]: I0216 21:34:46.651377 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-mfnnp" event={"ID":"f4fd52b3-5d16-4bac-aa01-cb65615df27d","Type":"ContainerStarted","Data":"fd63f4d313f465f7dede3ca3a0d2d2744cf4c4336c71c9495a66b950d35994f3"} Feb 16 21:34:46.651527 master-0 kubenswrapper[38936]: I0216 21:34:46.651495 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-mfnnp" Feb 16 21:34:46.652937 master-0 kubenswrapper[38936]: I0216 21:34:46.652876 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wrhn6" 
event={"ID":"cac792f3-e1bd-496f-9fa0-709907e97b0b","Type":"ContainerStarted","Data":"017cbb73ccba03927570b4a45c02488bfae523d809f7fa0ad71f75fe8f11c7d1"}
Feb 16 21:34:46.737679 master-0 kubenswrapper[38936]: I0216 21:34:46.737535 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xp4kx" podStartSLOduration=5.380430866 podStartE2EDuration="25.737508865s" podCreationTimestamp="2026-02-16 21:34:21 +0000 UTC" firstStartedPulling="2026-02-16 21:34:24.083195118 +0000 UTC m=+694.435198480" lastFinishedPulling="2026-02-16 21:34:44.440273117 +0000 UTC m=+714.792276479" observedRunningTime="2026-02-16 21:34:46.670948479 +0000 UTC m=+717.022951841" watchObservedRunningTime="2026-02-16 21:34:46.737508865 +0000 UTC m=+717.089512227"
Feb 16 21:34:46.795678 master-0 kubenswrapper[38936]: I0216 21:34:46.794301 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hdlb7" podStartSLOduration=6.872516211 podStartE2EDuration="25.794272574s" podCreationTimestamp="2026-02-16 21:34:21 +0000 UTC" firstStartedPulling="2026-02-16 21:34:25.529077654 +0000 UTC m=+695.881081016" lastFinishedPulling="2026-02-16 21:34:44.450834017 +0000 UTC m=+714.802837379" observedRunningTime="2026-02-16 21:34:46.709295031 +0000 UTC m=+717.061298403" watchObservedRunningTime="2026-02-16 21:34:46.794272574 +0000 UTC m=+717.146275936"
Feb 16 21:34:46.809690 master-0 kubenswrapper[38936]: I0216 21:34:46.809206 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-mfnnp" podStartSLOduration=5.498813008 podStartE2EDuration="25.809182664s" podCreationTimestamp="2026-02-16 21:34:21 +0000 UTC" firstStartedPulling="2026-02-16 21:34:24.065929254 +0000 UTC m=+694.417932616" lastFinishedPulling="2026-02-16 21:34:44.37629891 +0000 UTC m=+714.728302272" observedRunningTime="2026-02-16 21:34:46.743518741 +0000 UTC m=+717.095522103" watchObservedRunningTime="2026-02-16 21:34:46.809182664 +0000 UTC m=+717.161186046"
Feb 16 21:34:46.836682 master-0 kubenswrapper[38936]: I0216 21:34:46.835116 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wrhn6" podStartSLOduration=5.18637835 podStartE2EDuration="25.835088375s" podCreationTimestamp="2026-02-16 21:34:21 +0000 UTC" firstStartedPulling="2026-02-16 21:34:23.733155328 +0000 UTC m=+694.085158690" lastFinishedPulling="2026-02-16 21:34:44.381865363 +0000 UTC m=+714.733868715" observedRunningTime="2026-02-16 21:34:46.767563001 +0000 UTC m=+717.119566363" watchObservedRunningTime="2026-02-16 21:34:46.835088375 +0000 UTC m=+717.187091757"
Feb 16 21:34:46.850677 master-0 kubenswrapper[38936]: I0216 21:34:46.847868 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-54t98" podStartSLOduration=5.197191207 podStartE2EDuration="25.847841465s" podCreationTimestamp="2026-02-16 21:34:21 +0000 UTC" firstStartedPulling="2026-02-16 21:34:23.726056143 +0000 UTC m=+694.078059515" lastFinishedPulling="2026-02-16 21:34:44.376706411 +0000 UTC m=+714.728709773" observedRunningTime="2026-02-16 21:34:46.817688507 +0000 UTC m=+717.169691869" watchObservedRunningTime="2026-02-16 21:34:46.847841465 +0000 UTC m=+717.199844827"
Feb 16 21:34:46.880681 master-0 kubenswrapper[38936]: I0216 21:34:46.877086 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-f8x8g" podStartSLOduration=5.586613758 podStartE2EDuration="25.877060507s" podCreationTimestamp="2026-02-16 21:34:21 +0000 UTC" firstStartedPulling="2026-02-16 21:34:24.085833221 +0000 UTC m=+694.437836603" lastFinishedPulling="2026-02-16 21:34:44.37627999 +0000 UTC m=+714.728283352" observedRunningTime="2026-02-16 21:34:46.846189099 +0000 UTC m=+717.198192461" watchObservedRunningTime="2026-02-16 21:34:46.877060507 +0000 UTC m=+717.229063859"
Feb 16 21:34:47.662359 master-0 kubenswrapper[38936]: I0216 21:34:47.662265 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wrhn6"
Feb 16 21:34:51.397321 master-0 kubenswrapper[38936]: I0216 21:34:51.397208 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cl9fr"
Feb 16 21:34:51.431819 master-0 kubenswrapper[38936]: I0216 21:34:51.431749 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-vcvgb"
Feb 16 21:34:51.590987 master-0 kubenswrapper[38936]: I0216 21:34:51.590799 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-7q6jk"
Feb 16 21:34:51.670527 master-0 kubenswrapper[38936]: I0216 21:34:51.670349 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qbf42"
Feb 16 21:34:51.688707 master-0 kubenswrapper[38936]: I0216 21:34:51.684001 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jgb9x"
Feb 16 21:34:51.712694 master-0 kubenswrapper[38936]: I0216 21:34:51.710580 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5vhws"
Feb 16 21:34:51.824586 master-0 kubenswrapper[38936]: I0216 21:34:51.824526 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-2bvnq"
Feb 16 21:34:51.838078 master-0 kubenswrapper[38936]: I0216 21:34:51.838022 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wrhn6"
Feb 16 21:34:51.976472 master-0 kubenswrapper[38936]: I0216 21:34:51.976330 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-54t98"
Feb 16 21:34:52.149638 master-0 kubenswrapper[38936]: I0216 21:34:52.149564 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mpvvp"
Feb 16 21:34:52.349760 master-0 kubenswrapper[38936]: I0216 21:34:52.349637 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6nnr"
Feb 16 21:34:52.353493 master-0 kubenswrapper[38936]: I0216 21:34:52.353440 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-xp4kx"
Feb 16 21:34:52.605804 master-0 kubenswrapper[38936]: I0216 21:34:52.605683 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-fgq6l"
Feb 16 21:34:52.701674 master-0 kubenswrapper[38936]: I0216 21:34:52.698320 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-f8x8g"
Feb 16 21:34:52.720001 master-0 kubenswrapper[38936]: I0216 21:34:52.719945 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-mfnnp"
Feb 16 21:34:52.751824 master-0 kubenswrapper[38936]: I0216 21:34:52.751669 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-zt9nz"
Feb 16 21:34:52.787174 master-0 kubenswrapper[38936]: I0216 21:34:52.785159 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-zrssz"
Feb 16 21:34:52.798398 master-0 kubenswrapper[38936]: I0216 21:34:52.798339 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-snzb8"
Feb 16 21:34:52.830695 master-0 kubenswrapper[38936]: I0216 21:34:52.829062 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-79sbw"
Feb 16 21:34:53.418954 master-0 kubenswrapper[38936]: I0216 21:34:53.418878 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/911d5a9a-3dfb-4345-a53a-901075360f91-cert\") pod \"infra-operator-controller-manager-5f879c76b6-ns6pz\" (UID: \"911d5a9a-3dfb-4345-a53a-901075360f91\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ns6pz"
Feb 16 21:34:53.422499 master-0 kubenswrapper[38936]: I0216 21:34:53.422434 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/911d5a9a-3dfb-4345-a53a-901075360f91-cert\") pod \"infra-operator-controller-manager-5f879c76b6-ns6pz\" (UID: \"911d5a9a-3dfb-4345-a53a-901075360f91\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ns6pz"
Feb 16 21:34:53.579171 master-0 kubenswrapper[38936]: I0216 21:34:53.579109 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ns6pz"
Feb 16 21:34:54.206688 master-0 kubenswrapper[38936]: I0216 21:34:54.206217 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-ns6pz"]
Feb 16 21:34:54.239223 master-0 kubenswrapper[38936]: I0216 21:34:54.239169 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/89a5a16e-f92a-4878-ac85-9f4ca6b13354-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c\" (UID: \"89a5a16e-f92a-4878-ac85-9f4ca6b13354\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c"
Feb 16 21:34:54.242845 master-0 kubenswrapper[38936]: I0216 21:34:54.242670 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/89a5a16e-f92a-4878-ac85-9f4ca6b13354-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c\" (UID: \"89a5a16e-f92a-4878-ac85-9f4ca6b13354\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c"
Feb 16 21:34:54.429709 master-0 kubenswrapper[38936]: I0216 21:34:54.429643 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c"
Feb 16 21:34:54.543071 master-0 kubenswrapper[38936]: I0216 21:34:54.543015 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mnfgd\" (UID: \"ed428069-b441-461f-8a06-ce8958277227\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd"
Feb 16 21:34:54.543224 master-0 kubenswrapper[38936]: I0216 21:34:54.543132 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mnfgd\" (UID: \"ed428069-b441-461f-8a06-ce8958277227\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd"
Feb 16 21:34:54.547251 master-0 kubenswrapper[38936]: I0216 21:34:54.547200 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mnfgd\" (UID: \"ed428069-b441-461f-8a06-ce8958277227\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd"
Feb 16 21:34:54.558082 master-0 kubenswrapper[38936]: I0216 21:34:54.557320 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed428069-b441-461f-8a06-ce8958277227-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mnfgd\" (UID: \"ed428069-b441-461f-8a06-ce8958277227\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd"
Feb 16 21:34:54.630562 master-0 kubenswrapper[38936]: I0216 21:34:54.630038 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd"
Feb 16 21:34:54.751496 master-0 kubenswrapper[38936]: I0216 21:34:54.751421 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ns6pz" event={"ID":"911d5a9a-3dfb-4345-a53a-901075360f91","Type":"ContainerStarted","Data":"3a9f3e6332e29a3b9e8aad31430aa9f4c327d7b92e9c595ccdf1d1f013693f75"}
Feb 16 21:34:54.960628 master-0 kubenswrapper[38936]: W0216 21:34:54.960575 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89a5a16e_f92a_4878_ac85_9f4ca6b13354.slice/crio-fde5ed44812fa4b392035ebff1639746ac64d526ad537160fcbcc6abbfe00164 WatchSource:0}: Error finding container fde5ed44812fa4b392035ebff1639746ac64d526ad537160fcbcc6abbfe00164: Status 404 returned error can't find the container with id fde5ed44812fa4b392035ebff1639746ac64d526ad537160fcbcc6abbfe00164
Feb 16 21:34:54.976487 master-0 kubenswrapper[38936]: I0216 21:34:54.976430 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c"]
Feb 16 21:34:55.228800 master-0 kubenswrapper[38936]: I0216 21:34:55.228736 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd"]
Feb 16 21:34:55.789568 master-0 kubenswrapper[38936]: I0216 21:34:55.787067 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd" event={"ID":"ed428069-b441-461f-8a06-ce8958277227","Type":"ContainerStarted","Data":"00ffd956eba60d32861f5b977837b0e4ff8b748da981e0c7ff1e2569440fd397"}
Feb 16 21:34:55.789568 master-0 kubenswrapper[38936]: I0216 21:34:55.787138 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd" event={"ID":"ed428069-b441-461f-8a06-ce8958277227","Type":"ContainerStarted","Data":"0f568ed0388963e57b1c0768496474f187ed2d3671be3e7f00951c2119c9217a"}
Feb 16 21:34:55.789568 master-0 kubenswrapper[38936]: I0216 21:34:55.787178 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd"
Feb 16 21:34:55.789568 master-0 kubenswrapper[38936]: I0216 21:34:55.789349 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c" event={"ID":"89a5a16e-f92a-4878-ac85-9f4ca6b13354","Type":"ContainerStarted","Data":"fde5ed44812fa4b392035ebff1639746ac64d526ad537160fcbcc6abbfe00164"}
Feb 16 21:34:55.839053 master-0 kubenswrapper[38936]: I0216 21:34:55.838938 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd" podStartSLOduration=34.83891316 podStartE2EDuration="34.83891316s" podCreationTimestamp="2026-02-16 21:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:34:55.82435608 +0000 UTC m=+726.176359442" watchObservedRunningTime="2026-02-16 21:34:55.83891316 +0000 UTC m=+726.190916522"
Feb 16 21:34:57.822230 master-0 kubenswrapper[38936]: I0216 21:34:57.822136 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ns6pz" event={"ID":"911d5a9a-3dfb-4345-a53a-901075360f91","Type":"ContainerStarted","Data":"9dcd7c21b84f21ba6d57cb24f8b511900fc019e94c05a8e2abfd723588705226"}
Feb 16 21:34:57.823640 master-0 kubenswrapper[38936]: I0216 21:34:57.823594 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ns6pz"
Feb 16 21:34:57.827164 master-0 kubenswrapper[38936]: I0216 21:34:57.827057 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c" event={"ID":"89a5a16e-f92a-4878-ac85-9f4ca6b13354","Type":"ContainerStarted","Data":"beb5016486d1c895bead36d303a515736c790fed45b3034788f94a55f6699ed2"}
Feb 16 21:34:57.827251 master-0 kubenswrapper[38936]: I0216 21:34:57.827205 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c"
Feb 16 21:34:57.888474 master-0 kubenswrapper[38936]: I0216 21:34:57.888353 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ns6pz" podStartSLOduration=33.629061554 podStartE2EDuration="36.888330205s" podCreationTimestamp="2026-02-16 21:34:21 +0000 UTC" firstStartedPulling="2026-02-16 21:34:54.230956244 +0000 UTC m=+724.582959616" lastFinishedPulling="2026-02-16 21:34:57.490224895 +0000 UTC m=+727.842228267" observedRunningTime="2026-02-16 21:34:57.850003483 +0000 UTC m=+728.202006845" watchObservedRunningTime="2026-02-16 21:34:57.888330205 +0000 UTC m=+728.240333567"
Feb 16 21:34:57.890931 master-0 kubenswrapper[38936]: I0216 21:34:57.890864 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c" podStartSLOduration=34.345968636 podStartE2EDuration="36.890847074s" podCreationTimestamp="2026-02-16 21:34:21 +0000 UTC" firstStartedPulling="2026-02-16 21:34:54.963548847 +0000 UTC m=+725.315552209" lastFinishedPulling="2026-02-16 21:34:57.508427285 +0000 UTC m=+727.860430647" observedRunningTime="2026-02-16 21:34:57.884590032 +0000 UTC m=+728.236593394" watchObservedRunningTime="2026-02-16 21:34:57.890847074 +0000 UTC m=+728.242850436"
Feb 16 21:35:03.591161 master-0 kubenswrapper[38936]: I0216 21:35:03.591043 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-ns6pz"
Feb 16 21:35:04.441706 master-0 kubenswrapper[38936]: I0216 21:35:04.441572 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c"
Feb 16 21:35:04.641991 master-0 kubenswrapper[38936]: I0216 21:35:04.641896 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd"
Feb 16 21:35:42.480672 master-0 kubenswrapper[38936]: I0216 21:35:42.479709 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-ml4rt"]
Feb 16 21:35:42.489673 master-0 kubenswrapper[38936]: I0216 21:35:42.481638 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-ml4rt"
Feb 16 21:35:42.489673 master-0 kubenswrapper[38936]: I0216 21:35:42.488251 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Feb 16 21:35:42.489673 master-0 kubenswrapper[38936]: I0216 21:35:42.488797 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Feb 16 21:35:42.489673 master-0 kubenswrapper[38936]: I0216 21:35:42.489032 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Feb 16 21:35:42.646334 master-0 kubenswrapper[38936]: I0216 21:35:42.646216 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-ml4rt"]
Feb 16 21:35:42.663172 master-0 kubenswrapper[38936]: I0216 21:35:42.661852 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d3e957-9451-4feb-a578-4409217df9f1-config\") pod \"dnsmasq-dns-5c7b6fb887-ml4rt\" (UID: \"29d3e957-9451-4feb-a578-4409217df9f1\") " pod="openstack/dnsmasq-dns-5c7b6fb887-ml4rt"
Feb 16 21:35:42.663172 master-0 kubenswrapper[38936]: I0216 21:35:42.661940 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6ngc\" (UniqueName: \"kubernetes.io/projected/29d3e957-9451-4feb-a578-4409217df9f1-kube-api-access-j6ngc\") pod \"dnsmasq-dns-5c7b6fb887-ml4rt\" (UID: \"29d3e957-9451-4feb-a578-4409217df9f1\") " pod="openstack/dnsmasq-dns-5c7b6fb887-ml4rt"
Feb 16 21:35:42.706960 master-0 kubenswrapper[38936]: I0216 21:35:42.698396 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d78499c-fjmds"]
Feb 16 21:35:42.706960 master-0 kubenswrapper[38936]: I0216 21:35:42.702447 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-fjmds"
Feb 16 21:35:42.712034 master-0 kubenswrapper[38936]: I0216 21:35:42.711950 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-fjmds"]
Feb 16 21:35:42.726574 master-0 kubenswrapper[38936]: I0216 21:35:42.726514 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Feb 16 21:35:42.763989 master-0 kubenswrapper[38936]: I0216 21:35:42.763905 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d3e957-9451-4feb-a578-4409217df9f1-config\") pod \"dnsmasq-dns-5c7b6fb887-ml4rt\" (UID: \"29d3e957-9451-4feb-a578-4409217df9f1\") " pod="openstack/dnsmasq-dns-5c7b6fb887-ml4rt"
Feb 16 21:35:42.764405 master-0 kubenswrapper[38936]: I0216 21:35:42.764150 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6ngc\" (UniqueName: \"kubernetes.io/projected/29d3e957-9451-4feb-a578-4409217df9f1-kube-api-access-j6ngc\") pod \"dnsmasq-dns-5c7b6fb887-ml4rt\" (UID: \"29d3e957-9451-4feb-a578-4409217df9f1\") " pod="openstack/dnsmasq-dns-5c7b6fb887-ml4rt"
Feb 16 21:35:42.764954 master-0 kubenswrapper[38936]: I0216 21:35:42.764907 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d3e957-9451-4feb-a578-4409217df9f1-config\") pod \"dnsmasq-dns-5c7b6fb887-ml4rt\" (UID: \"29d3e957-9451-4feb-a578-4409217df9f1\") " pod="openstack/dnsmasq-dns-5c7b6fb887-ml4rt"
Feb 16 21:35:42.780303 master-0 kubenswrapper[38936]: I0216 21:35:42.780238 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6ngc\" (UniqueName: \"kubernetes.io/projected/29d3e957-9451-4feb-a578-4409217df9f1-kube-api-access-j6ngc\") pod \"dnsmasq-dns-5c7b6fb887-ml4rt\" (UID: \"29d3e957-9451-4feb-a578-4409217df9f1\") " pod="openstack/dnsmasq-dns-5c7b6fb887-ml4rt"
Feb 16 21:35:42.866313 master-0 kubenswrapper[38936]: I0216 21:35:42.865716 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk92l\" (UniqueName: \"kubernetes.io/projected/99c6bec1-e16d-433a-bb6c-ccad436d357f-kube-api-access-hk92l\") pod \"dnsmasq-dns-7d78499c-fjmds\" (UID: \"99c6bec1-e16d-433a-bb6c-ccad436d357f\") " pod="openstack/dnsmasq-dns-7d78499c-fjmds"
Feb 16 21:35:42.866313 master-0 kubenswrapper[38936]: I0216 21:35:42.865886 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99c6bec1-e16d-433a-bb6c-ccad436d357f-config\") pod \"dnsmasq-dns-7d78499c-fjmds\" (UID: \"99c6bec1-e16d-433a-bb6c-ccad436d357f\") " pod="openstack/dnsmasq-dns-7d78499c-fjmds"
Feb 16 21:35:42.866895 master-0 kubenswrapper[38936]: I0216 21:35:42.866382 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99c6bec1-e16d-433a-bb6c-ccad436d357f-dns-svc\") pod \"dnsmasq-dns-7d78499c-fjmds\" (UID: \"99c6bec1-e16d-433a-bb6c-ccad436d357f\") " pod="openstack/dnsmasq-dns-7d78499c-fjmds"
Feb 16 21:35:42.953443 master-0 kubenswrapper[38936]: I0216 21:35:42.953364 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-ml4rt"
Feb 16 21:35:42.968333 master-0 kubenswrapper[38936]: I0216 21:35:42.968262 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk92l\" (UniqueName: \"kubernetes.io/projected/99c6bec1-e16d-433a-bb6c-ccad436d357f-kube-api-access-hk92l\") pod \"dnsmasq-dns-7d78499c-fjmds\" (UID: \"99c6bec1-e16d-433a-bb6c-ccad436d357f\") " pod="openstack/dnsmasq-dns-7d78499c-fjmds"
Feb 16 21:35:42.968503 master-0 kubenswrapper[38936]: I0216 21:35:42.968427 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99c6bec1-e16d-433a-bb6c-ccad436d357f-config\") pod \"dnsmasq-dns-7d78499c-fjmds\" (UID: \"99c6bec1-e16d-433a-bb6c-ccad436d357f\") " pod="openstack/dnsmasq-dns-7d78499c-fjmds"
Feb 16 21:35:42.968677 master-0 kubenswrapper[38936]: I0216 21:35:42.968625 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99c6bec1-e16d-433a-bb6c-ccad436d357f-dns-svc\") pod \"dnsmasq-dns-7d78499c-fjmds\" (UID: \"99c6bec1-e16d-433a-bb6c-ccad436d357f\") " pod="openstack/dnsmasq-dns-7d78499c-fjmds"
Feb 16 21:35:42.970556 master-0 kubenswrapper[38936]: I0216 21:35:42.970512 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99c6bec1-e16d-433a-bb6c-ccad436d357f-dns-svc\") pod \"dnsmasq-dns-7d78499c-fjmds\" (UID: \"99c6bec1-e16d-433a-bb6c-ccad436d357f\") " pod="openstack/dnsmasq-dns-7d78499c-fjmds"
Feb 16 21:35:42.970742 master-0 kubenswrapper[38936]: I0216 21:35:42.970669 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99c6bec1-e16d-433a-bb6c-ccad436d357f-config\") pod \"dnsmasq-dns-7d78499c-fjmds\" (UID: \"99c6bec1-e16d-433a-bb6c-ccad436d357f\") " pod="openstack/dnsmasq-dns-7d78499c-fjmds"
Feb 16 21:35:42.992871 master-0 kubenswrapper[38936]: I0216 21:35:42.992803 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk92l\" (UniqueName: \"kubernetes.io/projected/99c6bec1-e16d-433a-bb6c-ccad436d357f-kube-api-access-hk92l\") pod \"dnsmasq-dns-7d78499c-fjmds\" (UID: \"99c6bec1-e16d-433a-bb6c-ccad436d357f\") " pod="openstack/dnsmasq-dns-7d78499c-fjmds"
Feb 16 21:35:43.030064 master-0 kubenswrapper[38936]: I0216 21:35:43.027585 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-fjmds"
Feb 16 21:35:43.580331 master-0 kubenswrapper[38936]: I0216 21:35:43.580266 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-ml4rt"]
Feb 16 21:35:43.662398 master-0 kubenswrapper[38936]: I0216 21:35:43.662316 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-fjmds"]
Feb 16 21:35:43.663075 master-0 kubenswrapper[38936]: W0216 21:35:43.663017 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99c6bec1_e16d_433a_bb6c_ccad436d357f.slice/crio-61291f296fdc6c4c7a6c70757e3820ea694493213c42dca617041ae0c09af9d8 WatchSource:0}: Error finding container 61291f296fdc6c4c7a6c70757e3820ea694493213c42dca617041ae0c09af9d8: Status 404 returned error can't find the container with id 61291f296fdc6c4c7a6c70757e3820ea694493213c42dca617041ae0c09af9d8
Feb 16 21:35:44.342278 master-0 kubenswrapper[38936]: I0216 21:35:44.342168 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d78499c-fjmds" event={"ID":"99c6bec1-e16d-433a-bb6c-ccad436d357f","Type":"ContainerStarted","Data":"61291f296fdc6c4c7a6c70757e3820ea694493213c42dca617041ae0c09af9d8"}
Feb 16 21:35:44.344824 master-0 kubenswrapper[38936]: I0216 21:35:44.344775 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6fb887-ml4rt" event={"ID":"29d3e957-9451-4feb-a578-4409217df9f1","Type":"ContainerStarted","Data":"671ba4fdf675bd1558b4b4294ad5e133ff51ac2ff49b04d148370833321af302"}
Feb 16 21:35:44.538010 master-0 kubenswrapper[38936]: I0216 21:35:44.537821 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-ml4rt"]
Feb 16 21:35:44.610553 master-0 kubenswrapper[38936]: I0216 21:35:44.609531 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-lmg4l"]
Feb 16 21:35:44.614309 master-0 kubenswrapper[38936]: I0216 21:35:44.612782 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bcd98d69f-lmg4l"
Feb 16 21:35:44.642566 master-0 kubenswrapper[38936]: I0216 21:35:44.642506 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-lmg4l"]
Feb 16 21:35:44.724677 master-0 kubenswrapper[38936]: I0216 21:35:44.724578 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e203cf-4653-455c-beee-c382bec17645-config\") pod \"dnsmasq-dns-5bcd98d69f-lmg4l\" (UID: \"76e203cf-4653-455c-beee-c382bec17645\") " pod="openstack/dnsmasq-dns-5bcd98d69f-lmg4l"
Feb 16 21:35:44.724887 master-0 kubenswrapper[38936]: I0216 21:35:44.724694 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkqnl\" (UniqueName: \"kubernetes.io/projected/76e203cf-4653-455c-beee-c382bec17645-kube-api-access-tkqnl\") pod \"dnsmasq-dns-5bcd98d69f-lmg4l\" (UID: \"76e203cf-4653-455c-beee-c382bec17645\") " pod="openstack/dnsmasq-dns-5bcd98d69f-lmg4l"
Feb 16 21:35:44.724887 master-0 kubenswrapper[38936]: I0216 21:35:44.724750 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e203cf-4653-455c-beee-c382bec17645-dns-svc\") pod \"dnsmasq-dns-5bcd98d69f-lmg4l\" (UID: \"76e203cf-4653-455c-beee-c382bec17645\") " pod="openstack/dnsmasq-dns-5bcd98d69f-lmg4l"
Feb 16 21:35:44.827417 master-0 kubenswrapper[38936]: I0216 21:35:44.827350 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e203cf-4653-455c-beee-c382bec17645-config\") pod \"dnsmasq-dns-5bcd98d69f-lmg4l\" (UID: \"76e203cf-4653-455c-beee-c382bec17645\") " pod="openstack/dnsmasq-dns-5bcd98d69f-lmg4l"
Feb 16 21:35:44.827906 master-0 kubenswrapper[38936]: I0216 21:35:44.827456 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkqnl\" (UniqueName: \"kubernetes.io/projected/76e203cf-4653-455c-beee-c382bec17645-kube-api-access-tkqnl\") pod \"dnsmasq-dns-5bcd98d69f-lmg4l\" (UID: \"76e203cf-4653-455c-beee-c382bec17645\") " pod="openstack/dnsmasq-dns-5bcd98d69f-lmg4l"
Feb 16 21:35:44.827906 master-0 kubenswrapper[38936]: I0216 21:35:44.827518 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e203cf-4653-455c-beee-c382bec17645-dns-svc\") pod \"dnsmasq-dns-5bcd98d69f-lmg4l\" (UID: \"76e203cf-4653-455c-beee-c382bec17645\") " pod="openstack/dnsmasq-dns-5bcd98d69f-lmg4l"
Feb 16 21:35:44.830405 master-0 kubenswrapper[38936]: I0216 21:35:44.830370 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e203cf-4653-455c-beee-c382bec17645-dns-svc\") pod \"dnsmasq-dns-5bcd98d69f-lmg4l\" (UID: \"76e203cf-4653-455c-beee-c382bec17645\") " pod="openstack/dnsmasq-dns-5bcd98d69f-lmg4l"
Feb 16 21:35:44.835388 master-0 kubenswrapper[38936]: I0216 21:35:44.830731 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e203cf-4653-455c-beee-c382bec17645-config\") pod \"dnsmasq-dns-5bcd98d69f-lmg4l\" (UID: \"76e203cf-4653-455c-beee-c382bec17645\") " pod="openstack/dnsmasq-dns-5bcd98d69f-lmg4l"
Feb 16 21:35:44.862877 master-0 kubenswrapper[38936]: I0216 21:35:44.862745 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkqnl\" (UniqueName: \"kubernetes.io/projected/76e203cf-4653-455c-beee-c382bec17645-kube-api-access-tkqnl\") pod \"dnsmasq-dns-5bcd98d69f-lmg4l\" (UID: \"76e203cf-4653-455c-beee-c382bec17645\") " pod="openstack/dnsmasq-dns-5bcd98d69f-lmg4l"
Feb 16 21:35:44.941327 master-0 kubenswrapper[38936]: I0216 21:35:44.939878 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bcd98d69f-lmg4l"
Feb 16 21:35:45.341296 master-0 kubenswrapper[38936]: I0216 21:35:45.341178 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-fjmds"]
Feb 16 21:35:45.397188 master-0 kubenswrapper[38936]: I0216 21:35:45.397107 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-5fq4v"]
Feb 16 21:35:45.400833 master-0 kubenswrapper[38936]: I0216 21:35:45.400771 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-5fq4v"
Feb 16 21:35:45.413502 master-0 kubenswrapper[38936]: I0216 21:35:45.413356 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-5fq4v"]
Feb 16 21:35:45.551416 master-0 kubenswrapper[38936]: I0216 21:35:45.550553 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce0bed6a-2010-497e-bea2-c4c4d493300e-config\") pod \"dnsmasq-dns-6b98d7b55c-5fq4v\" (UID: \"ce0bed6a-2010-497e-bea2-c4c4d493300e\") " pod="openstack/dnsmasq-dns-6b98d7b55c-5fq4v"
Feb 16 21:35:45.551416 master-0 kubenswrapper[38936]: I0216 21:35:45.550744 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce0bed6a-2010-497e-bea2-c4c4d493300e-dns-svc\") pod \"dnsmasq-dns-6b98d7b55c-5fq4v\" (UID: \"ce0bed6a-2010-497e-bea2-c4c4d493300e\") " pod="openstack/dnsmasq-dns-6b98d7b55c-5fq4v"
Feb 16 21:35:45.551416 master-0 kubenswrapper[38936]: I0216 21:35:45.550791 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fljgp\" (UniqueName: \"kubernetes.io/projected/ce0bed6a-2010-497e-bea2-c4c4d493300e-kube-api-access-fljgp\") pod \"dnsmasq-dns-6b98d7b55c-5fq4v\" (UID: \"ce0bed6a-2010-497e-bea2-c4c4d493300e\") " pod="openstack/dnsmasq-dns-6b98d7b55c-5fq4v"
Feb 16 21:35:45.650513 master-0 kubenswrapper[38936]: I0216 21:35:45.650444 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-lmg4l"]
Feb 16 21:35:45.652719 master-0 kubenswrapper[38936]: I0216 21:35:45.652623 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce0bed6a-2010-497e-bea2-c4c4d493300e-dns-svc\") pod \"dnsmasq-dns-6b98d7b55c-5fq4v\" (UID: \"ce0bed6a-2010-497e-bea2-c4c4d493300e\") " pod="openstack/dnsmasq-dns-6b98d7b55c-5fq4v"
Feb 16 21:35:45.652826 master-0 kubenswrapper[38936]: I0216 21:35:45.652761 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fljgp\" (UniqueName: \"kubernetes.io/projected/ce0bed6a-2010-497e-bea2-c4c4d493300e-kube-api-access-fljgp\") pod \"dnsmasq-dns-6b98d7b55c-5fq4v\" (UID: \"ce0bed6a-2010-497e-bea2-c4c4d493300e\") " pod="openstack/dnsmasq-dns-6b98d7b55c-5fq4v"
Feb 16 21:35:45.652896 master-0 kubenswrapper[38936]: I0216 21:35:45.652829 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce0bed6a-2010-497e-bea2-c4c4d493300e-config\") pod \"dnsmasq-dns-6b98d7b55c-5fq4v\" (UID: \"ce0bed6a-2010-497e-bea2-c4c4d493300e\") " pod="openstack/dnsmasq-dns-6b98d7b55c-5fq4v"
Feb 16 21:35:45.654407 master-0 kubenswrapper[38936]: I0216 21:35:45.654351 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce0bed6a-2010-497e-bea2-c4c4d493300e-dns-svc\") pod \"dnsmasq-dns-6b98d7b55c-5fq4v\" (UID: \"ce0bed6a-2010-497e-bea2-c4c4d493300e\") " pod="openstack/dnsmasq-dns-6b98d7b55c-5fq4v"
Feb 16 21:35:45.675306 master-0 kubenswrapper[38936]: I0216 21:35:45.675116 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fljgp\" (UniqueName: \"kubernetes.io/projected/ce0bed6a-2010-497e-bea2-c4c4d493300e-kube-api-access-fljgp\") pod \"dnsmasq-dns-6b98d7b55c-5fq4v\" (UID: \"ce0bed6a-2010-497e-bea2-c4c4d493300e\") " pod="openstack/dnsmasq-dns-6b98d7b55c-5fq4v"
Feb 16 21:35:45.703760 master-0 kubenswrapper[38936]: I0216 21:35:45.684561 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce0bed6a-2010-497e-bea2-c4c4d493300e-config\") pod \"dnsmasq-dns-6b98d7b55c-5fq4v\" (UID: \"ce0bed6a-2010-497e-bea2-c4c4d493300e\") " pod="openstack/dnsmasq-dns-6b98d7b55c-5fq4v"
Feb 16 21:35:45.744034 master-0 kubenswrapper[38936]: I0216 21:35:45.743976 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-5fq4v"
Feb 16 21:35:46.256669 master-0 kubenswrapper[38936]: I0216 21:35:46.256577 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-5fq4v"]
Feb 16 21:35:46.275087 master-0 kubenswrapper[38936]: W0216 21:35:46.275031 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce0bed6a_2010_497e_bea2_c4c4d493300e.slice/crio-cd3bc3b3fec64c5ee7affa6f4ffd76c8b16d3df738f0bce04a4bf3cd7bbcf239 WatchSource:0}: Error finding container cd3bc3b3fec64c5ee7affa6f4ffd76c8b16d3df738f0bce04a4bf3cd7bbcf239: Status 404 returned error can't find the container with id cd3bc3b3fec64c5ee7affa6f4ffd76c8b16d3df738f0bce04a4bf3cd7bbcf239
Feb 16 21:35:46.407420 master-0 kubenswrapper[38936]: I0216 21:35:46.407331 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-lmg4l" event={"ID":"76e203cf-4653-455c-beee-c382bec17645","Type":"ContainerStarted","Data":"22dd1393115867021e1a25288a997939998a7e59da2d773ff72f8c423c7be040"}
Feb 16 21:35:46.417015 master-0 kubenswrapper[38936]: I0216 21:35:46.416947 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-5fq4v" event={"ID":"ce0bed6a-2010-497e-bea2-c4c4d493300e","Type":"ContainerStarted","Data":"cd3bc3b3fec64c5ee7affa6f4ffd76c8b16d3df738f0bce04a4bf3cd7bbcf239"}
Feb 16 21:35:48.706219 master-0 kubenswrapper[38936]: I0216 21:35:48.706144 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 16 21:35:48.743660 master-0 kubenswrapper[38936]: I0216 21:35:48.743555 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.743852 master-0 kubenswrapper[38936]: I0216 21:35:48.743668 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 21:35:48.746464 master-0 kubenswrapper[38936]: I0216 21:35:48.746376 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 16 21:35:48.747159 master-0 kubenswrapper[38936]: I0216 21:35:48.747111 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 16 21:35:48.748291 master-0 kubenswrapper[38936]: I0216 21:35:48.748214 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 16 21:35:48.748531 master-0 kubenswrapper[38936]: I0216 21:35:48.748465 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 16 21:35:48.748531 master-0 kubenswrapper[38936]: I0216 21:35:48.748511 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 16 21:35:48.749882 master-0 kubenswrapper[38936]: I0216 21:35:48.749840 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 16 21:35:48.802619 master-0 kubenswrapper[38936]: I0216 21:35:48.802519 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a3ae6146-0a46-4058-a938-0dba04b24a1f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.803195 master-0 kubenswrapper[38936]: I0216 21:35:48.803112 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/a3ae6146-0a46-4058-a938-0dba04b24a1f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.804490 master-0 kubenswrapper[38936]: I0216 21:35:48.804437 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckt72\" (UniqueName: \"kubernetes.io/projected/a3ae6146-0a46-4058-a938-0dba04b24a1f-kube-api-access-ckt72\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.804559 master-0 kubenswrapper[38936]: I0216 21:35:48.804531 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a3ae6146-0a46-4058-a938-0dba04b24a1f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.804702 master-0 kubenswrapper[38936]: I0216 21:35:48.804629 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a3ae6146-0a46-4058-a938-0dba04b24a1f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.804748 master-0 kubenswrapper[38936]: I0216 21:35:48.804701 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a3ae6146-0a46-4058-a938-0dba04b24a1f-config-data\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.805214 master-0 kubenswrapper[38936]: I0216 21:35:48.805141 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/a3ae6146-0a46-4058-a938-0dba04b24a1f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.805280 master-0 kubenswrapper[38936]: I0216 21:35:48.805257 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a3ae6146-0a46-4058-a938-0dba04b24a1f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.805324 master-0 kubenswrapper[38936]: I0216 21:35:48.805283 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a3ae6146-0a46-4058-a938-0dba04b24a1f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.805395 master-0 kubenswrapper[38936]: I0216 21:35:48.805372 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a3ae6146-0a46-4058-a938-0dba04b24a1f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.805584 master-0 kubenswrapper[38936]: I0216 21:35:48.805539 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5ff55d82-b8d2-4449-aa02-ffb9a843b445\" (UniqueName: \"kubernetes.io/csi/topolvm.io^14f535de-5bfb-4ad8-91ad-d969d6e2961d\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.912832 master-0 kubenswrapper[38936]: I0216 21:35:48.912538 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/a3ae6146-0a46-4058-a938-0dba04b24a1f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.912832 master-0 kubenswrapper[38936]: I0216 21:35:48.912679 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a3ae6146-0a46-4058-a938-0dba04b24a1f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.912832 master-0 kubenswrapper[38936]: I0216 21:35:48.912702 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a3ae6146-0a46-4058-a938-0dba04b24a1f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.912832 master-0 kubenswrapper[38936]: I0216 21:35:48.912836 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a3ae6146-0a46-4058-a938-0dba04b24a1f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.913166 master-0 kubenswrapper[38936]: I0216 21:35:48.912895 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5ff55d82-b8d2-4449-aa02-ffb9a843b445\" (UniqueName: \"kubernetes.io/csi/topolvm.io^14f535de-5bfb-4ad8-91ad-d969d6e2961d\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.913210 master-0 kubenswrapper[38936]: I0216 21:35:48.913171 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/a3ae6146-0a46-4058-a938-0dba04b24a1f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.913698 master-0 kubenswrapper[38936]: I0216 21:35:48.913253 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a3ae6146-0a46-4058-a938-0dba04b24a1f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.913698 master-0 kubenswrapper[38936]: I0216 21:35:48.913303 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckt72\" (UniqueName: \"kubernetes.io/projected/a3ae6146-0a46-4058-a938-0dba04b24a1f-kube-api-access-ckt72\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.913698 master-0 kubenswrapper[38936]: I0216 21:35:48.913327 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a3ae6146-0a46-4058-a938-0dba04b24a1f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.913698 master-0 kubenswrapper[38936]: I0216 21:35:48.913349 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a3ae6146-0a46-4058-a938-0dba04b24a1f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.913698 master-0 kubenswrapper[38936]: I0216 21:35:48.913372 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a3ae6146-0a46-4058-a938-0dba04b24a1f-config-data\") 
pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.913880 master-0 kubenswrapper[38936]: I0216 21:35:48.913162 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a3ae6146-0a46-4058-a938-0dba04b24a1f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.914533 master-0 kubenswrapper[38936]: I0216 21:35:48.914502 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a3ae6146-0a46-4058-a938-0dba04b24a1f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.918184 master-0 kubenswrapper[38936]: I0216 21:35:48.917728 38936 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:35:48.918184 master-0 kubenswrapper[38936]: I0216 21:35:48.917784 38936 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5ff55d82-b8d2-4449-aa02-ffb9a843b445\" (UniqueName: \"kubernetes.io/csi/topolvm.io^14f535de-5bfb-4ad8-91ad-d969d6e2961d\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/7a0fb6018ec96133a619595fa191bb2a711d5cc82c944a7671b3c721df6df9f0/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.920331 master-0 kubenswrapper[38936]: I0216 21:35:48.920280 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a3ae6146-0a46-4058-a938-0dba04b24a1f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.921537 master-0 kubenswrapper[38936]: I0216 21:35:48.921505 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a3ae6146-0a46-4058-a938-0dba04b24a1f-config-data\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.923276 master-0 kubenswrapper[38936]: I0216 21:35:48.923066 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a3ae6146-0a46-4058-a938-0dba04b24a1f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.923876 master-0 kubenswrapper[38936]: I0216 21:35:48.923669 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a3ae6146-0a46-4058-a938-0dba04b24a1f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " 
pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.930366 master-0 kubenswrapper[38936]: I0216 21:35:48.930262 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a3ae6146-0a46-4058-a938-0dba04b24a1f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.930442 master-0 kubenswrapper[38936]: I0216 21:35:48.930271 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a3ae6146-0a46-4058-a938-0dba04b24a1f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.933045 master-0 kubenswrapper[38936]: I0216 21:35:48.933008 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckt72\" (UniqueName: \"kubernetes.io/projected/a3ae6146-0a46-4058-a938-0dba04b24a1f-kube-api-access-ckt72\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:48.933220 master-0 kubenswrapper[38936]: I0216 21:35:48.933155 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a3ae6146-0a46-4058-a938-0dba04b24a1f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0" Feb 16 21:35:49.588916 master-0 kubenswrapper[38936]: I0216 21:35:49.588819 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 21:35:49.593339 master-0 kubenswrapper[38936]: I0216 21:35:49.592271 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.603341 master-0 kubenswrapper[38936]: I0216 21:35:49.603264 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 16 21:35:49.604685 master-0 kubenswrapper[38936]: I0216 21:35:49.604594 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 16 21:35:49.604758 master-0 kubenswrapper[38936]: I0216 21:35:49.604694 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 16 21:35:49.605504 master-0 kubenswrapper[38936]: I0216 21:35:49.605478 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 16 21:35:49.605772 master-0 kubenswrapper[38936]: I0216 21:35:49.605742 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 16 21:35:49.605916 master-0 kubenswrapper[38936]: I0216 21:35:49.605873 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 16 21:35:49.651707 master-0 kubenswrapper[38936]: I0216 21:35:49.651198 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 21:35:49.661768 master-0 kubenswrapper[38936]: I0216 21:35:49.661711 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56ed148e-f9e4-4547-ad45-227bd66edcfa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.661869 master-0 kubenswrapper[38936]: I0216 21:35:49.661781 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/56ed148e-f9e4-4547-ad45-227bd66edcfa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.661869 master-0 kubenswrapper[38936]: I0216 21:35:49.661825 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfh6g\" (UniqueName: \"kubernetes.io/projected/56ed148e-f9e4-4547-ad45-227bd66edcfa-kube-api-access-rfh6g\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.661869 master-0 kubenswrapper[38936]: I0216 21:35:49.661865 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c8e922ad-32b0-415e-add6-9891075521a7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^bd59aced-46eb-40a7-8366-f478eb970725\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.661968 master-0 kubenswrapper[38936]: I0216 21:35:49.661885 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/56ed148e-f9e4-4547-ad45-227bd66edcfa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.661968 master-0 kubenswrapper[38936]: I0216 21:35:49.661916 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/56ed148e-f9e4-4547-ad45-227bd66edcfa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.661968 master-0 kubenswrapper[38936]: I0216 21:35:49.661948 38936 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/56ed148e-f9e4-4547-ad45-227bd66edcfa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.662059 master-0 kubenswrapper[38936]: I0216 21:35:49.661973 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/56ed148e-f9e4-4547-ad45-227bd66edcfa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.662059 master-0 kubenswrapper[38936]: I0216 21:35:49.661996 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/56ed148e-f9e4-4547-ad45-227bd66edcfa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.662059 master-0 kubenswrapper[38936]: I0216 21:35:49.662043 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/56ed148e-f9e4-4547-ad45-227bd66edcfa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.662152 master-0 kubenswrapper[38936]: I0216 21:35:49.662067 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/56ed148e-f9e4-4547-ad45-227bd66edcfa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.765316 master-0 
kubenswrapper[38936]: I0216 21:35:49.765221 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56ed148e-f9e4-4547-ad45-227bd66edcfa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.766927 master-0 kubenswrapper[38936]: I0216 21:35:49.765346 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/56ed148e-f9e4-4547-ad45-227bd66edcfa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.766927 master-0 kubenswrapper[38936]: I0216 21:35:49.765404 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfh6g\" (UniqueName: \"kubernetes.io/projected/56ed148e-f9e4-4547-ad45-227bd66edcfa-kube-api-access-rfh6g\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.766927 master-0 kubenswrapper[38936]: I0216 21:35:49.765472 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c8e922ad-32b0-415e-add6-9891075521a7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^bd59aced-46eb-40a7-8366-f478eb970725\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.766927 master-0 kubenswrapper[38936]: I0216 21:35:49.765505 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/56ed148e-f9e4-4547-ad45-227bd66edcfa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.766927 master-0 
kubenswrapper[38936]: I0216 21:35:49.765544 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/56ed148e-f9e4-4547-ad45-227bd66edcfa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.766927 master-0 kubenswrapper[38936]: I0216 21:35:49.765589 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/56ed148e-f9e4-4547-ad45-227bd66edcfa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.766927 master-0 kubenswrapper[38936]: I0216 21:35:49.765629 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/56ed148e-f9e4-4547-ad45-227bd66edcfa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.766927 master-0 kubenswrapper[38936]: I0216 21:35:49.765682 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/56ed148e-f9e4-4547-ad45-227bd66edcfa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.766927 master-0 kubenswrapper[38936]: I0216 21:35:49.765811 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/56ed148e-f9e4-4547-ad45-227bd66edcfa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.766927 master-0 kubenswrapper[38936]: I0216 
21:35:49.765855 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/56ed148e-f9e4-4547-ad45-227bd66edcfa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.767323 master-0 kubenswrapper[38936]: I0216 21:35:49.767266 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56ed148e-f9e4-4547-ad45-227bd66edcfa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.767735 master-0 kubenswrapper[38936]: I0216 21:35:49.767704 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/56ed148e-f9e4-4547-ad45-227bd66edcfa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.768996 master-0 kubenswrapper[38936]: I0216 21:35:49.768963 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/56ed148e-f9e4-4547-ad45-227bd66edcfa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.770740 master-0 kubenswrapper[38936]: I0216 21:35:49.770498 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/56ed148e-f9e4-4547-ad45-227bd66edcfa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:35:49.773506 master-0 kubenswrapper[38936]: I0216 21:35:49.773448 38936 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/56ed148e-f9e4-4547-ad45-227bd66edcfa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:35:49.773816 master-0 kubenswrapper[38936]: I0216 21:35:49.773789 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Feb 16 21:35:49.776698 master-0 kubenswrapper[38936]: I0216 21:35:49.775354 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 16 21:35:49.786566 master-0 kubenswrapper[38936]: I0216 21:35:49.785338 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Feb 16 21:35:49.786566 master-0 kubenswrapper[38936]: I0216 21:35:49.786309 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Feb 16 21:35:49.789116 master-0 kubenswrapper[38936]: I0216 21:35:49.789040 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/56ed148e-f9e4-4547-ad45-227bd66edcfa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:35:49.791089 master-0 kubenswrapper[38936]: I0216 21:35:49.789914 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/56ed148e-f9e4-4547-ad45-227bd66edcfa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:35:49.793459 master-0 kubenswrapper[38936]: I0216 21:35:49.793400 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 16 21:35:49.797578 master-0 kubenswrapper[38936]: I0216 21:35:49.797404 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 16 21:35:49.812860 master-0 kubenswrapper[38936]: I0216 21:35:49.812409 38936 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 21:35:49.812860 master-0 kubenswrapper[38936]: I0216 21:35:49.812468 38936 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c8e922ad-32b0-415e-add6-9891075521a7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^bd59aced-46eb-40a7-8366-f478eb970725\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/afbf355c8533839dddca57099646fcba7b83fcdb6e68157f9cf7e2480e2b036b/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:35:49.829460 master-0 kubenswrapper[38936]: I0216 21:35:49.829412 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/56ed148e-f9e4-4547-ad45-227bd66edcfa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:35:49.843683 master-0 kubenswrapper[38936]: I0216 21:35:49.843527 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/56ed148e-f9e4-4547-ad45-227bd66edcfa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:35:49.851929 master-0 kubenswrapper[38936]: I0216 21:35:49.851806 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfh6g\" (UniqueName: \"kubernetes.io/projected/56ed148e-f9e4-4547-ad45-227bd66edcfa-kube-api-access-rfh6g\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:35:49.868215 master-0 kubenswrapper[38936]: I0216 21:35:49.868129 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/895ef7e1-683f-479c-952a-ce27497b4cf8-kolla-config\") pod \"memcached-0\" (UID: \"895ef7e1-683f-479c-952a-ce27497b4cf8\") " pod="openstack/memcached-0"
Feb 16 21:35:49.868366 master-0 kubenswrapper[38936]: I0216 21:35:49.868221 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b8kf\" (UniqueName: \"kubernetes.io/projected/895ef7e1-683f-479c-952a-ce27497b4cf8-kube-api-access-8b8kf\") pod \"memcached-0\" (UID: \"895ef7e1-683f-479c-952a-ce27497b4cf8\") " pod="openstack/memcached-0"
Feb 16 21:35:49.868366 master-0 kubenswrapper[38936]: I0216 21:35:49.868245 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/895ef7e1-683f-479c-952a-ce27497b4cf8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"895ef7e1-683f-479c-952a-ce27497b4cf8\") " pod="openstack/memcached-0"
Feb 16 21:35:49.868366 master-0 kubenswrapper[38936]: I0216 21:35:49.868302 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/895ef7e1-683f-479c-952a-ce27497b4cf8-config-data\") pod \"memcached-0\" (UID: \"895ef7e1-683f-479c-952a-ce27497b4cf8\") " pod="openstack/memcached-0"
Feb 16 21:35:49.868487 master-0 kubenswrapper[38936]: I0216 21:35:49.868390 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/895ef7e1-683f-479c-952a-ce27497b4cf8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"895ef7e1-683f-479c-952a-ce27497b4cf8\") " pod="openstack/memcached-0"
Feb 16 21:35:49.981393 master-0 kubenswrapper[38936]: I0216 21:35:49.981331 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/895ef7e1-683f-479c-952a-ce27497b4cf8-kolla-config\") pod \"memcached-0\" (UID: \"895ef7e1-683f-479c-952a-ce27497b4cf8\") " pod="openstack/memcached-0"
Feb 16 21:35:49.982309 master-0 kubenswrapper[38936]: I0216 21:35:49.981713 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b8kf\" (UniqueName: \"kubernetes.io/projected/895ef7e1-683f-479c-952a-ce27497b4cf8-kube-api-access-8b8kf\") pod \"memcached-0\" (UID: \"895ef7e1-683f-479c-952a-ce27497b4cf8\") " pod="openstack/memcached-0"
Feb 16 21:35:49.982466 master-0 kubenswrapper[38936]: I0216 21:35:49.982446 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/895ef7e1-683f-479c-952a-ce27497b4cf8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"895ef7e1-683f-479c-952a-ce27497b4cf8\") " pod="openstack/memcached-0"
Feb 16 21:35:49.983520 master-0 kubenswrapper[38936]: I0216 21:35:49.983451 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/895ef7e1-683f-479c-952a-ce27497b4cf8-config-data\") pod \"memcached-0\" (UID: \"895ef7e1-683f-479c-952a-ce27497b4cf8\") " pod="openstack/memcached-0"
Feb 16 21:35:49.983605 master-0 kubenswrapper[38936]: I0216 21:35:49.983551 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/895ef7e1-683f-479c-952a-ce27497b4cf8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"895ef7e1-683f-479c-952a-ce27497b4cf8\") " pod="openstack/memcached-0"
Feb 16 21:35:49.992759 master-0 kubenswrapper[38936]: I0216 21:35:49.988182 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Feb 16 21:35:49.992759 master-0 kubenswrapper[38936]: I0216 21:35:49.988707 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Feb 16 21:35:49.992759 master-0 kubenswrapper[38936]: I0216 21:35:49.989415 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/895ef7e1-683f-479c-952a-ce27497b4cf8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"895ef7e1-683f-479c-952a-ce27497b4cf8\") " pod="openstack/memcached-0"
Feb 16 21:35:50.002960 master-0 kubenswrapper[38936]: I0216 21:35:50.002896 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/895ef7e1-683f-479c-952a-ce27497b4cf8-config-data\") pod \"memcached-0\" (UID: \"895ef7e1-683f-479c-952a-ce27497b4cf8\") " pod="openstack/memcached-0"
Feb 16 21:35:50.007212 master-0 kubenswrapper[38936]: I0216 21:35:50.005390 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/895ef7e1-683f-479c-952a-ce27497b4cf8-kolla-config\") pod \"memcached-0\" (UID: \"895ef7e1-683f-479c-952a-ce27497b4cf8\") " pod="openstack/memcached-0"
Feb 16 21:35:50.025737 master-0 kubenswrapper[38936]: I0216 21:35:50.025248 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/895ef7e1-683f-479c-952a-ce27497b4cf8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"895ef7e1-683f-479c-952a-ce27497b4cf8\") " pod="openstack/memcached-0"
Feb 16 21:35:50.047272 master-0 kubenswrapper[38936]: I0216 21:35:50.044645 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b8kf\" (UniqueName: \"kubernetes.io/projected/895ef7e1-683f-479c-952a-ce27497b4cf8-kube-api-access-8b8kf\") pod \"memcached-0\" (UID: \"895ef7e1-683f-479c-952a-ce27497b4cf8\") " pod="openstack/memcached-0"
Feb 16 21:35:50.326507 master-0 kubenswrapper[38936]: I0216 21:35:50.325076 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 16 21:35:50.530223 master-0 kubenswrapper[38936]: I0216 21:35:50.530163 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5ff55d82-b8d2-4449-aa02-ffb9a843b445\" (UniqueName: \"kubernetes.io/csi/topolvm.io^14f535de-5bfb-4ad8-91ad-d969d6e2961d\") pod \"rabbitmq-server-0\" (UID: \"a3ae6146-0a46-4058-a938-0dba04b24a1f\") " pod="openstack/rabbitmq-server-0"
Feb 16 21:35:50.602628 master-0 kubenswrapper[38936]: I0216 21:35:50.602425 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 16 21:35:50.763249 master-0 kubenswrapper[38936]: I0216 21:35:50.763177 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 16 21:35:50.766540 master-0 kubenswrapper[38936]: I0216 21:35:50.766513 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 16 21:35:50.773047 master-0 kubenswrapper[38936]: I0216 21:35:50.772982 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 16 21:35:50.773236 master-0 kubenswrapper[38936]: I0216 21:35:50.773129 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 16 21:35:50.773236 master-0 kubenswrapper[38936]: I0216 21:35:50.772978 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 16 21:35:50.786630 master-0 kubenswrapper[38936]: I0216 21:35:50.785283 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 16 21:35:50.910870 master-0 kubenswrapper[38936]: I0216 21:35:50.910599 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5fc61990-a712-4046-925c-a18d2a0b34a5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5fc61990-a712-4046-925c-a18d2a0b34a5\") " pod="openstack/openstack-galera-0"
Feb 16 21:35:50.911738 master-0 kubenswrapper[38936]: I0216 21:35:50.911719 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jmgl\" (UniqueName: \"kubernetes.io/projected/5fc61990-a712-4046-925c-a18d2a0b34a5-kube-api-access-9jmgl\") pod \"openstack-galera-0\" (UID: \"5fc61990-a712-4046-925c-a18d2a0b34a5\") " pod="openstack/openstack-galera-0"
Feb 16 21:35:50.912181 master-0 kubenswrapper[38936]: I0216 21:35:50.912160 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fc61990-a712-4046-925c-a18d2a0b34a5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5fc61990-a712-4046-925c-a18d2a0b34a5\") " pod="openstack/openstack-galera-0"
Feb 16 21:35:50.912403 master-0 kubenswrapper[38936]: I0216 21:35:50.912390 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fc61990-a712-4046-925c-a18d2a0b34a5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5fc61990-a712-4046-925c-a18d2a0b34a5\") " pod="openstack/openstack-galera-0"
Feb 16 21:35:50.912992 master-0 kubenswrapper[38936]: I0216 21:35:50.912977 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc61990-a712-4046-925c-a18d2a0b34a5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5fc61990-a712-4046-925c-a18d2a0b34a5\") " pod="openstack/openstack-galera-0"
Feb 16 21:35:50.913963 master-0 kubenswrapper[38936]: I0216 21:35:50.913364 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5fc61990-a712-4046-925c-a18d2a0b34a5-kolla-config\") pod \"openstack-galera-0\" (UID: \"5fc61990-a712-4046-925c-a18d2a0b34a5\") " pod="openstack/openstack-galera-0"
Feb 16 21:35:50.914154 master-0 kubenswrapper[38936]: I0216 21:35:50.914137 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ad754d15-ec57-4eb9-ab6b-d10b0e15d540\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f22ebb46-861f-4ee8-a04e-bf15fa80d733\") pod \"openstack-galera-0\" (UID: \"5fc61990-a712-4046-925c-a18d2a0b34a5\") " pod="openstack/openstack-galera-0"
Feb 16 21:35:50.914937 master-0 kubenswrapper[38936]: I0216 21:35:50.914913 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5fc61990-a712-4046-925c-a18d2a0b34a5-config-data-default\") pod \"openstack-galera-0\" (UID: \"5fc61990-a712-4046-925c-a18d2a0b34a5\") " pod="openstack/openstack-galera-0"
Feb 16 21:35:51.018901 master-0 kubenswrapper[38936]: I0216 21:35:51.018845 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5fc61990-a712-4046-925c-a18d2a0b34a5-kolla-config\") pod \"openstack-galera-0\" (UID: \"5fc61990-a712-4046-925c-a18d2a0b34a5\") " pod="openstack/openstack-galera-0"
Feb 16 21:35:51.019359 master-0 kubenswrapper[38936]: I0216 21:35:51.019268 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ad754d15-ec57-4eb9-ab6b-d10b0e15d540\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f22ebb46-861f-4ee8-a04e-bf15fa80d733\") pod \"openstack-galera-0\" (UID: \"5fc61990-a712-4046-925c-a18d2a0b34a5\") " pod="openstack/openstack-galera-0"
Feb 16 21:35:51.031882 master-0 kubenswrapper[38936]: I0216 21:35:51.031830 38936 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 21:35:51.032165 master-0 kubenswrapper[38936]: I0216 21:35:51.032069 38936 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ad754d15-ec57-4eb9-ab6b-d10b0e15d540\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f22ebb46-861f-4ee8-a04e-bf15fa80d733\") pod \"openstack-galera-0\" (UID: \"5fc61990-a712-4046-925c-a18d2a0b34a5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/227d94dfe68485c88a4c0d4d2fae1688cfe64d732077059a766a3acff46eef3c/globalmount\"" pod="openstack/openstack-galera-0"
Feb 16 21:35:51.032958 master-0 kubenswrapper[38936]: I0216 21:35:51.032891 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5fc61990-a712-4046-925c-a18d2a0b34a5-config-data-default\") pod \"openstack-galera-0\" (UID: \"5fc61990-a712-4046-925c-a18d2a0b34a5\") " pod="openstack/openstack-galera-0"
Feb 16 21:35:51.033410 master-0 kubenswrapper[38936]: I0216 21:35:51.033321 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5fc61990-a712-4046-925c-a18d2a0b34a5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5fc61990-a712-4046-925c-a18d2a0b34a5\") " pod="openstack/openstack-galera-0"
Feb 16 21:35:51.033410 master-0 kubenswrapper[38936]: I0216 21:35:51.033403 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jmgl\" (UniqueName: \"kubernetes.io/projected/5fc61990-a712-4046-925c-a18d2a0b34a5-kube-api-access-9jmgl\") pod \"openstack-galera-0\" (UID: \"5fc61990-a712-4046-925c-a18d2a0b34a5\") " pod="openstack/openstack-galera-0"
Feb 16 21:35:51.034643 master-0 kubenswrapper[38936]: I0216 21:35:51.033429 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fc61990-a712-4046-925c-a18d2a0b34a5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5fc61990-a712-4046-925c-a18d2a0b34a5\") " pod="openstack/openstack-galera-0"
Feb 16 21:35:51.034643 master-0 kubenswrapper[38936]: I0216 21:35:51.033728 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fc61990-a712-4046-925c-a18d2a0b34a5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5fc61990-a712-4046-925c-a18d2a0b34a5\") " pod="openstack/openstack-galera-0"
Feb 16 21:35:51.034643 master-0 kubenswrapper[38936]: I0216 21:35:51.033884 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5fc61990-a712-4046-925c-a18d2a0b34a5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5fc61990-a712-4046-925c-a18d2a0b34a5\") " pod="openstack/openstack-galera-0"
Feb 16 21:35:51.034643 master-0 kubenswrapper[38936]: I0216 21:35:51.033993 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc61990-a712-4046-925c-a18d2a0b34a5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5fc61990-a712-4046-925c-a18d2a0b34a5\") " pod="openstack/openstack-galera-0"
Feb 16 21:35:51.040664 master-0 kubenswrapper[38936]: I0216 21:35:51.040592 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5fc61990-a712-4046-925c-a18d2a0b34a5-kolla-config\") pod \"openstack-galera-0\" (UID: \"5fc61990-a712-4046-925c-a18d2a0b34a5\") " pod="openstack/openstack-galera-0"
Feb 16 21:35:51.042445 master-0 kubenswrapper[38936]: I0216 21:35:51.042415 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5fc61990-a712-4046-925c-a18d2a0b34a5-config-data-default\") pod \"openstack-galera-0\" (UID: \"5fc61990-a712-4046-925c-a18d2a0b34a5\") " pod="openstack/openstack-galera-0"
Feb 16 21:35:51.043783 master-0 kubenswrapper[38936]: I0216 21:35:51.043734 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fc61990-a712-4046-925c-a18d2a0b34a5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5fc61990-a712-4046-925c-a18d2a0b34a5\") " pod="openstack/openstack-galera-0"
Feb 16 21:35:51.043893 master-0 kubenswrapper[38936]: I0216 21:35:51.043845 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fc61990-a712-4046-925c-a18d2a0b34a5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5fc61990-a712-4046-925c-a18d2a0b34a5\") " pod="openstack/openstack-galera-0"
Feb 16 21:35:51.044097 master-0 kubenswrapper[38936]: I0216 21:35:51.044059 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc61990-a712-4046-925c-a18d2a0b34a5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5fc61990-a712-4046-925c-a18d2a0b34a5\") " pod="openstack/openstack-galera-0"
Feb 16 21:35:51.063556 master-0 kubenswrapper[38936]: I0216 21:35:51.063466 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jmgl\" (UniqueName: \"kubernetes.io/projected/5fc61990-a712-4046-925c-a18d2a0b34a5-kube-api-access-9jmgl\") pod \"openstack-galera-0\" (UID: \"5fc61990-a712-4046-925c-a18d2a0b34a5\") " pod="openstack/openstack-galera-0"
Feb 16 21:35:52.045266 master-0 kubenswrapper[38936]: I0216 21:35:52.045186 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c8e922ad-32b0-415e-add6-9891075521a7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^bd59aced-46eb-40a7-8366-f478eb970725\") pod \"rabbitmq-cell1-server-0\" (UID: \"56ed148e-f9e4-4547-ad45-227bd66edcfa\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:35:52.051882 master-0 kubenswrapper[38936]: I0216 21:35:52.051794 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:35:52.274819 master-0 kubenswrapper[38936]: I0216 21:35:52.274748 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 16 21:35:52.280125 master-0 kubenswrapper[38936]: I0216 21:35:52.280082 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:52.284199 master-0 kubenswrapper[38936]: I0216 21:35:52.284050 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 16 21:35:52.284291 master-0 kubenswrapper[38936]: I0216 21:35:52.284237 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 16 21:35:52.285773 master-0 kubenswrapper[38936]: I0216 21:35:52.285707 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 16 21:35:52.313304 master-0 kubenswrapper[38936]: I0216 21:35:52.311171 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 16 21:35:52.418440 master-0 kubenswrapper[38936]: I0216 21:35:52.418375 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2a4a58c1-a2c8-40fd-9fb4-c4f0d2fc283c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b02b6471-f92d-4f6c-830a-eb036723ff0e\") pod \"openstack-cell1-galera-0\" (UID: \"eb77fc89-6e9b-438d-8fb7-c87367a747b0\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:52.418440 master-0 kubenswrapper[38936]: I0216 21:35:52.418440 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eb77fc89-6e9b-438d-8fb7-c87367a747b0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"eb77fc89-6e9b-438d-8fb7-c87367a747b0\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:52.420136 master-0 kubenswrapper[38936]: I0216 21:35:52.418510 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ngqb\" (UniqueName: \"kubernetes.io/projected/eb77fc89-6e9b-438d-8fb7-c87367a747b0-kube-api-access-4ngqb\") pod \"openstack-cell1-galera-0\" (UID: \"eb77fc89-6e9b-438d-8fb7-c87367a747b0\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:52.420136 master-0 kubenswrapper[38936]: I0216 21:35:52.418707 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb77fc89-6e9b-438d-8fb7-c87367a747b0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"eb77fc89-6e9b-438d-8fb7-c87367a747b0\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:52.420136 master-0 kubenswrapper[38936]: I0216 21:35:52.418858 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eb77fc89-6e9b-438d-8fb7-c87367a747b0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"eb77fc89-6e9b-438d-8fb7-c87367a747b0\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:52.420136 master-0 kubenswrapper[38936]: I0216 21:35:52.419207 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb77fc89-6e9b-438d-8fb7-c87367a747b0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"eb77fc89-6e9b-438d-8fb7-c87367a747b0\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:52.420136 master-0 kubenswrapper[38936]: I0216 21:35:52.419261 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb77fc89-6e9b-438d-8fb7-c87367a747b0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"eb77fc89-6e9b-438d-8fb7-c87367a747b0\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:52.420136 master-0 kubenswrapper[38936]: I0216 21:35:52.419436 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb77fc89-6e9b-438d-8fb7-c87367a747b0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"eb77fc89-6e9b-438d-8fb7-c87367a747b0\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:52.527757 master-0 kubenswrapper[38936]: I0216 21:35:52.527626 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb77fc89-6e9b-438d-8fb7-c87367a747b0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"eb77fc89-6e9b-438d-8fb7-c87367a747b0\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:52.527757 master-0 kubenswrapper[38936]: I0216 21:35:52.527738 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eb77fc89-6e9b-438d-8fb7-c87367a747b0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"eb77fc89-6e9b-438d-8fb7-c87367a747b0\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:52.528019 master-0 kubenswrapper[38936]: I0216 21:35:52.527800 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb77fc89-6e9b-438d-8fb7-c87367a747b0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"eb77fc89-6e9b-438d-8fb7-c87367a747b0\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:52.528098 master-0 kubenswrapper[38936]: I0216 21:35:52.527823 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb77fc89-6e9b-438d-8fb7-c87367a747b0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"eb77fc89-6e9b-438d-8fb7-c87367a747b0\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:52.528433 master-0 kubenswrapper[38936]: I0216 21:35:52.528413 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb77fc89-6e9b-438d-8fb7-c87367a747b0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"eb77fc89-6e9b-438d-8fb7-c87367a747b0\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:52.529052 master-0 kubenswrapper[38936]: I0216 21:35:52.528931 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2a4a58c1-a2c8-40fd-9fb4-c4f0d2fc283c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b02b6471-f92d-4f6c-830a-eb036723ff0e\") pod \"openstack-cell1-galera-0\" (UID: \"eb77fc89-6e9b-438d-8fb7-c87367a747b0\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:52.529052 master-0 kubenswrapper[38936]: I0216 21:35:52.529049 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eb77fc89-6e9b-438d-8fb7-c87367a747b0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"eb77fc89-6e9b-438d-8fb7-c87367a747b0\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:52.529471 master-0 kubenswrapper[38936]: I0216 21:35:52.529195 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ngqb\" (UniqueName: \"kubernetes.io/projected/eb77fc89-6e9b-438d-8fb7-c87367a747b0-kube-api-access-4ngqb\") pod \"openstack-cell1-galera-0\" (UID: \"eb77fc89-6e9b-438d-8fb7-c87367a747b0\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:52.529471 master-0 kubenswrapper[38936]: I0216 21:35:52.529267 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb77fc89-6e9b-438d-8fb7-c87367a747b0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"eb77fc89-6e9b-438d-8fb7-c87367a747b0\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:52.531340 master-0 kubenswrapper[38936]: I0216 21:35:52.530559 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eb77fc89-6e9b-438d-8fb7-c87367a747b0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"eb77fc89-6e9b-438d-8fb7-c87367a747b0\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:52.531340 master-0 kubenswrapper[38936]: I0216 21:35:52.530641 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb77fc89-6e9b-438d-8fb7-c87367a747b0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"eb77fc89-6e9b-438d-8fb7-c87367a747b0\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:52.531340 master-0 kubenswrapper[38936]: I0216 21:35:52.530989 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eb77fc89-6e9b-438d-8fb7-c87367a747b0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"eb77fc89-6e9b-438d-8fb7-c87367a747b0\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:52.534496 master-0 kubenswrapper[38936]: I0216 21:35:52.534105 38936 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 21:35:52.534496 master-0 kubenswrapper[38936]: I0216 21:35:52.534141 38936 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2a4a58c1-a2c8-40fd-9fb4-c4f0d2fc283c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b02b6471-f92d-4f6c-830a-eb036723ff0e\") pod \"openstack-cell1-galera-0\" (UID: \"eb77fc89-6e9b-438d-8fb7-c87367a747b0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/2bc3375a769f53fce3735a7a8f6f97db2091ff369baeed8bd689963e0aceabb1/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:52.543964 master-0 kubenswrapper[38936]: I0216 21:35:52.538538 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb77fc89-6e9b-438d-8fb7-c87367a747b0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"eb77fc89-6e9b-438d-8fb7-c87367a747b0\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:52.548043 master-0 kubenswrapper[38936]: I0216 21:35:52.547934 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb77fc89-6e9b-438d-8fb7-c87367a747b0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"eb77fc89-6e9b-438d-8fb7-c87367a747b0\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:52.552320 master-0 kubenswrapper[38936]: I0216 21:35:52.551912 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ngqb\" (UniqueName: \"kubernetes.io/projected/eb77fc89-6e9b-438d-8fb7-c87367a747b0-kube-api-access-4ngqb\") pod \"openstack-cell1-galera-0\" (UID: \"eb77fc89-6e9b-438d-8fb7-c87367a747b0\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:53.073609 master-0 kubenswrapper[38936]: I0216 21:35:53.073539 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ad754d15-ec57-4eb9-ab6b-d10b0e15d540\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f22ebb46-861f-4ee8-a04e-bf15fa80d733\") pod \"openstack-galera-0\" (UID: \"5fc61990-a712-4046-925c-a18d2a0b34a5\") " pod="openstack/openstack-galera-0"
Feb 16 21:35:53.225862 master-0 kubenswrapper[38936]: I0216 21:35:53.225783 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 16 21:35:54.090641 master-0 kubenswrapper[38936]: I0216 21:35:54.090559 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2a4a58c1-a2c8-40fd-9fb4-c4f0d2fc283c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b02b6471-f92d-4f6c-830a-eb036723ff0e\") pod \"openstack-cell1-galera-0\" (UID: \"eb77fc89-6e9b-438d-8fb7-c87367a747b0\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:54.127874 master-0 kubenswrapper[38936]: I0216 21:35:54.127794 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 16 21:35:55.781355 master-0 kubenswrapper[38936]: I0216 21:35:55.780005 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zr5cs"]
Feb 16 21:35:55.782344 master-0 kubenswrapper[38936]: I0216 21:35:55.782178 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zr5cs"
Feb 16 21:35:55.786462 master-0 kubenswrapper[38936]: I0216 21:35:55.786387 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Feb 16 21:35:55.786789 master-0 kubenswrapper[38936]: I0216 21:35:55.786672 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Feb 16 21:35:55.805017 master-0 kubenswrapper[38936]: I0216 21:35:55.804935 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-lhsv6"]
Feb 16 21:35:55.808278 master-0 kubenswrapper[38936]: I0216 21:35:55.808224 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-lhsv6"
Feb 16 21:35:55.845605 master-0 kubenswrapper[38936]: I0216 21:35:55.830178 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d6789d08-fc97-4c56-a8a4-82c131474c22-var-run\") pod \"ovn-controller-ovs-lhsv6\" (UID: \"d6789d08-fc97-4c56-a8a4-82c131474c22\") " pod="openstack/ovn-controller-ovs-lhsv6"
Feb 16 21:35:55.845605 master-0 kubenswrapper[38936]: I0216 21:35:55.830235 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d6789d08-fc97-4c56-a8a4-82c131474c22-var-log\") pod \"ovn-controller-ovs-lhsv6\" (UID: \"d6789d08-fc97-4c56-a8a4-82c131474c22\") " pod="openstack/ovn-controller-ovs-lhsv6"
Feb 16 21:35:55.845605 master-0 kubenswrapper[38936]: I0216 21:35:55.830265 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/018762db-2c9f-40c4-b05a-52df963c4376-ovn-controller-tls-certs\") pod \"ovn-controller-zr5cs\" (UID: \"018762db-2c9f-40c4-b05a-52df963c4376\") " pod="openstack/ovn-controller-zr5cs"
Feb 16 21:35:55.845605 master-0 kubenswrapper[38936]: I0216 21:35:55.830517 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/018762db-2c9f-40c4-b05a-52df963c4376-var-run\") pod \"ovn-controller-zr5cs\" (UID: \"018762db-2c9f-40c4-b05a-52df963c4376\") " pod="openstack/ovn-controller-zr5cs"
Feb 16 21:35:55.845605 master-0 kubenswrapper[38936]: I0216 21:35:55.830668 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d6789d08-fc97-4c56-a8a4-82c131474c22-var-lib\") pod \"ovn-controller-ovs-lhsv6\" (UID: \"d6789d08-fc97-4c56-a8a4-82c131474c22\") " pod="openstack/ovn-controller-ovs-lhsv6"
Feb 16 21:35:55.845605 master-0 kubenswrapper[38936]: I0216 21:35:55.831066 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/018762db-2c9f-40c4-b05a-52df963c4376-var-run-ovn\") pod \"ovn-controller-zr5cs\" (UID: \"018762db-2c9f-40c4-b05a-52df963c4376\") " pod="openstack/ovn-controller-zr5cs"
Feb 16 21:35:55.845605 master-0 kubenswrapper[38936]: I0216 21:35:55.831129 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6789d08-fc97-4c56-a8a4-82c131474c22-scripts\") pod \"ovn-controller-ovs-lhsv6\" (UID: \"d6789d08-fc97-4c56-a8a4-82c131474c22\") " pod="openstack/ovn-controller-ovs-lhsv6"
Feb 16 21:35:55.845605 master-0 kubenswrapper[38936]: I0216 21:35:55.831203 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018762db-2c9f-40c4-b05a-52df963c4376-combined-ca-bundle\") pod \"ovn-controller-zr5cs\" (UID: \"018762db-2c9f-40c4-b05a-52df963c4376\") " pod="openstack/ovn-controller-zr5cs"
Feb 16 21:35:55.845605 master-0 kubenswrapper[38936]: I0216 21:35:55.831267 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/018762db-2c9f-40c4-b05a-52df963c4376-scripts\") pod \"ovn-controller-zr5cs\" (UID: \"018762db-2c9f-40c4-b05a-52df963c4376\") " pod="openstack/ovn-controller-zr5cs"
Feb 16 21:35:55.845605 master-0 kubenswrapper[38936]: I0216 21:35:55.831484 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h6lp\" (UniqueName: \"kubernetes.io/projected/d6789d08-fc97-4c56-a8a4-82c131474c22-kube-api-access-8h6lp\") pod
\"ovn-controller-ovs-lhsv6\" (UID: \"d6789d08-fc97-4c56-a8a4-82c131474c22\") " pod="openstack/ovn-controller-ovs-lhsv6" Feb 16 21:35:55.845605 master-0 kubenswrapper[38936]: I0216 21:35:55.831700 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/018762db-2c9f-40c4-b05a-52df963c4376-var-log-ovn\") pod \"ovn-controller-zr5cs\" (UID: \"018762db-2c9f-40c4-b05a-52df963c4376\") " pod="openstack/ovn-controller-zr5cs" Feb 16 21:35:55.845605 master-0 kubenswrapper[38936]: I0216 21:35:55.831763 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d6789d08-fc97-4c56-a8a4-82c131474c22-etc-ovs\") pod \"ovn-controller-ovs-lhsv6\" (UID: \"d6789d08-fc97-4c56-a8a4-82c131474c22\") " pod="openstack/ovn-controller-ovs-lhsv6" Feb 16 21:35:55.845605 master-0 kubenswrapper[38936]: I0216 21:35:55.831819 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnccd\" (UniqueName: \"kubernetes.io/projected/018762db-2c9f-40c4-b05a-52df963c4376-kube-api-access-bnccd\") pod \"ovn-controller-zr5cs\" (UID: \"018762db-2c9f-40c4-b05a-52df963c4376\") " pod="openstack/ovn-controller-zr5cs" Feb 16 21:35:55.845605 master-0 kubenswrapper[38936]: I0216 21:35:55.844122 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zr5cs"] Feb 16 21:35:55.917334 master-0 kubenswrapper[38936]: I0216 21:35:55.914365 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-lhsv6"] Feb 16 21:35:55.939076 master-0 kubenswrapper[38936]: I0216 21:35:55.939018 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/018762db-2c9f-40c4-b05a-52df963c4376-var-run-ovn\") pod \"ovn-controller-zr5cs\" (UID: 
\"018762db-2c9f-40c4-b05a-52df963c4376\") " pod="openstack/ovn-controller-zr5cs" Feb 16 21:35:55.939076 master-0 kubenswrapper[38936]: I0216 21:35:55.939047 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/018762db-2c9f-40c4-b05a-52df963c4376-var-run-ovn\") pod \"ovn-controller-zr5cs\" (UID: \"018762db-2c9f-40c4-b05a-52df963c4376\") " pod="openstack/ovn-controller-zr5cs" Feb 16 21:35:55.939391 master-0 kubenswrapper[38936]: I0216 21:35:55.939112 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6789d08-fc97-4c56-a8a4-82c131474c22-scripts\") pod \"ovn-controller-ovs-lhsv6\" (UID: \"d6789d08-fc97-4c56-a8a4-82c131474c22\") " pod="openstack/ovn-controller-ovs-lhsv6" Feb 16 21:35:55.939391 master-0 kubenswrapper[38936]: I0216 21:35:55.939144 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018762db-2c9f-40c4-b05a-52df963c4376-combined-ca-bundle\") pod \"ovn-controller-zr5cs\" (UID: \"018762db-2c9f-40c4-b05a-52df963c4376\") " pod="openstack/ovn-controller-zr5cs" Feb 16 21:35:55.939391 master-0 kubenswrapper[38936]: I0216 21:35:55.939175 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/018762db-2c9f-40c4-b05a-52df963c4376-scripts\") pod \"ovn-controller-zr5cs\" (UID: \"018762db-2c9f-40c4-b05a-52df963c4376\") " pod="openstack/ovn-controller-zr5cs" Feb 16 21:35:55.939391 master-0 kubenswrapper[38936]: I0216 21:35:55.939198 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h6lp\" (UniqueName: \"kubernetes.io/projected/d6789d08-fc97-4c56-a8a4-82c131474c22-kube-api-access-8h6lp\") pod \"ovn-controller-ovs-lhsv6\" (UID: \"d6789d08-fc97-4c56-a8a4-82c131474c22\") " 
pod="openstack/ovn-controller-ovs-lhsv6" Feb 16 21:35:55.939861 master-0 kubenswrapper[38936]: I0216 21:35:55.939838 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/018762db-2c9f-40c4-b05a-52df963c4376-var-log-ovn\") pod \"ovn-controller-zr5cs\" (UID: \"018762db-2c9f-40c4-b05a-52df963c4376\") " pod="openstack/ovn-controller-zr5cs" Feb 16 21:35:55.939938 master-0 kubenswrapper[38936]: I0216 21:35:55.939904 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d6789d08-fc97-4c56-a8a4-82c131474c22-etc-ovs\") pod \"ovn-controller-ovs-lhsv6\" (UID: \"d6789d08-fc97-4c56-a8a4-82c131474c22\") " pod="openstack/ovn-controller-ovs-lhsv6" Feb 16 21:35:55.939938 master-0 kubenswrapper[38936]: I0216 21:35:55.939932 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnccd\" (UniqueName: \"kubernetes.io/projected/018762db-2c9f-40c4-b05a-52df963c4376-kube-api-access-bnccd\") pod \"ovn-controller-zr5cs\" (UID: \"018762db-2c9f-40c4-b05a-52df963c4376\") " pod="openstack/ovn-controller-zr5cs" Feb 16 21:35:55.940369 master-0 kubenswrapper[38936]: I0216 21:35:55.940330 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/018762db-2c9f-40c4-b05a-52df963c4376-var-log-ovn\") pod \"ovn-controller-zr5cs\" (UID: \"018762db-2c9f-40c4-b05a-52df963c4376\") " pod="openstack/ovn-controller-zr5cs" Feb 16 21:35:55.940597 master-0 kubenswrapper[38936]: I0216 21:35:55.940574 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d6789d08-fc97-4c56-a8a4-82c131474c22-var-run\") pod \"ovn-controller-ovs-lhsv6\" (UID: \"d6789d08-fc97-4c56-a8a4-82c131474c22\") " pod="openstack/ovn-controller-ovs-lhsv6" Feb 16 21:35:55.940664 master-0 
kubenswrapper[38936]: I0216 21:35:55.940571 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d6789d08-fc97-4c56-a8a4-82c131474c22-etc-ovs\") pod \"ovn-controller-ovs-lhsv6\" (UID: \"d6789d08-fc97-4c56-a8a4-82c131474c22\") " pod="openstack/ovn-controller-ovs-lhsv6" Feb 16 21:35:55.940734 master-0 kubenswrapper[38936]: I0216 21:35:55.940618 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d6789d08-fc97-4c56-a8a4-82c131474c22-var-log\") pod \"ovn-controller-ovs-lhsv6\" (UID: \"d6789d08-fc97-4c56-a8a4-82c131474c22\") " pod="openstack/ovn-controller-ovs-lhsv6" Feb 16 21:35:55.940828 master-0 kubenswrapper[38936]: I0216 21:35:55.940806 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d6789d08-fc97-4c56-a8a4-82c131474c22-var-log\") pod \"ovn-controller-ovs-lhsv6\" (UID: \"d6789d08-fc97-4c56-a8a4-82c131474c22\") " pod="openstack/ovn-controller-ovs-lhsv6" Feb 16 21:35:55.940884 master-0 kubenswrapper[38936]: I0216 21:35:55.940818 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d6789d08-fc97-4c56-a8a4-82c131474c22-var-run\") pod \"ovn-controller-ovs-lhsv6\" (UID: \"d6789d08-fc97-4c56-a8a4-82c131474c22\") " pod="openstack/ovn-controller-ovs-lhsv6" Feb 16 21:35:55.940935 master-0 kubenswrapper[38936]: I0216 21:35:55.940833 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/018762db-2c9f-40c4-b05a-52df963c4376-ovn-controller-tls-certs\") pod \"ovn-controller-zr5cs\" (UID: \"018762db-2c9f-40c4-b05a-52df963c4376\") " pod="openstack/ovn-controller-zr5cs" Feb 16 21:35:55.941114 master-0 kubenswrapper[38936]: I0216 21:35:55.941080 38936 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/018762db-2c9f-40c4-b05a-52df963c4376-var-run\") pod \"ovn-controller-zr5cs\" (UID: \"018762db-2c9f-40c4-b05a-52df963c4376\") " pod="openstack/ovn-controller-zr5cs" Feb 16 21:35:55.941220 master-0 kubenswrapper[38936]: I0216 21:35:55.941166 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d6789d08-fc97-4c56-a8a4-82c131474c22-var-lib\") pod \"ovn-controller-ovs-lhsv6\" (UID: \"d6789d08-fc97-4c56-a8a4-82c131474c22\") " pod="openstack/ovn-controller-ovs-lhsv6" Feb 16 21:35:55.941637 master-0 kubenswrapper[38936]: I0216 21:35:55.941359 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/018762db-2c9f-40c4-b05a-52df963c4376-scripts\") pod \"ovn-controller-zr5cs\" (UID: \"018762db-2c9f-40c4-b05a-52df963c4376\") " pod="openstack/ovn-controller-zr5cs" Feb 16 21:35:55.941637 master-0 kubenswrapper[38936]: I0216 21:35:55.941582 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d6789d08-fc97-4c56-a8a4-82c131474c22-var-lib\") pod \"ovn-controller-ovs-lhsv6\" (UID: \"d6789d08-fc97-4c56-a8a4-82c131474c22\") " pod="openstack/ovn-controller-ovs-lhsv6" Feb 16 21:35:55.941637 master-0 kubenswrapper[38936]: I0216 21:35:55.941617 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/018762db-2c9f-40c4-b05a-52df963c4376-var-run\") pod \"ovn-controller-zr5cs\" (UID: \"018762db-2c9f-40c4-b05a-52df963c4376\") " pod="openstack/ovn-controller-zr5cs" Feb 16 21:35:55.943033 master-0 kubenswrapper[38936]: I0216 21:35:55.942471 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6789d08-fc97-4c56-a8a4-82c131474c22-scripts\") 
pod \"ovn-controller-ovs-lhsv6\" (UID: \"d6789d08-fc97-4c56-a8a4-82c131474c22\") " pod="openstack/ovn-controller-ovs-lhsv6" Feb 16 21:35:55.943831 master-0 kubenswrapper[38936]: I0216 21:35:55.943781 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018762db-2c9f-40c4-b05a-52df963c4376-combined-ca-bundle\") pod \"ovn-controller-zr5cs\" (UID: \"018762db-2c9f-40c4-b05a-52df963c4376\") " pod="openstack/ovn-controller-zr5cs" Feb 16 21:35:55.948608 master-0 kubenswrapper[38936]: I0216 21:35:55.948566 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/018762db-2c9f-40c4-b05a-52df963c4376-ovn-controller-tls-certs\") pod \"ovn-controller-zr5cs\" (UID: \"018762db-2c9f-40c4-b05a-52df963c4376\") " pod="openstack/ovn-controller-zr5cs" Feb 16 21:35:55.957172 master-0 kubenswrapper[38936]: I0216 21:35:55.957131 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnccd\" (UniqueName: \"kubernetes.io/projected/018762db-2c9f-40c4-b05a-52df963c4376-kube-api-access-bnccd\") pod \"ovn-controller-zr5cs\" (UID: \"018762db-2c9f-40c4-b05a-52df963c4376\") " pod="openstack/ovn-controller-zr5cs" Feb 16 21:35:55.957369 master-0 kubenswrapper[38936]: I0216 21:35:55.957313 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h6lp\" (UniqueName: \"kubernetes.io/projected/d6789d08-fc97-4c56-a8a4-82c131474c22-kube-api-access-8h6lp\") pod \"ovn-controller-ovs-lhsv6\" (UID: \"d6789d08-fc97-4c56-a8a4-82c131474c22\") " pod="openstack/ovn-controller-ovs-lhsv6" Feb 16 21:35:56.138844 master-0 kubenswrapper[38936]: I0216 21:35:56.134408 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zr5cs" Feb 16 21:35:56.158845 master-0 kubenswrapper[38936]: I0216 21:35:56.158751 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-lhsv6" Feb 16 21:35:56.906690 master-0 kubenswrapper[38936]: I0216 21:35:56.906316 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 21:35:56.909864 master-0 kubenswrapper[38936]: I0216 21:35:56.908718 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:56.913456 master-0 kubenswrapper[38936]: I0216 21:35:56.912804 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 16 21:35:56.913684 master-0 kubenswrapper[38936]: I0216 21:35:56.913634 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 16 21:35:56.916571 master-0 kubenswrapper[38936]: I0216 21:35:56.916521 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 16 21:35:56.916826 master-0 kubenswrapper[38936]: I0216 21:35:56.916802 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 16 21:35:56.918920 master-0 kubenswrapper[38936]: I0216 21:35:56.918876 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 21:35:56.965636 master-0 kubenswrapper[38936]: I0216 21:35:56.965561 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-880dfc00-53c3-4211-93a9-12a81d6ea938\" (UniqueName: \"kubernetes.io/csi/topolvm.io^02ff5906-72aa-4095-8827-76685e3de9b0\") pod \"ovsdbserver-nb-0\" (UID: \"5b7082bc-9e00-4676-a7bf-4b6d03d132f9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:56.965636 master-0 kubenswrapper[38936]: I0216 21:35:56.965621 38936 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5b7082bc-9e00-4676-a7bf-4b6d03d132f9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5b7082bc-9e00-4676-a7bf-4b6d03d132f9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:56.966113 master-0 kubenswrapper[38936]: I0216 21:35:56.965952 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b7082bc-9e00-4676-a7bf-4b6d03d132f9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5b7082bc-9e00-4676-a7bf-4b6d03d132f9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:56.966206 master-0 kubenswrapper[38936]: I0216 21:35:56.966149 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b7082bc-9e00-4676-a7bf-4b6d03d132f9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5b7082bc-9e00-4676-a7bf-4b6d03d132f9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:56.966298 master-0 kubenswrapper[38936]: I0216 21:35:56.966267 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4td76\" (UniqueName: \"kubernetes.io/projected/5b7082bc-9e00-4676-a7bf-4b6d03d132f9-kube-api-access-4td76\") pod \"ovsdbserver-nb-0\" (UID: \"5b7082bc-9e00-4676-a7bf-4b6d03d132f9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:56.966338 master-0 kubenswrapper[38936]: I0216 21:35:56.966319 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b7082bc-9e00-4676-a7bf-4b6d03d132f9-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5b7082bc-9e00-4676-a7bf-4b6d03d132f9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:56.966454 master-0 kubenswrapper[38936]: I0216 21:35:56.966429 
38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b7082bc-9e00-4676-a7bf-4b6d03d132f9-config\") pod \"ovsdbserver-nb-0\" (UID: \"5b7082bc-9e00-4676-a7bf-4b6d03d132f9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:56.966492 master-0 kubenswrapper[38936]: I0216 21:35:56.966457 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b7082bc-9e00-4676-a7bf-4b6d03d132f9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5b7082bc-9e00-4676-a7bf-4b6d03d132f9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:57.069930 master-0 kubenswrapper[38936]: I0216 21:35:57.069860 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b7082bc-9e00-4676-a7bf-4b6d03d132f9-config\") pod \"ovsdbserver-nb-0\" (UID: \"5b7082bc-9e00-4676-a7bf-4b6d03d132f9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:57.070304 master-0 kubenswrapper[38936]: I0216 21:35:57.070048 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b7082bc-9e00-4676-a7bf-4b6d03d132f9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5b7082bc-9e00-4676-a7bf-4b6d03d132f9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:57.070304 master-0 kubenswrapper[38936]: I0216 21:35:57.070107 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-880dfc00-53c3-4211-93a9-12a81d6ea938\" (UniqueName: \"kubernetes.io/csi/topolvm.io^02ff5906-72aa-4095-8827-76685e3de9b0\") pod \"ovsdbserver-nb-0\" (UID: \"5b7082bc-9e00-4676-a7bf-4b6d03d132f9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:57.070304 master-0 kubenswrapper[38936]: I0216 21:35:57.070135 38936 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5b7082bc-9e00-4676-a7bf-4b6d03d132f9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5b7082bc-9e00-4676-a7bf-4b6d03d132f9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:57.070304 master-0 kubenswrapper[38936]: I0216 21:35:57.070189 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b7082bc-9e00-4676-a7bf-4b6d03d132f9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5b7082bc-9e00-4676-a7bf-4b6d03d132f9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:57.070304 master-0 kubenswrapper[38936]: I0216 21:35:57.070250 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b7082bc-9e00-4676-a7bf-4b6d03d132f9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5b7082bc-9e00-4676-a7bf-4b6d03d132f9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:57.070304 master-0 kubenswrapper[38936]: I0216 21:35:57.070283 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4td76\" (UniqueName: \"kubernetes.io/projected/5b7082bc-9e00-4676-a7bf-4b6d03d132f9-kube-api-access-4td76\") pod \"ovsdbserver-nb-0\" (UID: \"5b7082bc-9e00-4676-a7bf-4b6d03d132f9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:57.070531 master-0 kubenswrapper[38936]: I0216 21:35:57.070324 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b7082bc-9e00-4676-a7bf-4b6d03d132f9-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5b7082bc-9e00-4676-a7bf-4b6d03d132f9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:57.070845 master-0 kubenswrapper[38936]: I0216 21:35:57.070803 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5b7082bc-9e00-4676-a7bf-4b6d03d132f9-config\") pod \"ovsdbserver-nb-0\" (UID: \"5b7082bc-9e00-4676-a7bf-4b6d03d132f9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:57.071152 master-0 kubenswrapper[38936]: I0216 21:35:57.071127 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5b7082bc-9e00-4676-a7bf-4b6d03d132f9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5b7082bc-9e00-4676-a7bf-4b6d03d132f9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:57.073489 master-0 kubenswrapper[38936]: I0216 21:35:57.073453 38936 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:35:57.073560 master-0 kubenswrapper[38936]: I0216 21:35:57.073488 38936 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-880dfc00-53c3-4211-93a9-12a81d6ea938\" (UniqueName: \"kubernetes.io/csi/topolvm.io^02ff5906-72aa-4095-8827-76685e3de9b0\") pod \"ovsdbserver-nb-0\" (UID: \"5b7082bc-9e00-4676-a7bf-4b6d03d132f9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/05b3d678c3eb9d3b754aaeedf4020af18bbfdcb756792d1e49fa3e5abcf7efa5/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:57.075888 master-0 kubenswrapper[38936]: I0216 21:35:57.075226 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b7082bc-9e00-4676-a7bf-4b6d03d132f9-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5b7082bc-9e00-4676-a7bf-4b6d03d132f9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:57.075888 master-0 kubenswrapper[38936]: I0216 21:35:57.075455 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b7082bc-9e00-4676-a7bf-4b6d03d132f9-scripts\") pod \"ovsdbserver-nb-0\" (UID: 
\"5b7082bc-9e00-4676-a7bf-4b6d03d132f9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:57.075888 master-0 kubenswrapper[38936]: I0216 21:35:57.075508 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b7082bc-9e00-4676-a7bf-4b6d03d132f9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5b7082bc-9e00-4676-a7bf-4b6d03d132f9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:57.075888 master-0 kubenswrapper[38936]: I0216 21:35:57.075560 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b7082bc-9e00-4676-a7bf-4b6d03d132f9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5b7082bc-9e00-4676-a7bf-4b6d03d132f9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:57.090761 master-0 kubenswrapper[38936]: I0216 21:35:57.090569 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4td76\" (UniqueName: \"kubernetes.io/projected/5b7082bc-9e00-4676-a7bf-4b6d03d132f9-kube-api-access-4td76\") pod \"ovsdbserver-nb-0\" (UID: \"5b7082bc-9e00-4676-a7bf-4b6d03d132f9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:58.732074 master-0 kubenswrapper[38936]: I0216 21:35:58.731981 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-880dfc00-53c3-4211-93a9-12a81d6ea938\" (UniqueName: \"kubernetes.io/csi/topolvm.io^02ff5906-72aa-4095-8827-76685e3de9b0\") pod \"ovsdbserver-nb-0\" (UID: \"5b7082bc-9e00-4676-a7bf-4b6d03d132f9\") " pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:58.748089 master-0 kubenswrapper[38936]: I0216 21:35:58.747756 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 21:35:59.682061 master-0 kubenswrapper[38936]: I0216 21:35:59.681976 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 21:35:59.684155 master-0 kubenswrapper[38936]: I0216 21:35:59.684107 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 21:35:59.687688 master-0 kubenswrapper[38936]: I0216 21:35:59.687622 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 16 21:35:59.687888 master-0 kubenswrapper[38936]: I0216 21:35:59.687806 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 16 21:35:59.689256 master-0 kubenswrapper[38936]: I0216 21:35:59.688011 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 16 21:35:59.834151 master-0 kubenswrapper[38936]: I0216 21:35:59.834074 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 21:36:00.048450 master-0 kubenswrapper[38936]: I0216 21:36:00.048296 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/412a78ab-d40f-4548-b8db-4eb4462fb5e9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"412a78ab-d40f-4548-b8db-4eb4462fb5e9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:00.048450 master-0 kubenswrapper[38936]: I0216 21:36:00.048405 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/412a78ab-d40f-4548-b8db-4eb4462fb5e9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"412a78ab-d40f-4548-b8db-4eb4462fb5e9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:00.048450 master-0 kubenswrapper[38936]: I0216 
21:36:00.048462 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/412a78ab-d40f-4548-b8db-4eb4462fb5e9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"412a78ab-d40f-4548-b8db-4eb4462fb5e9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:00.049027 master-0 kubenswrapper[38936]: I0216 21:36:00.048516 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d332b892-bd00-45c7-90c5-52b7bdfe0152\" (UniqueName: \"kubernetes.io/csi/topolvm.io^15d57321-9603-4556-800e-0c531e810b74\") pod \"ovsdbserver-sb-0\" (UID: \"412a78ab-d40f-4548-b8db-4eb4462fb5e9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:00.049027 master-0 kubenswrapper[38936]: I0216 21:36:00.048637 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/412a78ab-d40f-4548-b8db-4eb4462fb5e9-config\") pod \"ovsdbserver-sb-0\" (UID: \"412a78ab-d40f-4548-b8db-4eb4462fb5e9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:00.049027 master-0 kubenswrapper[38936]: I0216 21:36:00.048872 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/412a78ab-d40f-4548-b8db-4eb4462fb5e9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"412a78ab-d40f-4548-b8db-4eb4462fb5e9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:00.049257 master-0 kubenswrapper[38936]: I0216 21:36:00.049212 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccwd5\" (UniqueName: \"kubernetes.io/projected/412a78ab-d40f-4548-b8db-4eb4462fb5e9-kube-api-access-ccwd5\") pod \"ovsdbserver-sb-0\" (UID: \"412a78ab-d40f-4548-b8db-4eb4462fb5e9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:00.049337 master-0 kubenswrapper[38936]: 
I0216 21:36:00.049256 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/412a78ab-d40f-4548-b8db-4eb4462fb5e9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"412a78ab-d40f-4548-b8db-4eb4462fb5e9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:00.151440 master-0 kubenswrapper[38936]: I0216 21:36:00.151363 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/412a78ab-d40f-4548-b8db-4eb4462fb5e9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"412a78ab-d40f-4548-b8db-4eb4462fb5e9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:00.151440 master-0 kubenswrapper[38936]: I0216 21:36:00.151441 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/412a78ab-d40f-4548-b8db-4eb4462fb5e9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"412a78ab-d40f-4548-b8db-4eb4462fb5e9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:00.151787 master-0 kubenswrapper[38936]: I0216 21:36:00.151470 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/412a78ab-d40f-4548-b8db-4eb4462fb5e9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"412a78ab-d40f-4548-b8db-4eb4462fb5e9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:00.151787 master-0 kubenswrapper[38936]: I0216 21:36:00.151502 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d332b892-bd00-45c7-90c5-52b7bdfe0152\" (UniqueName: \"kubernetes.io/csi/topolvm.io^15d57321-9603-4556-800e-0c531e810b74\") pod \"ovsdbserver-sb-0\" (UID: \"412a78ab-d40f-4548-b8db-4eb4462fb5e9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:00.151787 master-0 kubenswrapper[38936]: I0216 21:36:00.151539 38936 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/412a78ab-d40f-4548-b8db-4eb4462fb5e9-config\") pod \"ovsdbserver-sb-0\" (UID: \"412a78ab-d40f-4548-b8db-4eb4462fb5e9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:00.151787 master-0 kubenswrapper[38936]: I0216 21:36:00.151578 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/412a78ab-d40f-4548-b8db-4eb4462fb5e9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"412a78ab-d40f-4548-b8db-4eb4462fb5e9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:00.151787 master-0 kubenswrapper[38936]: I0216 21:36:00.151660 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccwd5\" (UniqueName: \"kubernetes.io/projected/412a78ab-d40f-4548-b8db-4eb4462fb5e9-kube-api-access-ccwd5\") pod \"ovsdbserver-sb-0\" (UID: \"412a78ab-d40f-4548-b8db-4eb4462fb5e9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:00.151787 master-0 kubenswrapper[38936]: I0216 21:36:00.151678 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/412a78ab-d40f-4548-b8db-4eb4462fb5e9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"412a78ab-d40f-4548-b8db-4eb4462fb5e9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:00.152834 master-0 kubenswrapper[38936]: I0216 21:36:00.152790 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/412a78ab-d40f-4548-b8db-4eb4462fb5e9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"412a78ab-d40f-4548-b8db-4eb4462fb5e9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:00.154397 master-0 kubenswrapper[38936]: I0216 21:36:00.153796 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/412a78ab-d40f-4548-b8db-4eb4462fb5e9-config\") pod \"ovsdbserver-sb-0\" (UID: \"412a78ab-d40f-4548-b8db-4eb4462fb5e9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:00.154699 master-0 kubenswrapper[38936]: I0216 21:36:00.154591 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/412a78ab-d40f-4548-b8db-4eb4462fb5e9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"412a78ab-d40f-4548-b8db-4eb4462fb5e9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:00.155417 master-0 kubenswrapper[38936]: I0216 21:36:00.155370 38936 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:36:00.155490 master-0 kubenswrapper[38936]: I0216 21:36:00.155438 38936 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d332b892-bd00-45c7-90c5-52b7bdfe0152\" (UniqueName: \"kubernetes.io/csi/topolvm.io^15d57321-9603-4556-800e-0c531e810b74\") pod \"ovsdbserver-sb-0\" (UID: \"412a78ab-d40f-4548-b8db-4eb4462fb5e9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/3029dd55e45d9f8d2ddf4f772d8d7444e81fad94d5cc5bd11c5627079a9af62d/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:00.155745 master-0 kubenswrapper[38936]: I0216 21:36:00.155703 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/412a78ab-d40f-4548-b8db-4eb4462fb5e9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"412a78ab-d40f-4548-b8db-4eb4462fb5e9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:00.156448 master-0 kubenswrapper[38936]: I0216 21:36:00.156413 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/412a78ab-d40f-4548-b8db-4eb4462fb5e9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"412a78ab-d40f-4548-b8db-4eb4462fb5e9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:00.174925 master-0 kubenswrapper[38936]: I0216 21:36:00.174866 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/412a78ab-d40f-4548-b8db-4eb4462fb5e9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"412a78ab-d40f-4548-b8db-4eb4462fb5e9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:00.370257 master-0 kubenswrapper[38936]: I0216 21:36:00.368492 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccwd5\" (UniqueName: \"kubernetes.io/projected/412a78ab-d40f-4548-b8db-4eb4462fb5e9-kube-api-access-ccwd5\") pod \"ovsdbserver-sb-0\" (UID: \"412a78ab-d40f-4548-b8db-4eb4462fb5e9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:01.781314 master-0 kubenswrapper[38936]: I0216 21:36:01.781254 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d332b892-bd00-45c7-90c5-52b7bdfe0152\" (UniqueName: \"kubernetes.io/csi/topolvm.io^15d57321-9603-4556-800e-0c531e810b74\") pod \"ovsdbserver-sb-0\" (UID: \"412a78ab-d40f-4548-b8db-4eb4462fb5e9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:01.803974 master-0 kubenswrapper[38936]: I0216 21:36:01.803897 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 21:36:04.241293 master-0 kubenswrapper[38936]: I0216 21:36:04.241135 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 21:36:04.601167 master-0 kubenswrapper[38936]: W0216 21:36:04.601103 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb77fc89_6e9b_438d_8fb7_c87367a747b0.slice/crio-577caadf460f06678aee37aa0c93cd14c97f7de97c521be04eaa76ff1b84fc14 WatchSource:0}: Error finding container 577caadf460f06678aee37aa0c93cd14c97f7de97c521be04eaa76ff1b84fc14: Status 404 returned error can't find the container with id 577caadf460f06678aee37aa0c93cd14c97f7de97c521be04eaa76ff1b84fc14 Feb 16 21:36:04.728289 master-0 kubenswrapper[38936]: I0216 21:36:04.728234 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"eb77fc89-6e9b-438d-8fb7-c87367a747b0","Type":"ContainerStarted","Data":"577caadf460f06678aee37aa0c93cd14c97f7de97c521be04eaa76ff1b84fc14"} Feb 16 21:36:05.558058 master-0 kubenswrapper[38936]: I0216 21:36:05.557669 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zr5cs"] Feb 16 21:36:05.587895 master-0 kubenswrapper[38936]: W0216 21:36:05.587849 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3ae6146_0a46_4058_a938_0dba04b24a1f.slice/crio-0d59e3a2b9ffce2b49d8182c736872a7a428988f2188637a4c1919a964c50447 WatchSource:0}: Error finding container 0d59e3a2b9ffce2b49d8182c736872a7a428988f2188637a4c1919a964c50447: Status 404 returned error can't find the container with id 0d59e3a2b9ffce2b49d8182c736872a7a428988f2188637a4c1919a964c50447 Feb 16 21:36:05.612748 master-0 kubenswrapper[38936]: I0216 21:36:05.590833 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/rabbitmq-server-0"] Feb 16 21:36:05.612748 master-0 kubenswrapper[38936]: I0216 21:36:05.603014 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 16 21:36:05.747670 master-0 kubenswrapper[38936]: I0216 21:36:05.747580 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zr5cs" event={"ID":"018762db-2c9f-40c4-b05a-52df963c4376","Type":"ContainerStarted","Data":"a85f0ac00172ae5e74c0c686c958dbfeaa5b1d85cbfc748472f1ad4db96323d3"} Feb 16 21:36:05.749859 master-0 kubenswrapper[38936]: I0216 21:36:05.749815 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a3ae6146-0a46-4058-a938-0dba04b24a1f","Type":"ContainerStarted","Data":"0d59e3a2b9ffce2b49d8182c736872a7a428988f2188637a4c1919a964c50447"} Feb 16 21:36:05.751939 master-0 kubenswrapper[38936]: I0216 21:36:05.751898 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5fc61990-a712-4046-925c-a18d2a0b34a5","Type":"ContainerStarted","Data":"d53550fb05ee6b71ebdf5161e912b4c85f48452911338f9f802df9f15f12c3ad"} Feb 16 21:36:05.753475 master-0 kubenswrapper[38936]: I0216 21:36:05.753435 38936 generic.go:334] "Generic (PLEG): container finished" podID="ce0bed6a-2010-497e-bea2-c4c4d493300e" containerID="4c03d45899d0ef8ac972adac04b08007f8928be406bc46ed1b285acbf94b9328" exitCode=0 Feb 16 21:36:05.753535 master-0 kubenswrapper[38936]: I0216 21:36:05.753487 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-5fq4v" event={"ID":"ce0bed6a-2010-497e-bea2-c4c4d493300e","Type":"ContainerDied","Data":"4c03d45899d0ef8ac972adac04b08007f8928be406bc46ed1b285acbf94b9328"} Feb 16 21:36:05.797774 master-0 kubenswrapper[38936]: I0216 21:36:05.791844 38936 generic.go:334] "Generic (PLEG): container finished" podID="76e203cf-4653-455c-beee-c382bec17645" 
containerID="7df5f80ef13396018312ad843a096ee64ff72cf937a3b5c532ab76a05de6639d" exitCode=0 Feb 16 21:36:05.797774 master-0 kubenswrapper[38936]: I0216 21:36:05.791930 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-lmg4l" event={"ID":"76e203cf-4653-455c-beee-c382bec17645","Type":"ContainerDied","Data":"7df5f80ef13396018312ad843a096ee64ff72cf937a3b5c532ab76a05de6639d"} Feb 16 21:36:05.800850 master-0 kubenswrapper[38936]: I0216 21:36:05.800534 38936 generic.go:334] "Generic (PLEG): container finished" podID="99c6bec1-e16d-433a-bb6c-ccad436d357f" containerID="71f495c73b14429fb905b3db69967f1d73b7be485e2bbe21c139c7ff147624c7" exitCode=0 Feb 16 21:36:05.800850 master-0 kubenswrapper[38936]: I0216 21:36:05.800715 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d78499c-fjmds" event={"ID":"99c6bec1-e16d-433a-bb6c-ccad436d357f","Type":"ContainerDied","Data":"71f495c73b14429fb905b3db69967f1d73b7be485e2bbe21c139c7ff147624c7"} Feb 16 21:36:05.808755 master-0 kubenswrapper[38936]: I0216 21:36:05.808358 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-lhsv6"] Feb 16 21:36:05.814877 master-0 kubenswrapper[38936]: W0216 21:36:05.814828 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6789d08_fc97_4c56_a8a4_82c131474c22.slice/crio-f57ee65d04418ee91bc514d82a4d6c9f90610637f428ffe3038a8a4e8b1b8e57 WatchSource:0}: Error finding container f57ee65d04418ee91bc514d82a4d6c9f90610637f428ffe3038a8a4e8b1b8e57: Status 404 returned error can't find the container with id f57ee65d04418ee91bc514d82a4d6c9f90610637f428ffe3038a8a4e8b1b8e57 Feb 16 21:36:05.816622 master-0 kubenswrapper[38936]: I0216 21:36:05.816536 38936 generic.go:334] "Generic (PLEG): container finished" podID="29d3e957-9451-4feb-a578-4409217df9f1" containerID="6aa68367833725a47153bfc0d259d010ee227a719fd5d5a56a116798ae9e3bd7" 
exitCode=0 Feb 16 21:36:05.816622 master-0 kubenswrapper[38936]: I0216 21:36:05.816584 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6fb887-ml4rt" event={"ID":"29d3e957-9451-4feb-a578-4409217df9f1","Type":"ContainerDied","Data":"6aa68367833725a47153bfc0d259d010ee227a719fd5d5a56a116798ae9e3bd7"} Feb 16 21:36:06.109287 master-0 kubenswrapper[38936]: I0216 21:36:06.109223 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 21:36:06.130090 master-0 kubenswrapper[38936]: I0216 21:36:06.130027 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 21:36:06.153587 master-0 kubenswrapper[38936]: W0216 21:36:06.153476 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ed148e_f9e4_4547_ad45_227bd66edcfa.slice/crio-4629a398bb462ebb36ae81aad10d1c211e56725da16ee535fb9cc9f142d0cde5 WatchSource:0}: Error finding container 4629a398bb462ebb36ae81aad10d1c211e56725da16ee535fb9cc9f142d0cde5: Status 404 returned error can't find the container with id 4629a398bb462ebb36ae81aad10d1c211e56725da16ee535fb9cc9f142d0cde5 Feb 16 21:36:06.210251 master-0 kubenswrapper[38936]: I0216 21:36:06.210091 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 21:36:06.221039 master-0 kubenswrapper[38936]: E0216 21:36:06.220969 38936 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 16 21:36:06.221039 master-0 kubenswrapper[38936]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/76e203cf-4653-455c-beee-c382bec17645/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 16 21:36:06.221039 master-0 kubenswrapper[38936]: > podSandboxID="22dd1393115867021e1a25288a997939998a7e59da2d773ff72f8c423c7be040" Feb 16 21:36:06.221533 master-0 
kubenswrapper[38936]: E0216 21:36:06.221232 38936 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 16 21:36:06.221533 master-0 kubenswrapper[38936]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nbchf8h696h5ffh5cdh585hc5hbfh597h58dhfh554h67bh9bh5c9hfch7dh5fbhbbh567h78h669hf8h65dh55dh588h5ddh88h694h669h95h8q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tkqnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000800000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5bcd98d69f-lmg4l_openstack(76e203cf-4653-455c-beee-c382bec17645): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/76e203cf-4653-455c-beee-c382bec17645/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 16 21:36:06.221533 master-0 kubenswrapper[38936]: > logger="UnhandledError" Feb 16 21:36:06.222821 master-0 kubenswrapper[38936]: E0216 21:36:06.222687 38936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/76e203cf-4653-455c-beee-c382bec17645/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5bcd98d69f-lmg4l" podUID="76e203cf-4653-455c-beee-c382bec17645" Feb 16 21:36:06.488800 master-0 kubenswrapper[38936]: I0216 21:36:06.488695 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-fjmds" Feb 16 21:36:06.526899 master-0 kubenswrapper[38936]: I0216 21:36:06.526803 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99c6bec1-e16d-433a-bb6c-ccad436d357f-config\") pod \"99c6bec1-e16d-433a-bb6c-ccad436d357f\" (UID: \"99c6bec1-e16d-433a-bb6c-ccad436d357f\") " Feb 16 21:36:06.527236 master-0 kubenswrapper[38936]: I0216 21:36:06.527043 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99c6bec1-e16d-433a-bb6c-ccad436d357f-dns-svc\") pod \"99c6bec1-e16d-433a-bb6c-ccad436d357f\" (UID: \"99c6bec1-e16d-433a-bb6c-ccad436d357f\") " Feb 16 21:36:06.527275 master-0 kubenswrapper[38936]: I0216 21:36:06.527242 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk92l\" (UniqueName: \"kubernetes.io/projected/99c6bec1-e16d-433a-bb6c-ccad436d357f-kube-api-access-hk92l\") pod \"99c6bec1-e16d-433a-bb6c-ccad436d357f\" (UID: \"99c6bec1-e16d-433a-bb6c-ccad436d357f\") " Feb 16 21:36:06.550673 master-0 kubenswrapper[38936]: I0216 21:36:06.548633 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99c6bec1-e16d-433a-bb6c-ccad436d357f-kube-api-access-hk92l" (OuterVolumeSpecName: "kube-api-access-hk92l") pod "99c6bec1-e16d-433a-bb6c-ccad436d357f" (UID: "99c6bec1-e16d-433a-bb6c-ccad436d357f"). InnerVolumeSpecName "kube-api-access-hk92l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:36:06.586460 master-0 kubenswrapper[38936]: I0216 21:36:06.586134 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-ml4rt" Feb 16 21:36:06.590783 master-0 kubenswrapper[38936]: I0216 21:36:06.587918 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99c6bec1-e16d-433a-bb6c-ccad436d357f-config" (OuterVolumeSpecName: "config") pod "99c6bec1-e16d-433a-bb6c-ccad436d357f" (UID: "99c6bec1-e16d-433a-bb6c-ccad436d357f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:36:06.615603 master-0 kubenswrapper[38936]: I0216 21:36:06.615540 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99c6bec1-e16d-433a-bb6c-ccad436d357f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "99c6bec1-e16d-433a-bb6c-ccad436d357f" (UID: "99c6bec1-e16d-433a-bb6c-ccad436d357f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:36:06.630280 master-0 kubenswrapper[38936]: I0216 21:36:06.630189 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d3e957-9451-4feb-a578-4409217df9f1-config\") pod \"29d3e957-9451-4feb-a578-4409217df9f1\" (UID: \"29d3e957-9451-4feb-a578-4409217df9f1\") " Feb 16 21:36:06.630861 master-0 kubenswrapper[38936]: I0216 21:36:06.630832 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6ngc\" (UniqueName: \"kubernetes.io/projected/29d3e957-9451-4feb-a578-4409217df9f1-kube-api-access-j6ngc\") pod \"29d3e957-9451-4feb-a578-4409217df9f1\" (UID: \"29d3e957-9451-4feb-a578-4409217df9f1\") " Feb 16 21:36:06.631465 master-0 kubenswrapper[38936]: I0216 21:36:06.631430 38936 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99c6bec1-e16d-433a-bb6c-ccad436d357f-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:06.631465 master-0 kubenswrapper[38936]: 
I0216 21:36:06.631461 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk92l\" (UniqueName: \"kubernetes.io/projected/99c6bec1-e16d-433a-bb6c-ccad436d357f-kube-api-access-hk92l\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:06.631582 master-0 kubenswrapper[38936]: I0216 21:36:06.631477 38936 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99c6bec1-e16d-433a-bb6c-ccad436d357f-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:06.636448 master-0 kubenswrapper[38936]: I0216 21:36:06.636415 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29d3e957-9451-4feb-a578-4409217df9f1-kube-api-access-j6ngc" (OuterVolumeSpecName: "kube-api-access-j6ngc") pod "29d3e957-9451-4feb-a578-4409217df9f1" (UID: "29d3e957-9451-4feb-a578-4409217df9f1"). InnerVolumeSpecName "kube-api-access-j6ngc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:36:06.659617 master-0 kubenswrapper[38936]: I0216 21:36:06.659542 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29d3e957-9451-4feb-a578-4409217df9f1-config" (OuterVolumeSpecName: "config") pod "29d3e957-9451-4feb-a578-4409217df9f1" (UID: "29d3e957-9451-4feb-a578-4409217df9f1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:36:06.735021 master-0 kubenswrapper[38936]: I0216 21:36:06.734908 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6ngc\" (UniqueName: \"kubernetes.io/projected/29d3e957-9451-4feb-a578-4409217df9f1-kube-api-access-j6ngc\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:06.735021 master-0 kubenswrapper[38936]: I0216 21:36:06.734960 38936 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29d3e957-9451-4feb-a578-4409217df9f1-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:06.809672 master-0 kubenswrapper[38936]: W0216 21:36:06.808887 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod412a78ab_d40f_4548_b8db_4eb4462fb5e9.slice/crio-235d07a8afb6b1f6ffdea2efac8a4aca4b1ab91b49d1e244e4073bf899365393 WatchSource:0}: Error finding container 235d07a8afb6b1f6ffdea2efac8a4aca4b1ab91b49d1e244e4073bf899365393: Status 404 returned error can't find the container with id 235d07a8afb6b1f6ffdea2efac8a4aca4b1ab91b49d1e244e4073bf899365393 Feb 16 21:36:06.814744 master-0 kubenswrapper[38936]: I0216 21:36:06.812309 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 21:36:06.832275 master-0 kubenswrapper[38936]: I0216 21:36:06.832211 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"56ed148e-f9e4-4547-ad45-227bd66edcfa","Type":"ContainerStarted","Data":"4629a398bb462ebb36ae81aad10d1c211e56725da16ee535fb9cc9f142d0cde5"} Feb 16 21:36:06.836056 master-0 kubenswrapper[38936]: I0216 21:36:06.835985 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-5fq4v" 
event={"ID":"ce0bed6a-2010-497e-bea2-c4c4d493300e","Type":"ContainerStarted","Data":"8e917e848b63feb9575ae911ed8c3b4bb163301d9f35534ca4b556cc670a6f0c"} Feb 16 21:36:06.836139 master-0 kubenswrapper[38936]: I0216 21:36:06.836072 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b98d7b55c-5fq4v" Feb 16 21:36:06.838216 master-0 kubenswrapper[38936]: I0216 21:36:06.838161 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d78499c-fjmds" event={"ID":"99c6bec1-e16d-433a-bb6c-ccad436d357f","Type":"ContainerDied","Data":"61291f296fdc6c4c7a6c70757e3820ea694493213c42dca617041ae0c09af9d8"} Feb 16 21:36:06.838312 master-0 kubenswrapper[38936]: I0216 21:36:06.838228 38936 scope.go:117] "RemoveContainer" containerID="71f495c73b14429fb905b3db69967f1d73b7be485e2bbe21c139c7ff147624c7" Feb 16 21:36:06.838409 master-0 kubenswrapper[38936]: I0216 21:36:06.838365 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-fjmds" Feb 16 21:36:06.844170 master-0 kubenswrapper[38936]: I0216 21:36:06.844102 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lhsv6" event={"ID":"d6789d08-fc97-4c56-a8a4-82c131474c22","Type":"ContainerStarted","Data":"f57ee65d04418ee91bc514d82a4d6c9f90610637f428ffe3038a8a4e8b1b8e57"} Feb 16 21:36:06.847609 master-0 kubenswrapper[38936]: I0216 21:36:06.847553 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-ml4rt" Feb 16 21:36:06.847819 master-0 kubenswrapper[38936]: I0216 21:36:06.847506 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6fb887-ml4rt" event={"ID":"29d3e957-9451-4feb-a578-4409217df9f1","Type":"ContainerDied","Data":"671ba4fdf675bd1558b4b4294ad5e133ff51ac2ff49b04d148370833321af302"} Feb 16 21:36:06.849267 master-0 kubenswrapper[38936]: I0216 21:36:06.849213 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"412a78ab-d40f-4548-b8db-4eb4462fb5e9","Type":"ContainerStarted","Data":"235d07a8afb6b1f6ffdea2efac8a4aca4b1ab91b49d1e244e4073bf899365393"} Feb 16 21:36:06.851157 master-0 kubenswrapper[38936]: I0216 21:36:06.851096 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"895ef7e1-683f-479c-952a-ce27497b4cf8","Type":"ContainerStarted","Data":"89425bd736c4e50c45f67c6f68f79940e8deb9fc0fcf9b33ae10f83698a019f8"} Feb 16 21:36:06.853308 master-0 kubenswrapper[38936]: I0216 21:36:06.853263 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5b7082bc-9e00-4676-a7bf-4b6d03d132f9","Type":"ContainerStarted","Data":"1206f45be50fbd8daad7ac12a2c8ade0f32cf7bcc258df49f8a3ecf4328298f1"} Feb 16 21:36:06.884401 master-0 kubenswrapper[38936]: I0216 21:36:06.884277 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b98d7b55c-5fq4v" podStartSLOduration=3.398441105 podStartE2EDuration="21.884233911s" podCreationTimestamp="2026-02-16 21:35:45 +0000 UTC" firstStartedPulling="2026-02-16 21:35:46.280084357 +0000 UTC m=+776.632087709" lastFinishedPulling="2026-02-16 21:36:04.765877153 +0000 UTC m=+795.117880515" observedRunningTime="2026-02-16 21:36:06.870527304 +0000 UTC m=+797.222530666" watchObservedRunningTime="2026-02-16 21:36:06.884233911 +0000 UTC m=+797.236237273" Feb 16 
21:36:06.907688 master-0 kubenswrapper[38936]: I0216 21:36:06.904870 38936 scope.go:117] "RemoveContainer" containerID="6aa68367833725a47153bfc0d259d010ee227a719fd5d5a56a116798ae9e3bd7" Feb 16 21:36:07.098751 master-0 kubenswrapper[38936]: I0216 21:36:07.088425 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-fjmds"] Feb 16 21:36:07.112308 master-0 kubenswrapper[38936]: I0216 21:36:07.112247 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-fjmds"] Feb 16 21:36:07.180398 master-0 kubenswrapper[38936]: I0216 21:36:07.180311 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-ml4rt"] Feb 16 21:36:07.199769 master-0 kubenswrapper[38936]: I0216 21:36:07.199677 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-ml4rt"] Feb 16 21:36:07.873144 master-0 kubenswrapper[38936]: I0216 21:36:07.873072 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-lmg4l" event={"ID":"76e203cf-4653-455c-beee-c382bec17645","Type":"ContainerStarted","Data":"3b3fb42f93f6c58ba549d8c7861f07459083ad738a464001def9ee2a705a2d15"} Feb 16 21:36:07.895475 master-0 kubenswrapper[38936]: I0216 21:36:07.895421 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29d3e957-9451-4feb-a578-4409217df9f1" path="/var/lib/kubelet/pods/29d3e957-9451-4feb-a578-4409217df9f1/volumes" Feb 16 21:36:07.896076 master-0 kubenswrapper[38936]: I0216 21:36:07.896052 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99c6bec1-e16d-433a-bb6c-ccad436d357f" path="/var/lib/kubelet/pods/99c6bec1-e16d-433a-bb6c-ccad436d357f/volumes" Feb 16 21:36:07.896683 master-0 kubenswrapper[38936]: I0216 21:36:07.896662 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bcd98d69f-lmg4l" Feb 16 21:36:07.913616 master-0 kubenswrapper[38936]: 
I0216 21:36:07.913457 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bcd98d69f-lmg4l" podStartSLOduration=4.691401459 podStartE2EDuration="23.913425697s" podCreationTimestamp="2026-02-16 21:35:44 +0000 UTC" firstStartedPulling="2026-02-16 21:35:45.643987314 +0000 UTC m=+775.995990676" lastFinishedPulling="2026-02-16 21:36:04.866011552 +0000 UTC m=+795.218014914" observedRunningTime="2026-02-16 21:36:07.900308002 +0000 UTC m=+798.252311365" watchObservedRunningTime="2026-02-16 21:36:07.913425697 +0000 UTC m=+798.265429059"
Feb 16 21:36:13.956277 master-0 kubenswrapper[38936]: I0216 21:36:13.956173 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zr5cs" event={"ID":"018762db-2c9f-40c4-b05a-52df963c4376","Type":"ContainerStarted","Data":"ed49814cd8a3c2c384d3d1cb8ba23a1df26502751f10a71310248efc9fdc7ab1"}
Feb 16 21:36:13.957007 master-0 kubenswrapper[38936]: I0216 21:36:13.956829 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-zr5cs"
Feb 16 21:36:13.960116 master-0 kubenswrapper[38936]: I0216 21:36:13.960069 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"895ef7e1-683f-479c-952a-ce27497b4cf8","Type":"ContainerStarted","Data":"62106261e8c1d42d995ad910bdcc5505775c211e7fc259454a03007edc05f7f8"}
Feb 16 21:36:13.960264 master-0 kubenswrapper[38936]: I0216 21:36:13.960234 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Feb 16 21:36:13.964030 master-0 kubenswrapper[38936]: I0216 21:36:13.963757 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5b7082bc-9e00-4676-a7bf-4b6d03d132f9","Type":"ContainerStarted","Data":"2eceae3c3aedbee495ad3a2568140f748ba9d8710fccc793cd9a53096b0efb58"}
Feb 16 21:36:13.969820 master-0 kubenswrapper[38936]: I0216 21:36:13.969770 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5fc61990-a712-4046-925c-a18d2a0b34a5","Type":"ContainerStarted","Data":"05458d3427507c37cc14f2b67293711889efda02f4850b57d54bb83a49422462"}
Feb 16 21:36:13.973157 master-0 kubenswrapper[38936]: I0216 21:36:13.973105 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lhsv6" event={"ID":"d6789d08-fc97-4c56-a8a4-82c131474c22","Type":"ContainerStarted","Data":"21ac4209bad5ee3af62e8ff87a6a287ed6cd3cc421e814f1e5593538933f30af"}
Feb 16 21:36:14.000680 master-0 kubenswrapper[38936]: I0216 21:36:13.996040 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"412a78ab-d40f-4548-b8db-4eb4462fb5e9","Type":"ContainerStarted","Data":"61f3f23ff0f0d20b7c4aa408944bfb8085b644e0c015502c84c12337dcde9f9a"}
Feb 16 21:36:14.000680 master-0 kubenswrapper[38936]: I0216 21:36:14.000553 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-zr5cs" podStartSLOduration=11.431214985 podStartE2EDuration="19.000538224s" podCreationTimestamp="2026-02-16 21:35:55 +0000 UTC" firstStartedPulling="2026-02-16 21:36:05.573998359 +0000 UTC m=+795.926001721" lastFinishedPulling="2026-02-16 21:36:13.143321598 +0000 UTC m=+803.495324960" observedRunningTime="2026-02-16 21:36:13.983797292 +0000 UTC m=+804.335800654" watchObservedRunningTime="2026-02-16 21:36:14.000538224 +0000 UTC m=+804.352541586"
Feb 16 21:36:14.012677 master-0 kubenswrapper[38936]: I0216 21:36:14.001275 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"eb77fc89-6e9b-438d-8fb7-c87367a747b0","Type":"ContainerStarted","Data":"cb1bd87a3726e0106dd6dac1b17d42b5fbb0d7ee7b620c878132a64d938b3e0c"}
Feb 16 21:36:14.053674 master-0 kubenswrapper[38936]: I0216 21:36:14.053099 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=18.159773979 podStartE2EDuration="25.053067363s" podCreationTimestamp="2026-02-16 21:35:49 +0000 UTC" firstStartedPulling="2026-02-16 21:36:06.170960619 +0000 UTC m=+796.522963981" lastFinishedPulling="2026-02-16 21:36:13.064254003 +0000 UTC m=+803.416257365" observedRunningTime="2026-02-16 21:36:14.047247796 +0000 UTC m=+804.399251188" watchObservedRunningTime="2026-02-16 21:36:14.053067363 +0000 UTC m=+804.405070725"
Feb 16 21:36:14.941968 master-0 kubenswrapper[38936]: I0216 21:36:14.941905 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bcd98d69f-lmg4l"
Feb 16 21:36:15.027748 master-0 kubenswrapper[38936]: I0216 21:36:15.026966 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"56ed148e-f9e4-4547-ad45-227bd66edcfa","Type":"ContainerStarted","Data":"7e1442e9078f9e4108bfa18733871e5acbe1612df9b3ecc81a6faeab79b3d453"}
Feb 16 21:36:15.058484 master-0 kubenswrapper[38936]: I0216 21:36:15.058270 38936 generic.go:334] "Generic (PLEG): container finished" podID="d6789d08-fc97-4c56-a8a4-82c131474c22" containerID="21ac4209bad5ee3af62e8ff87a6a287ed6cd3cc421e814f1e5593538933f30af" exitCode=0
Feb 16 21:36:15.059101 master-0 kubenswrapper[38936]: I0216 21:36:15.059059 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lhsv6" event={"ID":"d6789d08-fc97-4c56-a8a4-82c131474c22","Type":"ContainerDied","Data":"21ac4209bad5ee3af62e8ff87a6a287ed6cd3cc421e814f1e5593538933f30af"}
Feb 16 21:36:15.748063 master-0 kubenswrapper[38936]: I0216 21:36:15.747988 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b98d7b55c-5fq4v"
Feb 16 21:36:15.860677 master-0 kubenswrapper[38936]: I0216 21:36:15.859126 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-lmg4l"]
Feb 16 21:36:15.860677 master-0 kubenswrapper[38936]: I0216 21:36:15.859405 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bcd98d69f-lmg4l" podUID="76e203cf-4653-455c-beee-c382bec17645" containerName="dnsmasq-dns" containerID="cri-o://3b3fb42f93f6c58ba549d8c7861f07459083ad738a464001def9ee2a705a2d15" gracePeriod=10
Feb 16 21:36:16.105185 master-0 kubenswrapper[38936]: I0216 21:36:16.105116 38936 generic.go:334] "Generic (PLEG): container finished" podID="76e203cf-4653-455c-beee-c382bec17645" containerID="3b3fb42f93f6c58ba549d8c7861f07459083ad738a464001def9ee2a705a2d15" exitCode=0
Feb 16 21:36:16.105810 master-0 kubenswrapper[38936]: I0216 21:36:16.105196 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-lmg4l" event={"ID":"76e203cf-4653-455c-beee-c382bec17645","Type":"ContainerDied","Data":"3b3fb42f93f6c58ba549d8c7861f07459083ad738a464001def9ee2a705a2d15"}
Feb 16 21:36:16.116001 master-0 kubenswrapper[38936]: I0216 21:36:16.115728 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lhsv6" event={"ID":"d6789d08-fc97-4c56-a8a4-82c131474c22","Type":"ContainerStarted","Data":"0d4b947e16d6237e95a3cc095e181f326ce5db9689d8e4449d6d5a53cd46e267"}
Feb 16 21:36:16.116333 master-0 kubenswrapper[38936]: I0216 21:36:16.116291 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-lhsv6"
Feb 16 21:36:16.116495 master-0 kubenswrapper[38936]: I0216 21:36:16.116419 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-lhsv6"
Feb 16 21:36:16.131797 master-0 kubenswrapper[38936]: I0216 21:36:16.124323 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"412a78ab-d40f-4548-b8db-4eb4462fb5e9","Type":"ContainerStarted","Data":"81fff5fb7e818af25481c4d30dbab9920e3b4fc788e6eea2c4401c78b2128d29"}
Feb 16 21:36:16.137772 master-0 kubenswrapper[38936]: I0216 21:36:16.137689 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5b7082bc-9e00-4676-a7bf-4b6d03d132f9","Type":"ContainerStarted","Data":"ff721baeaf7d29b0ec5e046575171ba7654efacff4b564421156553c5d8acec4"}
Feb 16 21:36:16.152260 master-0 kubenswrapper[38936]: I0216 21:36:16.152162 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a3ae6146-0a46-4058-a938-0dba04b24a1f","Type":"ContainerStarted","Data":"d5dba880ca436c9fc01181a57a32c8da41545714a30b853a4a038e810b4c4686"}
Feb 16 21:36:16.189080 master-0 kubenswrapper[38936]: I0216 21:36:16.188944 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-lhsv6" podStartSLOduration=13.863607448 podStartE2EDuration="21.188916537s" podCreationTimestamp="2026-02-16 21:35:55 +0000 UTC" firstStartedPulling="2026-02-16 21:36:05.817911216 +0000 UTC m=+796.169914578" lastFinishedPulling="2026-02-16 21:36:13.143220305 +0000 UTC m=+803.495223667" observedRunningTime="2026-02-16 21:36:16.142496773 +0000 UTC m=+806.494500135" watchObservedRunningTime="2026-02-16 21:36:16.188916537 +0000 UTC m=+806.540919899"
Feb 16 21:36:16.218855 master-0 kubenswrapper[38936]: I0216 21:36:16.218772 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=10.637897269 podStartE2EDuration="19.218754043s" podCreationTimestamp="2026-02-16 21:35:57 +0000 UTC" firstStartedPulling="2026-02-16 21:36:06.817113508 +0000 UTC m=+797.169116870" lastFinishedPulling="2026-02-16 21:36:15.397970282 +0000 UTC m=+805.749973644" observedRunningTime="2026-02-16 21:36:16.213359877 +0000 UTC m=+806.565363239" watchObservedRunningTime="2026-02-16 21:36:16.218754043 +0000 UTC m=+806.570757405"
Feb 16 21:36:16.222882 master-0 kubenswrapper[38936]: I0216 21:36:16.222567 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=13.111778977 podStartE2EDuration="22.222554826s" podCreationTimestamp="2026-02-16 21:35:54 +0000 UTC" firstStartedPulling="2026-02-16 21:36:06.29410663 +0000 UTC m=+796.646109992" lastFinishedPulling="2026-02-16 21:36:15.404882479 +0000 UTC m=+805.756885841" observedRunningTime="2026-02-16 21:36:16.192864303 +0000 UTC m=+806.544867665" watchObservedRunningTime="2026-02-16 21:36:16.222554826 +0000 UTC m=+806.574558188"
Feb 16 21:36:16.574631 master-0 kubenswrapper[38936]: I0216 21:36:16.574569 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bcd98d69f-lmg4l"
Feb 16 21:36:16.704621 master-0 kubenswrapper[38936]: I0216 21:36:16.704472 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkqnl\" (UniqueName: \"kubernetes.io/projected/76e203cf-4653-455c-beee-c382bec17645-kube-api-access-tkqnl\") pod \"76e203cf-4653-455c-beee-c382bec17645\" (UID: \"76e203cf-4653-455c-beee-c382bec17645\") "
Feb 16 21:36:16.704841 master-0 kubenswrapper[38936]: I0216 21:36:16.704756 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e203cf-4653-455c-beee-c382bec17645-dns-svc\") pod \"76e203cf-4653-455c-beee-c382bec17645\" (UID: \"76e203cf-4653-455c-beee-c382bec17645\") "
Feb 16 21:36:16.704892 master-0 kubenswrapper[38936]: I0216 21:36:16.704852 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e203cf-4653-455c-beee-c382bec17645-config\") pod \"76e203cf-4653-455c-beee-c382bec17645\" (UID: \"76e203cf-4653-455c-beee-c382bec17645\") "
Feb 16 21:36:16.707669 master-0 kubenswrapper[38936]: I0216 21:36:16.707593 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e203cf-4653-455c-beee-c382bec17645-kube-api-access-tkqnl" (OuterVolumeSpecName: "kube-api-access-tkqnl") pod "76e203cf-4653-455c-beee-c382bec17645" (UID: "76e203cf-4653-455c-beee-c382bec17645"). InnerVolumeSpecName "kube-api-access-tkqnl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:36:16.748017 master-0 kubenswrapper[38936]: I0216 21:36:16.747931 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Feb 16 21:36:16.751809 master-0 kubenswrapper[38936]: I0216 21:36:16.751760 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e203cf-4653-455c-beee-c382bec17645-config" (OuterVolumeSpecName: "config") pod "76e203cf-4653-455c-beee-c382bec17645" (UID: "76e203cf-4653-455c-beee-c382bec17645"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:36:16.754867 master-0 kubenswrapper[38936]: I0216 21:36:16.754781 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e203cf-4653-455c-beee-c382bec17645-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "76e203cf-4653-455c-beee-c382bec17645" (UID: "76e203cf-4653-455c-beee-c382bec17645"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:36:16.787735 master-0 kubenswrapper[38936]: I0216 21:36:16.787681 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Feb 16 21:36:16.805480 master-0 kubenswrapper[38936]: I0216 21:36:16.805407 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Feb 16 21:36:16.805480 master-0 kubenswrapper[38936]: I0216 21:36:16.805474 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Feb 16 21:36:16.806999 master-0 kubenswrapper[38936]: I0216 21:36:16.806912 38936 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e203cf-4653-455c-beee-c382bec17645-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 16 21:36:16.806999 master-0 kubenswrapper[38936]: I0216 21:36:16.806963 38936 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e203cf-4653-455c-beee-c382bec17645-config\") on node \"master-0\" DevicePath \"\""
Feb 16 21:36:16.806999 master-0 kubenswrapper[38936]: I0216 21:36:16.806983 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkqnl\" (UniqueName: \"kubernetes.io/projected/76e203cf-4653-455c-beee-c382bec17645-kube-api-access-tkqnl\") on node \"master-0\" DevicePath \"\""
Feb 16 21:36:16.859055 master-0 kubenswrapper[38936]: I0216 21:36:16.859012 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Feb 16 21:36:17.161977 master-0 kubenswrapper[38936]: I0216 21:36:17.161891 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-lmg4l" event={"ID":"76e203cf-4653-455c-beee-c382bec17645","Type":"ContainerDied","Data":"22dd1393115867021e1a25288a997939998a7e59da2d773ff72f8c423c7be040"}
Feb 16 21:36:17.161977 master-0 kubenswrapper[38936]: I0216 21:36:17.161960 38936 scope.go:117] "RemoveContainer" containerID="3b3fb42f93f6c58ba549d8c7861f07459083ad738a464001def9ee2a705a2d15"
Feb 16 21:36:17.162633 master-0 kubenswrapper[38936]: I0216 21:36:17.161907 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bcd98d69f-lmg4l"
Feb 16 21:36:17.165320 master-0 kubenswrapper[38936]: I0216 21:36:17.165293 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lhsv6" event={"ID":"d6789d08-fc97-4c56-a8a4-82c131474c22","Type":"ContainerStarted","Data":"c76675a81131ef9fedbc50352be2bd5c6641c77ebcee35b548a8f89d79287f95"}
Feb 16 21:36:17.165434 master-0 kubenswrapper[38936]: I0216 21:36:17.165417 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 16 21:36:17.181979 master-0 kubenswrapper[38936]: I0216 21:36:17.181939 38936 scope.go:117] "RemoveContainer" containerID="7df5f80ef13396018312ad843a096ee64ff72cf937a3b5c532ab76a05de6639d"
Feb 16 21:36:17.221036 master-0 kubenswrapper[38936]: I0216 21:36:17.218388 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-lmg4l"]
Feb 16 21:36:17.231005 master-0 kubenswrapper[38936]: I0216 21:36:17.230942 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-lmg4l"]
Feb 16 21:36:17.894991 master-0 kubenswrapper[38936]: I0216 21:36:17.894924 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76e203cf-4653-455c-beee-c382bec17645" path="/var/lib/kubelet/pods/76e203cf-4653-455c-beee-c382bec17645/volumes"
Feb 16 21:36:18.226268 master-0 kubenswrapper[38936]: I0216 21:36:18.226209 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Feb 16 21:36:18.227744 master-0 kubenswrapper[38936]: I0216 21:36:18.227710 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Feb 16 21:36:18.562008 master-0 kubenswrapper[38936]: I0216 21:36:18.560050 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-8bjc6"]
Feb 16 21:36:18.562008 master-0 kubenswrapper[38936]: E0216 21:36:18.560534 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d3e957-9451-4feb-a578-4409217df9f1" containerName="init"
Feb 16 21:36:18.562008 master-0 kubenswrapper[38936]: I0216 21:36:18.560549 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d3e957-9451-4feb-a578-4409217df9f1" containerName="init"
Feb 16 21:36:18.562008 master-0 kubenswrapper[38936]: E0216 21:36:18.560569 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e203cf-4653-455c-beee-c382bec17645" containerName="init"
Feb 16 21:36:18.562008 master-0 kubenswrapper[38936]: I0216 21:36:18.560575 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e203cf-4653-455c-beee-c382bec17645" containerName="init"
Feb 16 21:36:18.562008 master-0 kubenswrapper[38936]: E0216 21:36:18.560618 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99c6bec1-e16d-433a-bb6c-ccad436d357f" containerName="init"
Feb 16 21:36:18.562008 master-0 kubenswrapper[38936]: I0216 21:36:18.560625 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="99c6bec1-e16d-433a-bb6c-ccad436d357f" containerName="init"
Feb 16 21:36:18.562008 master-0 kubenswrapper[38936]: E0216 21:36:18.560635 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e203cf-4653-455c-beee-c382bec17645" containerName="dnsmasq-dns"
Feb 16 21:36:18.562008 master-0 kubenswrapper[38936]: I0216 21:36:18.560643 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e203cf-4653-455c-beee-c382bec17645" containerName="dnsmasq-dns"
Feb 16 21:36:18.562008 master-0 kubenswrapper[38936]: I0216 21:36:18.560861 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="76e203cf-4653-455c-beee-c382bec17645" containerName="dnsmasq-dns"
Feb 16 21:36:18.562008 master-0 kubenswrapper[38936]: I0216 21:36:18.560884 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="99c6bec1-e16d-433a-bb6c-ccad436d357f" containerName="init"
Feb 16 21:36:18.562008 master-0 kubenswrapper[38936]: I0216 21:36:18.560905 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="29d3e957-9451-4feb-a578-4409217df9f1" containerName="init"
Feb 16 21:36:18.562008 master-0 kubenswrapper[38936]: I0216 21:36:18.561941 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c8cfc46bf-8bjc6"
Feb 16 21:36:18.568565 master-0 kubenswrapper[38936]: I0216 21:36:18.568494 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Feb 16 21:36:18.594677 master-0 kubenswrapper[38936]: I0216 21:36:18.586588 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-8bjc6"]
Feb 16 21:36:18.664164 master-0 kubenswrapper[38936]: I0216 21:36:18.664111 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c-dns-svc\") pod \"dnsmasq-dns-7c8cfc46bf-8bjc6\" (UID: \"9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-8bjc6"
Feb 16 21:36:18.664397 master-0 kubenswrapper[38936]: I0216 21:36:18.664210 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c-ovsdbserver-nb\") pod \"dnsmasq-dns-7c8cfc46bf-8bjc6\" (UID: \"9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-8bjc6"
Feb 16 21:36:18.664397 master-0 kubenswrapper[38936]: I0216 21:36:18.664296 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c-config\") pod \"dnsmasq-dns-7c8cfc46bf-8bjc6\" (UID: \"9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-8bjc6"
Feb 16 21:36:18.664397 master-0 kubenswrapper[38936]: I0216 21:36:18.664325 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z2cb\" (UniqueName: \"kubernetes.io/projected/9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c-kube-api-access-6z2cb\") pod \"dnsmasq-dns-7c8cfc46bf-8bjc6\" (UID: \"9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-8bjc6"
Feb 16 21:36:18.708198 master-0 kubenswrapper[38936]: I0216 21:36:18.706284 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-nhtlw"]
Feb 16 21:36:18.708198 master-0 kubenswrapper[38936]: I0216 21:36:18.707775 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-nhtlw"
Feb 16 21:36:18.718591 master-0 kubenswrapper[38936]: I0216 21:36:18.718549 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Feb 16 21:36:18.758834 master-0 kubenswrapper[38936]: I0216 21:36:18.758767 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-nhtlw"]
Feb 16 21:36:18.766182 master-0 kubenswrapper[38936]: I0216 21:36:18.766136 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c-dns-svc\") pod \"dnsmasq-dns-7c8cfc46bf-8bjc6\" (UID: \"9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-8bjc6"
Feb 16 21:36:18.766279 master-0 kubenswrapper[38936]: I0216 21:36:18.766243 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c-ovsdbserver-nb\") pod \"dnsmasq-dns-7c8cfc46bf-8bjc6\" (UID: \"9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-8bjc6"
Feb 16 21:36:18.766393 master-0 kubenswrapper[38936]: I0216 21:36:18.766355 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c-config\") pod \"dnsmasq-dns-7c8cfc46bf-8bjc6\" (UID: \"9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-8bjc6"
Feb 16 21:36:18.766428 master-0 kubenswrapper[38936]: I0216 21:36:18.766392 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d3534c37-e4ab-4a9b-b6e2-578dbd28bfab-ovs-rundir\") pod \"ovn-controller-metrics-nhtlw\" (UID: \"d3534c37-e4ab-4a9b-b6e2-578dbd28bfab\") " pod="openstack/ovn-controller-metrics-nhtlw"
Feb 16 21:36:18.766428 master-0 kubenswrapper[38936]: I0216 21:36:18.766423 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z2cb\" (UniqueName: \"kubernetes.io/projected/9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c-kube-api-access-6z2cb\") pod \"dnsmasq-dns-7c8cfc46bf-8bjc6\" (UID: \"9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-8bjc6"
Feb 16 21:36:18.766516 master-0 kubenswrapper[38936]: I0216 21:36:18.766453 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3534c37-e4ab-4a9b-b6e2-578dbd28bfab-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-nhtlw\" (UID: \"d3534c37-e4ab-4a9b-b6e2-578dbd28bfab\") " pod="openstack/ovn-controller-metrics-nhtlw"
Feb 16 21:36:18.766516 master-0 kubenswrapper[38936]: I0216 21:36:18.766502 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3534c37-e4ab-4a9b-b6e2-578dbd28bfab-combined-ca-bundle\") pod \"ovn-controller-metrics-nhtlw\" (UID: \"d3534c37-e4ab-4a9b-b6e2-578dbd28bfab\") " pod="openstack/ovn-controller-metrics-nhtlw"
Feb 16 21:36:18.766602 master-0 kubenswrapper[38936]: I0216 21:36:18.766530 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d3534c37-e4ab-4a9b-b6e2-578dbd28bfab-ovn-rundir\") pod \"ovn-controller-metrics-nhtlw\" (UID: \"d3534c37-e4ab-4a9b-b6e2-578dbd28bfab\") " pod="openstack/ovn-controller-metrics-nhtlw"
Feb 16 21:36:18.766602 master-0 kubenswrapper[38936]: I0216 21:36:18.766569 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p9z9\" (UniqueName: \"kubernetes.io/projected/d3534c37-e4ab-4a9b-b6e2-578dbd28bfab-kube-api-access-7p9z9\") pod \"ovn-controller-metrics-nhtlw\" (UID: \"d3534c37-e4ab-4a9b-b6e2-578dbd28bfab\") " pod="openstack/ovn-controller-metrics-nhtlw"
Feb 16 21:36:18.766602 master-0 kubenswrapper[38936]: I0216 21:36:18.766594 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3534c37-e4ab-4a9b-b6e2-578dbd28bfab-config\") pod \"ovn-controller-metrics-nhtlw\" (UID: \"d3534c37-e4ab-4a9b-b6e2-578dbd28bfab\") " pod="openstack/ovn-controller-metrics-nhtlw"
Feb 16 21:36:18.767284 master-0 kubenswrapper[38936]: I0216 21:36:18.767242 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c-dns-svc\") pod \"dnsmasq-dns-7c8cfc46bf-8bjc6\" (UID: \"9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-8bjc6"
Feb 16 21:36:18.767930 master-0 kubenswrapper[38936]: I0216 21:36:18.767896 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c-ovsdbserver-nb\") pod \"dnsmasq-dns-7c8cfc46bf-8bjc6\" (UID: \"9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-8bjc6"
Feb 16 21:36:18.768820 master-0 kubenswrapper[38936]: I0216 21:36:18.768786 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c-config\") pod \"dnsmasq-dns-7c8cfc46bf-8bjc6\" (UID: \"9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-8bjc6"
Feb 16 21:36:18.794530 master-0 kubenswrapper[38936]: I0216 21:36:18.794471 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z2cb\" (UniqueName: \"kubernetes.io/projected/9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c-kube-api-access-6z2cb\") pod \"dnsmasq-dns-7c8cfc46bf-8bjc6\" (UID: \"9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-8bjc6"
Feb 16 21:36:18.814171 master-0 kubenswrapper[38936]: I0216 21:36:18.814050 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 16 21:36:18.823870 master-0 kubenswrapper[38936]: I0216 21:36:18.822167 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 16 21:36:18.830710 master-0 kubenswrapper[38936]: I0216 21:36:18.828967 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Feb 16 21:36:18.830710 master-0 kubenswrapper[38936]: I0216 21:36:18.829359 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 16 21:36:18.830710 master-0 kubenswrapper[38936]: I0216 21:36:18.829432 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 16 21:36:18.836326 master-0 kubenswrapper[38936]: I0216 21:36:18.834811 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 16 21:36:18.864234 master-0 kubenswrapper[38936]: I0216 21:36:18.864184 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-8bjc6"]
Feb 16 21:36:18.867424 master-0 kubenswrapper[38936]: I0216 21:36:18.865200 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c8cfc46bf-8bjc6"
Feb 16 21:36:18.874858 master-0 kubenswrapper[38936]: I0216 21:36:18.874810 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d3534c37-e4ab-4a9b-b6e2-578dbd28bfab-ovs-rundir\") pod \"ovn-controller-metrics-nhtlw\" (UID: \"d3534c37-e4ab-4a9b-b6e2-578dbd28bfab\") " pod="openstack/ovn-controller-metrics-nhtlw"
Feb 16 21:36:18.875038 master-0 kubenswrapper[38936]: I0216 21:36:18.875011 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3534c37-e4ab-4a9b-b6e2-578dbd28bfab-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-nhtlw\" (UID: \"d3534c37-e4ab-4a9b-b6e2-578dbd28bfab\") " pod="openstack/ovn-controller-metrics-nhtlw"
Feb 16 21:36:18.875107 master-0 kubenswrapper[38936]: I0216 21:36:18.875043 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d3534c37-e4ab-4a9b-b6e2-578dbd28bfab-ovs-rundir\") pod \"ovn-controller-metrics-nhtlw\" (UID: \"d3534c37-e4ab-4a9b-b6e2-578dbd28bfab\") " pod="openstack/ovn-controller-metrics-nhtlw"
Feb 16 21:36:18.875162 master-0 kubenswrapper[38936]: I0216 21:36:18.875085 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3534c37-e4ab-4a9b-b6e2-578dbd28bfab-combined-ca-bundle\") pod \"ovn-controller-metrics-nhtlw\" (UID: \"d3534c37-e4ab-4a9b-b6e2-578dbd28bfab\") " pod="openstack/ovn-controller-metrics-nhtlw"
Feb 16 21:36:18.875237 master-0 kubenswrapper[38936]: I0216 21:36:18.875201 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d3534c37-e4ab-4a9b-b6e2-578dbd28bfab-ovn-rundir\") pod \"ovn-controller-metrics-nhtlw\" (UID: \"d3534c37-e4ab-4a9b-b6e2-578dbd28bfab\") " pod="openstack/ovn-controller-metrics-nhtlw"
Feb 16 21:36:18.875383 master-0 kubenswrapper[38936]: I0216 21:36:18.875340 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d3534c37-e4ab-4a9b-b6e2-578dbd28bfab-ovn-rundir\") pod \"ovn-controller-metrics-nhtlw\" (UID: \"d3534c37-e4ab-4a9b-b6e2-578dbd28bfab\") " pod="openstack/ovn-controller-metrics-nhtlw"
Feb 16 21:36:18.875383 master-0 kubenswrapper[38936]: I0216 21:36:18.875370 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p9z9\" (UniqueName: \"kubernetes.io/projected/d3534c37-e4ab-4a9b-b6e2-578dbd28bfab-kube-api-access-7p9z9\") pod \"ovn-controller-metrics-nhtlw\" (UID: \"d3534c37-e4ab-4a9b-b6e2-578dbd28bfab\") " pod="openstack/ovn-controller-metrics-nhtlw"
Feb 16 21:36:18.875480 master-0 kubenswrapper[38936]: I0216 21:36:18.875463 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3534c37-e4ab-4a9b-b6e2-578dbd28bfab-config\") pod \"ovn-controller-metrics-nhtlw\" (UID: \"d3534c37-e4ab-4a9b-b6e2-578dbd28bfab\") " pod="openstack/ovn-controller-metrics-nhtlw"
Feb 16 21:36:18.876734 master-0 kubenswrapper[38936]: I0216 21:36:18.876521 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3534c37-e4ab-4a9b-b6e2-578dbd28bfab-config\") pod \"ovn-controller-metrics-nhtlw\" (UID: \"d3534c37-e4ab-4a9b-b6e2-578dbd28bfab\") " pod="openstack/ovn-controller-metrics-nhtlw"
Feb 16 21:36:18.878874 master-0 kubenswrapper[38936]: I0216 21:36:18.878775 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3534c37-e4ab-4a9b-b6e2-578dbd28bfab-combined-ca-bundle\") pod \"ovn-controller-metrics-nhtlw\" (UID: \"d3534c37-e4ab-4a9b-b6e2-578dbd28bfab\") " pod="openstack/ovn-controller-metrics-nhtlw"
Feb 16 21:36:18.880678 master-0 kubenswrapper[38936]: I0216 21:36:18.879852 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3534c37-e4ab-4a9b-b6e2-578dbd28bfab-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-nhtlw\" (UID: \"d3534c37-e4ab-4a9b-b6e2-578dbd28bfab\") " pod="openstack/ovn-controller-metrics-nhtlw"
Feb 16 21:36:18.897592 master-0 kubenswrapper[38936]: I0216 21:36:18.891245 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-7fnhx"]
Feb 16 21:36:18.897592 master-0 kubenswrapper[38936]: I0216 21:36:18.893068 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx"
Feb 16 21:36:18.897592 master-0 kubenswrapper[38936]: I0216 21:36:18.895866 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Feb 16 21:36:18.912321 master-0 kubenswrapper[38936]: I0216 21:36:18.912269 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p9z9\" (UniqueName: \"kubernetes.io/projected/d3534c37-e4ab-4a9b-b6e2-578dbd28bfab-kube-api-access-7p9z9\") pod \"ovn-controller-metrics-nhtlw\" (UID: \"d3534c37-e4ab-4a9b-b6e2-578dbd28bfab\") " pod="openstack/ovn-controller-metrics-nhtlw"
Feb 16 21:36:18.913281 master-0 kubenswrapper[38936]: I0216 21:36:18.913234 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-7fnhx"]
Feb 16 21:36:18.979062 master-0 kubenswrapper[38936]: I0216 21:36:18.978421 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/127e35ba-a64d-4e41-b803-54c9e9bd526d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"127e35ba-a64d-4e41-b803-54c9e9bd526d\") " pod="openstack/ovn-northd-0"
Feb 16 21:36:18.979535 master-0 kubenswrapper[38936]: I0216 21:36:18.979383 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4efd2c15-caba-4f5a-96ba-db5845549510-config\") pod \"dnsmasq-dns-7b9694dd79-7fnhx\" (UID: \"4efd2c15-caba-4f5a-96ba-db5845549510\") " pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx"
Feb 16 21:36:18.979945 master-0 kubenswrapper[38936]: I0216 21:36:18.979916 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127e35ba-a64d-4e41-b803-54c9e9bd526d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"127e35ba-a64d-4e41-b803-54c9e9bd526d\") " pod="openstack/ovn-northd-0"
Feb 16 21:36:18.980486 master-0 kubenswrapper[38936]: I0216 21:36:18.980454 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6h6v\" (UniqueName: \"kubernetes.io/projected/4efd2c15-caba-4f5a-96ba-db5845549510-kube-api-access-s6h6v\") pod \"dnsmasq-dns-7b9694dd79-7fnhx\" (UID: \"4efd2c15-caba-4f5a-96ba-db5845549510\") " pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx"
Feb 16 21:36:18.980586 master-0 kubenswrapper[38936]: I0216 21:36:18.980563 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/127e35ba-a64d-4e41-b803-54c9e9bd526d-config\") pod \"ovn-northd-0\" (UID: \"127e35ba-a64d-4e41-b803-54c9e9bd526d\") " pod="openstack/ovn-northd-0"
Feb 16 21:36:18.980630 master-0 kubenswrapper[38936]: I0216 21:36:18.980610 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvknb\" (UniqueName: \"kubernetes.io/projected/127e35ba-a64d-4e41-b803-54c9e9bd526d-kube-api-access-mvknb\") pod \"ovn-northd-0\" (UID: \"127e35ba-a64d-4e41-b803-54c9e9bd526d\") " pod="openstack/ovn-northd-0"
Feb 16 21:36:18.980702 master-0 kubenswrapper[38936]: I0216 21:36:18.980690 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4efd2c15-caba-4f5a-96ba-db5845549510-dns-svc\") pod \"dnsmasq-dns-7b9694dd79-7fnhx\" (UID: \"4efd2c15-caba-4f5a-96ba-db5845549510\") " pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx"
Feb 16 21:36:18.980742 master-0 kubenswrapper[38936]: I0216 21:36:18.980710 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/127e35ba-a64d-4e41-b803-54c9e9bd526d-scripts\") pod \"ovn-northd-0\" (UID: \"127e35ba-a64d-4e41-b803-54c9e9bd526d\") " pod="openstack/ovn-northd-0"
Feb 16 21:36:18.980786 master-0 kubenswrapper[38936]: I0216 21:36:18.980747 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/127e35ba-a64d-4e41-b803-54c9e9bd526d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"127e35ba-a64d-4e41-b803-54c9e9bd526d\") " pod="openstack/ovn-northd-0"
Feb 16 21:36:18.980820 master-0 kubenswrapper[38936]: I0216 21:36:18.980792 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4efd2c15-caba-4f5a-96ba-db5845549510-ovsdbserver-nb\") pod \"dnsmasq-dns-7b9694dd79-7fnhx\" (UID: \"4efd2c15-caba-4f5a-96ba-db5845549510\") " pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx"
Feb 16 21:36:18.980858 master-0 kubenswrapper[38936]: I0216 21:36:18.980823 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4efd2c15-caba-4f5a-96ba-db5845549510-ovsdbserver-sb\") pod \"dnsmasq-dns-7b9694dd79-7fnhx\" (UID: \"4efd2c15-caba-4f5a-96ba-db5845549510\") " pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx"
Feb 16 21:36:18.981512 master-0 kubenswrapper[38936]: I0216 21:36:18.981466 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/127e35ba-a64d-4e41-b803-54c9e9bd526d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"127e35ba-a64d-4e41-b803-54c9e9bd526d\") " pod="openstack/ovn-northd-0"
Feb 16 21:36:19.039030 master-0 kubenswrapper[38936]: I0216 21:36:19.038975 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-nhtlw"
Feb 16 21:36:19.089424 master-0 kubenswrapper[38936]: I0216 21:36:19.084529 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4efd2c15-caba-4f5a-96ba-db5845549510-config\") pod \"dnsmasq-dns-7b9694dd79-7fnhx\" (UID: \"4efd2c15-caba-4f5a-96ba-db5845549510\") " pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx"
Feb 16 21:36:19.089424 master-0 kubenswrapper[38936]: I0216 21:36:19.084672 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127e35ba-a64d-4e41-b803-54c9e9bd526d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"127e35ba-a64d-4e41-b803-54c9e9bd526d\") " pod="openstack/ovn-northd-0"
Feb 16 21:36:19.089424 master-0 kubenswrapper[38936]: I0216 21:36:19.084729 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6h6v\" (UniqueName: \"kubernetes.io/projected/4efd2c15-caba-4f5a-96ba-db5845549510-kube-api-access-s6h6v\") pod \"dnsmasq-dns-7b9694dd79-7fnhx\" (UID: \"4efd2c15-caba-4f5a-96ba-db5845549510\") " pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx"
Feb 16 21:36:19.089424 master-0 kubenswrapper[38936]: I0216 21:36:19.084803 38936 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/127e35ba-a64d-4e41-b803-54c9e9bd526d-config\") pod \"ovn-northd-0\" (UID: \"127e35ba-a64d-4e41-b803-54c9e9bd526d\") " pod="openstack/ovn-northd-0" Feb 16 21:36:19.089424 master-0 kubenswrapper[38936]: I0216 21:36:19.084844 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvknb\" (UniqueName: \"kubernetes.io/projected/127e35ba-a64d-4e41-b803-54c9e9bd526d-kube-api-access-mvknb\") pod \"ovn-northd-0\" (UID: \"127e35ba-a64d-4e41-b803-54c9e9bd526d\") " pod="openstack/ovn-northd-0" Feb 16 21:36:19.089424 master-0 kubenswrapper[38936]: I0216 21:36:19.084892 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4efd2c15-caba-4f5a-96ba-db5845549510-dns-svc\") pod \"dnsmasq-dns-7b9694dd79-7fnhx\" (UID: \"4efd2c15-caba-4f5a-96ba-db5845549510\") " pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx" Feb 16 21:36:19.089424 master-0 kubenswrapper[38936]: I0216 21:36:19.084917 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/127e35ba-a64d-4e41-b803-54c9e9bd526d-scripts\") pod \"ovn-northd-0\" (UID: \"127e35ba-a64d-4e41-b803-54c9e9bd526d\") " pod="openstack/ovn-northd-0" Feb 16 21:36:19.089424 master-0 kubenswrapper[38936]: I0216 21:36:19.084940 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/127e35ba-a64d-4e41-b803-54c9e9bd526d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"127e35ba-a64d-4e41-b803-54c9e9bd526d\") " pod="openstack/ovn-northd-0" Feb 16 21:36:19.089424 master-0 kubenswrapper[38936]: I0216 21:36:19.084979 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/4efd2c15-caba-4f5a-96ba-db5845549510-ovsdbserver-nb\") pod \"dnsmasq-dns-7b9694dd79-7fnhx\" (UID: \"4efd2c15-caba-4f5a-96ba-db5845549510\") " pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx" Feb 16 21:36:19.089424 master-0 kubenswrapper[38936]: I0216 21:36:19.085013 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4efd2c15-caba-4f5a-96ba-db5845549510-ovsdbserver-sb\") pod \"dnsmasq-dns-7b9694dd79-7fnhx\" (UID: \"4efd2c15-caba-4f5a-96ba-db5845549510\") " pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx" Feb 16 21:36:19.089424 master-0 kubenswrapper[38936]: I0216 21:36:19.085122 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/127e35ba-a64d-4e41-b803-54c9e9bd526d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"127e35ba-a64d-4e41-b803-54c9e9bd526d\") " pod="openstack/ovn-northd-0" Feb 16 21:36:19.089424 master-0 kubenswrapper[38936]: I0216 21:36:19.085610 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/127e35ba-a64d-4e41-b803-54c9e9bd526d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"127e35ba-a64d-4e41-b803-54c9e9bd526d\") " pod="openstack/ovn-northd-0" Feb 16 21:36:19.089424 master-0 kubenswrapper[38936]: I0216 21:36:19.085731 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4efd2c15-caba-4f5a-96ba-db5845549510-config\") pod \"dnsmasq-dns-7b9694dd79-7fnhx\" (UID: \"4efd2c15-caba-4f5a-96ba-db5845549510\") " pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx" Feb 16 21:36:19.089424 master-0 kubenswrapper[38936]: I0216 21:36:19.086557 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4efd2c15-caba-4f5a-96ba-db5845549510-dns-svc\") pod 
\"dnsmasq-dns-7b9694dd79-7fnhx\" (UID: \"4efd2c15-caba-4f5a-96ba-db5845549510\") " pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx" Feb 16 21:36:19.089424 master-0 kubenswrapper[38936]: I0216 21:36:19.087490 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/127e35ba-a64d-4e41-b803-54c9e9bd526d-config\") pod \"ovn-northd-0\" (UID: \"127e35ba-a64d-4e41-b803-54c9e9bd526d\") " pod="openstack/ovn-northd-0" Feb 16 21:36:19.089424 master-0 kubenswrapper[38936]: I0216 21:36:19.088323 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4efd2c15-caba-4f5a-96ba-db5845549510-ovsdbserver-nb\") pod \"dnsmasq-dns-7b9694dd79-7fnhx\" (UID: \"4efd2c15-caba-4f5a-96ba-db5845549510\") " pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx" Feb 16 21:36:19.089424 master-0 kubenswrapper[38936]: I0216 21:36:19.088762 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4efd2c15-caba-4f5a-96ba-db5845549510-ovsdbserver-sb\") pod \"dnsmasq-dns-7b9694dd79-7fnhx\" (UID: \"4efd2c15-caba-4f5a-96ba-db5845549510\") " pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx" Feb 16 21:36:19.089424 master-0 kubenswrapper[38936]: I0216 21:36:19.089069 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/127e35ba-a64d-4e41-b803-54c9e9bd526d-scripts\") pod \"ovn-northd-0\" (UID: \"127e35ba-a64d-4e41-b803-54c9e9bd526d\") " pod="openstack/ovn-northd-0" Feb 16 21:36:19.090772 master-0 kubenswrapper[38936]: I0216 21:36:19.089548 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/127e35ba-a64d-4e41-b803-54c9e9bd526d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"127e35ba-a64d-4e41-b803-54c9e9bd526d\") " pod="openstack/ovn-northd-0" Feb 16 21:36:19.090772 
master-0 kubenswrapper[38936]: I0216 21:36:19.089991 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/127e35ba-a64d-4e41-b803-54c9e9bd526d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"127e35ba-a64d-4e41-b803-54c9e9bd526d\") " pod="openstack/ovn-northd-0" Feb 16 21:36:19.093622 master-0 kubenswrapper[38936]: I0216 21:36:19.093285 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127e35ba-a64d-4e41-b803-54c9e9bd526d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"127e35ba-a64d-4e41-b803-54c9e9bd526d\") " pod="openstack/ovn-northd-0" Feb 16 21:36:19.103940 master-0 kubenswrapper[38936]: I0216 21:36:19.103724 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/127e35ba-a64d-4e41-b803-54c9e9bd526d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"127e35ba-a64d-4e41-b803-54c9e9bd526d\") " pod="openstack/ovn-northd-0" Feb 16 21:36:19.109591 master-0 kubenswrapper[38936]: I0216 21:36:19.109466 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvknb\" (UniqueName: \"kubernetes.io/projected/127e35ba-a64d-4e41-b803-54c9e9bd526d-kube-api-access-mvknb\") pod \"ovn-northd-0\" (UID: \"127e35ba-a64d-4e41-b803-54c9e9bd526d\") " pod="openstack/ovn-northd-0" Feb 16 21:36:19.109591 master-0 kubenswrapper[38936]: I0216 21:36:19.109529 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6h6v\" (UniqueName: \"kubernetes.io/projected/4efd2c15-caba-4f5a-96ba-db5845549510-kube-api-access-s6h6v\") pod \"dnsmasq-dns-7b9694dd79-7fnhx\" (UID: \"4efd2c15-caba-4f5a-96ba-db5845549510\") " pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx" Feb 16 21:36:19.178714 master-0 kubenswrapper[38936]: I0216 21:36:19.176317 38936 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 21:36:19.193951 master-0 kubenswrapper[38936]: I0216 21:36:19.193537 38936 generic.go:334] "Generic (PLEG): container finished" podID="5fc61990-a712-4046-925c-a18d2a0b34a5" containerID="05458d3427507c37cc14f2b67293711889efda02f4850b57d54bb83a49422462" exitCode=0 Feb 16 21:36:19.193951 master-0 kubenswrapper[38936]: I0216 21:36:19.193616 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5fc61990-a712-4046-925c-a18d2a0b34a5","Type":"ContainerDied","Data":"05458d3427507c37cc14f2b67293711889efda02f4850b57d54bb83a49422462"} Feb 16 21:36:19.198711 master-0 kubenswrapper[38936]: I0216 21:36:19.198664 38936 generic.go:334] "Generic (PLEG): container finished" podID="eb77fc89-6e9b-438d-8fb7-c87367a747b0" containerID="cb1bd87a3726e0106dd6dac1b17d42b5fbb0d7ee7b620c878132a64d938b3e0c" exitCode=0 Feb 16 21:36:19.198842 master-0 kubenswrapper[38936]: I0216 21:36:19.198781 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"eb77fc89-6e9b-438d-8fb7-c87367a747b0","Type":"ContainerDied","Data":"cb1bd87a3726e0106dd6dac1b17d42b5fbb0d7ee7b620c878132a64d938b3e0c"} Feb 16 21:36:19.282924 master-0 kubenswrapper[38936]: I0216 21:36:19.282442 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx" Feb 16 21:36:19.655739 master-0 kubenswrapper[38936]: I0216 21:36:19.651182 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-8bjc6"] Feb 16 21:36:19.659334 master-0 kubenswrapper[38936]: W0216 21:36:19.658968 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cdd3b3b_d20c_4250_9c02_1a2e3ad11a7c.slice/crio-43e5bc48264752e8bd41cb27e90be01ac639c6b78785900aa4605ee53975cbf2 WatchSource:0}: Error finding container 43e5bc48264752e8bd41cb27e90be01ac639c6b78785900aa4605ee53975cbf2: Status 404 returned error can't find the container with id 43e5bc48264752e8bd41cb27e90be01ac639c6b78785900aa4605ee53975cbf2 Feb 16 21:36:19.668320 master-0 kubenswrapper[38936]: W0216 21:36:19.665124 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3534c37_e4ab_4a9b_b6e2_578dbd28bfab.slice/crio-166df95a7bff0e1f7fdf3a049279a2cc05526b98d102eef5a1c5e9c2d7725d33 WatchSource:0}: Error finding container 166df95a7bff0e1f7fdf3a049279a2cc05526b98d102eef5a1c5e9c2d7725d33: Status 404 returned error can't find the container with id 166df95a7bff0e1f7fdf3a049279a2cc05526b98d102eef5a1c5e9c2d7725d33 Feb 16 21:36:19.686466 master-0 kubenswrapper[38936]: I0216 21:36:19.686005 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-nhtlw"] Feb 16 21:36:19.774319 master-0 kubenswrapper[38936]: I0216 21:36:19.774224 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 21:36:19.776898 master-0 kubenswrapper[38936]: W0216 21:36:19.776849 38936 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod127e35ba_a64d_4e41_b803_54c9e9bd526d.slice/crio-cec244bb3c68650b268f32aef56dcdb901faa68f50c60255c47a8f0ab1b1e666 WatchSource:0}: Error finding container cec244bb3c68650b268f32aef56dcdb901faa68f50c60255c47a8f0ab1b1e666: Status 404 returned error can't find the container with id cec244bb3c68650b268f32aef56dcdb901faa68f50c60255c47a8f0ab1b1e666 Feb 16 21:36:19.850546 master-0 kubenswrapper[38936]: I0216 21:36:19.849396 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-7fnhx"] Feb 16 21:36:20.210892 master-0 kubenswrapper[38936]: I0216 21:36:20.210746 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5fc61990-a712-4046-925c-a18d2a0b34a5","Type":"ContainerStarted","Data":"5debf9529bd5eb3701cbdeb748261c5262b7f7d2bf74e1fd66a9e5fa76181b10"} Feb 16 21:36:20.214733 master-0 kubenswrapper[38936]: I0216 21:36:20.214694 38936 generic.go:334] "Generic (PLEG): container finished" podID="9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c" containerID="6d0f4b4f0304f24302603125e6c72fbf08eb9e33f83fe66aac2a1d0e8c1ea2fd" exitCode=0 Feb 16 21:36:20.215109 master-0 kubenswrapper[38936]: I0216 21:36:20.214789 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8cfc46bf-8bjc6" event={"ID":"9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c","Type":"ContainerDied","Data":"6d0f4b4f0304f24302603125e6c72fbf08eb9e33f83fe66aac2a1d0e8c1ea2fd"} Feb 16 21:36:20.215172 master-0 kubenswrapper[38936]: I0216 21:36:20.215145 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8cfc46bf-8bjc6" event={"ID":"9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c","Type":"ContainerStarted","Data":"43e5bc48264752e8bd41cb27e90be01ac639c6b78785900aa4605ee53975cbf2"} Feb 16 21:36:20.221726 master-0 kubenswrapper[38936]: I0216 21:36:20.219891 38936 generic.go:334] "Generic (PLEG): container finished" 
podID="4efd2c15-caba-4f5a-96ba-db5845549510" containerID="fe9a41077891e5a93ba3a823e998aaac405c1add9f8650066ca7e40de97bce40" exitCode=0 Feb 16 21:36:20.221726 master-0 kubenswrapper[38936]: I0216 21:36:20.219984 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx" event={"ID":"4efd2c15-caba-4f5a-96ba-db5845549510","Type":"ContainerDied","Data":"fe9a41077891e5a93ba3a823e998aaac405c1add9f8650066ca7e40de97bce40"} Feb 16 21:36:20.221726 master-0 kubenswrapper[38936]: I0216 21:36:20.220096 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx" event={"ID":"4efd2c15-caba-4f5a-96ba-db5845549510","Type":"ContainerStarted","Data":"aee18587c962fb4d37085fdbb3e7bc7156e80d0c3c29920fadbf412c65d5600b"} Feb 16 21:36:20.228178 master-0 kubenswrapper[38936]: I0216 21:36:20.228087 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"eb77fc89-6e9b-438d-8fb7-c87367a747b0","Type":"ContainerStarted","Data":"d6a29ff9b9404e074ddfcc53e2d26aa96a49e8df3adc87c51b377b8c0ce214b2"} Feb 16 21:36:20.231996 master-0 kubenswrapper[38936]: I0216 21:36:20.231938 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"127e35ba-a64d-4e41-b803-54c9e9bd526d","Type":"ContainerStarted","Data":"cec244bb3c68650b268f32aef56dcdb901faa68f50c60255c47a8f0ab1b1e666"} Feb 16 21:36:20.236576 master-0 kubenswrapper[38936]: I0216 21:36:20.236342 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-nhtlw" event={"ID":"d3534c37-e4ab-4a9b-b6e2-578dbd28bfab","Type":"ContainerStarted","Data":"6683ff21325ecd8bd95b5decc9887e1f0e7f92eb19bc7b20e308a25f2c1a37b2"} Feb 16 21:36:20.236627 master-0 kubenswrapper[38936]: I0216 21:36:20.236596 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-nhtlw" 
event={"ID":"d3534c37-e4ab-4a9b-b6e2-578dbd28bfab","Type":"ContainerStarted","Data":"166df95a7bff0e1f7fdf3a049279a2cc05526b98d102eef5a1c5e9c2d7725d33"} Feb 16 21:36:20.246065 master-0 kubenswrapper[38936]: I0216 21:36:20.245910 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=26.6744223 podStartE2EDuration="34.245871224s" podCreationTimestamp="2026-02-16 21:35:46 +0000 UTC" firstStartedPulling="2026-02-16 21:36:05.622266365 +0000 UTC m=+795.974269727" lastFinishedPulling="2026-02-16 21:36:13.193715289 +0000 UTC m=+803.545718651" observedRunningTime="2026-02-16 21:36:20.239984796 +0000 UTC m=+810.591988388" watchObservedRunningTime="2026-02-16 21:36:20.245871224 +0000 UTC m=+810.597874576" Feb 16 21:36:20.289929 master-0 kubenswrapper[38936]: I0216 21:36:20.285817 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-nhtlw" podStartSLOduration=2.2857940230000002 podStartE2EDuration="2.285794023s" podCreationTimestamp="2026-02-16 21:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:36:20.26571354 +0000 UTC m=+810.617716902" watchObservedRunningTime="2026-02-16 21:36:20.285794023 +0000 UTC m=+810.637797385" Feb 16 21:36:20.335670 master-0 kubenswrapper[38936]: I0216 21:36:20.330148 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=23.804072873 podStartE2EDuration="32.3301257s" podCreationTimestamp="2026-02-16 21:35:48 +0000 UTC" firstStartedPulling="2026-02-16 21:36:04.634159767 +0000 UTC m=+794.986163139" lastFinishedPulling="2026-02-16 21:36:13.160212604 +0000 UTC m=+803.512215966" observedRunningTime="2026-02-16 21:36:20.300803508 +0000 UTC m=+810.652806870" watchObservedRunningTime="2026-02-16 21:36:20.3301257 +0000 UTC m=+810.682129052" Feb 
16 21:36:20.335670 master-0 kubenswrapper[38936]: I0216 21:36:20.334019 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 16 21:36:20.690734 master-0 kubenswrapper[38936]: I0216 21:36:20.690689 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c8cfc46bf-8bjc6" Feb 16 21:36:20.856130 master-0 kubenswrapper[38936]: I0216 21:36:20.856066 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c-config\") pod \"9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c\" (UID: \"9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c\") " Feb 16 21:36:20.856350 master-0 kubenswrapper[38936]: I0216 21:36:20.856191 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c-ovsdbserver-nb\") pod \"9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c\" (UID: \"9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c\") " Feb 16 21:36:20.856350 master-0 kubenswrapper[38936]: I0216 21:36:20.856233 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z2cb\" (UniqueName: \"kubernetes.io/projected/9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c-kube-api-access-6z2cb\") pod \"9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c\" (UID: \"9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c\") " Feb 16 21:36:20.856350 master-0 kubenswrapper[38936]: I0216 21:36:20.856267 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c-dns-svc\") pod \"9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c\" (UID: \"9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c\") " Feb 16 21:36:20.872691 master-0 kubenswrapper[38936]: I0216 21:36:20.861233 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c-kube-api-access-6z2cb" (OuterVolumeSpecName: "kube-api-access-6z2cb") pod "9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c" (UID: "9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c"). InnerVolumeSpecName "kube-api-access-6z2cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:36:20.889683 master-0 kubenswrapper[38936]: I0216 21:36:20.880337 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c-config" (OuterVolumeSpecName: "config") pod "9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c" (UID: "9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:36:20.889683 master-0 kubenswrapper[38936]: I0216 21:36:20.881773 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c" (UID: "9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:36:20.889683 master-0 kubenswrapper[38936]: I0216 21:36:20.885714 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c" (UID: "9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:36:20.960750 master-0 kubenswrapper[38936]: I0216 21:36:20.960248 38936 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:20.960750 master-0 kubenswrapper[38936]: I0216 21:36:20.960315 38936 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:20.960750 master-0 kubenswrapper[38936]: I0216 21:36:20.960335 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z2cb\" (UniqueName: \"kubernetes.io/projected/9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c-kube-api-access-6z2cb\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:20.960750 master-0 kubenswrapper[38936]: I0216 21:36:20.960346 38936 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:21.270100 master-0 kubenswrapper[38936]: I0216 21:36:21.269984 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8cfc46bf-8bjc6" event={"ID":"9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c","Type":"ContainerDied","Data":"43e5bc48264752e8bd41cb27e90be01ac639c6b78785900aa4605ee53975cbf2"} Feb 16 21:36:21.270609 master-0 kubenswrapper[38936]: I0216 21:36:21.270579 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c8cfc46bf-8bjc6" Feb 16 21:36:21.270753 master-0 kubenswrapper[38936]: I0216 21:36:21.270581 38936 scope.go:117] "RemoveContainer" containerID="6d0f4b4f0304f24302603125e6c72fbf08eb9e33f83fe66aac2a1d0e8c1ea2fd" Feb 16 21:36:21.276433 master-0 kubenswrapper[38936]: I0216 21:36:21.276317 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx" event={"ID":"4efd2c15-caba-4f5a-96ba-db5845549510","Type":"ContainerStarted","Data":"769fba01656d1cf88a410fd30c9dcd3d39ac8a7922672597b1ea6da8c448b641"} Feb 16 21:36:21.276433 master-0 kubenswrapper[38936]: I0216 21:36:21.276407 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx" Feb 16 21:36:21.280373 master-0 kubenswrapper[38936]: I0216 21:36:21.279849 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"127e35ba-a64d-4e41-b803-54c9e9bd526d","Type":"ContainerStarted","Data":"09b48f1b2ebd60661406b200b653d2f2abebbf00da0e34fec793f2eedc9e6301"} Feb 16 21:36:21.300911 master-0 kubenswrapper[38936]: I0216 21:36:21.300842 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx" podStartSLOduration=3.300823232 podStartE2EDuration="3.300823232s" podCreationTimestamp="2026-02-16 21:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:36:21.29634103 +0000 UTC m=+811.648344392" watchObservedRunningTime="2026-02-16 21:36:21.300823232 +0000 UTC m=+811.652826594" Feb 16 21:36:21.377910 master-0 kubenswrapper[38936]: I0216 21:36:21.377774 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-8bjc6"] Feb 16 21:36:21.387075 master-0 kubenswrapper[38936]: I0216 21:36:21.387029 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-7c8cfc46bf-8bjc6"]
Feb 16 21:36:21.894442 master-0 kubenswrapper[38936]: I0216 21:36:21.894378 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c" path="/var/lib/kubelet/pods/9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c/volumes"
Feb 16 21:36:22.344672 master-0 kubenswrapper[38936]: I0216 21:36:22.340396 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"127e35ba-a64d-4e41-b803-54c9e9bd526d","Type":"ContainerStarted","Data":"e55a499544fc6a49bb7c3b1a5b9bc74b19be88ad12800c9b888f65bfe61ad1c2"}
Feb 16 21:36:22.344672 master-0 kubenswrapper[38936]: I0216 21:36:22.341833 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Feb 16 21:36:22.352667 master-0 kubenswrapper[38936]: I0216 21:36:22.346322 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-7fnhx"]
Feb 16 21:36:22.374340 master-0 kubenswrapper[38936]: I0216 21:36:22.373466 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-n7glt"]
Feb 16 21:36:22.374340 master-0 kubenswrapper[38936]: E0216 21:36:22.374014 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c" containerName="init"
Feb 16 21:36:22.374340 master-0 kubenswrapper[38936]: I0216 21:36:22.374028 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c" containerName="init"
Feb 16 21:36:22.374340 master-0 kubenswrapper[38936]: I0216 21:36:22.374223 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cdd3b3b-d20c-4250-9c02-1a2e3ad11a7c" containerName="init"
Feb 16 21:36:22.376346 master-0 kubenswrapper[38936]: I0216 21:36:22.376304 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fd49994df-n7glt"
Feb 16 21:36:22.398668 master-0 kubenswrapper[38936]: I0216 21:36:22.397750 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-n7glt"]
Feb 16 21:36:22.418495 master-0 kubenswrapper[38936]: I0216 21:36:22.407147 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.1715547 podStartE2EDuration="4.407122995s" podCreationTimestamp="2026-02-16 21:36:18 +0000 UTC" firstStartedPulling="2026-02-16 21:36:19.777618276 +0000 UTC m=+810.129621638" lastFinishedPulling="2026-02-16 21:36:21.013186561 +0000 UTC m=+811.365189933" observedRunningTime="2026-02-16 21:36:22.396444046 +0000 UTC m=+812.748447408" watchObservedRunningTime="2026-02-16 21:36:22.407122995 +0000 UTC m=+812.759126357"
Feb 16 21:36:22.418495 master-0 kubenswrapper[38936]: I0216 21:36:22.416938 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3037cb65-febb-4854-a8ae-8f8c182a3e64-dns-svc\") pod \"dnsmasq-dns-6fd49994df-n7glt\" (UID: \"3037cb65-febb-4854-a8ae-8f8c182a3e64\") " pod="openstack/dnsmasq-dns-6fd49994df-n7glt"
Feb 16 21:36:22.418495 master-0 kubenswrapper[38936]: I0216 21:36:22.417004 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3037cb65-febb-4854-a8ae-8f8c182a3e64-config\") pod \"dnsmasq-dns-6fd49994df-n7glt\" (UID: \"3037cb65-febb-4854-a8ae-8f8c182a3e64\") " pod="openstack/dnsmasq-dns-6fd49994df-n7glt"
Feb 16 21:36:22.418495 master-0 kubenswrapper[38936]: I0216 21:36:22.417232 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v85g7\" (UniqueName: \"kubernetes.io/projected/3037cb65-febb-4854-a8ae-8f8c182a3e64-kube-api-access-v85g7\") pod \"dnsmasq-dns-6fd49994df-n7glt\" (UID: \"3037cb65-febb-4854-a8ae-8f8c182a3e64\") " pod="openstack/dnsmasq-dns-6fd49994df-n7glt"
Feb 16 21:36:22.418495 master-0 kubenswrapper[38936]: I0216 21:36:22.417308 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3037cb65-febb-4854-a8ae-8f8c182a3e64-ovsdbserver-sb\") pod \"dnsmasq-dns-6fd49994df-n7glt\" (UID: \"3037cb65-febb-4854-a8ae-8f8c182a3e64\") " pod="openstack/dnsmasq-dns-6fd49994df-n7glt"
Feb 16 21:36:22.418495 master-0 kubenswrapper[38936]: I0216 21:36:22.417398 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3037cb65-febb-4854-a8ae-8f8c182a3e64-ovsdbserver-nb\") pod \"dnsmasq-dns-6fd49994df-n7glt\" (UID: \"3037cb65-febb-4854-a8ae-8f8c182a3e64\") " pod="openstack/dnsmasq-dns-6fd49994df-n7glt"
Feb 16 21:36:22.521167 master-0 kubenswrapper[38936]: I0216 21:36:22.521096 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v85g7\" (UniqueName: \"kubernetes.io/projected/3037cb65-febb-4854-a8ae-8f8c182a3e64-kube-api-access-v85g7\") pod \"dnsmasq-dns-6fd49994df-n7glt\" (UID: \"3037cb65-febb-4854-a8ae-8f8c182a3e64\") " pod="openstack/dnsmasq-dns-6fd49994df-n7glt"
Feb 16 21:36:22.521167 master-0 kubenswrapper[38936]: I0216 21:36:22.521170 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3037cb65-febb-4854-a8ae-8f8c182a3e64-ovsdbserver-sb\") pod \"dnsmasq-dns-6fd49994df-n7glt\" (UID: \"3037cb65-febb-4854-a8ae-8f8c182a3e64\") " pod="openstack/dnsmasq-dns-6fd49994df-n7glt"
Feb 16 21:36:22.521421 master-0 kubenswrapper[38936]: I0216 21:36:22.521214 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3037cb65-febb-4854-a8ae-8f8c182a3e64-ovsdbserver-nb\") pod \"dnsmasq-dns-6fd49994df-n7glt\" (UID: \"3037cb65-febb-4854-a8ae-8f8c182a3e64\") " pod="openstack/dnsmasq-dns-6fd49994df-n7glt"
Feb 16 21:36:22.521421 master-0 kubenswrapper[38936]: I0216 21:36:22.521342 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3037cb65-febb-4854-a8ae-8f8c182a3e64-dns-svc\") pod \"dnsmasq-dns-6fd49994df-n7glt\" (UID: \"3037cb65-febb-4854-a8ae-8f8c182a3e64\") " pod="openstack/dnsmasq-dns-6fd49994df-n7glt"
Feb 16 21:36:22.521421 master-0 kubenswrapper[38936]: I0216 21:36:22.521373 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3037cb65-febb-4854-a8ae-8f8c182a3e64-config\") pod \"dnsmasq-dns-6fd49994df-n7glt\" (UID: \"3037cb65-febb-4854-a8ae-8f8c182a3e64\") " pod="openstack/dnsmasq-dns-6fd49994df-n7glt"
Feb 16 21:36:22.522335 master-0 kubenswrapper[38936]: I0216 21:36:22.522303 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3037cb65-febb-4854-a8ae-8f8c182a3e64-config\") pod \"dnsmasq-dns-6fd49994df-n7glt\" (UID: \"3037cb65-febb-4854-a8ae-8f8c182a3e64\") " pod="openstack/dnsmasq-dns-6fd49994df-n7glt"
Feb 16 21:36:22.525575 master-0 kubenswrapper[38936]: I0216 21:36:22.525546 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3037cb65-febb-4854-a8ae-8f8c182a3e64-ovsdbserver-sb\") pod \"dnsmasq-dns-6fd49994df-n7glt\" (UID: \"3037cb65-febb-4854-a8ae-8f8c182a3e64\") " pod="openstack/dnsmasq-dns-6fd49994df-n7glt"
Feb 16 21:36:22.526867 master-0 kubenswrapper[38936]: I0216 21:36:22.526827 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3037cb65-febb-4854-a8ae-8f8c182a3e64-ovsdbserver-nb\") pod \"dnsmasq-dns-6fd49994df-n7glt\" (UID: \"3037cb65-febb-4854-a8ae-8f8c182a3e64\") " pod="openstack/dnsmasq-dns-6fd49994df-n7glt"
Feb 16 21:36:22.528022 master-0 kubenswrapper[38936]: I0216 21:36:22.527986 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3037cb65-febb-4854-a8ae-8f8c182a3e64-dns-svc\") pod \"dnsmasq-dns-6fd49994df-n7glt\" (UID: \"3037cb65-febb-4854-a8ae-8f8c182a3e64\") " pod="openstack/dnsmasq-dns-6fd49994df-n7glt"
Feb 16 21:36:22.545600 master-0 kubenswrapper[38936]: I0216 21:36:22.545562 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v85g7\" (UniqueName: \"kubernetes.io/projected/3037cb65-febb-4854-a8ae-8f8c182a3e64-kube-api-access-v85g7\") pod \"dnsmasq-dns-6fd49994df-n7glt\" (UID: \"3037cb65-febb-4854-a8ae-8f8c182a3e64\") " pod="openstack/dnsmasq-dns-6fd49994df-n7glt"
Feb 16 21:36:22.724367 master-0 kubenswrapper[38936]: I0216 21:36:22.724305 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fd49994df-n7glt"
Feb 16 21:36:23.227855 master-0 kubenswrapper[38936]: I0216 21:36:23.225960 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 16 21:36:23.228638 master-0 kubenswrapper[38936]: I0216 21:36:23.228582 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 16 21:36:23.272908 master-0 kubenswrapper[38936]: I0216 21:36:23.271675 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-n7glt"]
Feb 16 21:36:23.279722 master-0 kubenswrapper[38936]: W0216 21:36:23.279641 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3037cb65_febb_4854_a8ae_8f8c182a3e64.slice/crio-9592c1279109e2f05ec25603cd9717bd15e0ce40c33de3bd76aa7b6cdf1a9c76 WatchSource:0}: Error finding container 9592c1279109e2f05ec25603cd9717bd15e0ce40c33de3bd76aa7b6cdf1a9c76: Status 404 returned error can't find the container with id 9592c1279109e2f05ec25603cd9717bd15e0ce40c33de3bd76aa7b6cdf1a9c76
Feb 16 21:36:23.404928 master-0 kubenswrapper[38936]: I0216 21:36:23.404850 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-n7glt" event={"ID":"3037cb65-febb-4854-a8ae-8f8c182a3e64","Type":"ContainerStarted","Data":"9592c1279109e2f05ec25603cd9717bd15e0ce40c33de3bd76aa7b6cdf1a9c76"}
Feb 16 21:36:23.406129 master-0 kubenswrapper[38936]: I0216 21:36:23.406051 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx" podUID="4efd2c15-caba-4f5a-96ba-db5845549510" containerName="dnsmasq-dns" containerID="cri-o://769fba01656d1cf88a410fd30c9dcd3d39ac8a7922672597b1ea6da8c448b641" gracePeriod=10
Feb 16 21:36:23.974904 master-0 kubenswrapper[38936]: I0216 21:36:23.974306 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx"
Feb 16 21:36:24.128454 master-0 kubenswrapper[38936]: I0216 21:36:24.128130 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 16 21:36:24.128454 master-0 kubenswrapper[38936]: I0216 21:36:24.128225 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 16 21:36:24.170588 master-0 kubenswrapper[38936]: I0216 21:36:24.170347 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4efd2c15-caba-4f5a-96ba-db5845549510-ovsdbserver-sb\") pod \"4efd2c15-caba-4f5a-96ba-db5845549510\" (UID: \"4efd2c15-caba-4f5a-96ba-db5845549510\") "
Feb 16 21:36:24.170810 master-0 kubenswrapper[38936]: I0216 21:36:24.170708 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6h6v\" (UniqueName: \"kubernetes.io/projected/4efd2c15-caba-4f5a-96ba-db5845549510-kube-api-access-s6h6v\") pod \"4efd2c15-caba-4f5a-96ba-db5845549510\" (UID: \"4efd2c15-caba-4f5a-96ba-db5845549510\") "
Feb 16 21:36:24.170873 master-0 kubenswrapper[38936]: I0216 21:36:24.170851 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4efd2c15-caba-4f5a-96ba-db5845549510-dns-svc\") pod \"4efd2c15-caba-4f5a-96ba-db5845549510\" (UID: \"4efd2c15-caba-4f5a-96ba-db5845549510\") "
Feb 16 21:36:24.170955 master-0 kubenswrapper[38936]: I0216 21:36:24.170919 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4efd2c15-caba-4f5a-96ba-db5845549510-ovsdbserver-nb\") pod \"4efd2c15-caba-4f5a-96ba-db5845549510\" (UID: \"4efd2c15-caba-4f5a-96ba-db5845549510\") "
Feb 16 21:36:24.171074 master-0 kubenswrapper[38936]: I0216 21:36:24.171027 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4efd2c15-caba-4f5a-96ba-db5845549510-config\") pod \"4efd2c15-caba-4f5a-96ba-db5845549510\" (UID: \"4efd2c15-caba-4f5a-96ba-db5845549510\") "
Feb 16 21:36:24.177944 master-0 kubenswrapper[38936]: I0216 21:36:24.177864 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4efd2c15-caba-4f5a-96ba-db5845549510-kube-api-access-s6h6v" (OuterVolumeSpecName: "kube-api-access-s6h6v") pod "4efd2c15-caba-4f5a-96ba-db5845549510" (UID: "4efd2c15-caba-4f5a-96ba-db5845549510"). InnerVolumeSpecName "kube-api-access-s6h6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:36:24.218508 master-0 kubenswrapper[38936]: I0216 21:36:24.218405 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4efd2c15-caba-4f5a-96ba-db5845549510-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4efd2c15-caba-4f5a-96ba-db5845549510" (UID: "4efd2c15-caba-4f5a-96ba-db5845549510"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:36:24.221069 master-0 kubenswrapper[38936]: I0216 21:36:24.221003 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4efd2c15-caba-4f5a-96ba-db5845549510-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4efd2c15-caba-4f5a-96ba-db5845549510" (UID: "4efd2c15-caba-4f5a-96ba-db5845549510"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:36:24.221522 master-0 kubenswrapper[38936]: I0216 21:36:24.221433 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4efd2c15-caba-4f5a-96ba-db5845549510-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4efd2c15-caba-4f5a-96ba-db5845549510" (UID: "4efd2c15-caba-4f5a-96ba-db5845549510"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:36:24.227951 master-0 kubenswrapper[38936]: I0216 21:36:24.227903 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4efd2c15-caba-4f5a-96ba-db5845549510-config" (OuterVolumeSpecName: "config") pod "4efd2c15-caba-4f5a-96ba-db5845549510" (UID: "4efd2c15-caba-4f5a-96ba-db5845549510"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:36:24.273558 master-0 kubenswrapper[38936]: I0216 21:36:24.273491 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6h6v\" (UniqueName: \"kubernetes.io/projected/4efd2c15-caba-4f5a-96ba-db5845549510-kube-api-access-s6h6v\") on node \"master-0\" DevicePath \"\""
Feb 16 21:36:24.273558 master-0 kubenswrapper[38936]: I0216 21:36:24.273545 38936 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4efd2c15-caba-4f5a-96ba-db5845549510-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 16 21:36:24.273558 master-0 kubenswrapper[38936]: I0216 21:36:24.273558 38936 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4efd2c15-caba-4f5a-96ba-db5845549510-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 16 21:36:24.273558 master-0 kubenswrapper[38936]: I0216 21:36:24.273571 38936 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4efd2c15-caba-4f5a-96ba-db5845549510-config\") on node \"master-0\" DevicePath \"\""
Feb 16 21:36:24.273879 master-0 kubenswrapper[38936]: I0216 21:36:24.273586 38936 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4efd2c15-caba-4f5a-96ba-db5845549510-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 16 21:36:24.321088 master-0 kubenswrapper[38936]: I0216 21:36:24.321016 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Feb 16 21:36:24.321576 master-0 kubenswrapper[38936]: E0216 21:36:24.321504 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4efd2c15-caba-4f5a-96ba-db5845549510" containerName="dnsmasq-dns"
Feb 16 21:36:24.321576 master-0 kubenswrapper[38936]: I0216 21:36:24.321524 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="4efd2c15-caba-4f5a-96ba-db5845549510" containerName="dnsmasq-dns"
Feb 16 21:36:24.321576 master-0 kubenswrapper[38936]: E0216 21:36:24.321577 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4efd2c15-caba-4f5a-96ba-db5845549510" containerName="init"
Feb 16 21:36:24.321576 master-0 kubenswrapper[38936]: I0216 21:36:24.321585 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="4efd2c15-caba-4f5a-96ba-db5845549510" containerName="init"
Feb 16 21:36:24.321921 master-0 kubenswrapper[38936]: I0216 21:36:24.321798 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="4efd2c15-caba-4f5a-96ba-db5845549510" containerName="dnsmasq-dns"
Feb 16 21:36:24.330899 master-0 kubenswrapper[38936]: I0216 21:36:24.330721 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 16 21:36:24.334754 master-0 kubenswrapper[38936]: I0216 21:36:24.332935 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Feb 16 21:36:24.334754 master-0 kubenswrapper[38936]: I0216 21:36:24.333263 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Feb 16 21:36:24.334754 master-0 kubenswrapper[38936]: I0216 21:36:24.334071 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Feb 16 21:36:24.342737 master-0 kubenswrapper[38936]: I0216 21:36:24.342600 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 16 21:36:24.433736 master-0 kubenswrapper[38936]: I0216 21:36:24.432391 38936 generic.go:334] "Generic (PLEG): container finished" podID="3037cb65-febb-4854-a8ae-8f8c182a3e64" containerID="a43ee4e65c0c547597917c1e95598d3467bcbdd3d990805c34b303caa3bf5378" exitCode=0
Feb 16 21:36:24.433736 master-0 kubenswrapper[38936]: I0216 21:36:24.432467 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-n7glt" event={"ID":"3037cb65-febb-4854-a8ae-8f8c182a3e64","Type":"ContainerDied","Data":"a43ee4e65c0c547597917c1e95598d3467bcbdd3d990805c34b303caa3bf5378"}
Feb 16 21:36:24.436500 master-0 kubenswrapper[38936]: I0216 21:36:24.436446 38936 generic.go:334] "Generic (PLEG): container finished" podID="4efd2c15-caba-4f5a-96ba-db5845549510" containerID="769fba01656d1cf88a410fd30c9dcd3d39ac8a7922672597b1ea6da8c448b641" exitCode=0
Feb 16 21:36:24.436692 master-0 kubenswrapper[38936]: I0216 21:36:24.436636 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx" event={"ID":"4efd2c15-caba-4f5a-96ba-db5845549510","Type":"ContainerDied","Data":"769fba01656d1cf88a410fd30c9dcd3d39ac8a7922672597b1ea6da8c448b641"}
Feb 16 21:36:24.436737 master-0 kubenswrapper[38936]: I0216 21:36:24.436706 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx" event={"ID":"4efd2c15-caba-4f5a-96ba-db5845549510","Type":"ContainerDied","Data":"aee18587c962fb4d37085fdbb3e7bc7156e80d0c3c29920fadbf412c65d5600b"}
Feb 16 21:36:24.436737 master-0 kubenswrapper[38936]: I0216 21:36:24.436733 38936 scope.go:117] "RemoveContainer" containerID="769fba01656d1cf88a410fd30c9dcd3d39ac8a7922672597b1ea6da8c448b641"
Feb 16 21:36:24.440843 master-0 kubenswrapper[38936]: I0216 21:36:24.440808 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9694dd79-7fnhx"
Feb 16 21:36:24.480082 master-0 kubenswrapper[38936]: I0216 21:36:24.479634 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/73f726c9-e2b2-4038-a202-5df2ede23bf5-lock\") pod \"swift-storage-0\" (UID: \"73f726c9-e2b2-4038-a202-5df2ede23bf5\") " pod="openstack/swift-storage-0"
Feb 16 21:36:24.480082 master-0 kubenswrapper[38936]: I0216 21:36:24.479832 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/73f726c9-e2b2-4038-a202-5df2ede23bf5-etc-swift\") pod \"swift-storage-0\" (UID: \"73f726c9-e2b2-4038-a202-5df2ede23bf5\") " pod="openstack/swift-storage-0"
Feb 16 21:36:24.480082 master-0 kubenswrapper[38936]: I0216 21:36:24.479960 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrd5r\" (UniqueName: \"kubernetes.io/projected/73f726c9-e2b2-4038-a202-5df2ede23bf5-kube-api-access-jrd5r\") pod \"swift-storage-0\" (UID: \"73f726c9-e2b2-4038-a202-5df2ede23bf5\") " pod="openstack/swift-storage-0"
Feb 16 21:36:24.480082 master-0 kubenswrapper[38936]: I0216 21:36:24.480006 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73f726c9-e2b2-4038-a202-5df2ede23bf5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"73f726c9-e2b2-4038-a202-5df2ede23bf5\") " pod="openstack/swift-storage-0"
Feb 16 21:36:24.480247 master-0 kubenswrapper[38936]: I0216 21:36:24.480096 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ea20a318-9c32-4cc9-8864-0ae1ff48ca4d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e2f189dd-cebd-48e7-8897-47576d39a0be\") pod \"swift-storage-0\" (UID: \"73f726c9-e2b2-4038-a202-5df2ede23bf5\") " pod="openstack/swift-storage-0"
Feb 16 21:36:24.481735 master-0 kubenswrapper[38936]: I0216 21:36:24.480459 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/73f726c9-e2b2-4038-a202-5df2ede23bf5-cache\") pod \"swift-storage-0\" (UID: \"73f726c9-e2b2-4038-a202-5df2ede23bf5\") " pod="openstack/swift-storage-0"
Feb 16 21:36:24.582870 master-0 kubenswrapper[38936]: I0216 21:36:24.582815 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/73f726c9-e2b2-4038-a202-5df2ede23bf5-lock\") pod \"swift-storage-0\" (UID: \"73f726c9-e2b2-4038-a202-5df2ede23bf5\") " pod="openstack/swift-storage-0"
Feb 16 21:36:24.583093 master-0 kubenswrapper[38936]: I0216 21:36:24.582945 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/73f726c9-e2b2-4038-a202-5df2ede23bf5-etc-swift\") pod \"swift-storage-0\" (UID: \"73f726c9-e2b2-4038-a202-5df2ede23bf5\") " pod="openstack/swift-storage-0"
Feb 16 21:36:24.583297 master-0 kubenswrapper[38936]: I0216 21:36:24.583269 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrd5r\" (UniqueName: \"kubernetes.io/projected/73f726c9-e2b2-4038-a202-5df2ede23bf5-kube-api-access-jrd5r\") pod \"swift-storage-0\" (UID: \"73f726c9-e2b2-4038-a202-5df2ede23bf5\") " pod="openstack/swift-storage-0"
Feb 16 21:36:24.583345 master-0 kubenswrapper[38936]: I0216 21:36:24.583300 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73f726c9-e2b2-4038-a202-5df2ede23bf5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"73f726c9-e2b2-4038-a202-5df2ede23bf5\") " pod="openstack/swift-storage-0"
Feb 16 21:36:24.583390 master-0 kubenswrapper[38936]: I0216 21:36:24.583358 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ea20a318-9c32-4cc9-8864-0ae1ff48ca4d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e2f189dd-cebd-48e7-8897-47576d39a0be\") pod \"swift-storage-0\" (UID: \"73f726c9-e2b2-4038-a202-5df2ede23bf5\") " pod="openstack/swift-storage-0"
Feb 16 21:36:24.583434 master-0 kubenswrapper[38936]: I0216 21:36:24.583389 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/73f726c9-e2b2-4038-a202-5df2ede23bf5-cache\") pod \"swift-storage-0\" (UID: \"73f726c9-e2b2-4038-a202-5df2ede23bf5\") " pod="openstack/swift-storage-0"
Feb 16 21:36:24.583515 master-0 kubenswrapper[38936]: I0216 21:36:24.583480 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/73f726c9-e2b2-4038-a202-5df2ede23bf5-lock\") pod \"swift-storage-0\" (UID: \"73f726c9-e2b2-4038-a202-5df2ede23bf5\") " pod="openstack/swift-storage-0"
Feb 16 21:36:24.583574 master-0 kubenswrapper[38936]: E0216 21:36:24.583498 38936 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 16 21:36:24.583574 master-0 kubenswrapper[38936]: E0216 21:36:24.583535 38936 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 16 21:36:24.583677 master-0 kubenswrapper[38936]: E0216 21:36:24.583585 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/73f726c9-e2b2-4038-a202-5df2ede23bf5-etc-swift podName:73f726c9-e2b2-4038-a202-5df2ede23bf5 nodeName:}" failed. No retries permitted until 2026-02-16 21:36:25.083567336 +0000 UTC m=+815.435570698 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/73f726c9-e2b2-4038-a202-5df2ede23bf5-etc-swift") pod "swift-storage-0" (UID: "73f726c9-e2b2-4038-a202-5df2ede23bf5") : configmap "swift-ring-files" not found
Feb 16 21:36:24.583968 master-0 kubenswrapper[38936]: I0216 21:36:24.583942 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/73f726c9-e2b2-4038-a202-5df2ede23bf5-cache\") pod \"swift-storage-0\" (UID: \"73f726c9-e2b2-4038-a202-5df2ede23bf5\") " pod="openstack/swift-storage-0"
Feb 16 21:36:24.588450 master-0 kubenswrapper[38936]: I0216 21:36:24.588406 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73f726c9-e2b2-4038-a202-5df2ede23bf5-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"73f726c9-e2b2-4038-a202-5df2ede23bf5\") " pod="openstack/swift-storage-0"
Feb 16 21:36:24.591298 master-0 kubenswrapper[38936]: I0216 21:36:24.591268 38936 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 21:36:24.591357 master-0 kubenswrapper[38936]: I0216 21:36:24.591300 38936 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ea20a318-9c32-4cc9-8864-0ae1ff48ca4d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e2f189dd-cebd-48e7-8897-47576d39a0be\") pod \"swift-storage-0\" (UID: \"73f726c9-e2b2-4038-a202-5df2ede23bf5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/0a2f838a68ec2ddf6a90b749c3da9ede7eb2807a02184844d185c34d97f338f8/globalmount\"" pod="openstack/swift-storage-0"
Feb 16 21:36:24.606896 master-0 kubenswrapper[38936]: I0216 21:36:24.606087 38936 scope.go:117] "RemoveContainer" containerID="fe9a41077891e5a93ba3a823e998aaac405c1add9f8650066ca7e40de97bce40"
Feb 16 21:36:24.615748 master-0 kubenswrapper[38936]: I0216 21:36:24.615674 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrd5r\" (UniqueName: \"kubernetes.io/projected/73f726c9-e2b2-4038-a202-5df2ede23bf5-kube-api-access-jrd5r\") pod \"swift-storage-0\" (UID: \"73f726c9-e2b2-4038-a202-5df2ede23bf5\") " pod="openstack/swift-storage-0"
Feb 16 21:36:24.676201 master-0 kubenswrapper[38936]: I0216 21:36:24.674671 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-7fnhx"]
Feb 16 21:36:24.676447 master-0 kubenswrapper[38936]: I0216 21:36:24.676362 38936 scope.go:117] "RemoveContainer" containerID="769fba01656d1cf88a410fd30c9dcd3d39ac8a7922672597b1ea6da8c448b641"
Feb 16 21:36:24.677001 master-0 kubenswrapper[38936]: E0216 21:36:24.676957 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"769fba01656d1cf88a410fd30c9dcd3d39ac8a7922672597b1ea6da8c448b641\": container with ID starting with 769fba01656d1cf88a410fd30c9dcd3d39ac8a7922672597b1ea6da8c448b641 not found: ID does not exist" containerID="769fba01656d1cf88a410fd30c9dcd3d39ac8a7922672597b1ea6da8c448b641"
Feb 16 21:36:24.677078 master-0 kubenswrapper[38936]: I0216 21:36:24.677006 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"769fba01656d1cf88a410fd30c9dcd3d39ac8a7922672597b1ea6da8c448b641"} err="failed to get container status \"769fba01656d1cf88a410fd30c9dcd3d39ac8a7922672597b1ea6da8c448b641\": rpc error: code = NotFound desc = could not find container \"769fba01656d1cf88a410fd30c9dcd3d39ac8a7922672597b1ea6da8c448b641\": container with ID starting with 769fba01656d1cf88a410fd30c9dcd3d39ac8a7922672597b1ea6da8c448b641 not found: ID does not exist"
Feb 16 21:36:24.677078 master-0 kubenswrapper[38936]: I0216 21:36:24.677041 38936 scope.go:117] "RemoveContainer" containerID="fe9a41077891e5a93ba3a823e998aaac405c1add9f8650066ca7e40de97bce40"
Feb 16 21:36:24.677624 master-0 kubenswrapper[38936]: E0216 21:36:24.677590 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe9a41077891e5a93ba3a823e998aaac405c1add9f8650066ca7e40de97bce40\": container with ID starting with fe9a41077891e5a93ba3a823e998aaac405c1add9f8650066ca7e40de97bce40 not found: ID does not exist" containerID="fe9a41077891e5a93ba3a823e998aaac405c1add9f8650066ca7e40de97bce40"
Feb 16 21:36:24.677703 master-0 kubenswrapper[38936]: I0216 21:36:24.677626 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe9a41077891e5a93ba3a823e998aaac405c1add9f8650066ca7e40de97bce40"} err="failed to get container status \"fe9a41077891e5a93ba3a823e998aaac405c1add9f8650066ca7e40de97bce40\": rpc error: code = NotFound desc = could not find container \"fe9a41077891e5a93ba3a823e998aaac405c1add9f8650066ca7e40de97bce40\": container with ID starting with fe9a41077891e5a93ba3a823e998aaac405c1add9f8650066ca7e40de97bce40 not found: ID does not exist"
Feb 16 21:36:24.686620 master-0 kubenswrapper[38936]: I0216 21:36:24.686498 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-7fnhx"]
Feb 16 21:36:25.095976 master-0 kubenswrapper[38936]: I0216 21:36:25.095892 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/73f726c9-e2b2-4038-a202-5df2ede23bf5-etc-swift\") pod \"swift-storage-0\" (UID: \"73f726c9-e2b2-4038-a202-5df2ede23bf5\") " pod="openstack/swift-storage-0"
Feb 16 21:36:25.096242 master-0 kubenswrapper[38936]: E0216 21:36:25.096176 38936 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 16 21:36:25.096242 master-0 kubenswrapper[38936]: E0216 21:36:25.096242 38936 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 16 21:36:25.096363 master-0 kubenswrapper[38936]: E0216 21:36:25.096340 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/73f726c9-e2b2-4038-a202-5df2ede23bf5-etc-swift podName:73f726c9-e2b2-4038-a202-5df2ede23bf5 nodeName:}" failed. No retries permitted until 2026-02-16 21:36:26.096311546 +0000 UTC m=+816.448314908 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/73f726c9-e2b2-4038-a202-5df2ede23bf5-etc-swift") pod "swift-storage-0" (UID: "73f726c9-e2b2-4038-a202-5df2ede23bf5") : configmap "swift-ring-files" not found
Feb 16 21:36:25.370762 master-0 kubenswrapper[38936]: I0216 21:36:25.370581 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-l6dz5"]
Feb 16 21:36:25.372526 master-0 kubenswrapper[38936]: I0216 21:36:25.372490 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-l6dz5"
Feb 16 21:36:25.376743 master-0 kubenswrapper[38936]: I0216 21:36:25.376639 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 16 21:36:25.377027 master-0 kubenswrapper[38936]: I0216 21:36:25.377000 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Feb 16 21:36:25.377256 master-0 kubenswrapper[38936]: I0216 21:36:25.377216 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Feb 16 21:36:25.444327 master-0 kubenswrapper[38936]: I0216 21:36:25.444273 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-l6dz5"]
Feb 16 21:36:25.452440 master-0 kubenswrapper[38936]: I0216 21:36:25.452375 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-n7glt" event={"ID":"3037cb65-febb-4854-a8ae-8f8c182a3e64","Type":"ContainerStarted","Data":"a668323f554aed8085c55ed8673b6bf216417bcd1d5031adc649c3ed6ef28132"}
Feb 16 21:36:25.453765 master-0 kubenswrapper[38936]: I0216 21:36:25.453743 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6fd49994df-n7glt"
Feb 16 21:36:25.501670 master-0 kubenswrapper[38936]: I0216 21:36:25.494785 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6fd49994df-n7glt" podStartSLOduration=3.494748899 podStartE2EDuration="3.494748899s" podCreationTimestamp="2026-02-16 21:36:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:36:25.490629538 +0000 UTC m=+815.842632920" watchObservedRunningTime="2026-02-16 21:36:25.494748899 +0000 UTC m=+815.846752261"
Feb 16 21:36:25.516668 master-0 kubenswrapper[38936]: I0216 21:36:25.507963 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/67cebf05-d1da-4a45-aef9-8366546424a5-ring-data-devices\") pod \"swift-ring-rebalance-l6dz5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " pod="openstack/swift-ring-rebalance-l6dz5"
Feb 16 21:36:25.516668 master-0 kubenswrapper[38936]: I0216 21:36:25.508068 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/67cebf05-d1da-4a45-aef9-8366546424a5-dispersionconf\") pod \"swift-ring-rebalance-l6dz5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " pod="openstack/swift-ring-rebalance-l6dz5"
Feb 16 21:36:25.516668 master-0 kubenswrapper[38936]: I0216 21:36:25.508114 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/67cebf05-d1da-4a45-aef9-8366546424a5-etc-swift\") pod \"swift-ring-rebalance-l6dz5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " pod="openstack/swift-ring-rebalance-l6dz5"
Feb 16 21:36:25.516668 master-0 kubenswrapper[38936]: I0216 21:36:25.508178 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/67cebf05-d1da-4a45-aef9-8366546424a5-swiftconf\") pod \"swift-ring-rebalance-l6dz5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " pod="openstack/swift-ring-rebalance-l6dz5"
Feb 16 21:36:25.516668 master-0 kubenswrapper[38936]: I0216 21:36:25.508225 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc76p\" (UniqueName: \"kubernetes.io/projected/67cebf05-d1da-4a45-aef9-8366546424a5-kube-api-access-bc76p\") pod \"swift-ring-rebalance-l6dz5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " pod="openstack/swift-ring-rebalance-l6dz5"
Feb 16 21:36:25.516668 master-0 kubenswrapper[38936]: I0216 21:36:25.508282 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67cebf05-d1da-4a45-aef9-8366546424a5-combined-ca-bundle\") pod \"swift-ring-rebalance-l6dz5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " pod="openstack/swift-ring-rebalance-l6dz5"
Feb 16 21:36:25.516668 master-0 kubenswrapper[38936]: I0216 21:36:25.508306 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67cebf05-d1da-4a45-aef9-8366546424a5-scripts\") pod \"swift-ring-rebalance-l6dz5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " pod="openstack/swift-ring-rebalance-l6dz5"
Feb 16 21:36:25.610801 master-0 kubenswrapper[38936]: I0216 21:36:25.610709 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67cebf05-d1da-4a45-aef9-8366546424a5-combined-ca-bundle\") pod \"swift-ring-rebalance-l6dz5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " pod="openstack/swift-ring-rebalance-l6dz5"
Feb 16 21:36:25.610801 master-0 kubenswrapper[38936]: I0216 21:36:25.610805 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67cebf05-d1da-4a45-aef9-8366546424a5-scripts\") pod \"swift-ring-rebalance-l6dz5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " pod="openstack/swift-ring-rebalance-l6dz5"
Feb 16 21:36:25.611121 master-0 kubenswrapper[38936]: I0216 21:36:25.610900 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/67cebf05-d1da-4a45-aef9-8366546424a5-ring-data-devices\") pod \"swift-ring-rebalance-l6dz5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " pod="openstack/swift-ring-rebalance-l6dz5"
Feb 16 21:36:25.611121 master-0 kubenswrapper[38936]: I0216 21:36:25.610955 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/67cebf05-d1da-4a45-aef9-8366546424a5-dispersionconf\") pod \"swift-ring-rebalance-l6dz5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " pod="openstack/swift-ring-rebalance-l6dz5"
Feb 16 21:36:25.611121 master-0 kubenswrapper[38936]: I0216 21:36:25.611031 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/67cebf05-d1da-4a45-aef9-8366546424a5-etc-swift\") pod \"swift-ring-rebalance-l6dz5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " pod="openstack/swift-ring-rebalance-l6dz5"
Feb 16 21:36:25.611285 master-0 kubenswrapper[38936]: I0216 21:36:25.611140 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/67cebf05-d1da-4a45-aef9-8366546424a5-swiftconf\") pod \"swift-ring-rebalance-l6dz5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " pod="openstack/swift-ring-rebalance-l6dz5"
Feb 16 21:36:25.611285 master-0 kubenswrapper[38936]: I0216 21:36:25.611189 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc76p\" (UniqueName: \"kubernetes.io/projected/67cebf05-d1da-4a45-aef9-8366546424a5-kube-api-access-bc76p\") pod \"swift-ring-rebalance-l6dz5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " pod="openstack/swift-ring-rebalance-l6dz5"
Feb 16 21:36:25.622601 master-0 kubenswrapper[38936]: I0216 21:36:25.618305 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/67cebf05-d1da-4a45-aef9-8366546424a5-etc-swift\") pod \"swift-ring-rebalance-l6dz5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " pod="openstack/swift-ring-rebalance-l6dz5"
Feb 16 21:36:25.622601 master-0
kubenswrapper[38936]: I0216 21:36:25.619392 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/67cebf05-d1da-4a45-aef9-8366546424a5-ring-data-devices\") pod \"swift-ring-rebalance-l6dz5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " pod="openstack/swift-ring-rebalance-l6dz5" Feb 16 21:36:25.622601 master-0 kubenswrapper[38936]: I0216 21:36:25.619988 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67cebf05-d1da-4a45-aef9-8366546424a5-scripts\") pod \"swift-ring-rebalance-l6dz5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " pod="openstack/swift-ring-rebalance-l6dz5" Feb 16 21:36:25.631967 master-0 kubenswrapper[38936]: I0216 21:36:25.631110 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/67cebf05-d1da-4a45-aef9-8366546424a5-swiftconf\") pod \"swift-ring-rebalance-l6dz5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " pod="openstack/swift-ring-rebalance-l6dz5" Feb 16 21:36:25.635422 master-0 kubenswrapper[38936]: I0216 21:36:25.633826 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/67cebf05-d1da-4a45-aef9-8366546424a5-dispersionconf\") pod \"swift-ring-rebalance-l6dz5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " pod="openstack/swift-ring-rebalance-l6dz5" Feb 16 21:36:25.657686 master-0 kubenswrapper[38936]: I0216 21:36:25.648438 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc76p\" (UniqueName: \"kubernetes.io/projected/67cebf05-d1da-4a45-aef9-8366546424a5-kube-api-access-bc76p\") pod \"swift-ring-rebalance-l6dz5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " pod="openstack/swift-ring-rebalance-l6dz5" Feb 16 21:36:25.657686 master-0 kubenswrapper[38936]: I0216 21:36:25.650525 38936 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67cebf05-d1da-4a45-aef9-8366546424a5-combined-ca-bundle\") pod \"swift-ring-rebalance-l6dz5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " pod="openstack/swift-ring-rebalance-l6dz5" Feb 16 21:36:25.698705 master-0 kubenswrapper[38936]: I0216 21:36:25.695326 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-l6dz5" Feb 16 21:36:25.719730 master-0 kubenswrapper[38936]: I0216 21:36:25.718996 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 16 21:36:25.861367 master-0 kubenswrapper[38936]: I0216 21:36:25.861310 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 16 21:36:25.898852 master-0 kubenswrapper[38936]: I0216 21:36:25.898761 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4efd2c15-caba-4f5a-96ba-db5845549510" path="/var/lib/kubelet/pods/4efd2c15-caba-4f5a-96ba-db5845549510/volumes" Feb 16 21:36:25.991595 master-0 kubenswrapper[38936]: I0216 21:36:25.991531 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ea20a318-9c32-4cc9-8864-0ae1ff48ca4d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e2f189dd-cebd-48e7-8897-47576d39a0be\") pod \"swift-storage-0\" (UID: \"73f726c9-e2b2-4038-a202-5df2ede23bf5\") " pod="openstack/swift-storage-0" Feb 16 21:36:26.137956 master-0 kubenswrapper[38936]: I0216 21:36:26.137819 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/73f726c9-e2b2-4038-a202-5df2ede23bf5-etc-swift\") pod \"swift-storage-0\" (UID: \"73f726c9-e2b2-4038-a202-5df2ede23bf5\") " pod="openstack/swift-storage-0" Feb 16 21:36:26.138271 master-0 kubenswrapper[38936]: E0216 21:36:26.138036 38936 projected.go:288] 
Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 21:36:26.138271 master-0 kubenswrapper[38936]: E0216 21:36:26.138085 38936 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 21:36:26.138271 master-0 kubenswrapper[38936]: E0216 21:36:26.138163 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/73f726c9-e2b2-4038-a202-5df2ede23bf5-etc-swift podName:73f726c9-e2b2-4038-a202-5df2ede23bf5 nodeName:}" failed. No retries permitted until 2026-02-16 21:36:28.138143608 +0000 UTC m=+818.490146970 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/73f726c9-e2b2-4038-a202-5df2ede23bf5-etc-swift") pod "swift-storage-0" (UID: "73f726c9-e2b2-4038-a202-5df2ede23bf5") : configmap "swift-ring-files" not found Feb 16 21:36:26.346932 master-0 kubenswrapper[38936]: I0216 21:36:26.346858 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-l6dz5"] Feb 16 21:36:26.478096 master-0 kubenswrapper[38936]: I0216 21:36:26.477822 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-l6dz5" event={"ID":"67cebf05-d1da-4a45-aef9-8366546424a5","Type":"ContainerStarted","Data":"9608fd4cdd9720abae097c638d582b75c139e3ba28cd24ddff4290f364a31e4d"} Feb 16 21:36:26.579191 master-0 kubenswrapper[38936]: I0216 21:36:26.579076 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 16 21:36:26.667081 master-0 kubenswrapper[38936]: I0216 21:36:26.667013 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 16 21:36:26.834691 master-0 kubenswrapper[38936]: I0216 21:36:26.834597 38936 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/root-account-create-update-6cmqp"] Feb 16 21:36:26.836478 master-0 kubenswrapper[38936]: I0216 21:36:26.836046 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6cmqp" Feb 16 21:36:26.839836 master-0 kubenswrapper[38936]: I0216 21:36:26.839790 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 16 21:36:26.858202 master-0 kubenswrapper[38936]: I0216 21:36:26.857603 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwsg6\" (UniqueName: \"kubernetes.io/projected/0afbc19b-fbcc-43e9-907f-e819b5865ee6-kube-api-access-zwsg6\") pod \"root-account-create-update-6cmqp\" (UID: \"0afbc19b-fbcc-43e9-907f-e819b5865ee6\") " pod="openstack/root-account-create-update-6cmqp" Feb 16 21:36:26.858202 master-0 kubenswrapper[38936]: I0216 21:36:26.857986 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0afbc19b-fbcc-43e9-907f-e819b5865ee6-operator-scripts\") pod \"root-account-create-update-6cmqp\" (UID: \"0afbc19b-fbcc-43e9-907f-e819b5865ee6\") " pod="openstack/root-account-create-update-6cmqp" Feb 16 21:36:26.859703 master-0 kubenswrapper[38936]: I0216 21:36:26.859577 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6cmqp"] Feb 16 21:36:26.961858 master-0 kubenswrapper[38936]: I0216 21:36:26.961796 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0afbc19b-fbcc-43e9-907f-e819b5865ee6-operator-scripts\") pod \"root-account-create-update-6cmqp\" (UID: \"0afbc19b-fbcc-43e9-907f-e819b5865ee6\") " pod="openstack/root-account-create-update-6cmqp" Feb 16 21:36:26.963001 master-0 kubenswrapper[38936]: I0216 21:36:26.962922 
38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwsg6\" (UniqueName: \"kubernetes.io/projected/0afbc19b-fbcc-43e9-907f-e819b5865ee6-kube-api-access-zwsg6\") pod \"root-account-create-update-6cmqp\" (UID: \"0afbc19b-fbcc-43e9-907f-e819b5865ee6\") " pod="openstack/root-account-create-update-6cmqp" Feb 16 21:36:26.963382 master-0 kubenswrapper[38936]: I0216 21:36:26.963116 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0afbc19b-fbcc-43e9-907f-e819b5865ee6-operator-scripts\") pod \"root-account-create-update-6cmqp\" (UID: \"0afbc19b-fbcc-43e9-907f-e819b5865ee6\") " pod="openstack/root-account-create-update-6cmqp" Feb 16 21:36:26.981541 master-0 kubenswrapper[38936]: I0216 21:36:26.980739 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwsg6\" (UniqueName: \"kubernetes.io/projected/0afbc19b-fbcc-43e9-907f-e819b5865ee6-kube-api-access-zwsg6\") pod \"root-account-create-update-6cmqp\" (UID: \"0afbc19b-fbcc-43e9-907f-e819b5865ee6\") " pod="openstack/root-account-create-update-6cmqp" Feb 16 21:36:27.162735 master-0 kubenswrapper[38936]: I0216 21:36:27.162640 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6cmqp" Feb 16 21:36:27.840928 master-0 kubenswrapper[38936]: W0216 21:36:27.840860 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0afbc19b_fbcc_43e9_907f_e819b5865ee6.slice/crio-a724a1000fd6f0194745239bd137d4bd2717948c85ea9602c1acc3cf59879754 WatchSource:0}: Error finding container a724a1000fd6f0194745239bd137d4bd2717948c85ea9602c1acc3cf59879754: Status 404 returned error can't find the container with id a724a1000fd6f0194745239bd137d4bd2717948c85ea9602c1acc3cf59879754 Feb 16 21:36:27.841514 master-0 kubenswrapper[38936]: I0216 21:36:27.841176 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6cmqp"] Feb 16 21:36:28.196286 master-0 kubenswrapper[38936]: I0216 21:36:28.196069 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/73f726c9-e2b2-4038-a202-5df2ede23bf5-etc-swift\") pod \"swift-storage-0\" (UID: \"73f726c9-e2b2-4038-a202-5df2ede23bf5\") " pod="openstack/swift-storage-0" Feb 16 21:36:28.196557 master-0 kubenswrapper[38936]: E0216 21:36:28.196284 38936 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 21:36:28.196557 master-0 kubenswrapper[38936]: E0216 21:36:28.196322 38936 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 21:36:28.196557 master-0 kubenswrapper[38936]: E0216 21:36:28.196385 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/73f726c9-e2b2-4038-a202-5df2ede23bf5-etc-swift podName:73f726c9-e2b2-4038-a202-5df2ede23bf5 nodeName:}" failed. No retries permitted until 2026-02-16 21:36:32.196364865 +0000 UTC m=+822.548368227 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/73f726c9-e2b2-4038-a202-5df2ede23bf5-etc-swift") pod "swift-storage-0" (UID: "73f726c9-e2b2-4038-a202-5df2ede23bf5") : configmap "swift-ring-files" not found Feb 16 21:36:28.507100 master-0 kubenswrapper[38936]: I0216 21:36:28.507033 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6cmqp" event={"ID":"0afbc19b-fbcc-43e9-907f-e819b5865ee6","Type":"ContainerStarted","Data":"91f79cc7984d93a5e36521271122c17969f7ef5e80e3bcfe605d7a283ba1cd0d"} Feb 16 21:36:28.507381 master-0 kubenswrapper[38936]: I0216 21:36:28.507107 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6cmqp" event={"ID":"0afbc19b-fbcc-43e9-907f-e819b5865ee6","Type":"ContainerStarted","Data":"a724a1000fd6f0194745239bd137d4bd2717948c85ea9602c1acc3cf59879754"} Feb 16 21:36:28.713531 master-0 kubenswrapper[38936]: I0216 21:36:28.713443 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-6cmqp" podStartSLOduration=2.713422883 podStartE2EDuration="2.713422883s" podCreationTimestamp="2026-02-16 21:36:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:36:28.708368126 +0000 UTC m=+819.060371488" watchObservedRunningTime="2026-02-16 21:36:28.713422883 +0000 UTC m=+819.065426245" Feb 16 21:36:29.520285 master-0 kubenswrapper[38936]: I0216 21:36:29.520213 38936 generic.go:334] "Generic (PLEG): container finished" podID="0afbc19b-fbcc-43e9-907f-e819b5865ee6" containerID="91f79cc7984d93a5e36521271122c17969f7ef5e80e3bcfe605d7a283ba1cd0d" exitCode=0 Feb 16 21:36:29.520285 master-0 kubenswrapper[38936]: I0216 21:36:29.520274 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6cmqp" 
event={"ID":"0afbc19b-fbcc-43e9-907f-e819b5865ee6","Type":"ContainerDied","Data":"91f79cc7984d93a5e36521271122c17969f7ef5e80e3bcfe605d7a283ba1cd0d"} Feb 16 21:36:31.044525 master-0 kubenswrapper[38936]: I0216 21:36:31.044004 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6cmqp" Feb 16 21:36:31.169130 master-0 kubenswrapper[38936]: I0216 21:36:31.169054 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-r2xtw"] Feb 16 21:36:31.169716 master-0 kubenswrapper[38936]: E0216 21:36:31.169689 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0afbc19b-fbcc-43e9-907f-e819b5865ee6" containerName="mariadb-account-create-update" Feb 16 21:36:31.169716 master-0 kubenswrapper[38936]: I0216 21:36:31.169714 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="0afbc19b-fbcc-43e9-907f-e819b5865ee6" containerName="mariadb-account-create-update" Feb 16 21:36:31.170055 master-0 kubenswrapper[38936]: I0216 21:36:31.170033 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="0afbc19b-fbcc-43e9-907f-e819b5865ee6" containerName="mariadb-account-create-update" Feb 16 21:36:31.170940 master-0 kubenswrapper[38936]: I0216 21:36:31.170913 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-r2xtw" Feb 16 21:36:31.185569 master-0 kubenswrapper[38936]: I0216 21:36:31.185433 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwsg6\" (UniqueName: \"kubernetes.io/projected/0afbc19b-fbcc-43e9-907f-e819b5865ee6-kube-api-access-zwsg6\") pod \"0afbc19b-fbcc-43e9-907f-e819b5865ee6\" (UID: \"0afbc19b-fbcc-43e9-907f-e819b5865ee6\") " Feb 16 21:36:31.185688 master-0 kubenswrapper[38936]: I0216 21:36:31.185669 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0afbc19b-fbcc-43e9-907f-e819b5865ee6-operator-scripts\") pod \"0afbc19b-fbcc-43e9-907f-e819b5865ee6\" (UID: \"0afbc19b-fbcc-43e9-907f-e819b5865ee6\") " Feb 16 21:36:31.186766 master-0 kubenswrapper[38936]: I0216 21:36:31.186465 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0afbc19b-fbcc-43e9-907f-e819b5865ee6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0afbc19b-fbcc-43e9-907f-e819b5865ee6" (UID: "0afbc19b-fbcc-43e9-907f-e819b5865ee6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:36:31.188837 master-0 kubenswrapper[38936]: I0216 21:36:31.188786 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0afbc19b-fbcc-43e9-907f-e819b5865ee6-kube-api-access-zwsg6" (OuterVolumeSpecName: "kube-api-access-zwsg6") pod "0afbc19b-fbcc-43e9-907f-e819b5865ee6" (UID: "0afbc19b-fbcc-43e9-907f-e819b5865ee6"). InnerVolumeSpecName "kube-api-access-zwsg6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:36:31.288947 master-0 kubenswrapper[38936]: I0216 21:36:31.288861 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf7jm\" (UniqueName: \"kubernetes.io/projected/ff2be0c4-7a42-48ef-801c-8b4422008927-kube-api-access-rf7jm\") pod \"glance-db-create-r2xtw\" (UID: \"ff2be0c4-7a42-48ef-801c-8b4422008927\") " pod="openstack/glance-db-create-r2xtw" Feb 16 21:36:31.289207 master-0 kubenswrapper[38936]: I0216 21:36:31.288987 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff2be0c4-7a42-48ef-801c-8b4422008927-operator-scripts\") pod \"glance-db-create-r2xtw\" (UID: \"ff2be0c4-7a42-48ef-801c-8b4422008927\") " pod="openstack/glance-db-create-r2xtw" Feb 16 21:36:31.289207 master-0 kubenswrapper[38936]: I0216 21:36:31.289076 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwsg6\" (UniqueName: \"kubernetes.io/projected/0afbc19b-fbcc-43e9-907f-e819b5865ee6-kube-api-access-zwsg6\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:31.289207 master-0 kubenswrapper[38936]: I0216 21:36:31.289089 38936 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0afbc19b-fbcc-43e9-907f-e819b5865ee6-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:31.301433 master-0 kubenswrapper[38936]: I0216 21:36:31.301302 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-r2xtw"] Feb 16 21:36:31.391401 master-0 kubenswrapper[38936]: I0216 21:36:31.391333 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff2be0c4-7a42-48ef-801c-8b4422008927-operator-scripts\") pod \"glance-db-create-r2xtw\" (UID: 
\"ff2be0c4-7a42-48ef-801c-8b4422008927\") " pod="openstack/glance-db-create-r2xtw" Feb 16 21:36:31.391643 master-0 kubenswrapper[38936]: I0216 21:36:31.391516 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf7jm\" (UniqueName: \"kubernetes.io/projected/ff2be0c4-7a42-48ef-801c-8b4422008927-kube-api-access-rf7jm\") pod \"glance-db-create-r2xtw\" (UID: \"ff2be0c4-7a42-48ef-801c-8b4422008927\") " pod="openstack/glance-db-create-r2xtw" Feb 16 21:36:31.392817 master-0 kubenswrapper[38936]: I0216 21:36:31.392597 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff2be0c4-7a42-48ef-801c-8b4422008927-operator-scripts\") pod \"glance-db-create-r2xtw\" (UID: \"ff2be0c4-7a42-48ef-801c-8b4422008927\") " pod="openstack/glance-db-create-r2xtw" Feb 16 21:36:31.557245 master-0 kubenswrapper[38936]: I0216 21:36:31.557012 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6cmqp" event={"ID":"0afbc19b-fbcc-43e9-907f-e819b5865ee6","Type":"ContainerDied","Data":"a724a1000fd6f0194745239bd137d4bd2717948c85ea9602c1acc3cf59879754"} Feb 16 21:36:31.557245 master-0 kubenswrapper[38936]: I0216 21:36:31.557100 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a724a1000fd6f0194745239bd137d4bd2717948c85ea9602c1acc3cf59879754" Feb 16 21:36:31.557245 master-0 kubenswrapper[38936]: I0216 21:36:31.557223 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6cmqp" Feb 16 21:36:31.559353 master-0 kubenswrapper[38936]: I0216 21:36:31.559325 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-d442-account-create-update-p2dfg"] Feb 16 21:36:31.568795 master-0 kubenswrapper[38936]: I0216 21:36:31.561790 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf7jm\" (UniqueName: \"kubernetes.io/projected/ff2be0c4-7a42-48ef-801c-8b4422008927-kube-api-access-rf7jm\") pod \"glance-db-create-r2xtw\" (UID: \"ff2be0c4-7a42-48ef-801c-8b4422008927\") " pod="openstack/glance-db-create-r2xtw" Feb 16 21:36:31.570598 master-0 kubenswrapper[38936]: I0216 21:36:31.570552 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d442-account-create-update-p2dfg" Feb 16 21:36:31.576447 master-0 kubenswrapper[38936]: I0216 21:36:31.575314 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 16 21:36:31.605330 master-0 kubenswrapper[38936]: I0216 21:36:31.605236 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-kjwf8"] Feb 16 21:36:31.615023 master-0 kubenswrapper[38936]: I0216 21:36:31.614972 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-kjwf8" Feb 16 21:36:31.627323 master-0 kubenswrapper[38936]: I0216 21:36:31.626263 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d442-account-create-update-p2dfg"] Feb 16 21:36:31.639818 master-0 kubenswrapper[38936]: I0216 21:36:31.639764 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-kjwf8"] Feb 16 21:36:31.720318 master-0 kubenswrapper[38936]: I0216 21:36:31.720242 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6946c62d-ccec-4c64-bd64-d660f22d7d7a-operator-scripts\") pod \"keystone-db-create-kjwf8\" (UID: \"6946c62d-ccec-4c64-bd64-d660f22d7d7a\") " pod="openstack/keystone-db-create-kjwf8" Feb 16 21:36:31.720635 master-0 kubenswrapper[38936]: I0216 21:36:31.720602 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a25b7436-4c82-45df-9e60-d2ceec4544f8-operator-scripts\") pod \"glance-d442-account-create-update-p2dfg\" (UID: \"a25b7436-4c82-45df-9e60-d2ceec4544f8\") " pod="openstack/glance-d442-account-create-update-p2dfg" Feb 16 21:36:31.720740 master-0 kubenswrapper[38936]: I0216 21:36:31.720706 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fn64\" (UniqueName: \"kubernetes.io/projected/a25b7436-4c82-45df-9e60-d2ceec4544f8-kube-api-access-8fn64\") pod \"glance-d442-account-create-update-p2dfg\" (UID: \"a25b7436-4c82-45df-9e60-d2ceec4544f8\") " pod="openstack/glance-d442-account-create-update-p2dfg" Feb 16 21:36:31.720794 master-0 kubenswrapper[38936]: I0216 21:36:31.720743 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdw5j\" (UniqueName: 
\"kubernetes.io/projected/6946c62d-ccec-4c64-bd64-d660f22d7d7a-kube-api-access-xdw5j\") pod \"keystone-db-create-kjwf8\" (UID: \"6946c62d-ccec-4c64-bd64-d660f22d7d7a\") " pod="openstack/keystone-db-create-kjwf8" Feb 16 21:36:31.762065 master-0 kubenswrapper[38936]: I0216 21:36:31.761988 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-85e2-account-create-update-xh6dm"] Feb 16 21:36:31.764312 master-0 kubenswrapper[38936]: I0216 21:36:31.764263 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-85e2-account-create-update-xh6dm" Feb 16 21:36:31.767143 master-0 kubenswrapper[38936]: I0216 21:36:31.767082 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 16 21:36:31.792059 master-0 kubenswrapper[38936]: I0216 21:36:31.789782 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-85e2-account-create-update-xh6dm"] Feb 16 21:36:31.792059 master-0 kubenswrapper[38936]: I0216 21:36:31.791580 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-r2xtw" Feb 16 21:36:31.822623 master-0 kubenswrapper[38936]: I0216 21:36:31.822274 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6946c62d-ccec-4c64-bd64-d660f22d7d7a-operator-scripts\") pod \"keystone-db-create-kjwf8\" (UID: \"6946c62d-ccec-4c64-bd64-d660f22d7d7a\") " pod="openstack/keystone-db-create-kjwf8" Feb 16 21:36:31.822623 master-0 kubenswrapper[38936]: I0216 21:36:31.822355 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95733bba-8f7b-460f-8793-e85c3f7066c3-operator-scripts\") pod \"keystone-85e2-account-create-update-xh6dm\" (UID: \"95733bba-8f7b-460f-8793-e85c3f7066c3\") " pod="openstack/keystone-85e2-account-create-update-xh6dm" Feb 16 21:36:31.822623 master-0 kubenswrapper[38936]: I0216 21:36:31.822498 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a25b7436-4c82-45df-9e60-d2ceec4544f8-operator-scripts\") pod \"glance-d442-account-create-update-p2dfg\" (UID: \"a25b7436-4c82-45df-9e60-d2ceec4544f8\") " pod="openstack/glance-d442-account-create-update-p2dfg" Feb 16 21:36:31.822623 master-0 kubenswrapper[38936]: I0216 21:36:31.822530 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9mv4\" (UniqueName: \"kubernetes.io/projected/95733bba-8f7b-460f-8793-e85c3f7066c3-kube-api-access-j9mv4\") pod \"keystone-85e2-account-create-update-xh6dm\" (UID: \"95733bba-8f7b-460f-8793-e85c3f7066c3\") " pod="openstack/keystone-85e2-account-create-update-xh6dm" Feb 16 21:36:31.822623 master-0 kubenswrapper[38936]: I0216 21:36:31.822590 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fn64\" 
(UniqueName: \"kubernetes.io/projected/a25b7436-4c82-45df-9e60-d2ceec4544f8-kube-api-access-8fn64\") pod \"glance-d442-account-create-update-p2dfg\" (UID: \"a25b7436-4c82-45df-9e60-d2ceec4544f8\") " pod="openstack/glance-d442-account-create-update-p2dfg" Feb 16 21:36:31.822623 master-0 kubenswrapper[38936]: I0216 21:36:31.822615 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdw5j\" (UniqueName: \"kubernetes.io/projected/6946c62d-ccec-4c64-bd64-d660f22d7d7a-kube-api-access-xdw5j\") pod \"keystone-db-create-kjwf8\" (UID: \"6946c62d-ccec-4c64-bd64-d660f22d7d7a\") " pod="openstack/keystone-db-create-kjwf8" Feb 16 21:36:31.823950 master-0 kubenswrapper[38936]: I0216 21:36:31.823917 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6946c62d-ccec-4c64-bd64-d660f22d7d7a-operator-scripts\") pod \"keystone-db-create-kjwf8\" (UID: \"6946c62d-ccec-4c64-bd64-d660f22d7d7a\") " pod="openstack/keystone-db-create-kjwf8" Feb 16 21:36:31.824520 master-0 kubenswrapper[38936]: I0216 21:36:31.824496 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a25b7436-4c82-45df-9e60-d2ceec4544f8-operator-scripts\") pod \"glance-d442-account-create-update-p2dfg\" (UID: \"a25b7436-4c82-45df-9e60-d2ceec4544f8\") " pod="openstack/glance-d442-account-create-update-p2dfg" Feb 16 21:36:31.827416 master-0 kubenswrapper[38936]: I0216 21:36:31.827324 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-cvnf4"] Feb 16 21:36:31.831753 master-0 kubenswrapper[38936]: I0216 21:36:31.831718 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-cvnf4" Feb 16 21:36:31.839218 master-0 kubenswrapper[38936]: I0216 21:36:31.839172 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-cvnf4"] Feb 16 21:36:31.841853 master-0 kubenswrapper[38936]: I0216 21:36:31.841819 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fn64\" (UniqueName: \"kubernetes.io/projected/a25b7436-4c82-45df-9e60-d2ceec4544f8-kube-api-access-8fn64\") pod \"glance-d442-account-create-update-p2dfg\" (UID: \"a25b7436-4c82-45df-9e60-d2ceec4544f8\") " pod="openstack/glance-d442-account-create-update-p2dfg" Feb 16 21:36:31.848900 master-0 kubenswrapper[38936]: I0216 21:36:31.848851 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdw5j\" (UniqueName: \"kubernetes.io/projected/6946c62d-ccec-4c64-bd64-d660f22d7d7a-kube-api-access-xdw5j\") pod \"keystone-db-create-kjwf8\" (UID: \"6946c62d-ccec-4c64-bd64-d660f22d7d7a\") " pod="openstack/keystone-db-create-kjwf8" Feb 16 21:36:31.870290 master-0 kubenswrapper[38936]: I0216 21:36:31.870240 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-48b3-account-create-update-jsqjk"] Feb 16 21:36:31.874328 master-0 kubenswrapper[38936]: I0216 21:36:31.874283 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-48b3-account-create-update-jsqjk" Feb 16 21:36:31.882949 master-0 kubenswrapper[38936]: I0216 21:36:31.882808 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 16 21:36:31.917886 master-0 kubenswrapper[38936]: I0216 21:36:31.915366 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-48b3-account-create-update-jsqjk"] Feb 16 21:36:31.926668 master-0 kubenswrapper[38936]: I0216 21:36:31.926151 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95733bba-8f7b-460f-8793-e85c3f7066c3-operator-scripts\") pod \"keystone-85e2-account-create-update-xh6dm\" (UID: \"95733bba-8f7b-460f-8793-e85c3f7066c3\") " pod="openstack/keystone-85e2-account-create-update-xh6dm" Feb 16 21:36:31.926668 master-0 kubenswrapper[38936]: I0216 21:36:31.926519 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9mv4\" (UniqueName: \"kubernetes.io/projected/95733bba-8f7b-460f-8793-e85c3f7066c3-kube-api-access-j9mv4\") pod \"keystone-85e2-account-create-update-xh6dm\" (UID: \"95733bba-8f7b-460f-8793-e85c3f7066c3\") " pod="openstack/keystone-85e2-account-create-update-xh6dm" Feb 16 21:36:31.928793 master-0 kubenswrapper[38936]: I0216 21:36:31.926964 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95733bba-8f7b-460f-8793-e85c3f7066c3-operator-scripts\") pod \"keystone-85e2-account-create-update-xh6dm\" (UID: \"95733bba-8f7b-460f-8793-e85c3f7066c3\") " pod="openstack/keystone-85e2-account-create-update-xh6dm" Feb 16 21:36:31.948585 master-0 kubenswrapper[38936]: I0216 21:36:31.948379 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9mv4\" (UniqueName: 
\"kubernetes.io/projected/95733bba-8f7b-460f-8793-e85c3f7066c3-kube-api-access-j9mv4\") pod \"keystone-85e2-account-create-update-xh6dm\" (UID: \"95733bba-8f7b-460f-8793-e85c3f7066c3\") " pod="openstack/keystone-85e2-account-create-update-xh6dm" Feb 16 21:36:32.030688 master-0 kubenswrapper[38936]: I0216 21:36:32.029521 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77af961d-92d8-476f-9df5-91a49b295543-operator-scripts\") pod \"placement-db-create-cvnf4\" (UID: \"77af961d-92d8-476f-9df5-91a49b295543\") " pod="openstack/placement-db-create-cvnf4" Feb 16 21:36:32.030688 master-0 kubenswrapper[38936]: I0216 21:36:32.029625 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f6rf\" (UniqueName: \"kubernetes.io/projected/c39bb875-9e10-497c-b1d1-c7cd8a76a92d-kube-api-access-7f6rf\") pod \"placement-48b3-account-create-update-jsqjk\" (UID: \"c39bb875-9e10-497c-b1d1-c7cd8a76a92d\") " pod="openstack/placement-48b3-account-create-update-jsqjk" Feb 16 21:36:32.030688 master-0 kubenswrapper[38936]: I0216 21:36:32.029699 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcv9f\" (UniqueName: \"kubernetes.io/projected/77af961d-92d8-476f-9df5-91a49b295543-kube-api-access-gcv9f\") pod \"placement-db-create-cvnf4\" (UID: \"77af961d-92d8-476f-9df5-91a49b295543\") " pod="openstack/placement-db-create-cvnf4" Feb 16 21:36:32.030688 master-0 kubenswrapper[38936]: I0216 21:36:32.029739 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c39bb875-9e10-497c-b1d1-c7cd8a76a92d-operator-scripts\") pod \"placement-48b3-account-create-update-jsqjk\" (UID: \"c39bb875-9e10-497c-b1d1-c7cd8a76a92d\") " 
pod="openstack/placement-48b3-account-create-update-jsqjk" Feb 16 21:36:32.040692 master-0 kubenswrapper[38936]: I0216 21:36:32.040109 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d442-account-create-update-p2dfg" Feb 16 21:36:32.062591 master-0 kubenswrapper[38936]: I0216 21:36:32.062555 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kjwf8" Feb 16 21:36:32.100380 master-0 kubenswrapper[38936]: I0216 21:36:32.100320 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-85e2-account-create-update-xh6dm" Feb 16 21:36:32.131596 master-0 kubenswrapper[38936]: I0216 21:36:32.131532 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77af961d-92d8-476f-9df5-91a49b295543-operator-scripts\") pod \"placement-db-create-cvnf4\" (UID: \"77af961d-92d8-476f-9df5-91a49b295543\") " pod="openstack/placement-db-create-cvnf4" Feb 16 21:36:32.131872 master-0 kubenswrapper[38936]: I0216 21:36:32.131755 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f6rf\" (UniqueName: \"kubernetes.io/projected/c39bb875-9e10-497c-b1d1-c7cd8a76a92d-kube-api-access-7f6rf\") pod \"placement-48b3-account-create-update-jsqjk\" (UID: \"c39bb875-9e10-497c-b1d1-c7cd8a76a92d\") " pod="openstack/placement-48b3-account-create-update-jsqjk" Feb 16 21:36:32.131914 master-0 kubenswrapper[38936]: I0216 21:36:32.131880 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcv9f\" (UniqueName: \"kubernetes.io/projected/77af961d-92d8-476f-9df5-91a49b295543-kube-api-access-gcv9f\") pod \"placement-db-create-cvnf4\" (UID: \"77af961d-92d8-476f-9df5-91a49b295543\") " pod="openstack/placement-db-create-cvnf4" Feb 16 21:36:32.131955 master-0 kubenswrapper[38936]: I0216 
21:36:32.131934 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c39bb875-9e10-497c-b1d1-c7cd8a76a92d-operator-scripts\") pod \"placement-48b3-account-create-update-jsqjk\" (UID: \"c39bb875-9e10-497c-b1d1-c7cd8a76a92d\") " pod="openstack/placement-48b3-account-create-update-jsqjk" Feb 16 21:36:32.133252 master-0 kubenswrapper[38936]: I0216 21:36:32.133202 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77af961d-92d8-476f-9df5-91a49b295543-operator-scripts\") pod \"placement-db-create-cvnf4\" (UID: \"77af961d-92d8-476f-9df5-91a49b295543\") " pod="openstack/placement-db-create-cvnf4" Feb 16 21:36:32.133311 master-0 kubenswrapper[38936]: I0216 21:36:32.133263 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c39bb875-9e10-497c-b1d1-c7cd8a76a92d-operator-scripts\") pod \"placement-48b3-account-create-update-jsqjk\" (UID: \"c39bb875-9e10-497c-b1d1-c7cd8a76a92d\") " pod="openstack/placement-48b3-account-create-update-jsqjk" Feb 16 21:36:32.155344 master-0 kubenswrapper[38936]: I0216 21:36:32.155306 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f6rf\" (UniqueName: \"kubernetes.io/projected/c39bb875-9e10-497c-b1d1-c7cd8a76a92d-kube-api-access-7f6rf\") pod \"placement-48b3-account-create-update-jsqjk\" (UID: \"c39bb875-9e10-497c-b1d1-c7cd8a76a92d\") " pod="openstack/placement-48b3-account-create-update-jsqjk" Feb 16 21:36:32.155807 master-0 kubenswrapper[38936]: I0216 21:36:32.155755 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcv9f\" (UniqueName: \"kubernetes.io/projected/77af961d-92d8-476f-9df5-91a49b295543-kube-api-access-gcv9f\") pod \"placement-db-create-cvnf4\" (UID: \"77af961d-92d8-476f-9df5-91a49b295543\") " 
pod="openstack/placement-db-create-cvnf4" Feb 16 21:36:32.184544 master-0 kubenswrapper[38936]: I0216 21:36:32.184486 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cvnf4" Feb 16 21:36:32.233617 master-0 kubenswrapper[38936]: I0216 21:36:32.233562 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-48b3-account-create-update-jsqjk" Feb 16 21:36:32.286678 master-0 kubenswrapper[38936]: I0216 21:36:32.236388 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/73f726c9-e2b2-4038-a202-5df2ede23bf5-etc-swift\") pod \"swift-storage-0\" (UID: \"73f726c9-e2b2-4038-a202-5df2ede23bf5\") " pod="openstack/swift-storage-0" Feb 16 21:36:32.286678 master-0 kubenswrapper[38936]: E0216 21:36:32.236580 38936 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 21:36:32.286678 master-0 kubenswrapper[38936]: E0216 21:36:32.236609 38936 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 21:36:32.286678 master-0 kubenswrapper[38936]: E0216 21:36:32.236687 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/73f726c9-e2b2-4038-a202-5df2ede23bf5-etc-swift podName:73f726c9-e2b2-4038-a202-5df2ede23bf5 nodeName:}" failed. No retries permitted until 2026-02-16 21:36:40.236664824 +0000 UTC m=+830.588668186 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/73f726c9-e2b2-4038-a202-5df2ede23bf5-etc-swift") pod "swift-storage-0" (UID: "73f726c9-e2b2-4038-a202-5df2ede23bf5") : configmap "swift-ring-files" not found Feb 16 21:36:32.313790 master-0 kubenswrapper[38936]: I0216 21:36:32.313711 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-r2xtw"] Feb 16 21:36:32.334990 master-0 kubenswrapper[38936]: W0216 21:36:32.334938 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff2be0c4_7a42_48ef_801c_8b4422008927.slice/crio-06ce740029465eabeb755f2d9342608354a40c2c498a50deec7f8709aa8d442e WatchSource:0}: Error finding container 06ce740029465eabeb755f2d9342608354a40c2c498a50deec7f8709aa8d442e: Status 404 returned error can't find the container with id 06ce740029465eabeb755f2d9342608354a40c2c498a50deec7f8709aa8d442e Feb 16 21:36:32.548347 master-0 kubenswrapper[38936]: W0216 21:36:32.548285 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda25b7436_4c82_45df_9e60_d2ceec4544f8.slice/crio-3852a6ef66c4a2636546678d456ef77ceeb230ef3da0a7bee02935ab8410eb72 WatchSource:0}: Error finding container 3852a6ef66c4a2636546678d456ef77ceeb230ef3da0a7bee02935ab8410eb72: Status 404 returned error can't find the container with id 3852a6ef66c4a2636546678d456ef77ceeb230ef3da0a7bee02935ab8410eb72 Feb 16 21:36:32.553415 master-0 kubenswrapper[38936]: I0216 21:36:32.553347 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d442-account-create-update-p2dfg"] Feb 16 21:36:32.587022 master-0 kubenswrapper[38936]: I0216 21:36:32.586853 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-l6dz5" 
event={"ID":"67cebf05-d1da-4a45-aef9-8366546424a5","Type":"ContainerStarted","Data":"d15331ca9850674a2a9c184b58eab8698bedd67436836fec0d75cf51fc3293c2"} Feb 16 21:36:32.593133 master-0 kubenswrapper[38936]: I0216 21:36:32.593084 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-r2xtw" event={"ID":"ff2be0c4-7a42-48ef-801c-8b4422008927","Type":"ContainerStarted","Data":"2da361135b59fea8df18e0415d446a40e11961b7858e0037b42f135271ff399f"} Feb 16 21:36:32.593369 master-0 kubenswrapper[38936]: I0216 21:36:32.593139 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-r2xtw" event={"ID":"ff2be0c4-7a42-48ef-801c-8b4422008927","Type":"ContainerStarted","Data":"06ce740029465eabeb755f2d9342608354a40c2c498a50deec7f8709aa8d442e"} Feb 16 21:36:32.597809 master-0 kubenswrapper[38936]: I0216 21:36:32.597527 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d442-account-create-update-p2dfg" event={"ID":"a25b7436-4c82-45df-9e60-d2ceec4544f8","Type":"ContainerStarted","Data":"3852a6ef66c4a2636546678d456ef77ceeb230ef3da0a7bee02935ab8410eb72"} Feb 16 21:36:32.622601 master-0 kubenswrapper[38936]: I0216 21:36:32.621961 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-l6dz5" podStartSLOduration=2.351139055 podStartE2EDuration="7.621940661s" podCreationTimestamp="2026-02-16 21:36:25 +0000 UTC" firstStartedPulling="2026-02-16 21:36:26.344169014 +0000 UTC m=+816.696172416" lastFinishedPulling="2026-02-16 21:36:31.61497066 +0000 UTC m=+821.966974022" observedRunningTime="2026-02-16 21:36:32.610922863 +0000 UTC m=+822.962926235" watchObservedRunningTime="2026-02-16 21:36:32.621940661 +0000 UTC m=+822.973944013" Feb 16 21:36:32.632136 master-0 kubenswrapper[38936]: I0216 21:36:32.632004 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-r2xtw" podStartSLOduration=2.631950981 
podStartE2EDuration="2.631950981s" podCreationTimestamp="2026-02-16 21:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:36:32.627692346 +0000 UTC m=+822.979695708" watchObservedRunningTime="2026-02-16 21:36:32.631950981 +0000 UTC m=+822.983954343" Feb 16 21:36:32.730396 master-0 kubenswrapper[38936]: I0216 21:36:32.726821 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6fd49994df-n7glt" Feb 16 21:36:32.755015 master-0 kubenswrapper[38936]: I0216 21:36:32.754914 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-kjwf8"] Feb 16 21:36:32.820769 master-0 kubenswrapper[38936]: I0216 21:36:32.820688 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-5fq4v"] Feb 16 21:36:32.820980 master-0 kubenswrapper[38936]: I0216 21:36:32.820949 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b98d7b55c-5fq4v" podUID="ce0bed6a-2010-497e-bea2-c4c4d493300e" containerName="dnsmasq-dns" containerID="cri-o://8e917e848b63feb9575ae911ed8c3b4bb163301d9f35534ca4b556cc670a6f0c" gracePeriod=10 Feb 16 21:36:32.880048 master-0 kubenswrapper[38936]: I0216 21:36:32.864828 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-85e2-account-create-update-xh6dm"] Feb 16 21:36:32.994723 master-0 kubenswrapper[38936]: I0216 21:36:32.993949 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-cvnf4"] Feb 16 21:36:33.035171 master-0 kubenswrapper[38936]: W0216 21:36:33.035112 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77af961d_92d8_476f_9df5_91a49b295543.slice/crio-0cfa660d2d56a6a7c797b82175df9e59597857a3adec73a252f17f87e3be8bcc WatchSource:0}: Error finding 
container 0cfa660d2d56a6a7c797b82175df9e59597857a3adec73a252f17f87e3be8bcc: Status 404 returned error can't find the container with id 0cfa660d2d56a6a7c797b82175df9e59597857a3adec73a252f17f87e3be8bcc Feb 16 21:36:33.047675 master-0 kubenswrapper[38936]: I0216 21:36:33.043048 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-48b3-account-create-update-jsqjk"] Feb 16 21:36:33.051982 master-0 kubenswrapper[38936]: W0216 21:36:33.051703 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc39bb875_9e10_497c_b1d1_c7cd8a76a92d.slice/crio-b0f999f54cdae45aa2581e45b538fa41d6a3cb46f45ddb1faf906e0f007fdc2b WatchSource:0}: Error finding container b0f999f54cdae45aa2581e45b538fa41d6a3cb46f45ddb1faf906e0f007fdc2b: Status 404 returned error can't find the container with id b0f999f54cdae45aa2581e45b538fa41d6a3cb46f45ddb1faf906e0f007fdc2b Feb 16 21:36:33.454551 master-0 kubenswrapper[38936]: I0216 21:36:33.454497 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-5fq4v" Feb 16 21:36:33.481459 master-0 kubenswrapper[38936]: I0216 21:36:33.481405 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce0bed6a-2010-497e-bea2-c4c4d493300e-config\") pod \"ce0bed6a-2010-497e-bea2-c4c4d493300e\" (UID: \"ce0bed6a-2010-497e-bea2-c4c4d493300e\") " Feb 16 21:36:33.481970 master-0 kubenswrapper[38936]: I0216 21:36:33.481940 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fljgp\" (UniqueName: \"kubernetes.io/projected/ce0bed6a-2010-497e-bea2-c4c4d493300e-kube-api-access-fljgp\") pod \"ce0bed6a-2010-497e-bea2-c4c4d493300e\" (UID: \"ce0bed6a-2010-497e-bea2-c4c4d493300e\") " Feb 16 21:36:33.482028 master-0 kubenswrapper[38936]: I0216 21:36:33.481996 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce0bed6a-2010-497e-bea2-c4c4d493300e-dns-svc\") pod \"ce0bed6a-2010-497e-bea2-c4c4d493300e\" (UID: \"ce0bed6a-2010-497e-bea2-c4c4d493300e\") " Feb 16 21:36:33.499720 master-0 kubenswrapper[38936]: I0216 21:36:33.489851 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce0bed6a-2010-497e-bea2-c4c4d493300e-kube-api-access-fljgp" (OuterVolumeSpecName: "kube-api-access-fljgp") pod "ce0bed6a-2010-497e-bea2-c4c4d493300e" (UID: "ce0bed6a-2010-497e-bea2-c4c4d493300e"). InnerVolumeSpecName "kube-api-access-fljgp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:36:33.499720 master-0 kubenswrapper[38936]: I0216 21:36:33.490879 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fljgp\" (UniqueName: \"kubernetes.io/projected/ce0bed6a-2010-497e-bea2-c4c4d493300e-kube-api-access-fljgp\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:33.565404 master-0 kubenswrapper[38936]: I0216 21:36:33.564872 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce0bed6a-2010-497e-bea2-c4c4d493300e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ce0bed6a-2010-497e-bea2-c4c4d493300e" (UID: "ce0bed6a-2010-497e-bea2-c4c4d493300e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:36:33.577728 master-0 kubenswrapper[38936]: I0216 21:36:33.575053 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce0bed6a-2010-497e-bea2-c4c4d493300e-config" (OuterVolumeSpecName: "config") pod "ce0bed6a-2010-497e-bea2-c4c4d493300e" (UID: "ce0bed6a-2010-497e-bea2-c4c4d493300e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:36:33.593435 master-0 kubenswrapper[38936]: I0216 21:36:33.593088 38936 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce0bed6a-2010-497e-bea2-c4c4d493300e-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:33.593435 master-0 kubenswrapper[38936]: I0216 21:36:33.593150 38936 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce0bed6a-2010-497e-bea2-c4c4d493300e-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:33.613465 master-0 kubenswrapper[38936]: I0216 21:36:33.613401 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-85e2-account-create-update-xh6dm" event={"ID":"95733bba-8f7b-460f-8793-e85c3f7066c3","Type":"ContainerStarted","Data":"9afcfe6dcde02181ae5aea5521b38a55eb9c68240f836c1eefa95e31ab8baaaa"} Feb 16 21:36:33.613465 master-0 kubenswrapper[38936]: I0216 21:36:33.613461 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-85e2-account-create-update-xh6dm" event={"ID":"95733bba-8f7b-460f-8793-e85c3f7066c3","Type":"ContainerStarted","Data":"39a352f9a1ce9e2bf9c3b156f44e91ef29001e4e0449b5d43f41c0efef6d1ed3"} Feb 16 21:36:33.614514 master-0 kubenswrapper[38936]: I0216 21:36:33.614474 38936 generic.go:334] "Generic (PLEG): container finished" podID="ff2be0c4-7a42-48ef-801c-8b4422008927" containerID="2da361135b59fea8df18e0415d446a40e11961b7858e0037b42f135271ff399f" exitCode=0 Feb 16 21:36:33.614583 master-0 kubenswrapper[38936]: I0216 21:36:33.614552 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-r2xtw" event={"ID":"ff2be0c4-7a42-48ef-801c-8b4422008927","Type":"ContainerDied","Data":"2da361135b59fea8df18e0415d446a40e11961b7858e0037b42f135271ff399f"} Feb 16 21:36:33.617442 master-0 kubenswrapper[38936]: I0216 21:36:33.616162 38936 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/placement-48b3-account-create-update-jsqjk" event={"ID":"c39bb875-9e10-497c-b1d1-c7cd8a76a92d","Type":"ContainerStarted","Data":"ff03077b5bcbe65962804a18b0183ea96a18af8f6c2af8c6cfd9bd03221680c1"} Feb 16 21:36:33.617442 master-0 kubenswrapper[38936]: I0216 21:36:33.616200 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-48b3-account-create-update-jsqjk" event={"ID":"c39bb875-9e10-497c-b1d1-c7cd8a76a92d","Type":"ContainerStarted","Data":"b0f999f54cdae45aa2581e45b538fa41d6a3cb46f45ddb1faf906e0f007fdc2b"} Feb 16 21:36:33.622135 master-0 kubenswrapper[38936]: I0216 21:36:33.621320 38936 generic.go:334] "Generic (PLEG): container finished" podID="6946c62d-ccec-4c64-bd64-d660f22d7d7a" containerID="88f43a1d16f1afe0b7c207e1cab890ac3b2c092ce5e187444b4f7b7297a23d89" exitCode=0 Feb 16 21:36:33.622135 master-0 kubenswrapper[38936]: I0216 21:36:33.621392 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kjwf8" event={"ID":"6946c62d-ccec-4c64-bd64-d660f22d7d7a","Type":"ContainerDied","Data":"88f43a1d16f1afe0b7c207e1cab890ac3b2c092ce5e187444b4f7b7297a23d89"} Feb 16 21:36:33.622135 master-0 kubenswrapper[38936]: I0216 21:36:33.621419 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kjwf8" event={"ID":"6946c62d-ccec-4c64-bd64-d660f22d7d7a","Type":"ContainerStarted","Data":"45f8b220d7913f67e1b5870c8614fca84d5a4c22325464d89ff520a6ee74bd48"} Feb 16 21:36:33.623097 master-0 kubenswrapper[38936]: I0216 21:36:33.623063 38936 generic.go:334] "Generic (PLEG): container finished" podID="a25b7436-4c82-45df-9e60-d2ceec4544f8" containerID="e64159a4c0d0f5fd8f4e525fc4ed277fac207a765258f1cb65f005e31dba5c3b" exitCode=0 Feb 16 21:36:33.623181 master-0 kubenswrapper[38936]: I0216 21:36:33.623119 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d442-account-create-update-p2dfg" 
event={"ID":"a25b7436-4c82-45df-9e60-d2ceec4544f8","Type":"ContainerDied","Data":"e64159a4c0d0f5fd8f4e525fc4ed277fac207a765258f1cb65f005e31dba5c3b"} Feb 16 21:36:33.629345 master-0 kubenswrapper[38936]: I0216 21:36:33.629283 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cvnf4" event={"ID":"77af961d-92d8-476f-9df5-91a49b295543","Type":"ContainerStarted","Data":"cb4fc74793d636ba4f4fe5b44befb93c41dc97582abb6be22555369a5070bf23"} Feb 16 21:36:33.629522 master-0 kubenswrapper[38936]: I0216 21:36:33.629364 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cvnf4" event={"ID":"77af961d-92d8-476f-9df5-91a49b295543","Type":"ContainerStarted","Data":"0cfa660d2d56a6a7c797b82175df9e59597857a3adec73a252f17f87e3be8bcc"} Feb 16 21:36:33.631162 master-0 kubenswrapper[38936]: I0216 21:36:33.631116 38936 generic.go:334] "Generic (PLEG): container finished" podID="ce0bed6a-2010-497e-bea2-c4c4d493300e" containerID="8e917e848b63feb9575ae911ed8c3b4bb163301d9f35534ca4b556cc670a6f0c" exitCode=0 Feb 16 21:36:33.631228 master-0 kubenswrapper[38936]: I0216 21:36:33.631198 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-5fq4v" Feb 16 21:36:33.631702 master-0 kubenswrapper[38936]: I0216 21:36:33.631254 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-5fq4v" event={"ID":"ce0bed6a-2010-497e-bea2-c4c4d493300e","Type":"ContainerDied","Data":"8e917e848b63feb9575ae911ed8c3b4bb163301d9f35534ca4b556cc670a6f0c"} Feb 16 21:36:33.631702 master-0 kubenswrapper[38936]: I0216 21:36:33.631304 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-5fq4v" event={"ID":"ce0bed6a-2010-497e-bea2-c4c4d493300e","Type":"ContainerDied","Data":"cd3bc3b3fec64c5ee7affa6f4ffd76c8b16d3df738f0bce04a4bf3cd7bbcf239"} Feb 16 21:36:33.631702 master-0 kubenswrapper[38936]: I0216 21:36:33.631330 38936 scope.go:117] "RemoveContainer" containerID="8e917e848b63feb9575ae911ed8c3b4bb163301d9f35534ca4b556cc670a6f0c" Feb 16 21:36:33.654887 master-0 kubenswrapper[38936]: I0216 21:36:33.654759 38936 scope.go:117] "RemoveContainer" containerID="4c03d45899d0ef8ac972adac04b08007f8928be406bc46ed1b285acbf94b9328" Feb 16 21:36:33.657195 master-0 kubenswrapper[38936]: I0216 21:36:33.657118 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-85e2-account-create-update-xh6dm" podStartSLOduration=2.6571026140000003 podStartE2EDuration="2.657102614s" podCreationTimestamp="2026-02-16 21:36:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:36:33.653159427 +0000 UTC m=+824.005162809" watchObservedRunningTime="2026-02-16 21:36:33.657102614 +0000 UTC m=+824.009105976" Feb 16 21:36:33.714845 master-0 kubenswrapper[38936]: I0216 21:36:33.714795 38936 scope.go:117] "RemoveContainer" containerID="8e917e848b63feb9575ae911ed8c3b4bb163301d9f35534ca4b556cc670a6f0c" Feb 16 21:36:33.715565 master-0 kubenswrapper[38936]: E0216 21:36:33.715541 38936 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e917e848b63feb9575ae911ed8c3b4bb163301d9f35534ca4b556cc670a6f0c\": container with ID starting with 8e917e848b63feb9575ae911ed8c3b4bb163301d9f35534ca4b556cc670a6f0c not found: ID does not exist" containerID="8e917e848b63feb9575ae911ed8c3b4bb163301d9f35534ca4b556cc670a6f0c" Feb 16 21:36:33.715839 master-0 kubenswrapper[38936]: I0216 21:36:33.715803 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e917e848b63feb9575ae911ed8c3b4bb163301d9f35534ca4b556cc670a6f0c"} err="failed to get container status \"8e917e848b63feb9575ae911ed8c3b4bb163301d9f35534ca4b556cc670a6f0c\": rpc error: code = NotFound desc = could not find container \"8e917e848b63feb9575ae911ed8c3b4bb163301d9f35534ca4b556cc670a6f0c\": container with ID starting with 8e917e848b63feb9575ae911ed8c3b4bb163301d9f35534ca4b556cc670a6f0c not found: ID does not exist" Feb 16 21:36:33.715927 master-0 kubenswrapper[38936]: I0216 21:36:33.715914 38936 scope.go:117] "RemoveContainer" containerID="4c03d45899d0ef8ac972adac04b08007f8928be406bc46ed1b285acbf94b9328" Feb 16 21:36:33.716348 master-0 kubenswrapper[38936]: E0216 21:36:33.716327 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c03d45899d0ef8ac972adac04b08007f8928be406bc46ed1b285acbf94b9328\": container with ID starting with 4c03d45899d0ef8ac972adac04b08007f8928be406bc46ed1b285acbf94b9328 not found: ID does not exist" containerID="4c03d45899d0ef8ac972adac04b08007f8928be406bc46ed1b285acbf94b9328" Feb 16 21:36:33.716464 master-0 kubenswrapper[38936]: I0216 21:36:33.716439 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c03d45899d0ef8ac972adac04b08007f8928be406bc46ed1b285acbf94b9328"} err="failed to get container status 
\"4c03d45899d0ef8ac972adac04b08007f8928be406bc46ed1b285acbf94b9328\": rpc error: code = NotFound desc = could not find container \"4c03d45899d0ef8ac972adac04b08007f8928be406bc46ed1b285acbf94b9328\": container with ID starting with 4c03d45899d0ef8ac972adac04b08007f8928be406bc46ed1b285acbf94b9328 not found: ID does not exist" Feb 16 21:36:33.774063 master-0 kubenswrapper[38936]: I0216 21:36:33.773787 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-5fq4v"] Feb 16 21:36:33.788677 master-0 kubenswrapper[38936]: I0216 21:36:33.788067 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-5fq4v"] Feb 16 21:36:33.792395 master-0 kubenswrapper[38936]: I0216 21:36:33.789984 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-48b3-account-create-update-jsqjk" podStartSLOduration=2.789962782 podStartE2EDuration="2.789962782s" podCreationTimestamp="2026-02-16 21:36:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:36:33.779155181 +0000 UTC m=+824.131158553" watchObservedRunningTime="2026-02-16 21:36:33.789962782 +0000 UTC m=+824.141966144" Feb 16 21:36:33.810473 master-0 kubenswrapper[38936]: I0216 21:36:33.810370 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-cvnf4" podStartSLOduration=2.810348313 podStartE2EDuration="2.810348313s" podCreationTimestamp="2026-02-16 21:36:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:36:33.804843964 +0000 UTC m=+824.156847326" watchObservedRunningTime="2026-02-16 21:36:33.810348313 +0000 UTC m=+824.162351675" Feb 16 21:36:33.833392 master-0 kubenswrapper[38936]: I0216 21:36:33.833330 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/root-account-create-update-6cmqp"] Feb 16 21:36:33.845262 master-0 kubenswrapper[38936]: I0216 21:36:33.845184 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-6cmqp"] Feb 16 21:36:33.886279 master-0 kubenswrapper[38936]: I0216 21:36:33.886219 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0afbc19b-fbcc-43e9-907f-e819b5865ee6" path="/var/lib/kubelet/pods/0afbc19b-fbcc-43e9-907f-e819b5865ee6/volumes" Feb 16 21:36:33.886951 master-0 kubenswrapper[38936]: I0216 21:36:33.886921 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce0bed6a-2010-497e-bea2-c4c4d493300e" path="/var/lib/kubelet/pods/ce0bed6a-2010-497e-bea2-c4c4d493300e/volumes" Feb 16 21:36:34.643844 master-0 kubenswrapper[38936]: I0216 21:36:34.643783 38936 generic.go:334] "Generic (PLEG): container finished" podID="77af961d-92d8-476f-9df5-91a49b295543" containerID="cb4fc74793d636ba4f4fe5b44befb93c41dc97582abb6be22555369a5070bf23" exitCode=0 Feb 16 21:36:34.644611 master-0 kubenswrapper[38936]: I0216 21:36:34.644553 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cvnf4" event={"ID":"77af961d-92d8-476f-9df5-91a49b295543","Type":"ContainerDied","Data":"cb4fc74793d636ba4f4fe5b44befb93c41dc97582abb6be22555369a5070bf23"} Feb 16 21:36:34.647046 master-0 kubenswrapper[38936]: I0216 21:36:34.647006 38936 generic.go:334] "Generic (PLEG): container finished" podID="95733bba-8f7b-460f-8793-e85c3f7066c3" containerID="9afcfe6dcde02181ae5aea5521b38a55eb9c68240f836c1eefa95e31ab8baaaa" exitCode=0 Feb 16 21:36:34.647150 master-0 kubenswrapper[38936]: I0216 21:36:34.647075 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-85e2-account-create-update-xh6dm" event={"ID":"95733bba-8f7b-460f-8793-e85c3f7066c3","Type":"ContainerDied","Data":"9afcfe6dcde02181ae5aea5521b38a55eb9c68240f836c1eefa95e31ab8baaaa"} Feb 16 21:36:34.649409 master-0 
kubenswrapper[38936]: I0216 21:36:34.649371 38936 generic.go:334] "Generic (PLEG): container finished" podID="c39bb875-9e10-497c-b1d1-c7cd8a76a92d" containerID="ff03077b5bcbe65962804a18b0183ea96a18af8f6c2af8c6cfd9bd03221680c1" exitCode=0 Feb 16 21:36:34.649496 master-0 kubenswrapper[38936]: I0216 21:36:34.649421 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-48b3-account-create-update-jsqjk" event={"ID":"c39bb875-9e10-497c-b1d1-c7cd8a76a92d","Type":"ContainerDied","Data":"ff03077b5bcbe65962804a18b0183ea96a18af8f6c2af8c6cfd9bd03221680c1"} Feb 16 21:36:35.253222 master-0 kubenswrapper[38936]: I0216 21:36:35.252574 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kjwf8" Feb 16 21:36:35.333743 master-0 kubenswrapper[38936]: I0216 21:36:35.333576 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdw5j\" (UniqueName: \"kubernetes.io/projected/6946c62d-ccec-4c64-bd64-d660f22d7d7a-kube-api-access-xdw5j\") pod \"6946c62d-ccec-4c64-bd64-d660f22d7d7a\" (UID: \"6946c62d-ccec-4c64-bd64-d660f22d7d7a\") " Feb 16 21:36:35.334002 master-0 kubenswrapper[38936]: I0216 21:36:35.333795 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6946c62d-ccec-4c64-bd64-d660f22d7d7a-operator-scripts\") pod \"6946c62d-ccec-4c64-bd64-d660f22d7d7a\" (UID: \"6946c62d-ccec-4c64-bd64-d660f22d7d7a\") " Feb 16 21:36:35.334587 master-0 kubenswrapper[38936]: I0216 21:36:35.334540 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6946c62d-ccec-4c64-bd64-d660f22d7d7a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6946c62d-ccec-4c64-bd64-d660f22d7d7a" (UID: "6946c62d-ccec-4c64-bd64-d660f22d7d7a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:36:35.335691 master-0 kubenswrapper[38936]: I0216 21:36:35.335662 38936 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6946c62d-ccec-4c64-bd64-d660f22d7d7a-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:35.336355 master-0 kubenswrapper[38936]: I0216 21:36:35.336320 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6946c62d-ccec-4c64-bd64-d660f22d7d7a-kube-api-access-xdw5j" (OuterVolumeSpecName: "kube-api-access-xdw5j") pod "6946c62d-ccec-4c64-bd64-d660f22d7d7a" (UID: "6946c62d-ccec-4c64-bd64-d660f22d7d7a"). InnerVolumeSpecName "kube-api-access-xdw5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:36:35.438410 master-0 kubenswrapper[38936]: I0216 21:36:35.438290 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdw5j\" (UniqueName: \"kubernetes.io/projected/6946c62d-ccec-4c64-bd64-d660f22d7d7a-kube-api-access-xdw5j\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:35.478816 master-0 kubenswrapper[38936]: I0216 21:36:35.478766 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-r2xtw" Feb 16 21:36:35.484004 master-0 kubenswrapper[38936]: I0216 21:36:35.483039 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-d442-account-create-update-p2dfg" Feb 16 21:36:35.539680 master-0 kubenswrapper[38936]: I0216 21:36:35.538870 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fn64\" (UniqueName: \"kubernetes.io/projected/a25b7436-4c82-45df-9e60-d2ceec4544f8-kube-api-access-8fn64\") pod \"a25b7436-4c82-45df-9e60-d2ceec4544f8\" (UID: \"a25b7436-4c82-45df-9e60-d2ceec4544f8\") " Feb 16 21:36:35.539680 master-0 kubenswrapper[38936]: I0216 21:36:35.538970 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff2be0c4-7a42-48ef-801c-8b4422008927-operator-scripts\") pod \"ff2be0c4-7a42-48ef-801c-8b4422008927\" (UID: \"ff2be0c4-7a42-48ef-801c-8b4422008927\") " Feb 16 21:36:35.539680 master-0 kubenswrapper[38936]: I0216 21:36:35.538995 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf7jm\" (UniqueName: \"kubernetes.io/projected/ff2be0c4-7a42-48ef-801c-8b4422008927-kube-api-access-rf7jm\") pod \"ff2be0c4-7a42-48ef-801c-8b4422008927\" (UID: \"ff2be0c4-7a42-48ef-801c-8b4422008927\") " Feb 16 21:36:35.539680 master-0 kubenswrapper[38936]: I0216 21:36:35.539028 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a25b7436-4c82-45df-9e60-d2ceec4544f8-operator-scripts\") pod \"a25b7436-4c82-45df-9e60-d2ceec4544f8\" (UID: \"a25b7436-4c82-45df-9e60-d2ceec4544f8\") " Feb 16 21:36:35.540439 master-0 kubenswrapper[38936]: I0216 21:36:35.540124 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a25b7436-4c82-45df-9e60-d2ceec4544f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a25b7436-4c82-45df-9e60-d2ceec4544f8" (UID: "a25b7436-4c82-45df-9e60-d2ceec4544f8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:36:35.540981 master-0 kubenswrapper[38936]: I0216 21:36:35.540930 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff2be0c4-7a42-48ef-801c-8b4422008927-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff2be0c4-7a42-48ef-801c-8b4422008927" (UID: "ff2be0c4-7a42-48ef-801c-8b4422008927"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:36:35.544328 master-0 kubenswrapper[38936]: I0216 21:36:35.544298 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25b7436-4c82-45df-9e60-d2ceec4544f8-kube-api-access-8fn64" (OuterVolumeSpecName: "kube-api-access-8fn64") pod "a25b7436-4c82-45df-9e60-d2ceec4544f8" (UID: "a25b7436-4c82-45df-9e60-d2ceec4544f8"). InnerVolumeSpecName "kube-api-access-8fn64". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:36:35.545975 master-0 kubenswrapper[38936]: I0216 21:36:35.545912 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff2be0c4-7a42-48ef-801c-8b4422008927-kube-api-access-rf7jm" (OuterVolumeSpecName: "kube-api-access-rf7jm") pod "ff2be0c4-7a42-48ef-801c-8b4422008927" (UID: "ff2be0c4-7a42-48ef-801c-8b4422008927"). InnerVolumeSpecName "kube-api-access-rf7jm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:36:35.641259 master-0 kubenswrapper[38936]: I0216 21:36:35.641087 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fn64\" (UniqueName: \"kubernetes.io/projected/a25b7436-4c82-45df-9e60-d2ceec4544f8-kube-api-access-8fn64\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:35.641259 master-0 kubenswrapper[38936]: I0216 21:36:35.641138 38936 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff2be0c4-7a42-48ef-801c-8b4422008927-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:35.641259 master-0 kubenswrapper[38936]: I0216 21:36:35.641152 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rf7jm\" (UniqueName: \"kubernetes.io/projected/ff2be0c4-7a42-48ef-801c-8b4422008927-kube-api-access-rf7jm\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:35.641259 master-0 kubenswrapper[38936]: I0216 21:36:35.641165 38936 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a25b7436-4c82-45df-9e60-d2ceec4544f8-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:35.663392 master-0 kubenswrapper[38936]: I0216 21:36:35.663200 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kjwf8" event={"ID":"6946c62d-ccec-4c64-bd64-d660f22d7d7a","Type":"ContainerDied","Data":"45f8b220d7913f67e1b5870c8614fca84d5a4c22325464d89ff520a6ee74bd48"} Feb 16 21:36:35.663392 master-0 kubenswrapper[38936]: I0216 21:36:35.663250 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45f8b220d7913f67e1b5870c8614fca84d5a4c22325464d89ff520a6ee74bd48" Feb 16 21:36:35.663392 master-0 kubenswrapper[38936]: I0216 21:36:35.663314 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-kjwf8" Feb 16 21:36:35.672166 master-0 kubenswrapper[38936]: I0216 21:36:35.672115 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d442-account-create-update-p2dfg" event={"ID":"a25b7436-4c82-45df-9e60-d2ceec4544f8","Type":"ContainerDied","Data":"3852a6ef66c4a2636546678d456ef77ceeb230ef3da0a7bee02935ab8410eb72"} Feb 16 21:36:35.672269 master-0 kubenswrapper[38936]: I0216 21:36:35.672172 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3852a6ef66c4a2636546678d456ef77ceeb230ef3da0a7bee02935ab8410eb72" Feb 16 21:36:35.672269 master-0 kubenswrapper[38936]: I0216 21:36:35.672139 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d442-account-create-update-p2dfg" Feb 16 21:36:35.677953 master-0 kubenswrapper[38936]: I0216 21:36:35.673518 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-r2xtw" Feb 16 21:36:35.677953 master-0 kubenswrapper[38936]: I0216 21:36:35.673530 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-r2xtw" event={"ID":"ff2be0c4-7a42-48ef-801c-8b4422008927","Type":"ContainerDied","Data":"06ce740029465eabeb755f2d9342608354a40c2c498a50deec7f8709aa8d442e"} Feb 16 21:36:35.677953 master-0 kubenswrapper[38936]: I0216 21:36:35.673562 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06ce740029465eabeb755f2d9342608354a40c2c498a50deec7f8709aa8d442e" Feb 16 21:36:36.290335 master-0 kubenswrapper[38936]: I0216 21:36:36.290172 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-cvnf4" Feb 16 21:36:36.371410 master-0 kubenswrapper[38936]: I0216 21:36:36.370492 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77af961d-92d8-476f-9df5-91a49b295543-operator-scripts\") pod \"77af961d-92d8-476f-9df5-91a49b295543\" (UID: \"77af961d-92d8-476f-9df5-91a49b295543\") " Feb 16 21:36:36.371410 master-0 kubenswrapper[38936]: I0216 21:36:36.370613 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcv9f\" (UniqueName: \"kubernetes.io/projected/77af961d-92d8-476f-9df5-91a49b295543-kube-api-access-gcv9f\") pod \"77af961d-92d8-476f-9df5-91a49b295543\" (UID: \"77af961d-92d8-476f-9df5-91a49b295543\") " Feb 16 21:36:36.376736 master-0 kubenswrapper[38936]: I0216 21:36:36.376669 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77af961d-92d8-476f-9df5-91a49b295543-kube-api-access-gcv9f" (OuterVolumeSpecName: "kube-api-access-gcv9f") pod "77af961d-92d8-476f-9df5-91a49b295543" (UID: "77af961d-92d8-476f-9df5-91a49b295543"). InnerVolumeSpecName "kube-api-access-gcv9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:36:36.377168 master-0 kubenswrapper[38936]: I0216 21:36:36.377139 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77af961d-92d8-476f-9df5-91a49b295543-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "77af961d-92d8-476f-9df5-91a49b295543" (UID: "77af961d-92d8-476f-9df5-91a49b295543"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:36:36.474690 master-0 kubenswrapper[38936]: I0216 21:36:36.474523 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcv9f\" (UniqueName: \"kubernetes.io/projected/77af961d-92d8-476f-9df5-91a49b295543-kube-api-access-gcv9f\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:36.474823 master-0 kubenswrapper[38936]: I0216 21:36:36.474716 38936 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77af961d-92d8-476f-9df5-91a49b295543-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:36.487418 master-0 kubenswrapper[38936]: I0216 21:36:36.487352 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-85e2-account-create-update-xh6dm" Feb 16 21:36:36.502680 master-0 kubenswrapper[38936]: I0216 21:36:36.502615 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-48b3-account-create-update-jsqjk" Feb 16 21:36:36.576098 master-0 kubenswrapper[38936]: I0216 21:36:36.576044 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c39bb875-9e10-497c-b1d1-c7cd8a76a92d-operator-scripts\") pod \"c39bb875-9e10-497c-b1d1-c7cd8a76a92d\" (UID: \"c39bb875-9e10-497c-b1d1-c7cd8a76a92d\") " Feb 16 21:36:36.576434 master-0 kubenswrapper[38936]: I0216 21:36:36.576175 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9mv4\" (UniqueName: \"kubernetes.io/projected/95733bba-8f7b-460f-8793-e85c3f7066c3-kube-api-access-j9mv4\") pod \"95733bba-8f7b-460f-8793-e85c3f7066c3\" (UID: \"95733bba-8f7b-460f-8793-e85c3f7066c3\") " Feb 16 21:36:36.576434 master-0 kubenswrapper[38936]: I0216 21:36:36.576255 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-7f6rf\" (UniqueName: \"kubernetes.io/projected/c39bb875-9e10-497c-b1d1-c7cd8a76a92d-kube-api-access-7f6rf\") pod \"c39bb875-9e10-497c-b1d1-c7cd8a76a92d\" (UID: \"c39bb875-9e10-497c-b1d1-c7cd8a76a92d\") " Feb 16 21:36:36.576434 master-0 kubenswrapper[38936]: I0216 21:36:36.576355 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95733bba-8f7b-460f-8793-e85c3f7066c3-operator-scripts\") pod \"95733bba-8f7b-460f-8793-e85c3f7066c3\" (UID: \"95733bba-8f7b-460f-8793-e85c3f7066c3\") " Feb 16 21:36:36.577381 master-0 kubenswrapper[38936]: I0216 21:36:36.577350 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95733bba-8f7b-460f-8793-e85c3f7066c3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95733bba-8f7b-460f-8793-e85c3f7066c3" (UID: "95733bba-8f7b-460f-8793-e85c3f7066c3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:36:36.577822 master-0 kubenswrapper[38936]: I0216 21:36:36.577791 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c39bb875-9e10-497c-b1d1-c7cd8a76a92d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c39bb875-9e10-497c-b1d1-c7cd8a76a92d" (UID: "c39bb875-9e10-497c-b1d1-c7cd8a76a92d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:36:36.581103 master-0 kubenswrapper[38936]: I0216 21:36:36.581058 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c39bb875-9e10-497c-b1d1-c7cd8a76a92d-kube-api-access-7f6rf" (OuterVolumeSpecName: "kube-api-access-7f6rf") pod "c39bb875-9e10-497c-b1d1-c7cd8a76a92d" (UID: "c39bb875-9e10-497c-b1d1-c7cd8a76a92d"). InnerVolumeSpecName "kube-api-access-7f6rf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:36:36.584774 master-0 kubenswrapper[38936]: I0216 21:36:36.583878 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95733bba-8f7b-460f-8793-e85c3f7066c3-kube-api-access-j9mv4" (OuterVolumeSpecName: "kube-api-access-j9mv4") pod "95733bba-8f7b-460f-8793-e85c3f7066c3" (UID: "95733bba-8f7b-460f-8793-e85c3f7066c3"). InnerVolumeSpecName "kube-api-access-j9mv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:36:36.679637 master-0 kubenswrapper[38936]: I0216 21:36:36.678514 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f6rf\" (UniqueName: \"kubernetes.io/projected/c39bb875-9e10-497c-b1d1-c7cd8a76a92d-kube-api-access-7f6rf\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:36.679637 master-0 kubenswrapper[38936]: I0216 21:36:36.678570 38936 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95733bba-8f7b-460f-8793-e85c3f7066c3-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:36.679637 master-0 kubenswrapper[38936]: I0216 21:36:36.678586 38936 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c39bb875-9e10-497c-b1d1-c7cd8a76a92d-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:36.679637 master-0 kubenswrapper[38936]: I0216 21:36:36.678599 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9mv4\" (UniqueName: \"kubernetes.io/projected/95733bba-8f7b-460f-8793-e85c3f7066c3-kube-api-access-j9mv4\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:36.685551 master-0 kubenswrapper[38936]: I0216 21:36:36.685467 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-85e2-account-create-update-xh6dm" 
event={"ID":"95733bba-8f7b-460f-8793-e85c3f7066c3","Type":"ContainerDied","Data":"39a352f9a1ce9e2bf9c3b156f44e91ef29001e4e0449b5d43f41c0efef6d1ed3"} Feb 16 21:36:36.685551 master-0 kubenswrapper[38936]: I0216 21:36:36.685546 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39a352f9a1ce9e2bf9c3b156f44e91ef29001e4e0449b5d43f41c0efef6d1ed3" Feb 16 21:36:36.685777 master-0 kubenswrapper[38936]: I0216 21:36:36.685570 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-85e2-account-create-update-xh6dm" Feb 16 21:36:36.687928 master-0 kubenswrapper[38936]: I0216 21:36:36.687865 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-48b3-account-create-update-jsqjk" event={"ID":"c39bb875-9e10-497c-b1d1-c7cd8a76a92d","Type":"ContainerDied","Data":"b0f999f54cdae45aa2581e45b538fa41d6a3cb46f45ddb1faf906e0f007fdc2b"} Feb 16 21:36:36.687928 master-0 kubenswrapper[38936]: I0216 21:36:36.687926 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0f999f54cdae45aa2581e45b538fa41d6a3cb46f45ddb1faf906e0f007fdc2b" Feb 16 21:36:36.688075 master-0 kubenswrapper[38936]: I0216 21:36:36.687940 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-48b3-account-create-update-jsqjk" Feb 16 21:36:36.689438 master-0 kubenswrapper[38936]: I0216 21:36:36.689387 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-cvnf4" event={"ID":"77af961d-92d8-476f-9df5-91a49b295543","Type":"ContainerDied","Data":"0cfa660d2d56a6a7c797b82175df9e59597857a3adec73a252f17f87e3be8bcc"} Feb 16 21:36:36.689438 master-0 kubenswrapper[38936]: I0216 21:36:36.689431 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cfa660d2d56a6a7c797b82175df9e59597857a3adec73a252f17f87e3be8bcc" Feb 16 21:36:36.689567 master-0 kubenswrapper[38936]: I0216 21:36:36.689440 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-cvnf4" Feb 16 21:36:37.432401 master-0 kubenswrapper[38936]: I0216 21:36:37.432095 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-rl5nw"] Feb 16 21:36:37.432934 master-0 kubenswrapper[38936]: E0216 21:36:37.432801 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25b7436-4c82-45df-9e60-d2ceec4544f8" containerName="mariadb-account-create-update" Feb 16 21:36:37.432934 master-0 kubenswrapper[38936]: I0216 21:36:37.432823 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25b7436-4c82-45df-9e60-d2ceec4544f8" containerName="mariadb-account-create-update" Feb 16 21:36:37.432934 master-0 kubenswrapper[38936]: E0216 21:36:37.432857 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77af961d-92d8-476f-9df5-91a49b295543" containerName="mariadb-database-create" Feb 16 21:36:37.432934 master-0 kubenswrapper[38936]: I0216 21:36:37.432884 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="77af961d-92d8-476f-9df5-91a49b295543" containerName="mariadb-database-create" Feb 16 21:36:37.432934 master-0 kubenswrapper[38936]: E0216 21:36:37.432898 38936 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce0bed6a-2010-497e-bea2-c4c4d493300e" containerName="init" Feb 16 21:36:37.432934 master-0 kubenswrapper[38936]: I0216 21:36:37.432909 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce0bed6a-2010-497e-bea2-c4c4d493300e" containerName="init" Feb 16 21:36:37.432934 master-0 kubenswrapper[38936]: E0216 21:36:37.432923 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff2be0c4-7a42-48ef-801c-8b4422008927" containerName="mariadb-database-create" Feb 16 21:36:37.432934 master-0 kubenswrapper[38936]: I0216 21:36:37.432930 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff2be0c4-7a42-48ef-801c-8b4422008927" containerName="mariadb-database-create" Feb 16 21:36:37.433311 master-0 kubenswrapper[38936]: E0216 21:36:37.432964 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39bb875-9e10-497c-b1d1-c7cd8a76a92d" containerName="mariadb-account-create-update" Feb 16 21:36:37.433311 master-0 kubenswrapper[38936]: I0216 21:36:37.432973 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39bb875-9e10-497c-b1d1-c7cd8a76a92d" containerName="mariadb-account-create-update" Feb 16 21:36:37.433311 master-0 kubenswrapper[38936]: E0216 21:36:37.432992 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6946c62d-ccec-4c64-bd64-d660f22d7d7a" containerName="mariadb-database-create" Feb 16 21:36:37.433311 master-0 kubenswrapper[38936]: I0216 21:36:37.433000 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="6946c62d-ccec-4c64-bd64-d660f22d7d7a" containerName="mariadb-database-create" Feb 16 21:36:37.433311 master-0 kubenswrapper[38936]: E0216 21:36:37.433015 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95733bba-8f7b-460f-8793-e85c3f7066c3" containerName="mariadb-account-create-update" Feb 16 21:36:37.433311 master-0 kubenswrapper[38936]: I0216 21:36:37.433021 38936 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="95733bba-8f7b-460f-8793-e85c3f7066c3" containerName="mariadb-account-create-update" Feb 16 21:36:37.433311 master-0 kubenswrapper[38936]: E0216 21:36:37.433034 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce0bed6a-2010-497e-bea2-c4c4d493300e" containerName="dnsmasq-dns" Feb 16 21:36:37.433311 master-0 kubenswrapper[38936]: I0216 21:36:37.433041 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce0bed6a-2010-497e-bea2-c4c4d493300e" containerName="dnsmasq-dns" Feb 16 21:36:37.433575 master-0 kubenswrapper[38936]: I0216 21:36:37.433338 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="c39bb875-9e10-497c-b1d1-c7cd8a76a92d" containerName="mariadb-account-create-update" Feb 16 21:36:37.433575 master-0 kubenswrapper[38936]: I0216 21:36:37.433385 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="6946c62d-ccec-4c64-bd64-d660f22d7d7a" containerName="mariadb-database-create" Feb 16 21:36:37.433575 master-0 kubenswrapper[38936]: I0216 21:36:37.433403 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="95733bba-8f7b-460f-8793-e85c3f7066c3" containerName="mariadb-account-create-update" Feb 16 21:36:37.433575 master-0 kubenswrapper[38936]: I0216 21:36:37.433413 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25b7436-4c82-45df-9e60-d2ceec4544f8" containerName="mariadb-account-create-update" Feb 16 21:36:37.433575 master-0 kubenswrapper[38936]: I0216 21:36:37.433430 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce0bed6a-2010-497e-bea2-c4c4d493300e" containerName="dnsmasq-dns" Feb 16 21:36:37.433575 master-0 kubenswrapper[38936]: I0216 21:36:37.433476 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="77af961d-92d8-476f-9df5-91a49b295543" containerName="mariadb-database-create" Feb 16 21:36:37.433575 master-0 kubenswrapper[38936]: I0216 21:36:37.433494 38936 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="ff2be0c4-7a42-48ef-801c-8b4422008927" containerName="mariadb-database-create" Feb 16 21:36:37.434461 master-0 kubenswrapper[38936]: I0216 21:36:37.434415 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rl5nw" Feb 16 21:36:37.436927 master-0 kubenswrapper[38936]: I0216 21:36:37.436867 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 16 21:36:37.496471 master-0 kubenswrapper[38936]: I0216 21:36:37.496397 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdpg5\" (UniqueName: \"kubernetes.io/projected/8cb39cf4-45bc-414a-aab6-08992ddaeb12-kube-api-access-gdpg5\") pod \"root-account-create-update-rl5nw\" (UID: \"8cb39cf4-45bc-414a-aab6-08992ddaeb12\") " pod="openstack/root-account-create-update-rl5nw" Feb 16 21:36:37.496702 master-0 kubenswrapper[38936]: I0216 21:36:37.496481 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8cb39cf4-45bc-414a-aab6-08992ddaeb12-operator-scripts\") pod \"root-account-create-update-rl5nw\" (UID: \"8cb39cf4-45bc-414a-aab6-08992ddaeb12\") " pod="openstack/root-account-create-update-rl5nw" Feb 16 21:36:37.504217 master-0 kubenswrapper[38936]: I0216 21:36:37.504119 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rl5nw"] Feb 16 21:36:37.598195 master-0 kubenswrapper[38936]: I0216 21:36:37.598104 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdpg5\" (UniqueName: \"kubernetes.io/projected/8cb39cf4-45bc-414a-aab6-08992ddaeb12-kube-api-access-gdpg5\") pod \"root-account-create-update-rl5nw\" (UID: \"8cb39cf4-45bc-414a-aab6-08992ddaeb12\") " pod="openstack/root-account-create-update-rl5nw" 
Feb 16 21:36:37.598195 master-0 kubenswrapper[38936]: I0216 21:36:37.598201 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8cb39cf4-45bc-414a-aab6-08992ddaeb12-operator-scripts\") pod \"root-account-create-update-rl5nw\" (UID: \"8cb39cf4-45bc-414a-aab6-08992ddaeb12\") " pod="openstack/root-account-create-update-rl5nw" Feb 16 21:36:37.599413 master-0 kubenswrapper[38936]: I0216 21:36:37.599370 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8cb39cf4-45bc-414a-aab6-08992ddaeb12-operator-scripts\") pod \"root-account-create-update-rl5nw\" (UID: \"8cb39cf4-45bc-414a-aab6-08992ddaeb12\") " pod="openstack/root-account-create-update-rl5nw" Feb 16 21:36:37.669381 master-0 kubenswrapper[38936]: I0216 21:36:37.669308 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdpg5\" (UniqueName: \"kubernetes.io/projected/8cb39cf4-45bc-414a-aab6-08992ddaeb12-kube-api-access-gdpg5\") pod \"root-account-create-update-rl5nw\" (UID: \"8cb39cf4-45bc-414a-aab6-08992ddaeb12\") " pod="openstack/root-account-create-update-rl5nw" Feb 16 21:36:37.757093 master-0 kubenswrapper[38936]: I0216 21:36:37.754999 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rl5nw" Feb 16 21:36:38.301454 master-0 kubenswrapper[38936]: I0216 21:36:38.301384 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-rl5nw"] Feb 16 21:36:38.303806 master-0 kubenswrapper[38936]: W0216 21:36:38.303753 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cb39cf4_45bc_414a_aab6_08992ddaeb12.slice/crio-30370d53b725523a35dac05c0dca2cbc364e601d63cd6286e02223b25dedc017 WatchSource:0}: Error finding container 30370d53b725523a35dac05c0dca2cbc364e601d63cd6286e02223b25dedc017: Status 404 returned error can't find the container with id 30370d53b725523a35dac05c0dca2cbc364e601d63cd6286e02223b25dedc017 Feb 16 21:36:38.719564 master-0 kubenswrapper[38936]: I0216 21:36:38.719431 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rl5nw" event={"ID":"8cb39cf4-45bc-414a-aab6-08992ddaeb12","Type":"ContainerStarted","Data":"4933899af8057b08b10fcdcf90edb599ecef52e406afa529dd623f56950f9e05"} Feb 16 21:36:38.719834 master-0 kubenswrapper[38936]: I0216 21:36:38.719816 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rl5nw" event={"ID":"8cb39cf4-45bc-414a-aab6-08992ddaeb12","Type":"ContainerStarted","Data":"30370d53b725523a35dac05c0dca2cbc364e601d63cd6286e02223b25dedc017"} Feb 16 21:36:38.746814 master-0 kubenswrapper[38936]: I0216 21:36:38.746720 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-rl5nw" podStartSLOduration=1.746698355 podStartE2EDuration="1.746698355s" podCreationTimestamp="2026-02-16 21:36:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:36:38.737614659 +0000 UTC m=+829.089618021" 
watchObservedRunningTime="2026-02-16 21:36:38.746698355 +0000 UTC m=+829.098701717" Feb 16 21:36:39.239581 master-0 kubenswrapper[38936]: I0216 21:36:39.239537 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 16 21:36:39.731011 master-0 kubenswrapper[38936]: I0216 21:36:39.730960 38936 generic.go:334] "Generic (PLEG): container finished" podID="8cb39cf4-45bc-414a-aab6-08992ddaeb12" containerID="4933899af8057b08b10fcdcf90edb599ecef52e406afa529dd623f56950f9e05" exitCode=0 Feb 16 21:36:39.731140 master-0 kubenswrapper[38936]: I0216 21:36:39.731013 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rl5nw" event={"ID":"8cb39cf4-45bc-414a-aab6-08992ddaeb12","Type":"ContainerDied","Data":"4933899af8057b08b10fcdcf90edb599ecef52e406afa529dd623f56950f9e05"} Feb 16 21:36:40.265586 master-0 kubenswrapper[38936]: I0216 21:36:40.265512 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/73f726c9-e2b2-4038-a202-5df2ede23bf5-etc-swift\") pod \"swift-storage-0\" (UID: \"73f726c9-e2b2-4038-a202-5df2ede23bf5\") " pod="openstack/swift-storage-0" Feb 16 21:36:40.270395 master-0 kubenswrapper[38936]: I0216 21:36:40.270336 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/73f726c9-e2b2-4038-a202-5df2ede23bf5-etc-swift\") pod \"swift-storage-0\" (UID: \"73f726c9-e2b2-4038-a202-5df2ede23bf5\") " pod="openstack/swift-storage-0" Feb 16 21:36:40.290536 master-0 kubenswrapper[38936]: I0216 21:36:40.290474 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 16 21:36:40.749561 master-0 kubenswrapper[38936]: I0216 21:36:40.749436 38936 generic.go:334] "Generic (PLEG): container finished" podID="67cebf05-d1da-4a45-aef9-8366546424a5" containerID="d15331ca9850674a2a9c184b58eab8698bedd67436836fec0d75cf51fc3293c2" exitCode=0 Feb 16 21:36:40.749561 master-0 kubenswrapper[38936]: I0216 21:36:40.749526 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-l6dz5" event={"ID":"67cebf05-d1da-4a45-aef9-8366546424a5","Type":"ContainerDied","Data":"d15331ca9850674a2a9c184b58eab8698bedd67436836fec0d75cf51fc3293c2"} Feb 16 21:36:40.750945 master-0 kubenswrapper[38936]: I0216 21:36:40.750900 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 21:36:40.763586 master-0 kubenswrapper[38936]: W0216 21:36:40.763218 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73f726c9_e2b2_4038_a202_5df2ede23bf5.slice/crio-8806e999940eea64c4db38efb53c57ebb5f49dbcf8aa3af210de919c2dd8b14b WatchSource:0}: Error finding container 8806e999940eea64c4db38efb53c57ebb5f49dbcf8aa3af210de919c2dd8b14b: Status 404 returned error can't find the container with id 8806e999940eea64c4db38efb53c57ebb5f49dbcf8aa3af210de919c2dd8b14b Feb 16 21:36:41.229716 master-0 kubenswrapper[38936]: I0216 21:36:41.229664 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-rl5nw" Feb 16 21:36:41.394896 master-0 kubenswrapper[38936]: I0216 21:36:41.394812 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdpg5\" (UniqueName: \"kubernetes.io/projected/8cb39cf4-45bc-414a-aab6-08992ddaeb12-kube-api-access-gdpg5\") pod \"8cb39cf4-45bc-414a-aab6-08992ddaeb12\" (UID: \"8cb39cf4-45bc-414a-aab6-08992ddaeb12\") " Feb 16 21:36:41.395639 master-0 kubenswrapper[38936]: I0216 21:36:41.395142 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8cb39cf4-45bc-414a-aab6-08992ddaeb12-operator-scripts\") pod \"8cb39cf4-45bc-414a-aab6-08992ddaeb12\" (UID: \"8cb39cf4-45bc-414a-aab6-08992ddaeb12\") " Feb 16 21:36:41.396000 master-0 kubenswrapper[38936]: I0216 21:36:41.395944 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cb39cf4-45bc-414a-aab6-08992ddaeb12-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8cb39cf4-45bc-414a-aab6-08992ddaeb12" (UID: "8cb39cf4-45bc-414a-aab6-08992ddaeb12"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:36:41.398716 master-0 kubenswrapper[38936]: I0216 21:36:41.398454 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cb39cf4-45bc-414a-aab6-08992ddaeb12-kube-api-access-gdpg5" (OuterVolumeSpecName: "kube-api-access-gdpg5") pod "8cb39cf4-45bc-414a-aab6-08992ddaeb12" (UID: "8cb39cf4-45bc-414a-aab6-08992ddaeb12"). InnerVolumeSpecName "kube-api-access-gdpg5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:36:41.498020 master-0 kubenswrapper[38936]: I0216 21:36:41.497970 38936 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8cb39cf4-45bc-414a-aab6-08992ddaeb12-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:41.498020 master-0 kubenswrapper[38936]: I0216 21:36:41.498013 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdpg5\" (UniqueName: \"kubernetes.io/projected/8cb39cf4-45bc-414a-aab6-08992ddaeb12-kube-api-access-gdpg5\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:41.589066 master-0 kubenswrapper[38936]: I0216 21:36:41.589011 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-hfz86"] Feb 16 21:36:41.589701 master-0 kubenswrapper[38936]: E0216 21:36:41.589646 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb39cf4-45bc-414a-aab6-08992ddaeb12" containerName="mariadb-account-create-update" Feb 16 21:36:41.589701 master-0 kubenswrapper[38936]: I0216 21:36:41.589702 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb39cf4-45bc-414a-aab6-08992ddaeb12" containerName="mariadb-account-create-update" Feb 16 21:36:41.590094 master-0 kubenswrapper[38936]: I0216 21:36:41.590063 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cb39cf4-45bc-414a-aab6-08992ddaeb12" containerName="mariadb-account-create-update" Feb 16 21:36:41.591040 master-0 kubenswrapper[38936]: I0216 21:36:41.591015 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-hfz86" Feb 16 21:36:41.609810 master-0 kubenswrapper[38936]: I0216 21:36:41.601320 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-1d7ec-config-data" Feb 16 21:36:41.615270 master-0 kubenswrapper[38936]: I0216 21:36:41.615202 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-hfz86"] Feb 16 21:36:41.703114 master-0 kubenswrapper[38936]: I0216 21:36:41.702949 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4bb312b5-f96b-4689-9d30-c4c878aae0ec-db-sync-config-data\") pod \"glance-db-sync-hfz86\" (UID: \"4bb312b5-f96b-4689-9d30-c4c878aae0ec\") " pod="openstack/glance-db-sync-hfz86" Feb 16 21:36:41.703114 master-0 kubenswrapper[38936]: I0216 21:36:41.703040 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bb312b5-f96b-4689-9d30-c4c878aae0ec-combined-ca-bundle\") pod \"glance-db-sync-hfz86\" (UID: \"4bb312b5-f96b-4689-9d30-c4c878aae0ec\") " pod="openstack/glance-db-sync-hfz86" Feb 16 21:36:41.703405 master-0 kubenswrapper[38936]: I0216 21:36:41.703155 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bb312b5-f96b-4689-9d30-c4c878aae0ec-config-data\") pod \"glance-db-sync-hfz86\" (UID: \"4bb312b5-f96b-4689-9d30-c4c878aae0ec\") " pod="openstack/glance-db-sync-hfz86" Feb 16 21:36:41.703405 master-0 kubenswrapper[38936]: I0216 21:36:41.703240 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhtsm\" (UniqueName: \"kubernetes.io/projected/4bb312b5-f96b-4689-9d30-c4c878aae0ec-kube-api-access-bhtsm\") pod \"glance-db-sync-hfz86\" (UID: 
\"4bb312b5-f96b-4689-9d30-c4c878aae0ec\") " pod="openstack/glance-db-sync-hfz86" Feb 16 21:36:41.761054 master-0 kubenswrapper[38936]: I0216 21:36:41.760988 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-rl5nw" event={"ID":"8cb39cf4-45bc-414a-aab6-08992ddaeb12","Type":"ContainerDied","Data":"30370d53b725523a35dac05c0dca2cbc364e601d63cd6286e02223b25dedc017"} Feb 16 21:36:41.761054 master-0 kubenswrapper[38936]: I0216 21:36:41.761018 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-rl5nw" Feb 16 21:36:41.761054 master-0 kubenswrapper[38936]: I0216 21:36:41.761057 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30370d53b725523a35dac05c0dca2cbc364e601d63cd6286e02223b25dedc017" Feb 16 21:36:41.763992 master-0 kubenswrapper[38936]: I0216 21:36:41.763945 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"73f726c9-e2b2-4038-a202-5df2ede23bf5","Type":"ContainerStarted","Data":"8806e999940eea64c4db38efb53c57ebb5f49dbcf8aa3af210de919c2dd8b14b"} Feb 16 21:36:41.805170 master-0 kubenswrapper[38936]: I0216 21:36:41.805110 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4bb312b5-f96b-4689-9d30-c4c878aae0ec-db-sync-config-data\") pod \"glance-db-sync-hfz86\" (UID: \"4bb312b5-f96b-4689-9d30-c4c878aae0ec\") " pod="openstack/glance-db-sync-hfz86" Feb 16 21:36:41.805391 master-0 kubenswrapper[38936]: I0216 21:36:41.805193 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bb312b5-f96b-4689-9d30-c4c878aae0ec-combined-ca-bundle\") pod \"glance-db-sync-hfz86\" (UID: \"4bb312b5-f96b-4689-9d30-c4c878aae0ec\") " pod="openstack/glance-db-sync-hfz86" Feb 16 21:36:41.805391 master-0 
kubenswrapper[38936]: I0216 21:36:41.805286 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bb312b5-f96b-4689-9d30-c4c878aae0ec-config-data\") pod \"glance-db-sync-hfz86\" (UID: \"4bb312b5-f96b-4689-9d30-c4c878aae0ec\") " pod="openstack/glance-db-sync-hfz86" Feb 16 21:36:41.805391 master-0 kubenswrapper[38936]: I0216 21:36:41.805330 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhtsm\" (UniqueName: \"kubernetes.io/projected/4bb312b5-f96b-4689-9d30-c4c878aae0ec-kube-api-access-bhtsm\") pod \"glance-db-sync-hfz86\" (UID: \"4bb312b5-f96b-4689-9d30-c4c878aae0ec\") " pod="openstack/glance-db-sync-hfz86" Feb 16 21:36:41.809904 master-0 kubenswrapper[38936]: I0216 21:36:41.809840 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4bb312b5-f96b-4689-9d30-c4c878aae0ec-db-sync-config-data\") pod \"glance-db-sync-hfz86\" (UID: \"4bb312b5-f96b-4689-9d30-c4c878aae0ec\") " pod="openstack/glance-db-sync-hfz86" Feb 16 21:36:41.812070 master-0 kubenswrapper[38936]: I0216 21:36:41.810367 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bb312b5-f96b-4689-9d30-c4c878aae0ec-config-data\") pod \"glance-db-sync-hfz86\" (UID: \"4bb312b5-f96b-4689-9d30-c4c878aae0ec\") " pod="openstack/glance-db-sync-hfz86" Feb 16 21:36:41.820463 master-0 kubenswrapper[38936]: I0216 21:36:41.820354 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bb312b5-f96b-4689-9d30-c4c878aae0ec-combined-ca-bundle\") pod \"glance-db-sync-hfz86\" (UID: \"4bb312b5-f96b-4689-9d30-c4c878aae0ec\") " pod="openstack/glance-db-sync-hfz86" Feb 16 21:36:41.837154 master-0 kubenswrapper[38936]: I0216 21:36:41.834167 38936 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bhtsm\" (UniqueName: \"kubernetes.io/projected/4bb312b5-f96b-4689-9d30-c4c878aae0ec-kube-api-access-bhtsm\") pod \"glance-db-sync-hfz86\" (UID: \"4bb312b5-f96b-4689-9d30-c4c878aae0ec\") " pod="openstack/glance-db-sync-hfz86" Feb 16 21:36:41.926531 master-0 kubenswrapper[38936]: I0216 21:36:41.926460 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-hfz86" Feb 16 21:36:42.434163 master-0 kubenswrapper[38936]: I0216 21:36:42.434069 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-l6dz5" Feb 16 21:36:42.552320 master-0 kubenswrapper[38936]: I0216 21:36:42.550400 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67cebf05-d1da-4a45-aef9-8366546424a5-scripts\") pod \"67cebf05-d1da-4a45-aef9-8366546424a5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " Feb 16 21:36:42.552522 master-0 kubenswrapper[38936]: I0216 21:36:42.552362 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/67cebf05-d1da-4a45-aef9-8366546424a5-dispersionconf\") pod \"67cebf05-d1da-4a45-aef9-8366546424a5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " Feb 16 21:36:42.552522 master-0 kubenswrapper[38936]: I0216 21:36:42.552402 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc76p\" (UniqueName: \"kubernetes.io/projected/67cebf05-d1da-4a45-aef9-8366546424a5-kube-api-access-bc76p\") pod \"67cebf05-d1da-4a45-aef9-8366546424a5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " Feb 16 21:36:42.552522 master-0 kubenswrapper[38936]: I0216 21:36:42.552434 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/67cebf05-d1da-4a45-aef9-8366546424a5-etc-swift\") pod \"67cebf05-d1da-4a45-aef9-8366546424a5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " Feb 16 21:36:42.552522 master-0 kubenswrapper[38936]: I0216 21:36:42.552467 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/67cebf05-d1da-4a45-aef9-8366546424a5-swiftconf\") pod \"67cebf05-d1da-4a45-aef9-8366546424a5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " Feb 16 21:36:42.552867 master-0 kubenswrapper[38936]: I0216 21:36:42.552538 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/67cebf05-d1da-4a45-aef9-8366546424a5-ring-data-devices\") pod \"67cebf05-d1da-4a45-aef9-8366546424a5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " Feb 16 21:36:42.552867 master-0 kubenswrapper[38936]: I0216 21:36:42.552580 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67cebf05-d1da-4a45-aef9-8366546424a5-combined-ca-bundle\") pod \"67cebf05-d1da-4a45-aef9-8366546424a5\" (UID: \"67cebf05-d1da-4a45-aef9-8366546424a5\") " Feb 16 21:36:42.554608 master-0 kubenswrapper[38936]: I0216 21:36:42.554160 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67cebf05-d1da-4a45-aef9-8366546424a5-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "67cebf05-d1da-4a45-aef9-8366546424a5" (UID: "67cebf05-d1da-4a45-aef9-8366546424a5"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:36:42.559520 master-0 kubenswrapper[38936]: I0216 21:36:42.559478 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67cebf05-d1da-4a45-aef9-8366546424a5-kube-api-access-bc76p" (OuterVolumeSpecName: "kube-api-access-bc76p") pod "67cebf05-d1da-4a45-aef9-8366546424a5" (UID: "67cebf05-d1da-4a45-aef9-8366546424a5"). InnerVolumeSpecName "kube-api-access-bc76p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:36:42.563238 master-0 kubenswrapper[38936]: I0216 21:36:42.562831 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67cebf05-d1da-4a45-aef9-8366546424a5-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "67cebf05-d1da-4a45-aef9-8366546424a5" (UID: "67cebf05-d1da-4a45-aef9-8366546424a5"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:36:42.574456 master-0 kubenswrapper[38936]: I0216 21:36:42.574264 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67cebf05-d1da-4a45-aef9-8366546424a5-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "67cebf05-d1da-4a45-aef9-8366546424a5" (UID: "67cebf05-d1da-4a45-aef9-8366546424a5"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:36:42.582331 master-0 kubenswrapper[38936]: I0216 21:36:42.582258 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67cebf05-d1da-4a45-aef9-8366546424a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67cebf05-d1da-4a45-aef9-8366546424a5" (UID: "67cebf05-d1da-4a45-aef9-8366546424a5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:36:42.583555 master-0 kubenswrapper[38936]: I0216 21:36:42.582886 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67cebf05-d1da-4a45-aef9-8366546424a5-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "67cebf05-d1da-4a45-aef9-8366546424a5" (UID: "67cebf05-d1da-4a45-aef9-8366546424a5"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:36:42.584482 master-0 kubenswrapper[38936]: I0216 21:36:42.584258 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67cebf05-d1da-4a45-aef9-8366546424a5-scripts" (OuterVolumeSpecName: "scripts") pod "67cebf05-d1da-4a45-aef9-8366546424a5" (UID: "67cebf05-d1da-4a45-aef9-8366546424a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:36:42.632583 master-0 kubenswrapper[38936]: I0216 21:36:42.632509 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-hfz86"] Feb 16 21:36:42.655224 master-0 kubenswrapper[38936]: I0216 21:36:42.655174 38936 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/67cebf05-d1da-4a45-aef9-8366546424a5-dispersionconf\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:42.655224 master-0 kubenswrapper[38936]: I0216 21:36:42.655216 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc76p\" (UniqueName: \"kubernetes.io/projected/67cebf05-d1da-4a45-aef9-8366546424a5-kube-api-access-bc76p\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:42.655224 master-0 kubenswrapper[38936]: I0216 21:36:42.655231 38936 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/67cebf05-d1da-4a45-aef9-8366546424a5-etc-swift\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:42.655445 master-0 
kubenswrapper[38936]: I0216 21:36:42.655243 38936 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/67cebf05-d1da-4a45-aef9-8366546424a5-swiftconf\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:42.655445 master-0 kubenswrapper[38936]: I0216 21:36:42.655252 38936 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/67cebf05-d1da-4a45-aef9-8366546424a5-ring-data-devices\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:42.655445 master-0 kubenswrapper[38936]: I0216 21:36:42.655261 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67cebf05-d1da-4a45-aef9-8366546424a5-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:42.655445 master-0 kubenswrapper[38936]: I0216 21:36:42.655271 38936 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67cebf05-d1da-4a45-aef9-8366546424a5-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:42.785295 master-0 kubenswrapper[38936]: I0216 21:36:42.783362 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"73f726c9-e2b2-4038-a202-5df2ede23bf5","Type":"ContainerStarted","Data":"53a846ed7e7009ddee49b5751e33761e47db81e92054a16e9fb1c36d5c90a319"} Feb 16 21:36:42.785295 master-0 kubenswrapper[38936]: I0216 21:36:42.783422 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"73f726c9-e2b2-4038-a202-5df2ede23bf5","Type":"ContainerStarted","Data":"15def18139eafd35049c1a109e99b986104fc0b6c82e845e9986b3eb20b902cd"} Feb 16 21:36:42.785295 master-0 kubenswrapper[38936]: I0216 21:36:42.783439 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"73f726c9-e2b2-4038-a202-5df2ede23bf5","Type":"ContainerStarted","Data":"ea404cd7766ac1932aade4ecfcf62d8814c554f0bcf8429bf35a16d253fff1f9"} Feb 16 21:36:42.787032 master-0 kubenswrapper[38936]: I0216 21:36:42.786995 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hfz86" event={"ID":"4bb312b5-f96b-4689-9d30-c4c878aae0ec","Type":"ContainerStarted","Data":"8a200353739402babbd860da2ee26e01658b6b58d49556b186b9f52e9ed4596b"} Feb 16 21:36:42.788760 master-0 kubenswrapper[38936]: I0216 21:36:42.788413 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-l6dz5" event={"ID":"67cebf05-d1da-4a45-aef9-8366546424a5","Type":"ContainerDied","Data":"9608fd4cdd9720abae097c638d582b75c139e3ba28cd24ddff4290f364a31e4d"} Feb 16 21:36:42.788760 master-0 kubenswrapper[38936]: I0216 21:36:42.788440 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9608fd4cdd9720abae097c638d582b75c139e3ba28cd24ddff4290f364a31e4d" Feb 16 21:36:42.788760 master-0 kubenswrapper[38936]: I0216 21:36:42.788492 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-l6dz5" Feb 16 21:36:43.833177 master-0 kubenswrapper[38936]: I0216 21:36:43.832876 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"73f726c9-e2b2-4038-a202-5df2ede23bf5","Type":"ContainerStarted","Data":"726758d9cffab72e7a6f76550c87e5c74045ea9582f5fa09a96a68dd2eae1b9c"} Feb 16 21:36:43.911238 master-0 kubenswrapper[38936]: I0216 21:36:43.911152 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-rl5nw"] Feb 16 21:36:43.921696 master-0 kubenswrapper[38936]: I0216 21:36:43.921624 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-rl5nw"] Feb 16 21:36:44.856428 master-0 kubenswrapper[38936]: I0216 21:36:44.856283 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"73f726c9-e2b2-4038-a202-5df2ede23bf5","Type":"ContainerStarted","Data":"4d4ca6c52c7b5989ceadd6731cec68a09058938a75b94411e709c4531a752a97"} Feb 16 21:36:44.856428 master-0 kubenswrapper[38936]: I0216 21:36:44.856354 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"73f726c9-e2b2-4038-a202-5df2ede23bf5","Type":"ContainerStarted","Data":"70982e6db18becd1666eb48408c42fb288bbc4c625f0da93eb31e67fe3b5a035"} Feb 16 21:36:45.872247 master-0 kubenswrapper[38936]: I0216 21:36:45.872171 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"73f726c9-e2b2-4038-a202-5df2ede23bf5","Type":"ContainerStarted","Data":"3ad7a507521de880bc7a68fbda26280cf45c287b37c0cd80abb50c238f8f8ccc"} Feb 16 21:36:45.872247 master-0 kubenswrapper[38936]: I0216 21:36:45.872228 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"73f726c9-e2b2-4038-a202-5df2ede23bf5","Type":"ContainerStarted","Data":"f5e145b8da3f52dde7464ca252ad2b30016ea219a8b74d5be58753dba3af39d4"} Feb 16 21:36:45.888366 master-0 kubenswrapper[38936]: I0216 21:36:45.888318 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cb39cf4-45bc-414a-aab6-08992ddaeb12" path="/var/lib/kubelet/pods/8cb39cf4-45bc-414a-aab6-08992ddaeb12/volumes" Feb 16 21:36:46.205494 master-0 kubenswrapper[38936]: I0216 21:36:46.205371 38936 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zr5cs" podUID="018762db-2c9f-40c4-b05a-52df963c4376" containerName="ovn-controller" probeResult="failure" output=< Feb 16 21:36:46.205494 master-0 kubenswrapper[38936]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 16 21:36:46.205494 master-0 kubenswrapper[38936]: > Feb 16 21:36:46.224073 master-0 kubenswrapper[38936]: I0216 21:36:46.224018 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-lhsv6" Feb 16 21:36:46.238370 master-0 kubenswrapper[38936]: I0216 21:36:46.238316 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-lhsv6" Feb 16 21:36:46.490801 master-0 kubenswrapper[38936]: I0216 21:36:46.488612 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zr5cs-config-2lpkf"] Feb 16 21:36:46.490801 master-0 kubenswrapper[38936]: E0216 21:36:46.489643 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67cebf05-d1da-4a45-aef9-8366546424a5" containerName="swift-ring-rebalance" Feb 16 21:36:46.490801 master-0 kubenswrapper[38936]: I0216 21:36:46.489684 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="67cebf05-d1da-4a45-aef9-8366546424a5" containerName="swift-ring-rebalance" Feb 16 21:36:46.490801 master-0 kubenswrapper[38936]: I0216 21:36:46.490062 38936 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="67cebf05-d1da-4a45-aef9-8366546424a5" containerName="swift-ring-rebalance" Feb 16 21:36:46.494148 master-0 kubenswrapper[38936]: I0216 21:36:46.491195 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zr5cs-config-2lpkf" Feb 16 21:36:46.496409 master-0 kubenswrapper[38936]: I0216 21:36:46.496350 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 16 21:36:46.515127 master-0 kubenswrapper[38936]: I0216 21:36:46.515067 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zr5cs-config-2lpkf"] Feb 16 21:36:46.572520 master-0 kubenswrapper[38936]: I0216 21:36:46.572449 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-var-run\") pod \"ovn-controller-zr5cs-config-2lpkf\" (UID: \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\") " pod="openstack/ovn-controller-zr5cs-config-2lpkf" Feb 16 21:36:46.573005 master-0 kubenswrapper[38936]: I0216 21:36:46.572976 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgsx9\" (UniqueName: \"kubernetes.io/projected/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-kube-api-access-fgsx9\") pod \"ovn-controller-zr5cs-config-2lpkf\" (UID: \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\") " pod="openstack/ovn-controller-zr5cs-config-2lpkf" Feb 16 21:36:46.573162 master-0 kubenswrapper[38936]: I0216 21:36:46.573144 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-additional-scripts\") pod \"ovn-controller-zr5cs-config-2lpkf\" (UID: \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\") " pod="openstack/ovn-controller-zr5cs-config-2lpkf" Feb 
16 21:36:46.573278 master-0 kubenswrapper[38936]: I0216 21:36:46.573262 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-scripts\") pod \"ovn-controller-zr5cs-config-2lpkf\" (UID: \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\") " pod="openstack/ovn-controller-zr5cs-config-2lpkf" Feb 16 21:36:46.573469 master-0 kubenswrapper[38936]: I0216 21:36:46.573447 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-var-log-ovn\") pod \"ovn-controller-zr5cs-config-2lpkf\" (UID: \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\") " pod="openstack/ovn-controller-zr5cs-config-2lpkf" Feb 16 21:36:46.573738 master-0 kubenswrapper[38936]: I0216 21:36:46.573621 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-var-run-ovn\") pod \"ovn-controller-zr5cs-config-2lpkf\" (UID: \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\") " pod="openstack/ovn-controller-zr5cs-config-2lpkf" Feb 16 21:36:46.677851 master-0 kubenswrapper[38936]: I0216 21:36:46.675906 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-additional-scripts\") pod \"ovn-controller-zr5cs-config-2lpkf\" (UID: \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\") " pod="openstack/ovn-controller-zr5cs-config-2lpkf" Feb 16 21:36:46.677851 master-0 kubenswrapper[38936]: I0216 21:36:46.676013 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-scripts\") pod \"ovn-controller-zr5cs-config-2lpkf\" (UID: 
\"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\") " pod="openstack/ovn-controller-zr5cs-config-2lpkf" Feb 16 21:36:46.677851 master-0 kubenswrapper[38936]: I0216 21:36:46.676115 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-var-log-ovn\") pod \"ovn-controller-zr5cs-config-2lpkf\" (UID: \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\") " pod="openstack/ovn-controller-zr5cs-config-2lpkf" Feb 16 21:36:46.677851 master-0 kubenswrapper[38936]: I0216 21:36:46.676178 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-var-run-ovn\") pod \"ovn-controller-zr5cs-config-2lpkf\" (UID: \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\") " pod="openstack/ovn-controller-zr5cs-config-2lpkf" Feb 16 21:36:46.677851 master-0 kubenswrapper[38936]: I0216 21:36:46.676229 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-var-run\") pod \"ovn-controller-zr5cs-config-2lpkf\" (UID: \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\") " pod="openstack/ovn-controller-zr5cs-config-2lpkf" Feb 16 21:36:46.677851 master-0 kubenswrapper[38936]: I0216 21:36:46.676283 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgsx9\" (UniqueName: \"kubernetes.io/projected/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-kube-api-access-fgsx9\") pod \"ovn-controller-zr5cs-config-2lpkf\" (UID: \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\") " pod="openstack/ovn-controller-zr5cs-config-2lpkf" Feb 16 21:36:46.677851 master-0 kubenswrapper[38936]: I0216 21:36:46.677728 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-additional-scripts\") pod \"ovn-controller-zr5cs-config-2lpkf\" (UID: \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\") " pod="openstack/ovn-controller-zr5cs-config-2lpkf" Feb 16 21:36:46.677851 master-0 kubenswrapper[38936]: I0216 21:36:46.677845 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-var-log-ovn\") pod \"ovn-controller-zr5cs-config-2lpkf\" (UID: \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\") " pod="openstack/ovn-controller-zr5cs-config-2lpkf" Feb 16 21:36:46.678300 master-0 kubenswrapper[38936]: I0216 21:36:46.677924 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-var-run-ovn\") pod \"ovn-controller-zr5cs-config-2lpkf\" (UID: \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\") " pod="openstack/ovn-controller-zr5cs-config-2lpkf" Feb 16 21:36:46.678300 master-0 kubenswrapper[38936]: I0216 21:36:46.678017 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-var-run\") pod \"ovn-controller-zr5cs-config-2lpkf\" (UID: \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\") " pod="openstack/ovn-controller-zr5cs-config-2lpkf" Feb 16 21:36:46.679825 master-0 kubenswrapper[38936]: I0216 21:36:46.679779 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-scripts\") pod \"ovn-controller-zr5cs-config-2lpkf\" (UID: \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\") " pod="openstack/ovn-controller-zr5cs-config-2lpkf" Feb 16 21:36:46.700988 master-0 kubenswrapper[38936]: I0216 21:36:46.700907 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgsx9\" (UniqueName: 
\"kubernetes.io/projected/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-kube-api-access-fgsx9\") pod \"ovn-controller-zr5cs-config-2lpkf\" (UID: \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\") " pod="openstack/ovn-controller-zr5cs-config-2lpkf" Feb 16 21:36:46.888785 master-0 kubenswrapper[38936]: I0216 21:36:46.888712 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"73f726c9-e2b2-4038-a202-5df2ede23bf5","Type":"ContainerStarted","Data":"473f9680197b9356e4c461d30b8cc4bdf66a366fffc36f1060588695d379fcd0"} Feb 16 21:36:46.891021 master-0 kubenswrapper[38936]: I0216 21:36:46.890954 38936 generic.go:334] "Generic (PLEG): container finished" podID="56ed148e-f9e4-4547-ad45-227bd66edcfa" containerID="7e1442e9078f9e4108bfa18733871e5acbe1612df9b3ecc81a6faeab79b3d453" exitCode=0 Feb 16 21:36:46.891100 master-0 kubenswrapper[38936]: I0216 21:36:46.890987 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"56ed148e-f9e4-4547-ad45-227bd66edcfa","Type":"ContainerDied","Data":"7e1442e9078f9e4108bfa18733871e5acbe1612df9b3ecc81a6faeab79b3d453"} Feb 16 21:36:46.928358 master-0 kubenswrapper[38936]: I0216 21:36:46.928231 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zr5cs-config-2lpkf" Feb 16 21:36:47.440569 master-0 kubenswrapper[38936]: I0216 21:36:47.440474 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zr5cs-config-2lpkf"] Feb 16 21:36:47.902985 master-0 kubenswrapper[38936]: I0216 21:36:47.902923 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zr5cs-config-2lpkf" event={"ID":"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d","Type":"ContainerStarted","Data":"3873e418fbe1888dda88c8ae062427acb57798ef601e34b15ae1d295adf9215f"} Feb 16 21:36:47.903557 master-0 kubenswrapper[38936]: I0216 21:36:47.902997 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zr5cs-config-2lpkf" event={"ID":"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d","Type":"ContainerStarted","Data":"589ed86cb16b9864664d167325986b60a29a8cf911e35378f2485dc4f9a52ee5"} Feb 16 21:36:47.916064 master-0 kubenswrapper[38936]: I0216 21:36:47.916001 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"73f726c9-e2b2-4038-a202-5df2ede23bf5","Type":"ContainerStarted","Data":"c7e4f58bdaa8ec7d12880139f874ae04658417f498143f611b31dfccb190e701"} Feb 16 21:36:47.916064 master-0 kubenswrapper[38936]: I0216 21:36:47.916072 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"73f726c9-e2b2-4038-a202-5df2ede23bf5","Type":"ContainerStarted","Data":"5896a1a811662bf422c3c7fd65a6dc6515e2faa6d91bdf6430718baa88f6be8b"} Feb 16 21:36:47.916357 master-0 kubenswrapper[38936]: I0216 21:36:47.916084 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"73f726c9-e2b2-4038-a202-5df2ede23bf5","Type":"ContainerStarted","Data":"36868e847528f230e17900f6ff34ae4c0296c7f31dfd1697428355fc2f9ab82e"} Feb 16 21:36:47.916357 master-0 kubenswrapper[38936]: I0216 21:36:47.916094 38936 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/swift-storage-0" event={"ID":"73f726c9-e2b2-4038-a202-5df2ede23bf5","Type":"ContainerStarted","Data":"df452f50af030dbef79627e7ef465ad19a56884ab45d3af24861c6a19336bda7"} Feb 16 21:36:47.918853 master-0 kubenswrapper[38936]: I0216 21:36:47.918809 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"56ed148e-f9e4-4547-ad45-227bd66edcfa","Type":"ContainerStarted","Data":"1dea16256c9045477eb23a8920e6e0aa8be230ce3896a585228755ceac81ed93"} Feb 16 21:36:47.919066 master-0 kubenswrapper[38936]: I0216 21:36:47.919044 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 16 21:36:47.923069 master-0 kubenswrapper[38936]: I0216 21:36:47.923025 38936 generic.go:334] "Generic (PLEG): container finished" podID="a3ae6146-0a46-4058-a938-0dba04b24a1f" containerID="d5dba880ca436c9fc01181a57a32c8da41545714a30b853a4a038e810b4c4686" exitCode=0 Feb 16 21:36:47.923218 master-0 kubenswrapper[38936]: I0216 21:36:47.923083 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a3ae6146-0a46-4058-a938-0dba04b24a1f","Type":"ContainerDied","Data":"d5dba880ca436c9fc01181a57a32c8da41545714a30b853a4a038e810b4c4686"} Feb 16 21:36:47.968767 master-0 kubenswrapper[38936]: I0216 21:36:47.936287 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-zr5cs-config-2lpkf" podStartSLOduration=1.9362655260000001 podStartE2EDuration="1.936265526s" podCreationTimestamp="2026-02-16 21:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:36:47.929558304 +0000 UTC m=+838.281561666" watchObservedRunningTime="2026-02-16 21:36:47.936265526 +0000 UTC m=+838.288268888" Feb 16 21:36:48.034930 master-0 kubenswrapper[38936]: I0216 21:36:48.034851 38936 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=56.051019324 podStartE2EDuration="1m3.034830848s" podCreationTimestamp="2026-02-16 21:35:45 +0000 UTC" firstStartedPulling="2026-02-16 21:36:06.159409961 +0000 UTC m=+796.511413323" lastFinishedPulling="2026-02-16 21:36:13.143221485 +0000 UTC m=+803.495224847" observedRunningTime="2026-02-16 21:36:48.024270763 +0000 UTC m=+838.376274125" watchObservedRunningTime="2026-02-16 21:36:48.034830848 +0000 UTC m=+838.386834210" Feb 16 21:36:48.950117 master-0 kubenswrapper[38936]: I0216 21:36:48.950066 38936 generic.go:334] "Generic (PLEG): container finished" podID="1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d" containerID="3873e418fbe1888dda88c8ae062427acb57798ef601e34b15ae1d295adf9215f" exitCode=0 Feb 16 21:36:48.950833 master-0 kubenswrapper[38936]: I0216 21:36:48.950131 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zr5cs-config-2lpkf" event={"ID":"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d","Type":"ContainerDied","Data":"3873e418fbe1888dda88c8ae062427acb57798ef601e34b15ae1d295adf9215f"} Feb 16 21:36:48.957912 master-0 kubenswrapper[38936]: I0216 21:36:48.953020 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-w6pqc"] Feb 16 21:36:48.957912 master-0 kubenswrapper[38936]: I0216 21:36:48.954866 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-w6pqc" Feb 16 21:36:48.959194 master-0 kubenswrapper[38936]: I0216 21:36:48.959138 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 16 21:36:48.963268 master-0 kubenswrapper[38936]: I0216 21:36:48.963213 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"73f726c9-e2b2-4038-a202-5df2ede23bf5","Type":"ContainerStarted","Data":"f1c80040c2c16b285147cecd48ef868041493112ae8bd2ebe5843a1069485356"} Feb 16 21:36:48.963268 master-0 kubenswrapper[38936]: I0216 21:36:48.963262 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"73f726c9-e2b2-4038-a202-5df2ede23bf5","Type":"ContainerStarted","Data":"e364828656ff207b841b7a61e192c14cd4077e152b7dd2d650fbce5649b8f204"} Feb 16 21:36:48.969897 master-0 kubenswrapper[38936]: I0216 21:36:48.969828 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a3ae6146-0a46-4058-a938-0dba04b24a1f","Type":"ContainerStarted","Data":"5d5427fdca7854aa42f1b8e34cebf461b3bc1aec16ab2134e159781566973b0f"} Feb 16 21:36:48.972903 master-0 kubenswrapper[38936]: I0216 21:36:48.971235 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-w6pqc"] Feb 16 21:36:49.041692 master-0 kubenswrapper[38936]: I0216 21:36:49.038721 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=57.478305118 podStartE2EDuration="1m5.038682244s" podCreationTimestamp="2026-02-16 21:35:44 +0000 UTC" firstStartedPulling="2026-02-16 21:36:05.59843391 +0000 UTC m=+795.950437272" lastFinishedPulling="2026-02-16 21:36:13.158811036 +0000 UTC m=+803.510814398" observedRunningTime="2026-02-16 21:36:49.017700917 +0000 UTC m=+839.369704289" watchObservedRunningTime="2026-02-16 
21:36:49.038682244 +0000 UTC m=+839.390685606" Feb 16 21:36:49.046457 master-0 kubenswrapper[38936]: I0216 21:36:49.046373 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/775d3c76-9e00-4f01-bf3c-bdb01e653380-operator-scripts\") pod \"root-account-create-update-w6pqc\" (UID: \"775d3c76-9e00-4f01-bf3c-bdb01e653380\") " pod="openstack/root-account-create-update-w6pqc" Feb 16 21:36:49.046748 master-0 kubenswrapper[38936]: I0216 21:36:49.046550 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr6s5\" (UniqueName: \"kubernetes.io/projected/775d3c76-9e00-4f01-bf3c-bdb01e653380-kube-api-access-fr6s5\") pod \"root-account-create-update-w6pqc\" (UID: \"775d3c76-9e00-4f01-bf3c-bdb01e653380\") " pod="openstack/root-account-create-update-w6pqc" Feb 16 21:36:49.107775 master-0 kubenswrapper[38936]: I0216 21:36:49.107618 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=21.454251526 podStartE2EDuration="27.107590426s" podCreationTimestamp="2026-02-16 21:36:22 +0000 UTC" firstStartedPulling="2026-02-16 21:36:40.766180135 +0000 UTC m=+831.118183497" lastFinishedPulling="2026-02-16 21:36:46.419519035 +0000 UTC m=+836.771522397" observedRunningTime="2026-02-16 21:36:49.093627779 +0000 UTC m=+839.445631141" watchObservedRunningTime="2026-02-16 21:36:49.107590426 +0000 UTC m=+839.459593788" Feb 16 21:36:49.152169 master-0 kubenswrapper[38936]: I0216 21:36:49.152060 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/775d3c76-9e00-4f01-bf3c-bdb01e653380-operator-scripts\") pod \"root-account-create-update-w6pqc\" (UID: \"775d3c76-9e00-4f01-bf3c-bdb01e653380\") " pod="openstack/root-account-create-update-w6pqc" Feb 16 21:36:49.152169 master-0 
kubenswrapper[38936]: I0216 21:36:49.152186 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr6s5\" (UniqueName: \"kubernetes.io/projected/775d3c76-9e00-4f01-bf3c-bdb01e653380-kube-api-access-fr6s5\") pod \"root-account-create-update-w6pqc\" (UID: \"775d3c76-9e00-4f01-bf3c-bdb01e653380\") " pod="openstack/root-account-create-update-w6pqc" Feb 16 21:36:49.155542 master-0 kubenswrapper[38936]: I0216 21:36:49.155489 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/775d3c76-9e00-4f01-bf3c-bdb01e653380-operator-scripts\") pod \"root-account-create-update-w6pqc\" (UID: \"775d3c76-9e00-4f01-bf3c-bdb01e653380\") " pod="openstack/root-account-create-update-w6pqc" Feb 16 21:36:49.184524 master-0 kubenswrapper[38936]: I0216 21:36:49.184442 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr6s5\" (UniqueName: \"kubernetes.io/projected/775d3c76-9e00-4f01-bf3c-bdb01e653380-kube-api-access-fr6s5\") pod \"root-account-create-update-w6pqc\" (UID: \"775d3c76-9e00-4f01-bf3c-bdb01e653380\") " pod="openstack/root-account-create-update-w6pqc" Feb 16 21:36:49.281249 master-0 kubenswrapper[38936]: I0216 21:36:49.281176 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-w6pqc" Feb 16 21:36:49.468630 master-0 kubenswrapper[38936]: I0216 21:36:49.467028 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-665cc5d59f-ngldr"] Feb 16 21:36:49.472686 master-0 kubenswrapper[38936]: I0216 21:36:49.469078 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" Feb 16 21:36:49.479893 master-0 kubenswrapper[38936]: I0216 21:36:49.476129 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 16 21:36:49.518689 master-0 kubenswrapper[38936]: I0216 21:36:49.508097 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-665cc5d59f-ngldr"] Feb 16 21:36:49.670006 master-0 kubenswrapper[38936]: I0216 21:36:49.668137 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-ovsdbserver-nb\") pod \"dnsmasq-dns-665cc5d59f-ngldr\" (UID: \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\") " pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" Feb 16 21:36:49.670006 master-0 kubenswrapper[38936]: I0216 21:36:49.668236 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvm2m\" (UniqueName: \"kubernetes.io/projected/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-kube-api-access-bvm2m\") pod \"dnsmasq-dns-665cc5d59f-ngldr\" (UID: \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\") " pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" Feb 16 21:36:49.670006 master-0 kubenswrapper[38936]: I0216 21:36:49.668264 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-config\") pod \"dnsmasq-dns-665cc5d59f-ngldr\" (UID: \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\") " pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" Feb 16 21:36:49.670006 master-0 kubenswrapper[38936]: I0216 21:36:49.668584 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-ovsdbserver-sb\") pod 
\"dnsmasq-dns-665cc5d59f-ngldr\" (UID: \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\") " pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" Feb 16 21:36:49.670006 master-0 kubenswrapper[38936]: I0216 21:36:49.669293 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-dns-swift-storage-0\") pod \"dnsmasq-dns-665cc5d59f-ngldr\" (UID: \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\") " pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" Feb 16 21:36:49.670006 master-0 kubenswrapper[38936]: I0216 21:36:49.669522 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-dns-svc\") pod \"dnsmasq-dns-665cc5d59f-ngldr\" (UID: \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\") " pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" Feb 16 21:36:49.771953 master-0 kubenswrapper[38936]: I0216 21:36:49.771860 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-ovsdbserver-sb\") pod \"dnsmasq-dns-665cc5d59f-ngldr\" (UID: \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\") " pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" Feb 16 21:36:49.772329 master-0 kubenswrapper[38936]: I0216 21:36:49.771985 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-dns-swift-storage-0\") pod \"dnsmasq-dns-665cc5d59f-ngldr\" (UID: \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\") " pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" Feb 16 21:36:49.772329 master-0 kubenswrapper[38936]: I0216 21:36:49.772056 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-dns-svc\") pod \"dnsmasq-dns-665cc5d59f-ngldr\" (UID: \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\") " pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" Feb 16 21:36:49.772329 master-0 kubenswrapper[38936]: I0216 21:36:49.772136 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-ovsdbserver-nb\") pod \"dnsmasq-dns-665cc5d59f-ngldr\" (UID: \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\") " pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" Feb 16 21:36:49.772329 master-0 kubenswrapper[38936]: I0216 21:36:49.772160 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvm2m\" (UniqueName: \"kubernetes.io/projected/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-kube-api-access-bvm2m\") pod \"dnsmasq-dns-665cc5d59f-ngldr\" (UID: \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\") " pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" Feb 16 21:36:49.772329 master-0 kubenswrapper[38936]: I0216 21:36:49.772210 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-config\") pod \"dnsmasq-dns-665cc5d59f-ngldr\" (UID: \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\") " pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" Feb 16 21:36:49.775358 master-0 kubenswrapper[38936]: I0216 21:36:49.775279 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-config\") pod \"dnsmasq-dns-665cc5d59f-ngldr\" (UID: \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\") " pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" Feb 16 21:36:49.777218 master-0 kubenswrapper[38936]: I0216 21:36:49.777177 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-ovsdbserver-sb\") pod \"dnsmasq-dns-665cc5d59f-ngldr\" (UID: \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\") " pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" Feb 16 21:36:49.778071 master-0 kubenswrapper[38936]: I0216 21:36:49.778035 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-dns-swift-storage-0\") pod \"dnsmasq-dns-665cc5d59f-ngldr\" (UID: \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\") " pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" Feb 16 21:36:49.779066 master-0 kubenswrapper[38936]: I0216 21:36:49.778994 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-dns-svc\") pod \"dnsmasq-dns-665cc5d59f-ngldr\" (UID: \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\") " pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" Feb 16 21:36:49.780293 master-0 kubenswrapper[38936]: I0216 21:36:49.780247 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-ovsdbserver-nb\") pod \"dnsmasq-dns-665cc5d59f-ngldr\" (UID: \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\") " pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" Feb 16 21:36:49.804909 master-0 kubenswrapper[38936]: I0216 21:36:49.804740 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvm2m\" (UniqueName: \"kubernetes.io/projected/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-kube-api-access-bvm2m\") pod \"dnsmasq-dns-665cc5d59f-ngldr\" (UID: \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\") " pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" Feb 16 21:36:49.867076 master-0 kubenswrapper[38936]: I0216 21:36:49.866578 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" Feb 16 21:36:50.015817 master-0 kubenswrapper[38936]: I0216 21:36:50.015669 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-w6pqc"] Feb 16 21:36:50.604684 master-0 kubenswrapper[38936]: I0216 21:36:50.603616 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 16 21:36:51.195584 master-0 kubenswrapper[38936]: I0216 21:36:51.195502 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-zr5cs" Feb 16 21:36:57.306110 master-0 kubenswrapper[38936]: W0216 21:36:57.306029 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod775d3c76_9e00_4f01_bf3c_bdb01e653380.slice/crio-b65edb81436fb4954a659105f46ab16eaf2c9a6d10389d0c963f6b6c65f89da1 WatchSource:0}: Error finding container b65edb81436fb4954a659105f46ab16eaf2c9a6d10389d0c963f6b6c65f89da1: Status 404 returned error can't find the container with id b65edb81436fb4954a659105f46ab16eaf2c9a6d10389d0c963f6b6c65f89da1 Feb 16 21:36:57.343417 master-0 kubenswrapper[38936]: I0216 21:36:57.343345 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 16 21:36:57.426783 master-0 kubenswrapper[38936]: I0216 21:36:57.426715 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zr5cs-config-2lpkf" Feb 16 21:36:57.488620 master-0 kubenswrapper[38936]: I0216 21:36:57.487066 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-var-log-ovn\") pod \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\" (UID: \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\") " Feb 16 21:36:57.488620 master-0 kubenswrapper[38936]: I0216 21:36:57.487244 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgsx9\" (UniqueName: \"kubernetes.io/projected/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-kube-api-access-fgsx9\") pod \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\" (UID: \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\") " Feb 16 21:36:57.488620 master-0 kubenswrapper[38936]: I0216 21:36:57.487420 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-var-run\") pod \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\" (UID: \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\") " Feb 16 21:36:57.488620 master-0 kubenswrapper[38936]: I0216 21:36:57.487472 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-scripts\") pod \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\" (UID: \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\") " Feb 16 21:36:57.488620 master-0 kubenswrapper[38936]: I0216 21:36:57.487536 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-additional-scripts\") pod \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\" (UID: \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\") " Feb 16 21:36:57.488620 master-0 kubenswrapper[38936]: I0216 21:36:57.487596 38936 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-var-run-ovn\") pod \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\" (UID: \"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d\") " Feb 16 21:36:57.488620 master-0 kubenswrapper[38936]: I0216 21:36:57.487918 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-var-run" (OuterVolumeSpecName: "var-run") pod "1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d" (UID: "1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:36:57.488620 master-0 kubenswrapper[38936]: I0216 21:36:57.487955 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d" (UID: "1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:36:57.488620 master-0 kubenswrapper[38936]: I0216 21:36:57.488333 38936 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-var-run\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:57.488620 master-0 kubenswrapper[38936]: I0216 21:36:57.488355 38936 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-var-log-ovn\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:57.489526 master-0 kubenswrapper[38936]: I0216 21:36:57.488958 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d" (UID: "1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:36:57.489526 master-0 kubenswrapper[38936]: I0216 21:36:57.489001 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d" (UID: "1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:36:57.490320 master-0 kubenswrapper[38936]: I0216 21:36:57.490288 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-scripts" (OuterVolumeSpecName: "scripts") pod "1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d" (UID: "1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:36:57.491315 master-0 kubenswrapper[38936]: I0216 21:36:57.491283 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-kube-api-access-fgsx9" (OuterVolumeSpecName: "kube-api-access-fgsx9") pod "1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d" (UID: "1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d"). InnerVolumeSpecName "kube-api-access-fgsx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:36:57.591046 master-0 kubenswrapper[38936]: I0216 21:36:57.590895 38936 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:57.591046 master-0 kubenswrapper[38936]: I0216 21:36:57.590987 38936 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-additional-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:57.591046 master-0 kubenswrapper[38936]: I0216 21:36:57.591003 38936 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-var-run-ovn\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:57.591046 master-0 kubenswrapper[38936]: I0216 21:36:57.591015 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgsx9\" (UniqueName: \"kubernetes.io/projected/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d-kube-api-access-fgsx9\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:57.843934 master-0 kubenswrapper[38936]: I0216 21:36:57.843851 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-665cc5d59f-ngldr"] Feb 16 21:36:58.098332 master-0 kubenswrapper[38936]: I0216 21:36:58.098208 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" event={"ID":"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b","Type":"ContainerStarted","Data":"eccc3bfa5f692357557d2616613174dd81ca2daec309409325d69539556fe983"} Feb 16 21:36:58.098332 master-0 kubenswrapper[38936]: I0216 21:36:58.098264 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" event={"ID":"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b","Type":"ContainerStarted","Data":"6b9e4edd0600251ed1010926570a07fcf50fe8870c368298163c86858d773b29"} Feb 16 21:36:58.100978 master-0 kubenswrapper[38936]: I0216 21:36:58.100922 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zr5cs-config-2lpkf" event={"ID":"1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d","Type":"ContainerDied","Data":"589ed86cb16b9864664d167325986b60a29a8cf911e35378f2485dc4f9a52ee5"} Feb 16 21:36:58.101035 master-0 kubenswrapper[38936]: I0216 21:36:58.100981 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="589ed86cb16b9864664d167325986b60a29a8cf911e35378f2485dc4f9a52ee5" Feb 16 21:36:58.103244 master-0 kubenswrapper[38936]: I0216 21:36:58.103197 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zr5cs-config-2lpkf" Feb 16 21:36:58.109219 master-0 kubenswrapper[38936]: I0216 21:36:58.109174 38936 generic.go:334] "Generic (PLEG): container finished" podID="775d3c76-9e00-4f01-bf3c-bdb01e653380" containerID="8c8612a802179d4089cd6f0deb7f08c4aba50fad55f84d560869c318f2875b5c" exitCode=0 Feb 16 21:36:58.109291 master-0 kubenswrapper[38936]: I0216 21:36:58.109227 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-w6pqc" event={"ID":"775d3c76-9e00-4f01-bf3c-bdb01e653380","Type":"ContainerDied","Data":"8c8612a802179d4089cd6f0deb7f08c4aba50fad55f84d560869c318f2875b5c"} Feb 16 21:36:58.109291 master-0 kubenswrapper[38936]: I0216 21:36:58.109250 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-w6pqc" event={"ID":"775d3c76-9e00-4f01-bf3c-bdb01e653380","Type":"ContainerStarted","Data":"b65edb81436fb4954a659105f46ab16eaf2c9a6d10389d0c963f6b6c65f89da1"} Feb 16 21:36:58.289474 master-0 kubenswrapper[38936]: E0216 21:36:58.289408 38936 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod775d3c76_9e00_4f01_bf3c_bdb01e653380.slice/crio-8c8612a802179d4089cd6f0deb7f08c4aba50fad55f84d560869c318f2875b5c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod775d3c76_9e00_4f01_bf3c_bdb01e653380.slice/crio-conmon-8c8612a802179d4089cd6f0deb7f08c4aba50fad55f84d560869c318f2875b5c.scope\": RecentStats: unable to find data in memory cache]" Feb 16 21:36:58.682848 master-0 kubenswrapper[38936]: I0216 21:36:58.682689 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zr5cs-config-2lpkf"] Feb 16 21:36:58.694859 master-0 kubenswrapper[38936]: I0216 21:36:58.694547 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ovn-controller-zr5cs-config-2lpkf"] Feb 16 21:36:58.795810 master-0 kubenswrapper[38936]: I0216 21:36:58.795725 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zr5cs-config-rqj7w"] Feb 16 21:36:58.796373 master-0 kubenswrapper[38936]: E0216 21:36:58.796347 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d" containerName="ovn-config" Feb 16 21:36:58.796373 master-0 kubenswrapper[38936]: I0216 21:36:58.796365 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d" containerName="ovn-config" Feb 16 21:36:58.796694 master-0 kubenswrapper[38936]: I0216 21:36:58.796665 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d" containerName="ovn-config" Feb 16 21:36:58.797445 master-0 kubenswrapper[38936]: I0216 21:36:58.797411 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zr5cs-config-rqj7w" Feb 16 21:36:58.800247 master-0 kubenswrapper[38936]: I0216 21:36:58.800199 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 16 21:36:58.831762 master-0 kubenswrapper[38936]: I0216 21:36:58.828022 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zr5cs-config-rqj7w"] Feb 16 21:36:58.924756 master-0 kubenswrapper[38936]: I0216 21:36:58.924683 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f292536-2c0c-4b52-8197-672d0636290b-var-run\") pod \"ovn-controller-zr5cs-config-rqj7w\" (UID: \"4f292536-2c0c-4b52-8197-672d0636290b\") " pod="openstack/ovn-controller-zr5cs-config-rqj7w" Feb 16 21:36:58.924756 master-0 kubenswrapper[38936]: I0216 21:36:58.924758 38936 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f292536-2c0c-4b52-8197-672d0636290b-scripts\") pod \"ovn-controller-zr5cs-config-rqj7w\" (UID: \"4f292536-2c0c-4b52-8197-672d0636290b\") " pod="openstack/ovn-controller-zr5cs-config-rqj7w" Feb 16 21:36:58.925009 master-0 kubenswrapper[38936]: I0216 21:36:58.924964 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f292536-2c0c-4b52-8197-672d0636290b-var-log-ovn\") pod \"ovn-controller-zr5cs-config-rqj7w\" (UID: \"4f292536-2c0c-4b52-8197-672d0636290b\") " pod="openstack/ovn-controller-zr5cs-config-rqj7w" Feb 16 21:36:58.925086 master-0 kubenswrapper[38936]: I0216 21:36:58.925044 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4f292536-2c0c-4b52-8197-672d0636290b-additional-scripts\") pod \"ovn-controller-zr5cs-config-rqj7w\" (UID: \"4f292536-2c0c-4b52-8197-672d0636290b\") " pod="openstack/ovn-controller-zr5cs-config-rqj7w" Feb 16 21:36:58.925207 master-0 kubenswrapper[38936]: I0216 21:36:58.925164 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr9t2\" (UniqueName: \"kubernetes.io/projected/4f292536-2c0c-4b52-8197-672d0636290b-kube-api-access-zr9t2\") pod \"ovn-controller-zr5cs-config-rqj7w\" (UID: \"4f292536-2c0c-4b52-8197-672d0636290b\") " pod="openstack/ovn-controller-zr5cs-config-rqj7w" Feb 16 21:36:58.925296 master-0 kubenswrapper[38936]: I0216 21:36:58.925274 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f292536-2c0c-4b52-8197-672d0636290b-var-run-ovn\") pod \"ovn-controller-zr5cs-config-rqj7w\" (UID: \"4f292536-2c0c-4b52-8197-672d0636290b\") " 
pod="openstack/ovn-controller-zr5cs-config-rqj7w" Feb 16 21:36:59.028232 master-0 kubenswrapper[38936]: I0216 21:36:59.028145 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f292536-2c0c-4b52-8197-672d0636290b-var-log-ovn\") pod \"ovn-controller-zr5cs-config-rqj7w\" (UID: \"4f292536-2c0c-4b52-8197-672d0636290b\") " pod="openstack/ovn-controller-zr5cs-config-rqj7w" Feb 16 21:36:59.028481 master-0 kubenswrapper[38936]: I0216 21:36:59.028298 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4f292536-2c0c-4b52-8197-672d0636290b-additional-scripts\") pod \"ovn-controller-zr5cs-config-rqj7w\" (UID: \"4f292536-2c0c-4b52-8197-672d0636290b\") " pod="openstack/ovn-controller-zr5cs-config-rqj7w" Feb 16 21:36:59.028481 master-0 kubenswrapper[38936]: I0216 21:36:59.028321 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f292536-2c0c-4b52-8197-672d0636290b-var-log-ovn\") pod \"ovn-controller-zr5cs-config-rqj7w\" (UID: \"4f292536-2c0c-4b52-8197-672d0636290b\") " pod="openstack/ovn-controller-zr5cs-config-rqj7w" Feb 16 21:36:59.028481 master-0 kubenswrapper[38936]: I0216 21:36:59.028343 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr9t2\" (UniqueName: \"kubernetes.io/projected/4f292536-2c0c-4b52-8197-672d0636290b-kube-api-access-zr9t2\") pod \"ovn-controller-zr5cs-config-rqj7w\" (UID: \"4f292536-2c0c-4b52-8197-672d0636290b\") " pod="openstack/ovn-controller-zr5cs-config-rqj7w" Feb 16 21:36:59.028481 master-0 kubenswrapper[38936]: I0216 21:36:59.028387 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f292536-2c0c-4b52-8197-672d0636290b-var-run-ovn\") pod 
\"ovn-controller-zr5cs-config-rqj7w\" (UID: \"4f292536-2c0c-4b52-8197-672d0636290b\") " pod="openstack/ovn-controller-zr5cs-config-rqj7w" Feb 16 21:36:59.028481 master-0 kubenswrapper[38936]: I0216 21:36:59.028464 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f292536-2c0c-4b52-8197-672d0636290b-var-run\") pod \"ovn-controller-zr5cs-config-rqj7w\" (UID: \"4f292536-2c0c-4b52-8197-672d0636290b\") " pod="openstack/ovn-controller-zr5cs-config-rqj7w" Feb 16 21:36:59.028671 master-0 kubenswrapper[38936]: I0216 21:36:59.028488 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f292536-2c0c-4b52-8197-672d0636290b-scripts\") pod \"ovn-controller-zr5cs-config-rqj7w\" (UID: \"4f292536-2c0c-4b52-8197-672d0636290b\") " pod="openstack/ovn-controller-zr5cs-config-rqj7w" Feb 16 21:36:59.028757 master-0 kubenswrapper[38936]: I0216 21:36:59.028724 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f292536-2c0c-4b52-8197-672d0636290b-var-run\") pod \"ovn-controller-zr5cs-config-rqj7w\" (UID: \"4f292536-2c0c-4b52-8197-672d0636290b\") " pod="openstack/ovn-controller-zr5cs-config-rqj7w" Feb 16 21:36:59.028822 master-0 kubenswrapper[38936]: I0216 21:36:59.028755 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f292536-2c0c-4b52-8197-672d0636290b-var-run-ovn\") pod \"ovn-controller-zr5cs-config-rqj7w\" (UID: \"4f292536-2c0c-4b52-8197-672d0636290b\") " pod="openstack/ovn-controller-zr5cs-config-rqj7w" Feb 16 21:36:59.029083 master-0 kubenswrapper[38936]: I0216 21:36:59.029060 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4f292536-2c0c-4b52-8197-672d0636290b-additional-scripts\") 
pod \"ovn-controller-zr5cs-config-rqj7w\" (UID: \"4f292536-2c0c-4b52-8197-672d0636290b\") " pod="openstack/ovn-controller-zr5cs-config-rqj7w" Feb 16 21:36:59.031559 master-0 kubenswrapper[38936]: I0216 21:36:59.031520 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f292536-2c0c-4b52-8197-672d0636290b-scripts\") pod \"ovn-controller-zr5cs-config-rqj7w\" (UID: \"4f292536-2c0c-4b52-8197-672d0636290b\") " pod="openstack/ovn-controller-zr5cs-config-rqj7w" Feb 16 21:36:59.046109 master-0 kubenswrapper[38936]: I0216 21:36:59.046049 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr9t2\" (UniqueName: \"kubernetes.io/projected/4f292536-2c0c-4b52-8197-672d0636290b-kube-api-access-zr9t2\") pod \"ovn-controller-zr5cs-config-rqj7w\" (UID: \"4f292536-2c0c-4b52-8197-672d0636290b\") " pod="openstack/ovn-controller-zr5cs-config-rqj7w" Feb 16 21:36:59.125110 master-0 kubenswrapper[38936]: I0216 21:36:59.125030 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hfz86" event={"ID":"4bb312b5-f96b-4689-9d30-c4c878aae0ec","Type":"ContainerStarted","Data":"1a03fb329b79a652f93bc0d8cf6903fee3b46a1c62d4b68c751e859b6f865732"} Feb 16 21:36:59.127542 master-0 kubenswrapper[38936]: I0216 21:36:59.127508 38936 generic.go:334] "Generic (PLEG): container finished" podID="0c96efb0-abf1-496e-adc5-bdef4f9a9d1b" containerID="eccc3bfa5f692357557d2616613174dd81ca2daec309409325d69539556fe983" exitCode=0 Feb 16 21:36:59.127796 master-0 kubenswrapper[38936]: I0216 21:36:59.127700 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" event={"ID":"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b","Type":"ContainerDied","Data":"eccc3bfa5f692357557d2616613174dd81ca2daec309409325d69539556fe983"} Feb 16 21:36:59.127880 master-0 kubenswrapper[38936]: I0216 21:36:59.127795 38936 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" event={"ID":"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b","Type":"ContainerStarted","Data":"cd1aa9428bd03e10b081fbb04aaff88d0fd1bb75b067e1e5ddbd3a82235b968d"} Feb 16 21:36:59.156205 master-0 kubenswrapper[38936]: I0216 21:36:59.156133 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zr5cs-config-rqj7w" Feb 16 21:36:59.178294 master-0 kubenswrapper[38936]: I0216 21:36:59.178180 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-hfz86" podStartSLOduration=2.9469517549999997 podStartE2EDuration="18.178145475s" podCreationTimestamp="2026-02-16 21:36:41 +0000 UTC" firstStartedPulling="2026-02-16 21:36:42.641943714 +0000 UTC m=+832.993947076" lastFinishedPulling="2026-02-16 21:36:57.873137434 +0000 UTC m=+848.225140796" observedRunningTime="2026-02-16 21:36:59.156052338 +0000 UTC m=+849.508055710" watchObservedRunningTime="2026-02-16 21:36:59.178145475 +0000 UTC m=+849.530148837" Feb 16 21:36:59.212075 master-0 kubenswrapper[38936]: I0216 21:36:59.211943 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" podStartSLOduration=10.211904527 podStartE2EDuration="10.211904527s" podCreationTimestamp="2026-02-16 21:36:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:36:59.179244865 +0000 UTC m=+849.531248237" watchObservedRunningTime="2026-02-16 21:36:59.211904527 +0000 UTC m=+849.563907889" Feb 16 21:36:59.636728 master-0 kubenswrapper[38936]: I0216 21:36:59.636683 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-w6pqc" Feb 16 21:36:59.754274 master-0 kubenswrapper[38936]: I0216 21:36:59.753911 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/775d3c76-9e00-4f01-bf3c-bdb01e653380-operator-scripts\") pod \"775d3c76-9e00-4f01-bf3c-bdb01e653380\" (UID: \"775d3c76-9e00-4f01-bf3c-bdb01e653380\") " Feb 16 21:36:59.754974 master-0 kubenswrapper[38936]: I0216 21:36:59.754843 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/775d3c76-9e00-4f01-bf3c-bdb01e653380-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "775d3c76-9e00-4f01-bf3c-bdb01e653380" (UID: "775d3c76-9e00-4f01-bf3c-bdb01e653380"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:36:59.755036 master-0 kubenswrapper[38936]: I0216 21:36:59.754992 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr6s5\" (UniqueName: \"kubernetes.io/projected/775d3c76-9e00-4f01-bf3c-bdb01e653380-kube-api-access-fr6s5\") pod \"775d3c76-9e00-4f01-bf3c-bdb01e653380\" (UID: \"775d3c76-9e00-4f01-bf3c-bdb01e653380\") " Feb 16 21:36:59.756155 master-0 kubenswrapper[38936]: I0216 21:36:59.756125 38936 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/775d3c76-9e00-4f01-bf3c-bdb01e653380-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:59.758106 master-0 kubenswrapper[38936]: I0216 21:36:59.758061 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/775d3c76-9e00-4f01-bf3c-bdb01e653380-kube-api-access-fr6s5" (OuterVolumeSpecName: "kube-api-access-fr6s5") pod "775d3c76-9e00-4f01-bf3c-bdb01e653380" (UID: "775d3c76-9e00-4f01-bf3c-bdb01e653380"). 
InnerVolumeSpecName "kube-api-access-fr6s5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:36:59.774016 master-0 kubenswrapper[38936]: I0216 21:36:59.771167 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zr5cs-config-rqj7w"] Feb 16 21:36:59.858841 master-0 kubenswrapper[38936]: I0216 21:36:59.858699 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr6s5\" (UniqueName: \"kubernetes.io/projected/775d3c76-9e00-4f01-bf3c-bdb01e653380-kube-api-access-fr6s5\") on node \"master-0\" DevicePath \"\"" Feb 16 21:36:59.867890 master-0 kubenswrapper[38936]: I0216 21:36:59.867831 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" Feb 16 21:36:59.889870 master-0 kubenswrapper[38936]: I0216 21:36:59.889810 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d" path="/var/lib/kubelet/pods/1bc6cb42-e39c-45b1-90fd-4bbb37f8af7d/volumes" Feb 16 21:37:00.140680 master-0 kubenswrapper[38936]: I0216 21:37:00.140588 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-w6pqc" event={"ID":"775d3c76-9e00-4f01-bf3c-bdb01e653380","Type":"ContainerDied","Data":"b65edb81436fb4954a659105f46ab16eaf2c9a6d10389d0c963f6b6c65f89da1"} Feb 16 21:37:00.140680 master-0 kubenswrapper[38936]: I0216 21:37:00.140668 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b65edb81436fb4954a659105f46ab16eaf2c9a6d10389d0c963f6b6c65f89da1" Feb 16 21:37:00.140936 master-0 kubenswrapper[38936]: I0216 21:37:00.140726 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-w6pqc" Feb 16 21:37:00.145274 master-0 kubenswrapper[38936]: I0216 21:37:00.145239 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zr5cs-config-rqj7w" event={"ID":"4f292536-2c0c-4b52-8197-672d0636290b","Type":"ContainerStarted","Data":"9b40ae4cd170384825d22de37c41e591443b6c843f71978e0eb1569da629a3aa"} Feb 16 21:37:00.145274 master-0 kubenswrapper[38936]: I0216 21:37:00.145267 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zr5cs-config-rqj7w" event={"ID":"4f292536-2c0c-4b52-8197-672d0636290b","Type":"ContainerStarted","Data":"e9163c1698ead9b3c1644f1c45bddbe251b1ee603c84f28b68fa32cbdca1da52"} Feb 16 21:37:00.176628 master-0 kubenswrapper[38936]: I0216 21:37:00.176538 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-zr5cs-config-rqj7w" podStartSLOduration=2.176518083 podStartE2EDuration="2.176518083s" podCreationTimestamp="2026-02-16 21:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:37:00.171013534 +0000 UTC m=+850.523016906" watchObservedRunningTime="2026-02-16 21:37:00.176518083 +0000 UTC m=+850.528521445" Feb 16 21:37:00.604958 master-0 kubenswrapper[38936]: I0216 21:37:00.604894 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 16 21:37:01.064409 master-0 kubenswrapper[38936]: I0216 21:37:01.063356 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-gkccd"] Feb 16 21:37:01.064409 master-0 kubenswrapper[38936]: E0216 21:37:01.064097 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775d3c76-9e00-4f01-bf3c-bdb01e653380" containerName="mariadb-account-create-update" Feb 16 21:37:01.064409 master-0 kubenswrapper[38936]: I0216 21:37:01.064112 38936 
state_mem.go:107] "Deleted CPUSet assignment" podUID="775d3c76-9e00-4f01-bf3c-bdb01e653380" containerName="mariadb-account-create-update" Feb 16 21:37:01.064409 master-0 kubenswrapper[38936]: I0216 21:37:01.064357 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="775d3c76-9e00-4f01-bf3c-bdb01e653380" containerName="mariadb-account-create-update" Feb 16 21:37:01.065426 master-0 kubenswrapper[38936]: I0216 21:37:01.065362 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gkccd" Feb 16 21:37:01.079410 master-0 kubenswrapper[38936]: I0216 21:37:01.077923 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-gkccd"] Feb 16 21:37:01.097015 master-0 kubenswrapper[38936]: I0216 21:37:01.096924 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvnth\" (UniqueName: \"kubernetes.io/projected/c5598012-498e-44d4-9bb0-8d5aadef2f5b-kube-api-access-dvnth\") pod \"cinder-db-create-gkccd\" (UID: \"c5598012-498e-44d4-9bb0-8d5aadef2f5b\") " pod="openstack/cinder-db-create-gkccd" Feb 16 21:37:01.097217 master-0 kubenswrapper[38936]: I0216 21:37:01.097111 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5598012-498e-44d4-9bb0-8d5aadef2f5b-operator-scripts\") pod \"cinder-db-create-gkccd\" (UID: \"c5598012-498e-44d4-9bb0-8d5aadef2f5b\") " pod="openstack/cinder-db-create-gkccd" Feb 16 21:37:01.180701 master-0 kubenswrapper[38936]: I0216 21:37:01.180550 38936 generic.go:334] "Generic (PLEG): container finished" podID="4f292536-2c0c-4b52-8197-672d0636290b" containerID="9b40ae4cd170384825d22de37c41e591443b6c843f71978e0eb1569da629a3aa" exitCode=0 Feb 16 21:37:01.183243 master-0 kubenswrapper[38936]: I0216 21:37:01.183163 38936 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-c2ba-account-create-update-x7f7j"] Feb 16 21:37:01.186948 master-0 kubenswrapper[38936]: I0216 21:37:01.185357 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zr5cs-config-rqj7w" event={"ID":"4f292536-2c0c-4b52-8197-672d0636290b","Type":"ContainerDied","Data":"9b40ae4cd170384825d22de37c41e591443b6c843f71978e0eb1569da629a3aa"} Feb 16 21:37:01.186948 master-0 kubenswrapper[38936]: I0216 21:37:01.185444 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c2ba-account-create-update-x7f7j" Feb 16 21:37:01.191780 master-0 kubenswrapper[38936]: I0216 21:37:01.191053 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 16 21:37:01.200298 master-0 kubenswrapper[38936]: I0216 21:37:01.200218 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvnth\" (UniqueName: \"kubernetes.io/projected/c5598012-498e-44d4-9bb0-8d5aadef2f5b-kube-api-access-dvnth\") pod \"cinder-db-create-gkccd\" (UID: \"c5598012-498e-44d4-9bb0-8d5aadef2f5b\") " pod="openstack/cinder-db-create-gkccd" Feb 16 21:37:01.200746 master-0 kubenswrapper[38936]: I0216 21:37:01.200678 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5598012-498e-44d4-9bb0-8d5aadef2f5b-operator-scripts\") pod \"cinder-db-create-gkccd\" (UID: \"c5598012-498e-44d4-9bb0-8d5aadef2f5b\") " pod="openstack/cinder-db-create-gkccd" Feb 16 21:37:01.201703 master-0 kubenswrapper[38936]: I0216 21:37:01.201604 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5598012-498e-44d4-9bb0-8d5aadef2f5b-operator-scripts\") pod \"cinder-db-create-gkccd\" (UID: \"c5598012-498e-44d4-9bb0-8d5aadef2f5b\") " pod="openstack/cinder-db-create-gkccd" Feb 16 21:37:01.203531 master-0 
kubenswrapper[38936]: I0216 21:37:01.203496 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c2ba-account-create-update-x7f7j"] Feb 16 21:37:01.224055 master-0 kubenswrapper[38936]: I0216 21:37:01.218703 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvnth\" (UniqueName: \"kubernetes.io/projected/c5598012-498e-44d4-9bb0-8d5aadef2f5b-kube-api-access-dvnth\") pod \"cinder-db-create-gkccd\" (UID: \"c5598012-498e-44d4-9bb0-8d5aadef2f5b\") " pod="openstack/cinder-db-create-gkccd" Feb 16 21:37:01.303486 master-0 kubenswrapper[38936]: I0216 21:37:01.303354 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqw4b\" (UniqueName: \"kubernetes.io/projected/5e22cd12-ef73-470e-9543-b328a46c9c0d-kube-api-access-bqw4b\") pod \"cinder-c2ba-account-create-update-x7f7j\" (UID: \"5e22cd12-ef73-470e-9543-b328a46c9c0d\") " pod="openstack/cinder-c2ba-account-create-update-x7f7j" Feb 16 21:37:01.303841 master-0 kubenswrapper[38936]: I0216 21:37:01.303582 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e22cd12-ef73-470e-9543-b328a46c9c0d-operator-scripts\") pod \"cinder-c2ba-account-create-update-x7f7j\" (UID: \"5e22cd12-ef73-470e-9543-b328a46c9c0d\") " pod="openstack/cinder-c2ba-account-create-update-x7f7j" Feb 16 21:37:01.340380 master-0 kubenswrapper[38936]: I0216 21:37:01.340307 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-vprb4"] Feb 16 21:37:01.345139 master-0 kubenswrapper[38936]: I0216 21:37:01.344267 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-vprb4" Feb 16 21:37:01.359771 master-0 kubenswrapper[38936]: I0216 21:37:01.359711 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 21:37:01.360198 master-0 kubenswrapper[38936]: I0216 21:37:01.360175 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 21:37:01.360382 master-0 kubenswrapper[38936]: I0216 21:37:01.360359 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 21:37:01.389746 master-0 kubenswrapper[38936]: I0216 21:37:01.386707 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gkccd" Feb 16 21:37:01.397118 master-0 kubenswrapper[38936]: I0216 21:37:01.396448 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-vprb4"] Feb 16 21:37:01.408712 master-0 kubenswrapper[38936]: I0216 21:37:01.408537 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e22cd12-ef73-470e-9543-b328a46c9c0d-operator-scripts\") pod \"cinder-c2ba-account-create-update-x7f7j\" (UID: \"5e22cd12-ef73-470e-9543-b328a46c9c0d\") " pod="openstack/cinder-c2ba-account-create-update-x7f7j" Feb 16 21:37:01.409008 master-0 kubenswrapper[38936]: I0216 21:37:01.408865 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03eeee4e-9496-45a9-a3f8-d3a300085c91-config-data\") pod \"keystone-db-sync-vprb4\" (UID: \"03eeee4e-9496-45a9-a3f8-d3a300085c91\") " pod="openstack/keystone-db-sync-vprb4" Feb 16 21:37:01.409008 master-0 kubenswrapper[38936]: I0216 21:37:01.408948 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtpmv\" (UniqueName: 
\"kubernetes.io/projected/03eeee4e-9496-45a9-a3f8-d3a300085c91-kube-api-access-xtpmv\") pod \"keystone-db-sync-vprb4\" (UID: \"03eeee4e-9496-45a9-a3f8-d3a300085c91\") " pod="openstack/keystone-db-sync-vprb4" Feb 16 21:37:01.409307 master-0 kubenswrapper[38936]: I0216 21:37:01.409157 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqw4b\" (UniqueName: \"kubernetes.io/projected/5e22cd12-ef73-470e-9543-b328a46c9c0d-kube-api-access-bqw4b\") pod \"cinder-c2ba-account-create-update-x7f7j\" (UID: \"5e22cd12-ef73-470e-9543-b328a46c9c0d\") " pod="openstack/cinder-c2ba-account-create-update-x7f7j" Feb 16 21:37:01.409307 master-0 kubenswrapper[38936]: I0216 21:37:01.409254 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03eeee4e-9496-45a9-a3f8-d3a300085c91-combined-ca-bundle\") pod \"keystone-db-sync-vprb4\" (UID: \"03eeee4e-9496-45a9-a3f8-d3a300085c91\") " pod="openstack/keystone-db-sync-vprb4" Feb 16 21:37:01.414770 master-0 kubenswrapper[38936]: I0216 21:37:01.414641 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e22cd12-ef73-470e-9543-b328a46c9c0d-operator-scripts\") pod \"cinder-c2ba-account-create-update-x7f7j\" (UID: \"5e22cd12-ef73-470e-9543-b328a46c9c0d\") " pod="openstack/cinder-c2ba-account-create-update-x7f7j" Feb 16 21:37:01.431353 master-0 kubenswrapper[38936]: I0216 21:37:01.420123 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-m4b9n"] Feb 16 21:37:01.431353 master-0 kubenswrapper[38936]: I0216 21:37:01.424088 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-m4b9n" Feb 16 21:37:01.436050 master-0 kubenswrapper[38936]: I0216 21:37:01.436000 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqw4b\" (UniqueName: \"kubernetes.io/projected/5e22cd12-ef73-470e-9543-b328a46c9c0d-kube-api-access-bqw4b\") pod \"cinder-c2ba-account-create-update-x7f7j\" (UID: \"5e22cd12-ef73-470e-9543-b328a46c9c0d\") " pod="openstack/cinder-c2ba-account-create-update-x7f7j" Feb 16 21:37:01.439785 master-0 kubenswrapper[38936]: I0216 21:37:01.439717 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-m4b9n"] Feb 16 21:37:01.454873 master-0 kubenswrapper[38936]: I0216 21:37:01.454779 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5d15-account-create-update-lldsm"] Feb 16 21:37:01.456695 master-0 kubenswrapper[38936]: I0216 21:37:01.456613 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5d15-account-create-update-lldsm" Feb 16 21:37:01.458959 master-0 kubenswrapper[38936]: I0216 21:37:01.458895 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 16 21:37:01.471765 master-0 kubenswrapper[38936]: I0216 21:37:01.471694 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5d15-account-create-update-lldsm"] Feb 16 21:37:01.515341 master-0 kubenswrapper[38936]: I0216 21:37:01.512564 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzql5\" (UniqueName: \"kubernetes.io/projected/34fa09a5-23a7-4aea-946f-1005774cd8b8-kube-api-access-mzql5\") pod \"neutron-5d15-account-create-update-lldsm\" (UID: \"34fa09a5-23a7-4aea-946f-1005774cd8b8\") " pod="openstack/neutron-5d15-account-create-update-lldsm" Feb 16 21:37:01.515341 master-0 kubenswrapper[38936]: I0216 21:37:01.512870 38936 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g4n8\" (UniqueName: \"kubernetes.io/projected/a6a2a6fa-653f-4e21-a4b5-09bed56ec48f-kube-api-access-6g4n8\") pod \"neutron-db-create-m4b9n\" (UID: \"a6a2a6fa-653f-4e21-a4b5-09bed56ec48f\") " pod="openstack/neutron-db-create-m4b9n" Feb 16 21:37:01.515341 master-0 kubenswrapper[38936]: I0216 21:37:01.513009 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34fa09a5-23a7-4aea-946f-1005774cd8b8-operator-scripts\") pod \"neutron-5d15-account-create-update-lldsm\" (UID: \"34fa09a5-23a7-4aea-946f-1005774cd8b8\") " pod="openstack/neutron-5d15-account-create-update-lldsm" Feb 16 21:37:01.515341 master-0 kubenswrapper[38936]: I0216 21:37:01.513117 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03eeee4e-9496-45a9-a3f8-d3a300085c91-config-data\") pod \"keystone-db-sync-vprb4\" (UID: \"03eeee4e-9496-45a9-a3f8-d3a300085c91\") " pod="openstack/keystone-db-sync-vprb4" Feb 16 21:37:01.515341 master-0 kubenswrapper[38936]: I0216 21:37:01.513270 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtpmv\" (UniqueName: \"kubernetes.io/projected/03eeee4e-9496-45a9-a3f8-d3a300085c91-kube-api-access-xtpmv\") pod \"keystone-db-sync-vprb4\" (UID: \"03eeee4e-9496-45a9-a3f8-d3a300085c91\") " pod="openstack/keystone-db-sync-vprb4" Feb 16 21:37:01.515341 master-0 kubenswrapper[38936]: I0216 21:37:01.513859 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03eeee4e-9496-45a9-a3f8-d3a300085c91-combined-ca-bundle\") pod \"keystone-db-sync-vprb4\" (UID: \"03eeee4e-9496-45a9-a3f8-d3a300085c91\") " pod="openstack/keystone-db-sync-vprb4" Feb 16 21:37:01.515341 master-0 
kubenswrapper[38936]: I0216 21:37:01.514050 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6a2a6fa-653f-4e21-a4b5-09bed56ec48f-operator-scripts\") pod \"neutron-db-create-m4b9n\" (UID: \"a6a2a6fa-653f-4e21-a4b5-09bed56ec48f\") " pod="openstack/neutron-db-create-m4b9n"
Feb 16 21:37:01.519687 master-0 kubenswrapper[38936]: I0216 21:37:01.519328 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03eeee4e-9496-45a9-a3f8-d3a300085c91-config-data\") pod \"keystone-db-sync-vprb4\" (UID: \"03eeee4e-9496-45a9-a3f8-d3a300085c91\") " pod="openstack/keystone-db-sync-vprb4"
Feb 16 21:37:01.532533 master-0 kubenswrapper[38936]: I0216 21:37:01.531157 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03eeee4e-9496-45a9-a3f8-d3a300085c91-combined-ca-bundle\") pod \"keystone-db-sync-vprb4\" (UID: \"03eeee4e-9496-45a9-a3f8-d3a300085c91\") " pod="openstack/keystone-db-sync-vprb4"
Feb 16 21:37:01.536662 master-0 kubenswrapper[38936]: I0216 21:37:01.535457 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtpmv\" (UniqueName: \"kubernetes.io/projected/03eeee4e-9496-45a9-a3f8-d3a300085c91-kube-api-access-xtpmv\") pod \"keystone-db-sync-vprb4\" (UID: \"03eeee4e-9496-45a9-a3f8-d3a300085c91\") " pod="openstack/keystone-db-sync-vprb4"
Feb 16 21:37:01.574008 master-0 kubenswrapper[38936]: I0216 21:37:01.573955 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c2ba-account-create-update-x7f7j"
Feb 16 21:37:01.617963 master-0 kubenswrapper[38936]: I0216 21:37:01.617085 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzql5\" (UniqueName: \"kubernetes.io/projected/34fa09a5-23a7-4aea-946f-1005774cd8b8-kube-api-access-mzql5\") pod \"neutron-5d15-account-create-update-lldsm\" (UID: \"34fa09a5-23a7-4aea-946f-1005774cd8b8\") " pod="openstack/neutron-5d15-account-create-update-lldsm"
Feb 16 21:37:01.617963 master-0 kubenswrapper[38936]: I0216 21:37:01.617179 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g4n8\" (UniqueName: \"kubernetes.io/projected/a6a2a6fa-653f-4e21-a4b5-09bed56ec48f-kube-api-access-6g4n8\") pod \"neutron-db-create-m4b9n\" (UID: \"a6a2a6fa-653f-4e21-a4b5-09bed56ec48f\") " pod="openstack/neutron-db-create-m4b9n"
Feb 16 21:37:01.617963 master-0 kubenswrapper[38936]: I0216 21:37:01.617247 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34fa09a5-23a7-4aea-946f-1005774cd8b8-operator-scripts\") pod \"neutron-5d15-account-create-update-lldsm\" (UID: \"34fa09a5-23a7-4aea-946f-1005774cd8b8\") " pod="openstack/neutron-5d15-account-create-update-lldsm"
Feb 16 21:37:01.617963 master-0 kubenswrapper[38936]: I0216 21:37:01.617492 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6a2a6fa-653f-4e21-a4b5-09bed56ec48f-operator-scripts\") pod \"neutron-db-create-m4b9n\" (UID: \"a6a2a6fa-653f-4e21-a4b5-09bed56ec48f\") " pod="openstack/neutron-db-create-m4b9n"
Feb 16 21:37:01.619249 master-0 kubenswrapper[38936]: I0216 21:37:01.619205 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34fa09a5-23a7-4aea-946f-1005774cd8b8-operator-scripts\") pod \"neutron-5d15-account-create-update-lldsm\" (UID: \"34fa09a5-23a7-4aea-946f-1005774cd8b8\") " pod="openstack/neutron-5d15-account-create-update-lldsm"
Feb 16 21:37:01.620544 master-0 kubenswrapper[38936]: I0216 21:37:01.619484 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6a2a6fa-653f-4e21-a4b5-09bed56ec48f-operator-scripts\") pod \"neutron-db-create-m4b9n\" (UID: \"a6a2a6fa-653f-4e21-a4b5-09bed56ec48f\") " pod="openstack/neutron-db-create-m4b9n"
Feb 16 21:37:01.637939 master-0 kubenswrapper[38936]: I0216 21:37:01.636588 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g4n8\" (UniqueName: \"kubernetes.io/projected/a6a2a6fa-653f-4e21-a4b5-09bed56ec48f-kube-api-access-6g4n8\") pod \"neutron-db-create-m4b9n\" (UID: \"a6a2a6fa-653f-4e21-a4b5-09bed56ec48f\") " pod="openstack/neutron-db-create-m4b9n"
Feb 16 21:37:01.637939 master-0 kubenswrapper[38936]: I0216 21:37:01.637211 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzql5\" (UniqueName: \"kubernetes.io/projected/34fa09a5-23a7-4aea-946f-1005774cd8b8-kube-api-access-mzql5\") pod \"neutron-5d15-account-create-update-lldsm\" (UID: \"34fa09a5-23a7-4aea-946f-1005774cd8b8\") " pod="openstack/neutron-5d15-account-create-update-lldsm"
Feb 16 21:37:01.701855 master-0 kubenswrapper[38936]: I0216 21:37:01.694505 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vprb4"
Feb 16 21:37:01.819765 master-0 kubenswrapper[38936]: I0216 21:37:01.819709 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-m4b9n"
Feb 16 21:37:01.906725 master-0 kubenswrapper[38936]: I0216 21:37:01.905977 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5d15-account-create-update-lldsm"
Feb 16 21:37:01.909043 master-0 kubenswrapper[38936]: I0216 21:37:01.908991 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-gkccd"]
Feb 16 21:37:02.054861 master-0 kubenswrapper[38936]: I0216 21:37:02.054804 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 21:37:02.128936 master-0 kubenswrapper[38936]: W0216 21:37:02.128692 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e22cd12_ef73_470e_9543_b328a46c9c0d.slice/crio-97a7466cb939b119dcefeaf470668041257a47600a121da4ebce9715721f9a4b WatchSource:0}: Error finding container 97a7466cb939b119dcefeaf470668041257a47600a121da4ebce9715721f9a4b: Status 404 returned error can't find the container with id 97a7466cb939b119dcefeaf470668041257a47600a121da4ebce9715721f9a4b
Feb 16 21:37:02.132664 master-0 kubenswrapper[38936]: I0216 21:37:02.132618 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c2ba-account-create-update-x7f7j"]
Feb 16 21:37:02.217874 master-0 kubenswrapper[38936]: I0216 21:37:02.215892 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gkccd" event={"ID":"c5598012-498e-44d4-9bb0-8d5aadef2f5b","Type":"ContainerStarted","Data":"64e1087a03e645001355b579d504384b592bad4233f263992828ae7fadb08054"}
Feb 16 21:37:02.217874 master-0 kubenswrapper[38936]: I0216 21:37:02.215966 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gkccd" event={"ID":"c5598012-498e-44d4-9bb0-8d5aadef2f5b","Type":"ContainerStarted","Data":"2a9fb7d89f1d219c081c7ab32007c1ff2bc46476a8b01d1d081809b0cf0a3195"}
Feb 16 21:37:02.230522 master-0 kubenswrapper[38936]: I0216 21:37:02.230433 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c2ba-account-create-update-x7f7j" event={"ID":"5e22cd12-ef73-470e-9543-b328a46c9c0d","Type":"ContainerStarted","Data":"97a7466cb939b119dcefeaf470668041257a47600a121da4ebce9715721f9a4b"}
Feb 16 21:37:02.248678 master-0 kubenswrapper[38936]: I0216 21:37:02.246092 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-gkccd" podStartSLOduration=1.246074156 podStartE2EDuration="1.246074156s" podCreationTimestamp="2026-02-16 21:37:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:37:02.240534467 +0000 UTC m=+852.592537839" watchObservedRunningTime="2026-02-16 21:37:02.246074156 +0000 UTC m=+852.598077518"
Feb 16 21:37:02.308665 master-0 kubenswrapper[38936]: I0216 21:37:02.306389 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-vprb4"]
Feb 16 21:37:02.587475 master-0 kubenswrapper[38936]: W0216 21:37:02.585477 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6a2a6fa_653f_4e21_a4b5_09bed56ec48f.slice/crio-4b1b868394a559e1c86142a2d27523557c1e8fe90dc0b22ff3896ff866cd7ace WatchSource:0}: Error finding container 4b1b868394a559e1c86142a2d27523557c1e8fe90dc0b22ff3896ff866cd7ace: Status 404 returned error can't find the container with id 4b1b868394a559e1c86142a2d27523557c1e8fe90dc0b22ff3896ff866cd7ace
Feb 16 21:37:02.730674 master-0 kubenswrapper[38936]: I0216 21:37:02.724358 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-m4b9n"]
Feb 16 21:37:02.805847 master-0 kubenswrapper[38936]: I0216 21:37:02.805695 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5d15-account-create-update-lldsm"]
Feb 16 21:37:03.058410 master-0 kubenswrapper[38936]: I0216 21:37:03.058276 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zr5cs-config-rqj7w"
Feb 16 21:37:03.072432 master-0 kubenswrapper[38936]: I0216 21:37:03.072369 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f292536-2c0c-4b52-8197-672d0636290b-scripts\") pod \"4f292536-2c0c-4b52-8197-672d0636290b\" (UID: \"4f292536-2c0c-4b52-8197-672d0636290b\") "
Feb 16 21:37:03.072570 master-0 kubenswrapper[38936]: I0216 21:37:03.072466 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f292536-2c0c-4b52-8197-672d0636290b-var-run\") pod \"4f292536-2c0c-4b52-8197-672d0636290b\" (UID: \"4f292536-2c0c-4b52-8197-672d0636290b\") "
Feb 16 21:37:03.072570 master-0 kubenswrapper[38936]: I0216 21:37:03.072503 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zr9t2\" (UniqueName: \"kubernetes.io/projected/4f292536-2c0c-4b52-8197-672d0636290b-kube-api-access-zr9t2\") pod \"4f292536-2c0c-4b52-8197-672d0636290b\" (UID: \"4f292536-2c0c-4b52-8197-672d0636290b\") "
Feb 16 21:37:03.072570 master-0 kubenswrapper[38936]: I0216 21:37:03.072535 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f292536-2c0c-4b52-8197-672d0636290b-var-run-ovn\") pod \"4f292536-2c0c-4b52-8197-672d0636290b\" (UID: \"4f292536-2c0c-4b52-8197-672d0636290b\") "
Feb 16 21:37:03.072804 master-0 kubenswrapper[38936]: I0216 21:37:03.072579 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f292536-2c0c-4b52-8197-672d0636290b-var-log-ovn\") pod \"4f292536-2c0c-4b52-8197-672d0636290b\" (UID: \"4f292536-2c0c-4b52-8197-672d0636290b\") "
Feb 16 21:37:03.072804 master-0 kubenswrapper[38936]: I0216 21:37:03.072604 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4f292536-2c0c-4b52-8197-672d0636290b-additional-scripts\") pod \"4f292536-2c0c-4b52-8197-672d0636290b\" (UID: \"4f292536-2c0c-4b52-8197-672d0636290b\") "
Feb 16 21:37:03.073349 master-0 kubenswrapper[38936]: I0216 21:37:03.073286 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f292536-2c0c-4b52-8197-672d0636290b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4f292536-2c0c-4b52-8197-672d0636290b" (UID: "4f292536-2c0c-4b52-8197-672d0636290b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:37:03.073349 master-0 kubenswrapper[38936]: I0216 21:37:03.073297 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f292536-2c0c-4b52-8197-672d0636290b-var-run" (OuterVolumeSpecName: "var-run") pod "4f292536-2c0c-4b52-8197-672d0636290b" (UID: "4f292536-2c0c-4b52-8197-672d0636290b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:37:03.073503 master-0 kubenswrapper[38936]: I0216 21:37:03.073283 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f292536-2c0c-4b52-8197-672d0636290b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4f292536-2c0c-4b52-8197-672d0636290b" (UID: "4f292536-2c0c-4b52-8197-672d0636290b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:37:03.073858 master-0 kubenswrapper[38936]: I0216 21:37:03.073776 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f292536-2c0c-4b52-8197-672d0636290b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "4f292536-2c0c-4b52-8197-672d0636290b" (UID: "4f292536-2c0c-4b52-8197-672d0636290b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:37:03.074493 master-0 kubenswrapper[38936]: I0216 21:37:03.074239 38936 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f292536-2c0c-4b52-8197-672d0636290b-var-run\") on node \"master-0\" DevicePath \"\""
Feb 16 21:37:03.074493 master-0 kubenswrapper[38936]: I0216 21:37:03.074270 38936 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f292536-2c0c-4b52-8197-672d0636290b-var-run-ovn\") on node \"master-0\" DevicePath \"\""
Feb 16 21:37:03.074493 master-0 kubenswrapper[38936]: I0216 21:37:03.074282 38936 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f292536-2c0c-4b52-8197-672d0636290b-var-log-ovn\") on node \"master-0\" DevicePath \"\""
Feb 16 21:37:03.074493 master-0 kubenswrapper[38936]: I0216 21:37:03.074290 38936 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4f292536-2c0c-4b52-8197-672d0636290b-additional-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 21:37:03.076724 master-0 kubenswrapper[38936]: I0216 21:37:03.076354 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f292536-2c0c-4b52-8197-672d0636290b-scripts" (OuterVolumeSpecName: "scripts") pod "4f292536-2c0c-4b52-8197-672d0636290b" (UID: "4f292536-2c0c-4b52-8197-672d0636290b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:37:03.093018 master-0 kubenswrapper[38936]: I0216 21:37:03.090775 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f292536-2c0c-4b52-8197-672d0636290b-kube-api-access-zr9t2" (OuterVolumeSpecName: "kube-api-access-zr9t2") pod "4f292536-2c0c-4b52-8197-672d0636290b" (UID: "4f292536-2c0c-4b52-8197-672d0636290b"). InnerVolumeSpecName "kube-api-access-zr9t2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:37:03.175251 master-0 kubenswrapper[38936]: I0216 21:37:03.175168 38936 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f292536-2c0c-4b52-8197-672d0636290b-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 21:37:03.175251 master-0 kubenswrapper[38936]: I0216 21:37:03.175251 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zr9t2\" (UniqueName: \"kubernetes.io/projected/4f292536-2c0c-4b52-8197-672d0636290b-kube-api-access-zr9t2\") on node \"master-0\" DevicePath \"\""
Feb 16 21:37:03.244099 master-0 kubenswrapper[38936]: I0216 21:37:03.244033 38936 generic.go:334] "Generic (PLEG): container finished" podID="5e22cd12-ef73-470e-9543-b328a46c9c0d" containerID="9c08e4b6d561a8da44a081325a75ca08b10c3a2bbc3469ff33f0fa9bc6532c38" exitCode=0
Feb 16 21:37:03.244276 master-0 kubenswrapper[38936]: I0216 21:37:03.244114 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c2ba-account-create-update-x7f7j" event={"ID":"5e22cd12-ef73-470e-9543-b328a46c9c0d","Type":"ContainerDied","Data":"9c08e4b6d561a8da44a081325a75ca08b10c3a2bbc3469ff33f0fa9bc6532c38"}
Feb 16 21:37:03.251406 master-0 kubenswrapper[38936]: I0216 21:37:03.247667 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zr5cs-config-rqj7w" event={"ID":"4f292536-2c0c-4b52-8197-672d0636290b","Type":"ContainerDied","Data":"e9163c1698ead9b3c1644f1c45bddbe251b1ee603c84f28b68fa32cbdca1da52"}
Feb 16 21:37:03.251406 master-0 kubenswrapper[38936]: I0216 21:37:03.247721 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9163c1698ead9b3c1644f1c45bddbe251b1ee603c84f28b68fa32cbdca1da52"
Feb 16 21:37:03.251406 master-0 kubenswrapper[38936]: I0216 21:37:03.247794 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zr5cs-config-rqj7w"
Feb 16 21:37:03.259542 master-0 kubenswrapper[38936]: I0216 21:37:03.259485 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d15-account-create-update-lldsm" event={"ID":"34fa09a5-23a7-4aea-946f-1005774cd8b8","Type":"ContainerStarted","Data":"b4f77e6c1b2879d4cd0bab53bb0254b5520899fab30e9019e4982089236000f7"}
Feb 16 21:37:03.262964 master-0 kubenswrapper[38936]: I0216 21:37:03.262783 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vprb4" event={"ID":"03eeee4e-9496-45a9-a3f8-d3a300085c91","Type":"ContainerStarted","Data":"0b1edbc91564286a1779a558779691f0c60f02f5e23a981db5ca41e3806b1d03"}
Feb 16 21:37:03.268454 master-0 kubenswrapper[38936]: I0216 21:37:03.266793 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-m4b9n" event={"ID":"a6a2a6fa-653f-4e21-a4b5-09bed56ec48f","Type":"ContainerStarted","Data":"4b1b868394a559e1c86142a2d27523557c1e8fe90dc0b22ff3896ff866cd7ace"}
Feb 16 21:37:03.270665 master-0 kubenswrapper[38936]: I0216 21:37:03.269466 38936 generic.go:334] "Generic (PLEG): container finished" podID="c5598012-498e-44d4-9bb0-8d5aadef2f5b" containerID="64e1087a03e645001355b579d504384b592bad4233f263992828ae7fadb08054" exitCode=0
Feb 16 21:37:03.270665 master-0 kubenswrapper[38936]: I0216 21:37:03.269509 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gkccd" event={"ID":"c5598012-498e-44d4-9bb0-8d5aadef2f5b","Type":"ContainerDied","Data":"64e1087a03e645001355b579d504384b592bad4233f263992828ae7fadb08054"}
Feb 16 21:37:03.309803 master-0 kubenswrapper[38936]: I0216 21:37:03.308313 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5d15-account-create-update-lldsm" podStartSLOduration=2.30828909 podStartE2EDuration="2.30828909s" podCreationTimestamp="2026-02-16 21:37:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:37:03.308172167 +0000 UTC m=+853.660175529" watchObservedRunningTime="2026-02-16 21:37:03.30828909 +0000 UTC m=+853.660292452"
Feb 16 21:37:03.336199 master-0 kubenswrapper[38936]: I0216 21:37:03.336105 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-m4b9n" podStartSLOduration=2.336075251 podStartE2EDuration="2.336075251s" podCreationTimestamp="2026-02-16 21:37:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:37:03.331068715 +0000 UTC m=+853.683072077" watchObservedRunningTime="2026-02-16 21:37:03.336075251 +0000 UTC m=+853.688078613"
Feb 16 21:37:04.192036 master-0 kubenswrapper[38936]: I0216 21:37:04.191943 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zr5cs-config-rqj7w"]
Feb 16 21:37:04.210484 master-0 kubenswrapper[38936]: I0216 21:37:04.210357 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zr5cs-config-rqj7w"]
Feb 16 21:37:04.283319 master-0 kubenswrapper[38936]: I0216 21:37:04.283246 38936 generic.go:334] "Generic (PLEG): container finished" podID="a6a2a6fa-653f-4e21-a4b5-09bed56ec48f" containerID="a04846ba86f6caa45dffd789b062879a0625815568934b5a17569f8eb85e7140" exitCode=0
Feb 16 21:37:04.283567 master-0 kubenswrapper[38936]: I0216 21:37:04.283340 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-m4b9n" event={"ID":"a6a2a6fa-653f-4e21-a4b5-09bed56ec48f","Type":"ContainerDied","Data":"a04846ba86f6caa45dffd789b062879a0625815568934b5a17569f8eb85e7140"}
Feb 16 21:37:04.285701 master-0 kubenswrapper[38936]: I0216 21:37:04.285633 38936 generic.go:334] "Generic (PLEG): container finished" podID="34fa09a5-23a7-4aea-946f-1005774cd8b8" containerID="1ceb6c046b9157abed5facfbbce86e63f2227e4daf0dcec9f1876c743b9e0311" exitCode=0
Feb 16 21:37:04.285919 master-0 kubenswrapper[38936]: I0216 21:37:04.285885 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d15-account-create-update-lldsm" event={"ID":"34fa09a5-23a7-4aea-946f-1005774cd8b8","Type":"ContainerDied","Data":"1ceb6c046b9157abed5facfbbce86e63f2227e4daf0dcec9f1876c743b9e0311"}
Feb 16 21:37:04.869381 master-0 kubenswrapper[38936]: I0216 21:37:04.868817 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-665cc5d59f-ngldr"
Feb 16 21:37:04.884258 master-0 kubenswrapper[38936]: I0216 21:37:04.884219 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c2ba-account-create-update-x7f7j"
Feb 16 21:37:04.891587 master-0 kubenswrapper[38936]: I0216 21:37:04.891529 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gkccd"
Feb 16 21:37:04.950205 master-0 kubenswrapper[38936]: I0216 21:37:04.950149 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5598012-498e-44d4-9bb0-8d5aadef2f5b-operator-scripts\") pod \"c5598012-498e-44d4-9bb0-8d5aadef2f5b\" (UID: \"c5598012-498e-44d4-9bb0-8d5aadef2f5b\") "
Feb 16 21:37:04.953635 master-0 kubenswrapper[38936]: I0216 21:37:04.953562 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5598012-498e-44d4-9bb0-8d5aadef2f5b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5598012-498e-44d4-9bb0-8d5aadef2f5b" (UID: "c5598012-498e-44d4-9bb0-8d5aadef2f5b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:37:04.954224 master-0 kubenswrapper[38936]: I0216 21:37:04.954182 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvnth\" (UniqueName: \"kubernetes.io/projected/c5598012-498e-44d4-9bb0-8d5aadef2f5b-kube-api-access-dvnth\") pod \"c5598012-498e-44d4-9bb0-8d5aadef2f5b\" (UID: \"c5598012-498e-44d4-9bb0-8d5aadef2f5b\") "
Feb 16 21:37:04.954418 master-0 kubenswrapper[38936]: I0216 21:37:04.954359 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqw4b\" (UniqueName: \"kubernetes.io/projected/5e22cd12-ef73-470e-9543-b328a46c9c0d-kube-api-access-bqw4b\") pod \"5e22cd12-ef73-470e-9543-b328a46c9c0d\" (UID: \"5e22cd12-ef73-470e-9543-b328a46c9c0d\") "
Feb 16 21:37:04.954473 master-0 kubenswrapper[38936]: I0216 21:37:04.954422 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e22cd12-ef73-470e-9543-b328a46c9c0d-operator-scripts\") pod \"5e22cd12-ef73-470e-9543-b328a46c9c0d\" (UID: \"5e22cd12-ef73-470e-9543-b328a46c9c0d\") "
Feb 16 21:37:04.958081 master-0 kubenswrapper[38936]: I0216 21:37:04.956510 38936 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5598012-498e-44d4-9bb0-8d5aadef2f5b-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 21:37:04.958081 master-0 kubenswrapper[38936]: I0216 21:37:04.956661 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e22cd12-ef73-470e-9543-b328a46c9c0d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e22cd12-ef73-470e-9543-b328a46c9c0d" (UID: "5e22cd12-ef73-470e-9543-b328a46c9c0d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:37:04.968040 master-0 kubenswrapper[38936]: I0216 21:37:04.967694 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-n7glt"]
Feb 16 21:37:04.968909 master-0 kubenswrapper[38936]: I0216 21:37:04.968416 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6fd49994df-n7glt" podUID="3037cb65-febb-4854-a8ae-8f8c182a3e64" containerName="dnsmasq-dns" containerID="cri-o://a668323f554aed8085c55ed8673b6bf216417bcd1d5031adc649c3ed6ef28132" gracePeriod=10
Feb 16 21:37:04.975930 master-0 kubenswrapper[38936]: I0216 21:37:04.970597 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e22cd12-ef73-470e-9543-b328a46c9c0d-kube-api-access-bqw4b" (OuterVolumeSpecName: "kube-api-access-bqw4b") pod "5e22cd12-ef73-470e-9543-b328a46c9c0d" (UID: "5e22cd12-ef73-470e-9543-b328a46c9c0d"). InnerVolumeSpecName "kube-api-access-bqw4b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:37:04.975930 master-0 kubenswrapper[38936]: I0216 21:37:04.972908 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5598012-498e-44d4-9bb0-8d5aadef2f5b-kube-api-access-dvnth" (OuterVolumeSpecName: "kube-api-access-dvnth") pod "c5598012-498e-44d4-9bb0-8d5aadef2f5b" (UID: "c5598012-498e-44d4-9bb0-8d5aadef2f5b"). InnerVolumeSpecName "kube-api-access-dvnth". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:37:05.059496 master-0 kubenswrapper[38936]: I0216 21:37:05.059423 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvnth\" (UniqueName: \"kubernetes.io/projected/c5598012-498e-44d4-9bb0-8d5aadef2f5b-kube-api-access-dvnth\") on node \"master-0\" DevicePath \"\""
Feb 16 21:37:05.059913 master-0 kubenswrapper[38936]: I0216 21:37:05.059884 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqw4b\" (UniqueName: \"kubernetes.io/projected/5e22cd12-ef73-470e-9543-b328a46c9c0d-kube-api-access-bqw4b\") on node \"master-0\" DevicePath \"\""
Feb 16 21:37:05.060084 master-0 kubenswrapper[38936]: I0216 21:37:05.059913 38936 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e22cd12-ef73-470e-9543-b328a46c9c0d-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 21:37:05.305237 master-0 kubenswrapper[38936]: I0216 21:37:05.305165 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-gkccd" event={"ID":"c5598012-498e-44d4-9bb0-8d5aadef2f5b","Type":"ContainerDied","Data":"2a9fb7d89f1d219c081c7ab32007c1ff2bc46476a8b01d1d081809b0cf0a3195"}
Feb 16 21:37:05.305237 master-0 kubenswrapper[38936]: I0216 21:37:05.305241 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a9fb7d89f1d219c081c7ab32007c1ff2bc46476a8b01d1d081809b0cf0a3195"
Feb 16 21:37:05.307074 master-0 kubenswrapper[38936]: I0216 21:37:05.306981 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-gkccd"
Feb 16 21:37:05.323421 master-0 kubenswrapper[38936]: I0216 21:37:05.323346 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c2ba-account-create-update-x7f7j" event={"ID":"5e22cd12-ef73-470e-9543-b328a46c9c0d","Type":"ContainerDied","Data":"97a7466cb939b119dcefeaf470668041257a47600a121da4ebce9715721f9a4b"}
Feb 16 21:37:05.323421 master-0 kubenswrapper[38936]: I0216 21:37:05.323418 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97a7466cb939b119dcefeaf470668041257a47600a121da4ebce9715721f9a4b"
Feb 16 21:37:05.323686 master-0 kubenswrapper[38936]: I0216 21:37:05.323504 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c2ba-account-create-update-x7f7j"
Feb 16 21:37:05.359516 master-0 kubenswrapper[38936]: I0216 21:37:05.341521 38936 generic.go:334] "Generic (PLEG): container finished" podID="3037cb65-febb-4854-a8ae-8f8c182a3e64" containerID="a668323f554aed8085c55ed8673b6bf216417bcd1d5031adc649c3ed6ef28132" exitCode=0
Feb 16 21:37:05.359516 master-0 kubenswrapper[38936]: I0216 21:37:05.341783 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-n7glt" event={"ID":"3037cb65-febb-4854-a8ae-8f8c182a3e64","Type":"ContainerDied","Data":"a668323f554aed8085c55ed8673b6bf216417bcd1d5031adc649c3ed6ef28132"}
Feb 16 21:37:05.892100 master-0 kubenswrapper[38936]: I0216 21:37:05.892048 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f292536-2c0c-4b52-8197-672d0636290b" path="/var/lib/kubelet/pods/4f292536-2c0c-4b52-8197-672d0636290b/volumes"
Feb 16 21:37:08.280357 master-0 kubenswrapper[38936]: I0216 21:37:08.280190 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fd49994df-n7glt"
Feb 16 21:37:08.286931 master-0 kubenswrapper[38936]: I0216 21:37:08.286884 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5d15-account-create-update-lldsm"
Feb 16 21:37:08.292806 master-0 kubenswrapper[38936]: I0216 21:37:08.292744 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-m4b9n"
Feb 16 21:37:08.342901 master-0 kubenswrapper[38936]: I0216 21:37:08.342835 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3037cb65-febb-4854-a8ae-8f8c182a3e64-dns-svc\") pod \"3037cb65-febb-4854-a8ae-8f8c182a3e64\" (UID: \"3037cb65-febb-4854-a8ae-8f8c182a3e64\") "
Feb 16 21:37:08.342901 master-0 kubenswrapper[38936]: I0216 21:37:08.342906 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6a2a6fa-653f-4e21-a4b5-09bed56ec48f-operator-scripts\") pod \"a6a2a6fa-653f-4e21-a4b5-09bed56ec48f\" (UID: \"a6a2a6fa-653f-4e21-a4b5-09bed56ec48f\") "
Feb 16 21:37:08.343239 master-0 kubenswrapper[38936]: I0216 21:37:08.342957 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v85g7\" (UniqueName: \"kubernetes.io/projected/3037cb65-febb-4854-a8ae-8f8c182a3e64-kube-api-access-v85g7\") pod \"3037cb65-febb-4854-a8ae-8f8c182a3e64\" (UID: \"3037cb65-febb-4854-a8ae-8f8c182a3e64\") "
Feb 16 21:37:08.343239 master-0 kubenswrapper[38936]: I0216 21:37:08.342984 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34fa09a5-23a7-4aea-946f-1005774cd8b8-operator-scripts\") pod \"34fa09a5-23a7-4aea-946f-1005774cd8b8\" (UID: \"34fa09a5-23a7-4aea-946f-1005774cd8b8\") "
Feb 16 21:37:08.343239 master-0 kubenswrapper[38936]: I0216 21:37:08.343007 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzql5\" (UniqueName: \"kubernetes.io/projected/34fa09a5-23a7-4aea-946f-1005774cd8b8-kube-api-access-mzql5\") pod \"34fa09a5-23a7-4aea-946f-1005774cd8b8\" (UID: \"34fa09a5-23a7-4aea-946f-1005774cd8b8\") "
Feb 16 21:37:08.343239 master-0 kubenswrapper[38936]: I0216 21:37:08.343158 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3037cb65-febb-4854-a8ae-8f8c182a3e64-config\") pod \"3037cb65-febb-4854-a8ae-8f8c182a3e64\" (UID: \"3037cb65-febb-4854-a8ae-8f8c182a3e64\") "
Feb 16 21:37:08.343239 master-0 kubenswrapper[38936]: I0216 21:37:08.343178 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4n8\" (UniqueName: \"kubernetes.io/projected/a6a2a6fa-653f-4e21-a4b5-09bed56ec48f-kube-api-access-6g4n8\") pod \"a6a2a6fa-653f-4e21-a4b5-09bed56ec48f\" (UID: \"a6a2a6fa-653f-4e21-a4b5-09bed56ec48f\") "
Feb 16 21:37:08.343239 master-0 kubenswrapper[38936]: I0216 21:37:08.343233 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3037cb65-febb-4854-a8ae-8f8c182a3e64-ovsdbserver-sb\") pod \"3037cb65-febb-4854-a8ae-8f8c182a3e64\" (UID: \"3037cb65-febb-4854-a8ae-8f8c182a3e64\") "
Feb 16 21:37:08.343501 master-0 kubenswrapper[38936]: I0216 21:37:08.343306 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3037cb65-febb-4854-a8ae-8f8c182a3e64-ovsdbserver-nb\") pod \"3037cb65-febb-4854-a8ae-8f8c182a3e64\" (UID: \"3037cb65-febb-4854-a8ae-8f8c182a3e64\") "
Feb 16 21:37:08.344551 master-0 kubenswrapper[38936]: I0216 21:37:08.344290 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6a2a6fa-653f-4e21-a4b5-09bed56ec48f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a6a2a6fa-653f-4e21-a4b5-09bed56ec48f" (UID: "a6a2a6fa-653f-4e21-a4b5-09bed56ec48f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:37:08.344551 master-0 kubenswrapper[38936]: I0216 21:37:08.344314 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34fa09a5-23a7-4aea-946f-1005774cd8b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "34fa09a5-23a7-4aea-946f-1005774cd8b8" (UID: "34fa09a5-23a7-4aea-946f-1005774cd8b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:37:08.347333 master-0 kubenswrapper[38936]: I0216 21:37:08.347201 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6a2a6fa-653f-4e21-a4b5-09bed56ec48f-kube-api-access-6g4n8" (OuterVolumeSpecName: "kube-api-access-6g4n8") pod "a6a2a6fa-653f-4e21-a4b5-09bed56ec48f" (UID: "a6a2a6fa-653f-4e21-a4b5-09bed56ec48f"). InnerVolumeSpecName "kube-api-access-6g4n8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:37:08.348721 master-0 kubenswrapper[38936]: I0216 21:37:08.348618 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3037cb65-febb-4854-a8ae-8f8c182a3e64-kube-api-access-v85g7" (OuterVolumeSpecName: "kube-api-access-v85g7") pod "3037cb65-febb-4854-a8ae-8f8c182a3e64" (UID: "3037cb65-febb-4854-a8ae-8f8c182a3e64"). InnerVolumeSpecName "kube-api-access-v85g7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:37:08.349333 master-0 kubenswrapper[38936]: I0216 21:37:08.349282 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34fa09a5-23a7-4aea-946f-1005774cd8b8-kube-api-access-mzql5" (OuterVolumeSpecName: "kube-api-access-mzql5") pod "34fa09a5-23a7-4aea-946f-1005774cd8b8" (UID: "34fa09a5-23a7-4aea-946f-1005774cd8b8"). InnerVolumeSpecName "kube-api-access-mzql5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:37:08.396811 master-0 kubenswrapper[38936]: I0216 21:37:08.396738 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d15-account-create-update-lldsm" event={"ID":"34fa09a5-23a7-4aea-946f-1005774cd8b8","Type":"ContainerDied","Data":"b4f77e6c1b2879d4cd0bab53bb0254b5520899fab30e9019e4982089236000f7"}
Feb 16 21:37:08.396962 master-0 kubenswrapper[38936]: I0216 21:37:08.396815 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4f77e6c1b2879d4cd0bab53bb0254b5520899fab30e9019e4982089236000f7"
Feb 16 21:37:08.396962 master-0 kubenswrapper[38936]: I0216 21:37:08.396913 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5d15-account-create-update-lldsm"
Feb 16 21:37:08.404150 master-0 kubenswrapper[38936]: I0216 21:37:08.403951 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-n7glt" event={"ID":"3037cb65-febb-4854-a8ae-8f8c182a3e64","Type":"ContainerDied","Data":"9592c1279109e2f05ec25603cd9717bd15e0ce40c33de3bd76aa7b6cdf1a9c76"}
Feb 16 21:37:08.404150 master-0 kubenswrapper[38936]: I0216 21:37:08.403986 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fd49994df-n7glt"
Feb 16 21:37:08.404150 master-0 kubenswrapper[38936]: I0216 21:37:08.404030 38936 scope.go:117] "RemoveContainer" containerID="a668323f554aed8085c55ed8673b6bf216417bcd1d5031adc649c3ed6ef28132"
Feb 16 21:37:08.408452 master-0 kubenswrapper[38936]: I0216 21:37:08.408399 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3037cb65-febb-4854-a8ae-8f8c182a3e64-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3037cb65-febb-4854-a8ae-8f8c182a3e64" (UID: "3037cb65-febb-4854-a8ae-8f8c182a3e64"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:37:08.409299 master-0 kubenswrapper[38936]: I0216 21:37:08.409258 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3037cb65-febb-4854-a8ae-8f8c182a3e64-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3037cb65-febb-4854-a8ae-8f8c182a3e64" (UID: "3037cb65-febb-4854-a8ae-8f8c182a3e64"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:37:08.410261 master-0 kubenswrapper[38936]: I0216 21:37:08.410057 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-m4b9n" Feb 16 21:37:08.410261 master-0 kubenswrapper[38936]: I0216 21:37:08.410055 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-m4b9n" event={"ID":"a6a2a6fa-653f-4e21-a4b5-09bed56ec48f","Type":"ContainerDied","Data":"4b1b868394a559e1c86142a2d27523557c1e8fe90dc0b22ff3896ff866cd7ace"} Feb 16 21:37:08.410261 master-0 kubenswrapper[38936]: I0216 21:37:08.410222 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b1b868394a559e1c86142a2d27523557c1e8fe90dc0b22ff3896ff866cd7ace" Feb 16 21:37:08.414171 master-0 kubenswrapper[38936]: I0216 21:37:08.414066 38936 generic.go:334] "Generic (PLEG): container finished" podID="4bb312b5-f96b-4689-9d30-c4c878aae0ec" containerID="1a03fb329b79a652f93bc0d8cf6903fee3b46a1c62d4b68c751e859b6f865732" exitCode=0 Feb 16 21:37:08.414171 master-0 kubenswrapper[38936]: I0216 21:37:08.414096 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3037cb65-febb-4854-a8ae-8f8c182a3e64-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3037cb65-febb-4854-a8ae-8f8c182a3e64" (UID: "3037cb65-febb-4854-a8ae-8f8c182a3e64"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:37:08.414171 master-0 kubenswrapper[38936]: I0216 21:37:08.414114 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hfz86" event={"ID":"4bb312b5-f96b-4689-9d30-c4c878aae0ec","Type":"ContainerDied","Data":"1a03fb329b79a652f93bc0d8cf6903fee3b46a1c62d4b68c751e859b6f865732"} Feb 16 21:37:08.421762 master-0 kubenswrapper[38936]: I0216 21:37:08.421717 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3037cb65-febb-4854-a8ae-8f8c182a3e64-config" (OuterVolumeSpecName: "config") pod "3037cb65-febb-4854-a8ae-8f8c182a3e64" (UID: "3037cb65-febb-4854-a8ae-8f8c182a3e64"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:37:08.446056 master-0 kubenswrapper[38936]: I0216 21:37:08.446023 38936 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3037cb65-febb-4854-a8ae-8f8c182a3e64-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:08.446129 master-0 kubenswrapper[38936]: I0216 21:37:08.446060 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g4n8\" (UniqueName: \"kubernetes.io/projected/a6a2a6fa-653f-4e21-a4b5-09bed56ec48f-kube-api-access-6g4n8\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:08.446129 master-0 kubenswrapper[38936]: I0216 21:37:08.446078 38936 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3037cb65-febb-4854-a8ae-8f8c182a3e64-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:08.446129 master-0 kubenswrapper[38936]: I0216 21:37:08.446092 38936 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3037cb65-febb-4854-a8ae-8f8c182a3e64-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:08.446129 master-0 kubenswrapper[38936]: I0216 21:37:08.446105 38936 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3037cb65-febb-4854-a8ae-8f8c182a3e64-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:08.446129 master-0 kubenswrapper[38936]: I0216 21:37:08.446118 38936 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6a2a6fa-653f-4e21-a4b5-09bed56ec48f-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:08.446306 master-0 kubenswrapper[38936]: I0216 21:37:08.446132 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v85g7\" (UniqueName: 
\"kubernetes.io/projected/3037cb65-febb-4854-a8ae-8f8c182a3e64-kube-api-access-v85g7\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:08.446306 master-0 kubenswrapper[38936]: I0216 21:37:08.446145 38936 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34fa09a5-23a7-4aea-946f-1005774cd8b8-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:08.446306 master-0 kubenswrapper[38936]: I0216 21:37:08.446157 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzql5\" (UniqueName: \"kubernetes.io/projected/34fa09a5-23a7-4aea-946f-1005774cd8b8-kube-api-access-mzql5\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:08.508441 master-0 kubenswrapper[38936]: I0216 21:37:08.504577 38936 scope.go:117] "RemoveContainer" containerID="a43ee4e65c0c547597917c1e95598d3467bcbdd3d990805c34b303caa3bf5378" Feb 16 21:37:08.758706 master-0 kubenswrapper[38936]: I0216 21:37:08.757843 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-n7glt"] Feb 16 21:37:08.789958 master-0 kubenswrapper[38936]: I0216 21:37:08.789880 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-n7glt"] Feb 16 21:37:09.425285 master-0 kubenswrapper[38936]: I0216 21:37:09.425211 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vprb4" event={"ID":"03eeee4e-9496-45a9-a3f8-d3a300085c91","Type":"ContainerStarted","Data":"d43ab0666a1f12b56a6a3267f24f5b65cb897417e46b4f0d42502fcef3ddd04a"} Feb 16 21:37:09.453805 master-0 kubenswrapper[38936]: I0216 21:37:09.453701 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-vprb4" podStartSLOduration=2.179379728 podStartE2EDuration="8.453678941s" podCreationTimestamp="2026-02-16 21:37:01 +0000 UTC" firstStartedPulling="2026-02-16 21:37:02.316931301 +0000 UTC m=+852.668934663" 
lastFinishedPulling="2026-02-16 21:37:08.591230514 +0000 UTC m=+858.943233876" observedRunningTime="2026-02-16 21:37:09.444138943 +0000 UTC m=+859.796142315" watchObservedRunningTime="2026-02-16 21:37:09.453678941 +0000 UTC m=+859.805682303" Feb 16 21:37:09.894176 master-0 kubenswrapper[38936]: I0216 21:37:09.894110 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3037cb65-febb-4854-a8ae-8f8c182a3e64" path="/var/lib/kubelet/pods/3037cb65-febb-4854-a8ae-8f8c182a3e64/volumes" Feb 16 21:37:10.025797 master-0 kubenswrapper[38936]: I0216 21:37:10.025675 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-hfz86" Feb 16 21:37:10.106618 master-0 kubenswrapper[38936]: I0216 21:37:10.106541 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhtsm\" (UniqueName: \"kubernetes.io/projected/4bb312b5-f96b-4689-9d30-c4c878aae0ec-kube-api-access-bhtsm\") pod \"4bb312b5-f96b-4689-9d30-c4c878aae0ec\" (UID: \"4bb312b5-f96b-4689-9d30-c4c878aae0ec\") " Feb 16 21:37:10.106888 master-0 kubenswrapper[38936]: I0216 21:37:10.106689 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bb312b5-f96b-4689-9d30-c4c878aae0ec-combined-ca-bundle\") pod \"4bb312b5-f96b-4689-9d30-c4c878aae0ec\" (UID: \"4bb312b5-f96b-4689-9d30-c4c878aae0ec\") " Feb 16 21:37:10.106954 master-0 kubenswrapper[38936]: I0216 21:37:10.106908 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bb312b5-f96b-4689-9d30-c4c878aae0ec-config-data\") pod \"4bb312b5-f96b-4689-9d30-c4c878aae0ec\" (UID: \"4bb312b5-f96b-4689-9d30-c4c878aae0ec\") " Feb 16 21:37:10.107099 master-0 kubenswrapper[38936]: I0216 21:37:10.107069 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" 
(UniqueName: \"kubernetes.io/secret/4bb312b5-f96b-4689-9d30-c4c878aae0ec-db-sync-config-data\") pod \"4bb312b5-f96b-4689-9d30-c4c878aae0ec\" (UID: \"4bb312b5-f96b-4689-9d30-c4c878aae0ec\") " Feb 16 21:37:10.110779 master-0 kubenswrapper[38936]: I0216 21:37:10.110729 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb312b5-f96b-4689-9d30-c4c878aae0ec-kube-api-access-bhtsm" (OuterVolumeSpecName: "kube-api-access-bhtsm") pod "4bb312b5-f96b-4689-9d30-c4c878aae0ec" (UID: "4bb312b5-f96b-4689-9d30-c4c878aae0ec"). InnerVolumeSpecName "kube-api-access-bhtsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:37:10.111197 master-0 kubenswrapper[38936]: I0216 21:37:10.111155 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bb312b5-f96b-4689-9d30-c4c878aae0ec-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4bb312b5-f96b-4689-9d30-c4c878aae0ec" (UID: "4bb312b5-f96b-4689-9d30-c4c878aae0ec"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:10.141824 master-0 kubenswrapper[38936]: I0216 21:37:10.141761 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bb312b5-f96b-4689-9d30-c4c878aae0ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bb312b5-f96b-4689-9d30-c4c878aae0ec" (UID: "4bb312b5-f96b-4689-9d30-c4c878aae0ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:10.158619 master-0 kubenswrapper[38936]: I0216 21:37:10.158566 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bb312b5-f96b-4689-9d30-c4c878aae0ec-config-data" (OuterVolumeSpecName: "config-data") pod "4bb312b5-f96b-4689-9d30-c4c878aae0ec" (UID: "4bb312b5-f96b-4689-9d30-c4c878aae0ec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:10.209673 master-0 kubenswrapper[38936]: I0216 21:37:10.209606 38936 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4bb312b5-f96b-4689-9d30-c4c878aae0ec-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:10.209673 master-0 kubenswrapper[38936]: I0216 21:37:10.209672 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhtsm\" (UniqueName: \"kubernetes.io/projected/4bb312b5-f96b-4689-9d30-c4c878aae0ec-kube-api-access-bhtsm\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:10.209673 master-0 kubenswrapper[38936]: I0216 21:37:10.209684 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bb312b5-f96b-4689-9d30-c4c878aae0ec-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:10.210120 master-0 kubenswrapper[38936]: I0216 21:37:10.209694 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bb312b5-f96b-4689-9d30-c4c878aae0ec-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:10.442559 master-0 kubenswrapper[38936]: I0216 21:37:10.442459 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hfz86" event={"ID":"4bb312b5-f96b-4689-9d30-c4c878aae0ec","Type":"ContainerDied","Data":"8a200353739402babbd860da2ee26e01658b6b58d49556b186b9f52e9ed4596b"} Feb 16 21:37:10.442559 master-0 kubenswrapper[38936]: I0216 21:37:10.442496 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-hfz86" Feb 16 21:37:10.443298 master-0 kubenswrapper[38936]: I0216 21:37:10.442520 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a200353739402babbd860da2ee26e01658b6b58d49556b186b9f52e9ed4596b" Feb 16 21:37:11.090538 master-0 kubenswrapper[38936]: I0216 21:37:11.089619 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb89595f5-b5ncl"] Feb 16 21:37:11.090538 master-0 kubenswrapper[38936]: E0216 21:37:11.090192 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6a2a6fa-653f-4e21-a4b5-09bed56ec48f" containerName="mariadb-database-create" Feb 16 21:37:11.090538 master-0 kubenswrapper[38936]: I0216 21:37:11.090206 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6a2a6fa-653f-4e21-a4b5-09bed56ec48f" containerName="mariadb-database-create" Feb 16 21:37:11.090538 master-0 kubenswrapper[38936]: E0216 21:37:11.090232 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3037cb65-febb-4854-a8ae-8f8c182a3e64" containerName="init" Feb 16 21:37:11.090538 master-0 kubenswrapper[38936]: I0216 21:37:11.090239 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="3037cb65-febb-4854-a8ae-8f8c182a3e64" containerName="init" Feb 16 21:37:11.090538 master-0 kubenswrapper[38936]: E0216 21:37:11.090269 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f292536-2c0c-4b52-8197-672d0636290b" containerName="ovn-config" Feb 16 21:37:11.090538 master-0 kubenswrapper[38936]: I0216 21:37:11.090276 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f292536-2c0c-4b52-8197-672d0636290b" containerName="ovn-config" Feb 16 21:37:11.090538 master-0 kubenswrapper[38936]: E0216 21:37:11.090292 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e22cd12-ef73-470e-9543-b328a46c9c0d" containerName="mariadb-account-create-update" Feb 16 21:37:11.090538 master-0 
kubenswrapper[38936]: I0216 21:37:11.090300 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e22cd12-ef73-470e-9543-b328a46c9c0d" containerName="mariadb-account-create-update" Feb 16 21:37:11.090538 master-0 kubenswrapper[38936]: E0216 21:37:11.090310 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb312b5-f96b-4689-9d30-c4c878aae0ec" containerName="glance-db-sync" Feb 16 21:37:11.090538 master-0 kubenswrapper[38936]: I0216 21:37:11.090316 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb312b5-f96b-4689-9d30-c4c878aae0ec" containerName="glance-db-sync" Feb 16 21:37:11.090538 master-0 kubenswrapper[38936]: E0216 21:37:11.090339 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34fa09a5-23a7-4aea-946f-1005774cd8b8" containerName="mariadb-account-create-update" Feb 16 21:37:11.090538 master-0 kubenswrapper[38936]: I0216 21:37:11.090345 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="34fa09a5-23a7-4aea-946f-1005774cd8b8" containerName="mariadb-account-create-update" Feb 16 21:37:11.090538 master-0 kubenswrapper[38936]: E0216 21:37:11.090357 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3037cb65-febb-4854-a8ae-8f8c182a3e64" containerName="dnsmasq-dns" Feb 16 21:37:11.090538 master-0 kubenswrapper[38936]: I0216 21:37:11.090363 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="3037cb65-febb-4854-a8ae-8f8c182a3e64" containerName="dnsmasq-dns" Feb 16 21:37:11.090538 master-0 kubenswrapper[38936]: E0216 21:37:11.090380 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5598012-498e-44d4-9bb0-8d5aadef2f5b" containerName="mariadb-database-create" Feb 16 21:37:11.090538 master-0 kubenswrapper[38936]: I0216 21:37:11.090387 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5598012-498e-44d4-9bb0-8d5aadef2f5b" containerName="mariadb-database-create" Feb 16 21:37:11.091461 master-0 kubenswrapper[38936]: I0216 21:37:11.090578 
38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bb312b5-f96b-4689-9d30-c4c878aae0ec" containerName="glance-db-sync" Feb 16 21:37:11.091461 master-0 kubenswrapper[38936]: I0216 21:37:11.090598 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="3037cb65-febb-4854-a8ae-8f8c182a3e64" containerName="dnsmasq-dns" Feb 16 21:37:11.091461 master-0 kubenswrapper[38936]: I0216 21:37:11.090610 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f292536-2c0c-4b52-8197-672d0636290b" containerName="ovn-config" Feb 16 21:37:11.091461 master-0 kubenswrapper[38936]: I0216 21:37:11.090621 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5598012-498e-44d4-9bb0-8d5aadef2f5b" containerName="mariadb-database-create" Feb 16 21:37:11.091461 master-0 kubenswrapper[38936]: I0216 21:37:11.090633 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6a2a6fa-653f-4e21-a4b5-09bed56ec48f" containerName="mariadb-database-create" Feb 16 21:37:11.091461 master-0 kubenswrapper[38936]: I0216 21:37:11.090679 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e22cd12-ef73-470e-9543-b328a46c9c0d" containerName="mariadb-account-create-update" Feb 16 21:37:11.091461 master-0 kubenswrapper[38936]: I0216 21:37:11.090700 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="34fa09a5-23a7-4aea-946f-1005774cd8b8" containerName="mariadb-account-create-update" Feb 16 21:37:11.091905 master-0 kubenswrapper[38936]: I0216 21:37:11.091877 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" Feb 16 21:37:11.127084 master-0 kubenswrapper[38936]: I0216 21:37:11.125811 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb89595f5-b5ncl"] Feb 16 21:37:11.135975 master-0 kubenswrapper[38936]: I0216 21:37:11.135903 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-ovsdbserver-nb\") pod \"dnsmasq-dns-7cb89595f5-b5ncl\" (UID: \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\") " pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" Feb 16 21:37:11.136394 master-0 kubenswrapper[38936]: I0216 21:37:11.136326 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r82nz\" (UniqueName: \"kubernetes.io/projected/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-kube-api-access-r82nz\") pod \"dnsmasq-dns-7cb89595f5-b5ncl\" (UID: \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\") " pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" Feb 16 21:37:11.136394 master-0 kubenswrapper[38936]: I0216 21:37:11.136354 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-dns-swift-storage-0\") pod \"dnsmasq-dns-7cb89595f5-b5ncl\" (UID: \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\") " pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" Feb 16 21:37:11.136678 master-0 kubenswrapper[38936]: I0216 21:37:11.136585 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-config\") pod \"dnsmasq-dns-7cb89595f5-b5ncl\" (UID: \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\") " pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" Feb 16 21:37:11.137006 master-0 kubenswrapper[38936]: 
I0216 21:37:11.136963 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-dns-svc\") pod \"dnsmasq-dns-7cb89595f5-b5ncl\" (UID: \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\") " pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" Feb 16 21:37:11.137210 master-0 kubenswrapper[38936]: I0216 21:37:11.137120 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-ovsdbserver-sb\") pod \"dnsmasq-dns-7cb89595f5-b5ncl\" (UID: \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\") " pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" Feb 16 21:37:11.240727 master-0 kubenswrapper[38936]: I0216 21:37:11.238591 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-ovsdbserver-nb\") pod \"dnsmasq-dns-7cb89595f5-b5ncl\" (UID: \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\") " pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" Feb 16 21:37:11.240727 master-0 kubenswrapper[38936]: I0216 21:37:11.238689 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r82nz\" (UniqueName: \"kubernetes.io/projected/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-kube-api-access-r82nz\") pod \"dnsmasq-dns-7cb89595f5-b5ncl\" (UID: \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\") " pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" Feb 16 21:37:11.240727 master-0 kubenswrapper[38936]: I0216 21:37:11.238724 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-dns-swift-storage-0\") pod \"dnsmasq-dns-7cb89595f5-b5ncl\" (UID: \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\") " 
pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" Feb 16 21:37:11.240727 master-0 kubenswrapper[38936]: I0216 21:37:11.238889 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-config\") pod \"dnsmasq-dns-7cb89595f5-b5ncl\" (UID: \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\") " pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" Feb 16 21:37:11.240727 master-0 kubenswrapper[38936]: I0216 21:37:11.239075 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-dns-svc\") pod \"dnsmasq-dns-7cb89595f5-b5ncl\" (UID: \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\") " pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" Feb 16 21:37:11.240727 master-0 kubenswrapper[38936]: I0216 21:37:11.239177 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-ovsdbserver-sb\") pod \"dnsmasq-dns-7cb89595f5-b5ncl\" (UID: \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\") " pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" Feb 16 21:37:11.240727 master-0 kubenswrapper[38936]: I0216 21:37:11.239833 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-ovsdbserver-nb\") pod \"dnsmasq-dns-7cb89595f5-b5ncl\" (UID: \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\") " pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" Feb 16 21:37:11.240727 master-0 kubenswrapper[38936]: I0216 21:37:11.239892 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-dns-swift-storage-0\") pod \"dnsmasq-dns-7cb89595f5-b5ncl\" (UID: \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\") " 
pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" Feb 16 21:37:11.240727 master-0 kubenswrapper[38936]: I0216 21:37:11.239937 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-config\") pod \"dnsmasq-dns-7cb89595f5-b5ncl\" (UID: \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\") " pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" Feb 16 21:37:11.240727 master-0 kubenswrapper[38936]: I0216 21:37:11.239993 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-ovsdbserver-sb\") pod \"dnsmasq-dns-7cb89595f5-b5ncl\" (UID: \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\") " pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" Feb 16 21:37:11.240727 master-0 kubenswrapper[38936]: I0216 21:37:11.240149 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-dns-svc\") pod \"dnsmasq-dns-7cb89595f5-b5ncl\" (UID: \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\") " pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" Feb 16 21:37:11.260615 master-0 kubenswrapper[38936]: I0216 21:37:11.260550 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r82nz\" (UniqueName: \"kubernetes.io/projected/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-kube-api-access-r82nz\") pod \"dnsmasq-dns-7cb89595f5-b5ncl\" (UID: \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\") " pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" Feb 16 21:37:11.437384 master-0 kubenswrapper[38936]: I0216 21:37:11.436265 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" Feb 16 21:37:11.927320 master-0 kubenswrapper[38936]: I0216 21:37:11.927233 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb89595f5-b5ncl"] Feb 16 21:37:11.928061 master-0 kubenswrapper[38936]: W0216 21:37:11.928002 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2af8c69_6de4_46fb_a872_e2ae1ad49f8b.slice/crio-000ffcb42cd563c3dae770d85a3a3e89ea1ca23b42af8231eab4e4bc4c4ea584 WatchSource:0}: Error finding container 000ffcb42cd563c3dae770d85a3a3e89ea1ca23b42af8231eab4e4bc4c4ea584: Status 404 returned error can't find the container with id 000ffcb42cd563c3dae770d85a3a3e89ea1ca23b42af8231eab4e4bc4c4ea584 Feb 16 21:37:12.471878 master-0 kubenswrapper[38936]: I0216 21:37:12.471745 38936 generic.go:334] "Generic (PLEG): container finished" podID="d2af8c69-6de4-46fb-a872-e2ae1ad49f8b" containerID="09880d4698f98cc84198ccc4b5923a4a260180e7ab9f0b56da8ba5b832a4e029" exitCode=0 Feb 16 21:37:12.471878 master-0 kubenswrapper[38936]: I0216 21:37:12.471805 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" event={"ID":"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b","Type":"ContainerDied","Data":"09880d4698f98cc84198ccc4b5923a4a260180e7ab9f0b56da8ba5b832a4e029"} Feb 16 21:37:12.471878 master-0 kubenswrapper[38936]: I0216 21:37:12.471853 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" event={"ID":"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b","Type":"ContainerStarted","Data":"000ffcb42cd563c3dae770d85a3a3e89ea1ca23b42af8231eab4e4bc4c4ea584"} Feb 16 21:37:12.728941 master-0 kubenswrapper[38936]: I0216 21:37:12.726481 38936 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6fd49994df-n7glt" podUID="3037cb65-febb-4854-a8ae-8f8c182a3e64" containerName="dnsmasq-dns" 
probeResult="failure" output="dial tcp 10.128.0.184:5353: i/o timeout"
Feb 16 21:37:13.485778 master-0 kubenswrapper[38936]: I0216 21:37:13.485703 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" event={"ID":"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b","Type":"ContainerStarted","Data":"77295fe064343a18f81f0ab656b1c0c91b10f17176bd4d1a7cd3a31ddc276f1a"}
Feb 16 21:37:13.486489 master-0 kubenswrapper[38936]: I0216 21:37:13.485880 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl"
Feb 16 21:37:13.508203 master-0 kubenswrapper[38936]: I0216 21:37:13.508115 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" podStartSLOduration=2.50809353 podStartE2EDuration="2.50809353s" podCreationTimestamp="2026-02-16 21:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:37:13.502453208 +0000 UTC m=+863.854456570" watchObservedRunningTime="2026-02-16 21:37:13.50809353 +0000 UTC m=+863.860096892"
Feb 16 21:37:14.499597 master-0 kubenswrapper[38936]: I0216 21:37:14.499538 38936 generic.go:334] "Generic (PLEG): container finished" podID="03eeee4e-9496-45a9-a3f8-d3a300085c91" containerID="d43ab0666a1f12b56a6a3267f24f5b65cb897417e46b4f0d42502fcef3ddd04a" exitCode=0
Feb 16 21:37:14.500183 master-0 kubenswrapper[38936]: I0216 21:37:14.499677 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vprb4" event={"ID":"03eeee4e-9496-45a9-a3f8-d3a300085c91","Type":"ContainerDied","Data":"d43ab0666a1f12b56a6a3267f24f5b65cb897417e46b4f0d42502fcef3ddd04a"}
Feb 16 21:37:15.947776 master-0 kubenswrapper[38936]: I0216 21:37:15.947720 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vprb4"
Feb 16 21:37:16.054087 master-0 kubenswrapper[38936]: I0216 21:37:16.053995 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03eeee4e-9496-45a9-a3f8-d3a300085c91-combined-ca-bundle\") pod \"03eeee4e-9496-45a9-a3f8-d3a300085c91\" (UID: \"03eeee4e-9496-45a9-a3f8-d3a300085c91\") "
Feb 16 21:37:16.054332 master-0 kubenswrapper[38936]: I0216 21:37:16.054217 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03eeee4e-9496-45a9-a3f8-d3a300085c91-config-data\") pod \"03eeee4e-9496-45a9-a3f8-d3a300085c91\" (UID: \"03eeee4e-9496-45a9-a3f8-d3a300085c91\") "
Feb 16 21:37:16.054332 master-0 kubenswrapper[38936]: I0216 21:37:16.054243 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtpmv\" (UniqueName: \"kubernetes.io/projected/03eeee4e-9496-45a9-a3f8-d3a300085c91-kube-api-access-xtpmv\") pod \"03eeee4e-9496-45a9-a3f8-d3a300085c91\" (UID: \"03eeee4e-9496-45a9-a3f8-d3a300085c91\") "
Feb 16 21:37:16.057670 master-0 kubenswrapper[38936]: I0216 21:37:16.057615 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03eeee4e-9496-45a9-a3f8-d3a300085c91-kube-api-access-xtpmv" (OuterVolumeSpecName: "kube-api-access-xtpmv") pod "03eeee4e-9496-45a9-a3f8-d3a300085c91" (UID: "03eeee4e-9496-45a9-a3f8-d3a300085c91"). InnerVolumeSpecName "kube-api-access-xtpmv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:37:16.083468 master-0 kubenswrapper[38936]: I0216 21:37:16.083403 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03eeee4e-9496-45a9-a3f8-d3a300085c91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03eeee4e-9496-45a9-a3f8-d3a300085c91" (UID: "03eeee4e-9496-45a9-a3f8-d3a300085c91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:37:16.110348 master-0 kubenswrapper[38936]: I0216 21:37:16.110205 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03eeee4e-9496-45a9-a3f8-d3a300085c91-config-data" (OuterVolumeSpecName: "config-data") pod "03eeee4e-9496-45a9-a3f8-d3a300085c91" (UID: "03eeee4e-9496-45a9-a3f8-d3a300085c91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:37:16.157489 master-0 kubenswrapper[38936]: I0216 21:37:16.157410 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03eeee4e-9496-45a9-a3f8-d3a300085c91-config-data\") on node \"master-0\" DevicePath \"\""
Feb 16 21:37:16.157489 master-0 kubenswrapper[38936]: I0216 21:37:16.157489 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtpmv\" (UniqueName: \"kubernetes.io/projected/03eeee4e-9496-45a9-a3f8-d3a300085c91-kube-api-access-xtpmv\") on node \"master-0\" DevicePath \"\""
Feb 16 21:37:16.157803 master-0 kubenswrapper[38936]: I0216 21:37:16.157518 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03eeee4e-9496-45a9-a3f8-d3a300085c91-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 21:37:16.528430 master-0 kubenswrapper[38936]: I0216 21:37:16.528383 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vprb4" event={"ID":"03eeee4e-9496-45a9-a3f8-d3a300085c91","Type":"ContainerDied","Data":"0b1edbc91564286a1779a558779691f0c60f02f5e23a981db5ca41e3806b1d03"}
Feb 16 21:37:16.528430 master-0 kubenswrapper[38936]: I0216 21:37:16.528431 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b1edbc91564286a1779a558779691f0c60f02f5e23a981db5ca41e3806b1d03"
Feb 16 21:37:16.528737 master-0 kubenswrapper[38936]: I0216 21:37:16.528469 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vprb4"
Feb 16 21:37:16.938675 master-0 kubenswrapper[38936]: I0216 21:37:16.934513 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xxk4w"]
Feb 16 21:37:16.938675 master-0 kubenswrapper[38936]: E0216 21:37:16.935425 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03eeee4e-9496-45a9-a3f8-d3a300085c91" containerName="keystone-db-sync"
Feb 16 21:37:16.938675 master-0 kubenswrapper[38936]: I0216 21:37:16.935463 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="03eeee4e-9496-45a9-a3f8-d3a300085c91" containerName="keystone-db-sync"
Feb 16 21:37:16.938675 master-0 kubenswrapper[38936]: I0216 21:37:16.935773 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="03eeee4e-9496-45a9-a3f8-d3a300085c91" containerName="keystone-db-sync"
Feb 16 21:37:16.938675 master-0 kubenswrapper[38936]: I0216 21:37:16.936863 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xxk4w"
Feb 16 21:37:16.955934 master-0 kubenswrapper[38936]: I0216 21:37:16.949692 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 16 21:37:16.955934 master-0 kubenswrapper[38936]: I0216 21:37:16.949769 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 16 21:37:16.955934 master-0 kubenswrapper[38936]: I0216 21:37:16.950123 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 16 21:37:16.955934 master-0 kubenswrapper[38936]: I0216 21:37:16.950270 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 16 21:37:16.981013 master-0 kubenswrapper[38936]: I0216 21:37:16.979513 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-fernet-keys\") pod \"keystone-bootstrap-xxk4w\" (UID: \"432e676b-bab7-4991-8605-0eed0c6a0b2c\") " pod="openstack/keystone-bootstrap-xxk4w"
Feb 16 21:37:16.981013 master-0 kubenswrapper[38936]: I0216 21:37:16.979671 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-scripts\") pod \"keystone-bootstrap-xxk4w\" (UID: \"432e676b-bab7-4991-8605-0eed0c6a0b2c\") " pod="openstack/keystone-bootstrap-xxk4w"
Feb 16 21:37:16.981013 master-0 kubenswrapper[38936]: I0216 21:37:16.979759 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h86dc\" (UniqueName: \"kubernetes.io/projected/432e676b-bab7-4991-8605-0eed0c6a0b2c-kube-api-access-h86dc\") pod \"keystone-bootstrap-xxk4w\" (UID: \"432e676b-bab7-4991-8605-0eed0c6a0b2c\") " pod="openstack/keystone-bootstrap-xxk4w"
Feb 16 21:37:16.981013 master-0 kubenswrapper[38936]: I0216 21:37:16.979806 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-combined-ca-bundle\") pod \"keystone-bootstrap-xxk4w\" (UID: \"432e676b-bab7-4991-8605-0eed0c6a0b2c\") " pod="openstack/keystone-bootstrap-xxk4w"
Feb 16 21:37:16.981013 master-0 kubenswrapper[38936]: I0216 21:37:16.979839 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-credential-keys\") pod \"keystone-bootstrap-xxk4w\" (UID: \"432e676b-bab7-4991-8605-0eed0c6a0b2c\") " pod="openstack/keystone-bootstrap-xxk4w"
Feb 16 21:37:16.981013 master-0 kubenswrapper[38936]: I0216 21:37:16.979872 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-config-data\") pod \"keystone-bootstrap-xxk4w\" (UID: \"432e676b-bab7-4991-8605-0eed0c6a0b2c\") " pod="openstack/keystone-bootstrap-xxk4w"
Feb 16 21:37:17.029474 master-0 kubenswrapper[38936]: I0216 21:37:17.027819 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb89595f5-b5ncl"]
Feb 16 21:37:17.029474 master-0 kubenswrapper[38936]: I0216 21:37:17.028527 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" podUID="d2af8c69-6de4-46fb-a872-e2ae1ad49f8b" containerName="dnsmasq-dns" containerID="cri-o://77295fe064343a18f81f0ab656b1c0c91b10f17176bd4d1a7cd3a31ddc276f1a" gracePeriod=10
Feb 16 21:37:17.040575 master-0 kubenswrapper[38936]: I0216 21:37:17.040438 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl"
Feb 16 21:37:17.064989 master-0 kubenswrapper[38936]: I0216 21:37:17.064916 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xxk4w"]
Feb 16 21:37:17.081478 master-0 kubenswrapper[38936]: I0216 21:37:17.081388 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-config-data\") pod \"keystone-bootstrap-xxk4w\" (UID: \"432e676b-bab7-4991-8605-0eed0c6a0b2c\") " pod="openstack/keystone-bootstrap-xxk4w"
Feb 16 21:37:17.081478 master-0 kubenswrapper[38936]: I0216 21:37:17.081489 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-fernet-keys\") pod \"keystone-bootstrap-xxk4w\" (UID: \"432e676b-bab7-4991-8605-0eed0c6a0b2c\") " pod="openstack/keystone-bootstrap-xxk4w"
Feb 16 21:37:17.081805 master-0 kubenswrapper[38936]: I0216 21:37:17.081605 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-scripts\") pod \"keystone-bootstrap-xxk4w\" (UID: \"432e676b-bab7-4991-8605-0eed0c6a0b2c\") " pod="openstack/keystone-bootstrap-xxk4w"
Feb 16 21:37:17.081805 master-0 kubenswrapper[38936]: I0216 21:37:17.081725 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h86dc\" (UniqueName: \"kubernetes.io/projected/432e676b-bab7-4991-8605-0eed0c6a0b2c-kube-api-access-h86dc\") pod \"keystone-bootstrap-xxk4w\" (UID: \"432e676b-bab7-4991-8605-0eed0c6a0b2c\") " pod="openstack/keystone-bootstrap-xxk4w"
Feb 16 21:37:17.081805 master-0 kubenswrapper[38936]: I0216 21:37:17.081780 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-combined-ca-bundle\") pod \"keystone-bootstrap-xxk4w\" (UID: \"432e676b-bab7-4991-8605-0eed0c6a0b2c\") " pod="openstack/keystone-bootstrap-xxk4w"
Feb 16 21:37:17.081948 master-0 kubenswrapper[38936]: I0216 21:37:17.081835 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-credential-keys\") pod \"keystone-bootstrap-xxk4w\" (UID: \"432e676b-bab7-4991-8605-0eed0c6a0b2c\") " pod="openstack/keystone-bootstrap-xxk4w"
Feb 16 21:37:17.088003 master-0 kubenswrapper[38936]: I0216 21:37:17.087947 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-credential-keys\") pod \"keystone-bootstrap-xxk4w\" (UID: \"432e676b-bab7-4991-8605-0eed0c6a0b2c\") " pod="openstack/keystone-bootstrap-xxk4w"
Feb 16 21:37:17.090608 master-0 kubenswrapper[38936]: I0216 21:37:17.090566 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-scripts\") pod \"keystone-bootstrap-xxk4w\" (UID: \"432e676b-bab7-4991-8605-0eed0c6a0b2c\") " pod="openstack/keystone-bootstrap-xxk4w"
Feb 16 21:37:17.090795 master-0 kubenswrapper[38936]: I0216 21:37:17.090690 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-config-data\") pod \"keystone-bootstrap-xxk4w\" (UID: \"432e676b-bab7-4991-8605-0eed0c6a0b2c\") " pod="openstack/keystone-bootstrap-xxk4w"
Feb 16 21:37:17.096842 master-0 kubenswrapper[38936]: I0216 21:37:17.095290 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-combined-ca-bundle\") pod \"keystone-bootstrap-xxk4w\" (UID: \"432e676b-bab7-4991-8605-0eed0c6a0b2c\") " pod="openstack/keystone-bootstrap-xxk4w"
Feb 16 21:37:17.100136 master-0 kubenswrapper[38936]: I0216 21:37:17.100091 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-fernet-keys\") pod \"keystone-bootstrap-xxk4w\" (UID: \"432e676b-bab7-4991-8605-0eed0c6a0b2c\") " pod="openstack/keystone-bootstrap-xxk4w"
Feb 16 21:37:17.123903 master-0 kubenswrapper[38936]: I0216 21:37:17.123169 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77dfb8866c-gv2qv"]
Feb 16 21:37:17.131836 master-0 kubenswrapper[38936]: I0216 21:37:17.131458 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77dfb8866c-gv2qv"
Feb 16 21:37:17.139446 master-0 kubenswrapper[38936]: I0216 21:37:17.139197 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h86dc\" (UniqueName: \"kubernetes.io/projected/432e676b-bab7-4991-8605-0eed0c6a0b2c-kube-api-access-h86dc\") pod \"keystone-bootstrap-xxk4w\" (UID: \"432e676b-bab7-4991-8605-0eed0c6a0b2c\") " pod="openstack/keystone-bootstrap-xxk4w"
Feb 16 21:37:17.192095 master-0 kubenswrapper[38936]: I0216 21:37:17.192055 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77dfb8866c-gv2qv"]
Feb 16 21:37:17.192186 master-0 kubenswrapper[38936]: I0216 21:37:17.192081 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-ovsdbserver-nb\") pod \"dnsmasq-dns-77dfb8866c-gv2qv\" (UID: \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\") " pod="openstack/dnsmasq-dns-77dfb8866c-gv2qv"
Feb 16 21:37:17.192246 master-0 kubenswrapper[38936]: I0216 21:37:17.192224 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-config\") pod \"dnsmasq-dns-77dfb8866c-gv2qv\" (UID: \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\") " pod="openstack/dnsmasq-dns-77dfb8866c-gv2qv"
Feb 16 21:37:17.192293 master-0 kubenswrapper[38936]: I0216 21:37:17.192278 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-dns-swift-storage-0\") pod \"dnsmasq-dns-77dfb8866c-gv2qv\" (UID: \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\") " pod="openstack/dnsmasq-dns-77dfb8866c-gv2qv"
Feb 16 21:37:17.192327 master-0 kubenswrapper[38936]: I0216 21:37:17.192301 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-dns-svc\") pod \"dnsmasq-dns-77dfb8866c-gv2qv\" (UID: \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\") " pod="openstack/dnsmasq-dns-77dfb8866c-gv2qv"
Feb 16 21:37:17.192383 master-0 kubenswrapper[38936]: I0216 21:37:17.192365 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5xjs\" (UniqueName: \"kubernetes.io/projected/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-kube-api-access-f5xjs\") pod \"dnsmasq-dns-77dfb8866c-gv2qv\" (UID: \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\") " pod="openstack/dnsmasq-dns-77dfb8866c-gv2qv"
Feb 16 21:37:17.192478 master-0 kubenswrapper[38936]: I0216 21:37:17.192459 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-ovsdbserver-sb\") pod \"dnsmasq-dns-77dfb8866c-gv2qv\" (UID: \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\") " pod="openstack/dnsmasq-dns-77dfb8866c-gv2qv"
Feb 16 21:37:17.272074 master-0 kubenswrapper[38936]: I0216 21:37:17.271316 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xxk4w"
Feb 16 21:37:17.306740 master-0 kubenswrapper[38936]: I0216 21:37:17.297473 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-config\") pod \"dnsmasq-dns-77dfb8866c-gv2qv\" (UID: \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\") " pod="openstack/dnsmasq-dns-77dfb8866c-gv2qv"
Feb 16 21:37:17.306740 master-0 kubenswrapper[38936]: I0216 21:37:17.297531 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-dns-swift-storage-0\") pod \"dnsmasq-dns-77dfb8866c-gv2qv\" (UID: \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\") " pod="openstack/dnsmasq-dns-77dfb8866c-gv2qv"
Feb 16 21:37:17.306740 master-0 kubenswrapper[38936]: I0216 21:37:17.297552 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-dns-svc\") pod \"dnsmasq-dns-77dfb8866c-gv2qv\" (UID: \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\") " pod="openstack/dnsmasq-dns-77dfb8866c-gv2qv"
Feb 16 21:37:17.306740 master-0 kubenswrapper[38936]: I0216 21:37:17.297578 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5xjs\" (UniqueName: \"kubernetes.io/projected/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-kube-api-access-f5xjs\") pod \"dnsmasq-dns-77dfb8866c-gv2qv\" (UID: \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\") " pod="openstack/dnsmasq-dns-77dfb8866c-gv2qv"
Feb 16 21:37:17.306740 master-0 kubenswrapper[38936]: I0216 21:37:17.297628 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-ovsdbserver-sb\") pod \"dnsmasq-dns-77dfb8866c-gv2qv\" (UID: \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\") " pod="openstack/dnsmasq-dns-77dfb8866c-gv2qv"
Feb 16 21:37:17.306740 master-0 kubenswrapper[38936]: I0216 21:37:17.297724 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-ovsdbserver-nb\") pod \"dnsmasq-dns-77dfb8866c-gv2qv\" (UID: \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\") " pod="openstack/dnsmasq-dns-77dfb8866c-gv2qv"
Feb 16 21:37:17.306740 master-0 kubenswrapper[38936]: I0216 21:37:17.298760 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-ovsdbserver-nb\") pod \"dnsmasq-dns-77dfb8866c-gv2qv\" (UID: \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\") " pod="openstack/dnsmasq-dns-77dfb8866c-gv2qv"
Feb 16 21:37:17.306740 master-0 kubenswrapper[38936]: I0216 21:37:17.300523 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-dns-svc\") pod \"dnsmasq-dns-77dfb8866c-gv2qv\" (UID: \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\") " pod="openstack/dnsmasq-dns-77dfb8866c-gv2qv"
Feb 16 21:37:17.306740 master-0 kubenswrapper[38936]: I0216 21:37:17.301194 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-dns-swift-storage-0\") pod \"dnsmasq-dns-77dfb8866c-gv2qv\" (UID: \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\") " pod="openstack/dnsmasq-dns-77dfb8866c-gv2qv"
Feb 16 21:37:17.306740 master-0 kubenswrapper[38936]: I0216 21:37:17.302104 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-ovsdbserver-sb\") pod \"dnsmasq-dns-77dfb8866c-gv2qv\" (UID: \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\") " pod="openstack/dnsmasq-dns-77dfb8866c-gv2qv"
Feb 16 21:37:17.306740 master-0 kubenswrapper[38936]: I0216 21:37:17.305261 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-config\") pod \"dnsmasq-dns-77dfb8866c-gv2qv\" (UID: \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\") " pod="openstack/dnsmasq-dns-77dfb8866c-gv2qv"
Feb 16 21:37:17.396067 master-0 kubenswrapper[38936]: I0216 21:37:17.389532 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5xjs\" (UniqueName: \"kubernetes.io/projected/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-kube-api-access-f5xjs\") pod \"dnsmasq-dns-77dfb8866c-gv2qv\" (UID: \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\") " pod="openstack/dnsmasq-dns-77dfb8866c-gv2qv"
Feb 16 21:37:17.396067 master-0 kubenswrapper[38936]: I0216 21:37:17.389610 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-create-whl9t"]
Feb 16 21:37:17.396067 master-0 kubenswrapper[38936]: I0216 21:37:17.392955 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-whl9t"
Feb 16 21:37:17.417325 master-0 kubenswrapper[38936]: I0216 21:37:17.417096 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-znszx"]
Feb 16 21:37:17.420689 master-0 kubenswrapper[38936]: I0216 21:37:17.419024 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-znszx"
Feb 16 21:37:17.424690 master-0 kubenswrapper[38936]: I0216 21:37:17.424157 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 16 21:37:17.424690 master-0 kubenswrapper[38936]: I0216 21:37:17.424445 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 16 21:37:17.428445 master-0 kubenswrapper[38936]: I0216 21:37:17.428392 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-whl9t"]
Feb 16 21:37:17.442328 master-0 kubenswrapper[38936]: I0216 21:37:17.442266 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-09d0-account-create-update-js9dq"]
Feb 16 21:37:17.443999 master-0 kubenswrapper[38936]: I0216 21:37:17.443963 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-09d0-account-create-update-js9dq"
Feb 16 21:37:17.448788 master-0 kubenswrapper[38936]: I0216 21:37:17.448748 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-db-secret"
Feb 16 21:37:17.450083 master-0 kubenswrapper[38936]: I0216 21:37:17.450060 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-znszx"]
Feb 16 21:37:17.478680 master-0 kubenswrapper[38936]: I0216 21:37:17.476783 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-09d0-account-create-update-js9dq"]
Feb 16 21:37:17.491765 master-0 kubenswrapper[38936]: I0216 21:37:17.488278 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-9c692-db-sync-r9pqq"]
Feb 16 21:37:17.495819 master-0 kubenswrapper[38936]: I0216 21:37:17.493152 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9c692-db-sync-r9pqq"
Feb 16 21:37:17.516894 master-0 kubenswrapper[38936]: I0216 21:37:17.516815 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptd55\" (UniqueName: \"kubernetes.io/projected/ae3f7123-0f56-47f9-afdb-cc6bff73ecd3-kube-api-access-ptd55\") pod \"neutron-db-sync-znszx\" (UID: \"ae3f7123-0f56-47f9-afdb-cc6bff73ecd3\") " pod="openstack/neutron-db-sync-znszx"
Feb 16 21:37:17.517154 master-0 kubenswrapper[38936]: I0216 21:37:17.516988 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx79d\" (UniqueName: \"kubernetes.io/projected/00b83769-f6e4-4403-aada-d460148bb289-kube-api-access-kx79d\") pod \"ironic-09d0-account-create-update-js9dq\" (UID: \"00b83769-f6e4-4403-aada-d460148bb289\") " pod="openstack/ironic-09d0-account-create-update-js9dq"
Feb 16 21:37:17.517154 master-0 kubenswrapper[38936]: I0216 21:37:17.517133 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ae3f7123-0f56-47f9-afdb-cc6bff73ecd3-config\") pod \"neutron-db-sync-znszx\" (UID: \"ae3f7123-0f56-47f9-afdb-cc6bff73ecd3\") " pod="openstack/neutron-db-sync-znszx"
Feb 16 21:37:17.517395 master-0 kubenswrapper[38936]: I0216 21:37:17.517369 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8bxs\" (UniqueName: \"kubernetes.io/projected/b54cd98b-c06e-486b-951e-93d5c8477416-kube-api-access-v8bxs\") pod \"ironic-db-create-whl9t\" (UID: \"b54cd98b-c06e-486b-951e-93d5c8477416\") " pod="openstack/ironic-db-create-whl9t"
Feb 16 21:37:17.517460 master-0 kubenswrapper[38936]: I0216 21:37:17.517426 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00b83769-f6e4-4403-aada-d460148bb289-operator-scripts\") pod \"ironic-09d0-account-create-update-js9dq\" (UID: \"00b83769-f6e4-4403-aada-d460148bb289\") " pod="openstack/ironic-09d0-account-create-update-js9dq"
Feb 16 21:37:17.517460 master-0 kubenswrapper[38936]: I0216 21:37:17.517452 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b54cd98b-c06e-486b-951e-93d5c8477416-operator-scripts\") pod \"ironic-db-create-whl9t\" (UID: \"b54cd98b-c06e-486b-951e-93d5c8477416\") " pod="openstack/ironic-db-create-whl9t"
Feb 16 21:37:17.517566 master-0 kubenswrapper[38936]: I0216 21:37:17.517507 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3f7123-0f56-47f9-afdb-cc6bff73ecd3-combined-ca-bundle\") pod \"neutron-db-sync-znszx\" (UID: \"ae3f7123-0f56-47f9-afdb-cc6bff73ecd3\") " pod="openstack/neutron-db-sync-znszx"
Feb 16 21:37:17.519955 master-0 kubenswrapper[38936]: I0216 21:37:17.519874 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-9c692-config-data"
Feb 16 21:37:17.520206 master-0 kubenswrapper[38936]: I0216 21:37:17.520184 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-9c692-scripts"
Feb 16 21:37:17.523566 master-0 kubenswrapper[38936]: I0216 21:37:17.523511 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9c692-db-sync-r9pqq"]
Feb 16 21:37:17.590197 master-0 kubenswrapper[38936]: I0216 21:37:17.590132 38936 generic.go:334] "Generic (PLEG): container finished" podID="d2af8c69-6de4-46fb-a872-e2ae1ad49f8b" containerID="77295fe064343a18f81f0ab656b1c0c91b10f17176bd4d1a7cd3a31ddc276f1a" exitCode=0
Feb 16 21:37:17.590375 master-0 kubenswrapper[38936]: I0216 21:37:17.590334 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" event={"ID":"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b","Type":"ContainerDied","Data":"77295fe064343a18f81f0ab656b1c0c91b10f17176bd4d1a7cd3a31ddc276f1a"}
Feb 16 21:37:17.609268 master-0 kubenswrapper[38936]: I0216 21:37:17.609228 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-7xpzq"]
Feb 16 21:37:17.610897 master-0 kubenswrapper[38936]: I0216 21:37:17.610865 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7xpzq"
Feb 16 21:37:17.611685 master-0 kubenswrapper[38936]: I0216 21:37:17.611618 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77dfb8866c-gv2qv"
Feb 16 21:37:17.620955 master-0 kubenswrapper[38936]: I0216 21:37:17.614788 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Feb 16 21:37:17.620955 master-0 kubenswrapper[38936]: I0216 21:37:17.614974 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Feb 16 21:37:17.621759 master-0 kubenswrapper[38936]: I0216 21:37:17.621414 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c56a35-a711-40a4-9428-031faf014af4-combined-ca-bundle\") pod \"cinder-9c692-db-sync-r9pqq\" (UID: \"a7c56a35-a711-40a4-9428-031faf014af4\") " pod="openstack/cinder-9c692-db-sync-r9pqq"
Feb 16 21:37:17.621759 master-0 kubenswrapper[38936]: I0216 21:37:17.621497 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8bxs\" (UniqueName: \"kubernetes.io/projected/b54cd98b-c06e-486b-951e-93d5c8477416-kube-api-access-v8bxs\") pod \"ironic-db-create-whl9t\" (UID: \"b54cd98b-c06e-486b-951e-93d5c8477416\") " pod="openstack/ironic-db-create-whl9t"
Feb 16 21:37:17.621759 master-0 kubenswrapper[38936]: I0216 21:37:17.621559 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00b83769-f6e4-4403-aada-d460148bb289-operator-scripts\") pod \"ironic-09d0-account-create-update-js9dq\" (UID: \"00b83769-f6e4-4403-aada-d460148bb289\") " pod="openstack/ironic-09d0-account-create-update-js9dq"
Feb 16 21:37:17.623964 master-0 kubenswrapper[38936]: I0216 21:37:17.622763 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b54cd98b-c06e-486b-951e-93d5c8477416-operator-scripts\") pod \"ironic-db-create-whl9t\" (UID: \"b54cd98b-c06e-486b-951e-93d5c8477416\") " pod="openstack/ironic-db-create-whl9t"
Feb 16 21:37:17.623964 master-0 kubenswrapper[38936]: I0216 21:37:17.622847 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7c56a35-a711-40a4-9428-031faf014af4-config-data\") pod \"cinder-9c692-db-sync-r9pqq\" (UID: \"a7c56a35-a711-40a4-9428-031faf014af4\") " pod="openstack/cinder-9c692-db-sync-r9pqq"
Feb 16 21:37:17.623964 master-0 kubenswrapper[38936]: I0216 21:37:17.622927 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3f7123-0f56-47f9-afdb-cc6bff73ecd3-combined-ca-bundle\") pod \"neutron-db-sync-znszx\" (UID: \"ae3f7123-0f56-47f9-afdb-cc6bff73ecd3\") " pod="openstack/neutron-db-sync-znszx"
Feb 16 21:37:17.623964 master-0 kubenswrapper[38936]: I0216 21:37:17.622965 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7c56a35-a711-40a4-9428-031faf014af4-db-sync-config-data\") pod \"cinder-9c692-db-sync-r9pqq\" (UID: \"a7c56a35-a711-40a4-9428-031faf014af4\") " pod="openstack/cinder-9c692-db-sync-r9pqq"
Feb 16 21:37:17.623964 master-0 kubenswrapper[38936]: I0216 21:37:17.623009 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptd55\" (UniqueName: \"kubernetes.io/projected/ae3f7123-0f56-47f9-afdb-cc6bff73ecd3-kube-api-access-ptd55\") pod \"neutron-db-sync-znszx\" (UID: \"ae3f7123-0f56-47f9-afdb-cc6bff73ecd3\") " pod="openstack/neutron-db-sync-znszx"
Feb 16 21:37:17.623964 master-0 kubenswrapper[38936]: I0216 21:37:17.623075 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a7c56a35-a711-40a4-9428-031faf014af4-etc-machine-id\") pod \"cinder-9c692-db-sync-r9pqq\" (UID: \"a7c56a35-a711-40a4-9428-031faf014af4\") " pod="openstack/cinder-9c692-db-sync-r9pqq"
Feb 16 21:37:17.623964 master-0 kubenswrapper[38936]: I0216 21:37:17.623097 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx79d\" (UniqueName: \"kubernetes.io/projected/00b83769-f6e4-4403-aada-d460148bb289-kube-api-access-kx79d\") pod \"ironic-09d0-account-create-update-js9dq\" (UID: \"00b83769-f6e4-4403-aada-d460148bb289\") " pod="openstack/ironic-09d0-account-create-update-js9dq"
Feb 16 21:37:17.623964 master-0 kubenswrapper[38936]: I0216 21:37:17.623178 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ae3f7123-0f56-47f9-afdb-cc6bff73ecd3-config\") pod \"neutron-db-sync-znszx\" (UID: \"ae3f7123-0f56-47f9-afdb-cc6bff73ecd3\") " pod="openstack/neutron-db-sync-znszx"
Feb 16 21:37:17.623964 master-0 kubenswrapper[38936]: I0216 21:37:17.623213 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvrnj\" (UniqueName: \"kubernetes.io/projected/a7c56a35-a711-40a4-9428-031faf014af4-kube-api-access-lvrnj\") pod \"cinder-9c692-db-sync-r9pqq\" (UID: \"a7c56a35-a711-40a4-9428-031faf014af4\") " pod="openstack/cinder-9c692-db-sync-r9pqq"
Feb 16 21:37:17.623964 master-0 kubenswrapper[38936]: I0216 21:37:17.623278 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7c56a35-a711-40a4-9428-031faf014af4-scripts\") pod \"cinder-9c692-db-sync-r9pqq\" (UID: \"a7c56a35-a711-40a4-9428-031faf014af4\") " pod="openstack/cinder-9c692-db-sync-r9pqq"
Feb 16 21:37:17.625044 master-0 kubenswrapper[38936]: I0216 21:37:17.624099 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b54cd98b-c06e-486b-951e-93d5c8477416-operator-scripts\") pod \"ironic-db-create-whl9t\" (UID: \"b54cd98b-c06e-486b-951e-93d5c8477416\") " pod="openstack/ironic-db-create-whl9t"
Feb 16 21:37:17.625430 master-0 kubenswrapper[38936]: I0216 21:37:17.625169 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00b83769-f6e4-4403-aada-d460148bb289-operator-scripts\") pod \"ironic-09d0-account-create-update-js9dq\" (UID: \"00b83769-f6e4-4403-aada-d460148bb289\") " pod="openstack/ironic-09d0-account-create-update-js9dq"
Feb 16 21:37:17.633830 master-0 kubenswrapper[38936]: I0216 21:37:17.633787 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ae3f7123-0f56-47f9-afdb-cc6bff73ecd3-config\") pod \"neutron-db-sync-znszx\" (UID: \"ae3f7123-0f56-47f9-afdb-cc6bff73ecd3\") " pod="openstack/neutron-db-sync-znszx"
Feb 16 21:37:17.667920 master-0 kubenswrapper[38936]: I0216 21:37:17.649823 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7xpzq"]
Feb 16 21:37:17.667920 master-0 kubenswrapper[38936]: I0216 21:37:17.661111 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx79d\" (UniqueName: \"kubernetes.io/projected/00b83769-f6e4-4403-aada-d460148bb289-kube-api-access-kx79d\") pod \"ironic-09d0-account-create-update-js9dq\" (UID: \"00b83769-f6e4-4403-aada-d460148bb289\") " pod="openstack/ironic-09d0-account-create-update-js9dq"
Feb 16 21:37:17.672766 master-0 kubenswrapper[38936]: I0216 21:37:17.670349 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8bxs\" (UniqueName: \"kubernetes.io/projected/b54cd98b-c06e-486b-951e-93d5c8477416-kube-api-access-v8bxs\") pod \"ironic-db-create-whl9t\" (UID: \"b54cd98b-c06e-486b-951e-93d5c8477416\") " pod="openstack/ironic-db-create-whl9t"
Feb 16 21:37:17.686127 master-0 kubenswrapper[38936]: I0216 21:37:17.686081 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptd55\" (UniqueName: \"kubernetes.io/projected/ae3f7123-0f56-47f9-afdb-cc6bff73ecd3-kube-api-access-ptd55\") pod \"neutron-db-sync-znszx\" (UID: \"ae3f7123-0f56-47f9-afdb-cc6bff73ecd3\") " pod="openstack/neutron-db-sync-znszx"
Feb 16 21:37:17.687605 master-0 kubenswrapper[38936]: I0216 21:37:17.687187 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3f7123-0f56-47f9-afdb-cc6bff73ecd3-combined-ca-bundle\") pod \"neutron-db-sync-znszx\" (UID: \"ae3f7123-0f56-47f9-afdb-cc6bff73ecd3\") " pod="openstack/neutron-db-sync-znszx"
Feb 16 21:37:17.696963 master-0 kubenswrapper[38936]: I0216 21:37:17.696831 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77dfb8866c-gv2qv"]
Feb 16 21:37:17.719812 master-0 kubenswrapper[38936]: I0216 21:37:17.719029 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78bc59585f-clvzn"]
Feb 16 21:37:17.723795 master-0 kubenswrapper[38936]: I0216 21:37:17.721682 38936 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-78bc59585f-clvzn" Feb 16 21:37:17.732397 master-0 kubenswrapper[38936]: I0216 21:37:17.725077 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eca578f4-3095-4770-9cdf-5702cdf8540b-logs\") pod \"placement-db-sync-7xpzq\" (UID: \"eca578f4-3095-4770-9cdf-5702cdf8540b\") " pod="openstack/placement-db-sync-7xpzq" Feb 16 21:37:17.732397 master-0 kubenswrapper[38936]: I0216 21:37:17.725139 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7c56a35-a711-40a4-9428-031faf014af4-scripts\") pod \"cinder-9c692-db-sync-r9pqq\" (UID: \"a7c56a35-a711-40a4-9428-031faf014af4\") " pod="openstack/cinder-9c692-db-sync-r9pqq" Feb 16 21:37:17.732397 master-0 kubenswrapper[38936]: I0216 21:37:17.725304 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eca578f4-3095-4770-9cdf-5702cdf8540b-config-data\") pod \"placement-db-sync-7xpzq\" (UID: \"eca578f4-3095-4770-9cdf-5702cdf8540b\") " pod="openstack/placement-db-sync-7xpzq" Feb 16 21:37:17.732397 master-0 kubenswrapper[38936]: I0216 21:37:17.725388 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c56a35-a711-40a4-9428-031faf014af4-combined-ca-bundle\") pod \"cinder-9c692-db-sync-r9pqq\" (UID: \"a7c56a35-a711-40a4-9428-031faf014af4\") " pod="openstack/cinder-9c692-db-sync-r9pqq" Feb 16 21:37:17.732397 master-0 kubenswrapper[38936]: I0216 21:37:17.725586 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkqq4\" (UniqueName: \"kubernetes.io/projected/eca578f4-3095-4770-9cdf-5702cdf8540b-kube-api-access-nkqq4\") pod \"placement-db-sync-7xpzq\" (UID: 
\"eca578f4-3095-4770-9cdf-5702cdf8540b\") " pod="openstack/placement-db-sync-7xpzq" Feb 16 21:37:17.732397 master-0 kubenswrapper[38936]: I0216 21:37:17.725636 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7c56a35-a711-40a4-9428-031faf014af4-config-data\") pod \"cinder-9c692-db-sync-r9pqq\" (UID: \"a7c56a35-a711-40a4-9428-031faf014af4\") " pod="openstack/cinder-9c692-db-sync-r9pqq" Feb 16 21:37:17.732397 master-0 kubenswrapper[38936]: I0216 21:37:17.725733 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7c56a35-a711-40a4-9428-031faf014af4-db-sync-config-data\") pod \"cinder-9c692-db-sync-r9pqq\" (UID: \"a7c56a35-a711-40a4-9428-031faf014af4\") " pod="openstack/cinder-9c692-db-sync-r9pqq" Feb 16 21:37:17.732397 master-0 kubenswrapper[38936]: I0216 21:37:17.725836 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a7c56a35-a711-40a4-9428-031faf014af4-etc-machine-id\") pod \"cinder-9c692-db-sync-r9pqq\" (UID: \"a7c56a35-a711-40a4-9428-031faf014af4\") " pod="openstack/cinder-9c692-db-sync-r9pqq" Feb 16 21:37:17.732397 master-0 kubenswrapper[38936]: I0216 21:37:17.725958 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eca578f4-3095-4770-9cdf-5702cdf8540b-combined-ca-bundle\") pod \"placement-db-sync-7xpzq\" (UID: \"eca578f4-3095-4770-9cdf-5702cdf8540b\") " pod="openstack/placement-db-sync-7xpzq" Feb 16 21:37:17.732397 master-0 kubenswrapper[38936]: I0216 21:37:17.726019 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eca578f4-3095-4770-9cdf-5702cdf8540b-scripts\") pod 
\"placement-db-sync-7xpzq\" (UID: \"eca578f4-3095-4770-9cdf-5702cdf8540b\") " pod="openstack/placement-db-sync-7xpzq" Feb 16 21:37:17.732397 master-0 kubenswrapper[38936]: I0216 21:37:17.726050 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvrnj\" (UniqueName: \"kubernetes.io/projected/a7c56a35-a711-40a4-9428-031faf014af4-kube-api-access-lvrnj\") pod \"cinder-9c692-db-sync-r9pqq\" (UID: \"a7c56a35-a711-40a4-9428-031faf014af4\") " pod="openstack/cinder-9c692-db-sync-r9pqq" Feb 16 21:37:17.732397 master-0 kubenswrapper[38936]: I0216 21:37:17.726684 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a7c56a35-a711-40a4-9428-031faf014af4-etc-machine-id\") pod \"cinder-9c692-db-sync-r9pqq\" (UID: \"a7c56a35-a711-40a4-9428-031faf014af4\") " pod="openstack/cinder-9c692-db-sync-r9pqq" Feb 16 21:37:17.733157 master-0 kubenswrapper[38936]: I0216 21:37:17.732940 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7c56a35-a711-40a4-9428-031faf014af4-config-data\") pod \"cinder-9c692-db-sync-r9pqq\" (UID: \"a7c56a35-a711-40a4-9428-031faf014af4\") " pod="openstack/cinder-9c692-db-sync-r9pqq" Feb 16 21:37:17.750706 master-0 kubenswrapper[38936]: I0216 21:37:17.734716 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c56a35-a711-40a4-9428-031faf014af4-combined-ca-bundle\") pod \"cinder-9c692-db-sync-r9pqq\" (UID: \"a7c56a35-a711-40a4-9428-031faf014af4\") " pod="openstack/cinder-9c692-db-sync-r9pqq" Feb 16 21:37:17.750706 master-0 kubenswrapper[38936]: I0216 21:37:17.735187 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7c56a35-a711-40a4-9428-031faf014af4-scripts\") pod \"cinder-9c692-db-sync-r9pqq\" (UID: 
\"a7c56a35-a711-40a4-9428-031faf014af4\") " pod="openstack/cinder-9c692-db-sync-r9pqq" Feb 16 21:37:17.750706 master-0 kubenswrapper[38936]: I0216 21:37:17.748184 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvrnj\" (UniqueName: \"kubernetes.io/projected/a7c56a35-a711-40a4-9428-031faf014af4-kube-api-access-lvrnj\") pod \"cinder-9c692-db-sync-r9pqq\" (UID: \"a7c56a35-a711-40a4-9428-031faf014af4\") " pod="openstack/cinder-9c692-db-sync-r9pqq" Feb 16 21:37:17.776970 master-0 kubenswrapper[38936]: I0216 21:37:17.768573 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7c56a35-a711-40a4-9428-031faf014af4-db-sync-config-data\") pod \"cinder-9c692-db-sync-r9pqq\" (UID: \"a7c56a35-a711-40a4-9428-031faf014af4\") " pod="openstack/cinder-9c692-db-sync-r9pqq" Feb 16 21:37:17.797754 master-0 kubenswrapper[38936]: I0216 21:37:17.796927 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78bc59585f-clvzn"] Feb 16 21:37:17.830602 master-0 kubenswrapper[38936]: I0216 21:37:17.830485 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eca578f4-3095-4770-9cdf-5702cdf8540b-logs\") pod \"placement-db-sync-7xpzq\" (UID: \"eca578f4-3095-4770-9cdf-5702cdf8540b\") " pod="openstack/placement-db-sync-7xpzq" Feb 16 21:37:17.830602 master-0 kubenswrapper[38936]: I0216 21:37:17.830560 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwdlz\" (UniqueName: \"kubernetes.io/projected/2de78a17-9736-4d1a-bd15-d021bf007026-kube-api-access-xwdlz\") pod \"dnsmasq-dns-78bc59585f-clvzn\" (UID: \"2de78a17-9736-4d1a-bd15-d021bf007026\") " pod="openstack/dnsmasq-dns-78bc59585f-clvzn" Feb 16 21:37:17.830817 master-0 kubenswrapper[38936]: I0216 21:37:17.830612 38936 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-ovsdbserver-sb\") pod \"dnsmasq-dns-78bc59585f-clvzn\" (UID: \"2de78a17-9736-4d1a-bd15-d021bf007026\") " pod="openstack/dnsmasq-dns-78bc59585f-clvzn" Feb 16 21:37:17.830817 master-0 kubenswrapper[38936]: I0216 21:37:17.830674 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eca578f4-3095-4770-9cdf-5702cdf8540b-config-data\") pod \"placement-db-sync-7xpzq\" (UID: \"eca578f4-3095-4770-9cdf-5702cdf8540b\") " pod="openstack/placement-db-sync-7xpzq" Feb 16 21:37:17.830817 master-0 kubenswrapper[38936]: I0216 21:37:17.830773 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkqq4\" (UniqueName: \"kubernetes.io/projected/eca578f4-3095-4770-9cdf-5702cdf8540b-kube-api-access-nkqq4\") pod \"placement-db-sync-7xpzq\" (UID: \"eca578f4-3095-4770-9cdf-5702cdf8540b\") " pod="openstack/placement-db-sync-7xpzq" Feb 16 21:37:17.830960 master-0 kubenswrapper[38936]: I0216 21:37:17.830860 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-dns-svc\") pod \"dnsmasq-dns-78bc59585f-clvzn\" (UID: \"2de78a17-9736-4d1a-bd15-d021bf007026\") " pod="openstack/dnsmasq-dns-78bc59585f-clvzn" Feb 16 21:37:17.830960 master-0 kubenswrapper[38936]: I0216 21:37:17.830902 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-dns-swift-storage-0\") pod \"dnsmasq-dns-78bc59585f-clvzn\" (UID: \"2de78a17-9736-4d1a-bd15-d021bf007026\") " pod="openstack/dnsmasq-dns-78bc59585f-clvzn" Feb 16 21:37:17.830960 master-0 kubenswrapper[38936]: 
I0216 21:37:17.830939 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-config\") pod \"dnsmasq-dns-78bc59585f-clvzn\" (UID: \"2de78a17-9736-4d1a-bd15-d021bf007026\") " pod="openstack/dnsmasq-dns-78bc59585f-clvzn" Feb 16 21:37:17.831095 master-0 kubenswrapper[38936]: I0216 21:37:17.831042 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eca578f4-3095-4770-9cdf-5702cdf8540b-combined-ca-bundle\") pod \"placement-db-sync-7xpzq\" (UID: \"eca578f4-3095-4770-9cdf-5702cdf8540b\") " pod="openstack/placement-db-sync-7xpzq" Feb 16 21:37:17.831095 master-0 kubenswrapper[38936]: I0216 21:37:17.831075 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-ovsdbserver-nb\") pod \"dnsmasq-dns-78bc59585f-clvzn\" (UID: \"2de78a17-9736-4d1a-bd15-d021bf007026\") " pod="openstack/dnsmasq-dns-78bc59585f-clvzn" Feb 16 21:37:17.831204 master-0 kubenswrapper[38936]: I0216 21:37:17.831128 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eca578f4-3095-4770-9cdf-5702cdf8540b-scripts\") pod \"placement-db-sync-7xpzq\" (UID: \"eca578f4-3095-4770-9cdf-5702cdf8540b\") " pod="openstack/placement-db-sync-7xpzq" Feb 16 21:37:17.838693 master-0 kubenswrapper[38936]: I0216 21:37:17.832294 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eca578f4-3095-4770-9cdf-5702cdf8540b-logs\") pod \"placement-db-sync-7xpzq\" (UID: \"eca578f4-3095-4770-9cdf-5702cdf8540b\") " pod="openstack/placement-db-sync-7xpzq" Feb 16 21:37:17.838693 master-0 kubenswrapper[38936]: I0216 21:37:17.834760 38936 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eca578f4-3095-4770-9cdf-5702cdf8540b-scripts\") pod \"placement-db-sync-7xpzq\" (UID: \"eca578f4-3095-4770-9cdf-5702cdf8540b\") " pod="openstack/placement-db-sync-7xpzq" Feb 16 21:37:17.838693 master-0 kubenswrapper[38936]: I0216 21:37:17.837551 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eca578f4-3095-4770-9cdf-5702cdf8540b-combined-ca-bundle\") pod \"placement-db-sync-7xpzq\" (UID: \"eca578f4-3095-4770-9cdf-5702cdf8540b\") " pod="openstack/placement-db-sync-7xpzq" Feb 16 21:37:17.838693 master-0 kubenswrapper[38936]: I0216 21:37:17.838070 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eca578f4-3095-4770-9cdf-5702cdf8540b-config-data\") pod \"placement-db-sync-7xpzq\" (UID: \"eca578f4-3095-4770-9cdf-5702cdf8540b\") " pod="openstack/placement-db-sync-7xpzq" Feb 16 21:37:17.858258 master-0 kubenswrapper[38936]: I0216 21:37:17.854431 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkqq4\" (UniqueName: \"kubernetes.io/projected/eca578f4-3095-4770-9cdf-5702cdf8540b-kube-api-access-nkqq4\") pod \"placement-db-sync-7xpzq\" (UID: \"eca578f4-3095-4770-9cdf-5702cdf8540b\") " pod="openstack/placement-db-sync-7xpzq" Feb 16 21:37:17.875785 master-0 kubenswrapper[38936]: I0216 21:37:17.875154 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-whl9t" Feb 16 21:37:17.897102 master-0 kubenswrapper[38936]: I0216 21:37:17.895342 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-znszx" Feb 16 21:37:17.912506 master-0 kubenswrapper[38936]: I0216 21:37:17.912330 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-09d0-account-create-update-js9dq" Feb 16 21:37:17.938565 master-0 kubenswrapper[38936]: I0216 21:37:17.927395 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9c692-db-sync-r9pqq" Feb 16 21:37:17.938565 master-0 kubenswrapper[38936]: I0216 21:37:17.929296 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" Feb 16 21:37:17.941522 master-0 kubenswrapper[38936]: I0216 21:37:17.940393 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-dns-svc\") pod \"dnsmasq-dns-78bc59585f-clvzn\" (UID: \"2de78a17-9736-4d1a-bd15-d021bf007026\") " pod="openstack/dnsmasq-dns-78bc59585f-clvzn" Feb 16 21:37:17.941522 master-0 kubenswrapper[38936]: I0216 21:37:17.940486 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-dns-swift-storage-0\") pod \"dnsmasq-dns-78bc59585f-clvzn\" (UID: \"2de78a17-9736-4d1a-bd15-d021bf007026\") " pod="openstack/dnsmasq-dns-78bc59585f-clvzn" Feb 16 21:37:17.941522 master-0 kubenswrapper[38936]: I0216 21:37:17.940538 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-config\") pod \"dnsmasq-dns-78bc59585f-clvzn\" (UID: \"2de78a17-9736-4d1a-bd15-d021bf007026\") " pod="openstack/dnsmasq-dns-78bc59585f-clvzn" Feb 16 21:37:17.941522 master-0 kubenswrapper[38936]: I0216 21:37:17.940595 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-ovsdbserver-nb\") pod \"dnsmasq-dns-78bc59585f-clvzn\" (UID: 
\"2de78a17-9736-4d1a-bd15-d021bf007026\") " pod="openstack/dnsmasq-dns-78bc59585f-clvzn" Feb 16 21:37:17.942115 master-0 kubenswrapper[38936]: I0216 21:37:17.942075 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-dns-swift-storage-0\") pod \"dnsmasq-dns-78bc59585f-clvzn\" (UID: \"2de78a17-9736-4d1a-bd15-d021bf007026\") " pod="openstack/dnsmasq-dns-78bc59585f-clvzn" Feb 16 21:37:17.950291 master-0 kubenswrapper[38936]: I0216 21:37:17.950236 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwdlz\" (UniqueName: \"kubernetes.io/projected/2de78a17-9736-4d1a-bd15-d021bf007026-kube-api-access-xwdlz\") pod \"dnsmasq-dns-78bc59585f-clvzn\" (UID: \"2de78a17-9736-4d1a-bd15-d021bf007026\") " pod="openstack/dnsmasq-dns-78bc59585f-clvzn" Feb 16 21:37:17.950520 master-0 kubenswrapper[38936]: I0216 21:37:17.950341 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-ovsdbserver-sb\") pod \"dnsmasq-dns-78bc59585f-clvzn\" (UID: \"2de78a17-9736-4d1a-bd15-d021bf007026\") " pod="openstack/dnsmasq-dns-78bc59585f-clvzn" Feb 16 21:37:17.951403 master-0 kubenswrapper[38936]: I0216 21:37:17.951372 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-config\") pod \"dnsmasq-dns-78bc59585f-clvzn\" (UID: \"2de78a17-9736-4d1a-bd15-d021bf007026\") " pod="openstack/dnsmasq-dns-78bc59585f-clvzn" Feb 16 21:37:17.951636 master-0 kubenswrapper[38936]: I0216 21:37:17.951608 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-ovsdbserver-sb\") pod \"dnsmasq-dns-78bc59585f-clvzn\" (UID: 
\"2de78a17-9736-4d1a-bd15-d021bf007026\") " pod="openstack/dnsmasq-dns-78bc59585f-clvzn" Feb 16 21:37:17.952212 master-0 kubenswrapper[38936]: I0216 21:37:17.952181 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-dns-svc\") pod \"dnsmasq-dns-78bc59585f-clvzn\" (UID: \"2de78a17-9736-4d1a-bd15-d021bf007026\") " pod="openstack/dnsmasq-dns-78bc59585f-clvzn" Feb 16 21:37:17.952714 master-0 kubenswrapper[38936]: I0216 21:37:17.952683 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-ovsdbserver-nb\") pod \"dnsmasq-dns-78bc59585f-clvzn\" (UID: \"2de78a17-9736-4d1a-bd15-d021bf007026\") " pod="openstack/dnsmasq-dns-78bc59585f-clvzn" Feb 16 21:37:17.993297 master-0 kubenswrapper[38936]: I0216 21:37:17.993248 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwdlz\" (UniqueName: \"kubernetes.io/projected/2de78a17-9736-4d1a-bd15-d021bf007026-kube-api-access-xwdlz\") pod \"dnsmasq-dns-78bc59585f-clvzn\" (UID: \"2de78a17-9736-4d1a-bd15-d021bf007026\") " pod="openstack/dnsmasq-dns-78bc59585f-clvzn" Feb 16 21:37:17.993737 master-0 kubenswrapper[38936]: I0216 21:37:17.993617 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7xpzq" Feb 16 21:37:18.051549 master-0 kubenswrapper[38936]: I0216 21:37:18.050422 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78bc59585f-clvzn" Feb 16 21:37:18.051753 master-0 kubenswrapper[38936]: I0216 21:37:18.051703 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-dns-swift-storage-0\") pod \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\" (UID: \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\") " Feb 16 21:37:18.051818 master-0 kubenswrapper[38936]: I0216 21:37:18.051769 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-dns-svc\") pod \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\" (UID: \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\") " Feb 16 21:37:18.052867 master-0 kubenswrapper[38936]: I0216 21:37:18.051882 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-ovsdbserver-sb\") pod \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\" (UID: \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\") " Feb 16 21:37:18.052867 master-0 kubenswrapper[38936]: I0216 21:37:18.052066 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-config\") pod \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\" (UID: \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\") " Feb 16 21:37:18.052867 master-0 kubenswrapper[38936]: I0216 21:37:18.052107 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r82nz\" (UniqueName: \"kubernetes.io/projected/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-kube-api-access-r82nz\") pod \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\" (UID: \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\") " Feb 16 21:37:18.052867 master-0 kubenswrapper[38936]: I0216 21:37:18.052176 38936 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-ovsdbserver-nb\") pod \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\" (UID: \"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b\") " Feb 16 21:37:18.056789 master-0 kubenswrapper[38936]: I0216 21:37:18.056417 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-kube-api-access-r82nz" (OuterVolumeSpecName: "kube-api-access-r82nz") pod "d2af8c69-6de4-46fb-a872-e2ae1ad49f8b" (UID: "d2af8c69-6de4-46fb-a872-e2ae1ad49f8b"). InnerVolumeSpecName "kube-api-access-r82nz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:37:18.125994 master-0 kubenswrapper[38936]: I0216 21:37:18.123398 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xxk4w"] Feb 16 21:37:18.144564 master-0 kubenswrapper[38936]: I0216 21:37:18.143769 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2af8c69-6de4-46fb-a872-e2ae1ad49f8b" (UID: "d2af8c69-6de4-46fb-a872-e2ae1ad49f8b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:37:18.157771 master-0 kubenswrapper[38936]: I0216 21:37:18.157723 38936 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:18.157771 master-0 kubenswrapper[38936]: I0216 21:37:18.157768 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r82nz\" (UniqueName: \"kubernetes.io/projected/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-kube-api-access-r82nz\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:18.179821 master-0 kubenswrapper[38936]: I0216 21:37:18.179296 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-config" (OuterVolumeSpecName: "config") pod "d2af8c69-6de4-46fb-a872-e2ae1ad49f8b" (UID: "d2af8c69-6de4-46fb-a872-e2ae1ad49f8b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:37:18.192361 master-0 kubenswrapper[38936]: I0216 21:37:18.190145 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d2af8c69-6de4-46fb-a872-e2ae1ad49f8b" (UID: "d2af8c69-6de4-46fb-a872-e2ae1ad49f8b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:37:18.200035 master-0 kubenswrapper[38936]: I0216 21:37:18.199819 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d2af8c69-6de4-46fb-a872-e2ae1ad49f8b" (UID: "d2af8c69-6de4-46fb-a872-e2ae1ad49f8b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:37:18.260092 master-0 kubenswrapper[38936]: I0216 21:37:18.260037 38936 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:18.260092 master-0 kubenswrapper[38936]: I0216 21:37:18.260086 38936 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:18.260092 master-0 kubenswrapper[38936]: I0216 21:37:18.260099 38936 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:18.269204 master-0 kubenswrapper[38936]: I0216 21:37:18.267322 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d2af8c69-6de4-46fb-a872-e2ae1ad49f8b" (UID: "d2af8c69-6de4-46fb-a872-e2ae1ad49f8b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:37:18.364080 master-0 kubenswrapper[38936]: I0216 21:37:18.363196 38936 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:18.609866 master-0 kubenswrapper[38936]: I0216 21:37:18.609807 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" event={"ID":"d2af8c69-6de4-46fb-a872-e2ae1ad49f8b","Type":"ContainerDied","Data":"000ffcb42cd563c3dae770d85a3a3e89ea1ca23b42af8231eab4e4bc4c4ea584"} Feb 16 21:37:18.610026 master-0 kubenswrapper[38936]: I0216 21:37:18.609872 38936 scope.go:117] "RemoveContainer" containerID="77295fe064343a18f81f0ab656b1c0c91b10f17176bd4d1a7cd3a31ddc276f1a" Feb 16 21:37:18.610107 master-0 kubenswrapper[38936]: I0216 21:37:18.610087 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb89595f5-b5ncl" Feb 16 21:37:18.613502 master-0 kubenswrapper[38936]: I0216 21:37:18.613459 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77dfb8866c-gv2qv"] Feb 16 21:37:18.616480 master-0 kubenswrapper[38936]: I0216 21:37:18.616428 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xxk4w" event={"ID":"432e676b-bab7-4991-8605-0eed0c6a0b2c","Type":"ContainerStarted","Data":"00a4fc3bbf18bc3cb9537dd8d4ec9038d4f790fa7ba1dea9e33affee71ef2a28"} Feb 16 21:37:18.616809 master-0 kubenswrapper[38936]: I0216 21:37:18.616494 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xxk4w" event={"ID":"432e676b-bab7-4991-8605-0eed0c6a0b2c","Type":"ContainerStarted","Data":"cbd535731474689f65c86c54ac7d517e49df5ea47d09bd8efd076347a66352e3"} Feb 16 21:37:18.729377 master-0 kubenswrapper[38936]: I0216 21:37:18.726830 38936 scope.go:117] 
"RemoveContainer" containerID="09880d4698f98cc84198ccc4b5923a4a260180e7ab9f0b56da8ba5b832a4e029" Feb 16 21:37:18.740473 master-0 kubenswrapper[38936]: I0216 21:37:18.738406 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-whl9t"] Feb 16 21:37:18.754904 master-0 kubenswrapper[38936]: I0216 21:37:18.754104 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-znszx"] Feb 16 21:37:18.762620 master-0 kubenswrapper[38936]: W0216 21:37:18.761489 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb54cd98b_c06e_486b_951e_93d5c8477416.slice/crio-1906778febef598618b35ffa3a4c4cb0c0bb176bcb855a6d062b84e4ca197e22 WatchSource:0}: Error finding container 1906778febef598618b35ffa3a4c4cb0c0bb176bcb855a6d062b84e4ca197e22: Status 404 returned error can't find the container with id 1906778febef598618b35ffa3a4c4cb0c0bb176bcb855a6d062b84e4ca197e22 Feb 16 21:37:18.767374 master-0 kubenswrapper[38936]: I0216 21:37:18.766899 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb89595f5-b5ncl"] Feb 16 21:37:18.781883 master-0 kubenswrapper[38936]: I0216 21:37:18.781833 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb89595f5-b5ncl"] Feb 16 21:37:18.845857 master-0 kubenswrapper[38936]: I0216 21:37:18.842907 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-09d0-account-create-update-js9dq"] Feb 16 21:37:18.867446 master-0 kubenswrapper[38936]: I0216 21:37:18.867397 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7xpzq"] Feb 16 21:37:18.880711 master-0 kubenswrapper[38936]: W0216 21:37:18.880607 38936 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00b83769_f6e4_4403_aada_d460148bb289.slice/crio-660bb2daf410ca126c5f84cb52190a6603edd28ac98ced861ae1fac379ceda32 WatchSource:0}: Error finding container 660bb2daf410ca126c5f84cb52190a6603edd28ac98ced861ae1fac379ceda32: Status 404 returned error can't find the container with id 660bb2daf410ca126c5f84cb52190a6603edd28ac98ced861ae1fac379ceda32 Feb 16 21:37:18.909584 master-0 kubenswrapper[38936]: W0216 21:37:18.909274 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeca578f4_3095_4770_9cdf_5702cdf8540b.slice/crio-d47ee8cbdf7acec7e2797cac38a82ae59edf95b65b9d640590c9cd65c2f242bc WatchSource:0}: Error finding container d47ee8cbdf7acec7e2797cac38a82ae59edf95b65b9d640590c9cd65c2f242bc: Status 404 returned error can't find the container with id d47ee8cbdf7acec7e2797cac38a82ae59edf95b65b9d640590c9cd65c2f242bc Feb 16 21:37:19.127830 master-0 kubenswrapper[38936]: I0216 21:37:19.127775 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1d7ec-default-internal-api-0"] Feb 16 21:37:19.128789 master-0 kubenswrapper[38936]: E0216 21:37:19.128763 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2af8c69-6de4-46fb-a872-e2ae1ad49f8b" containerName="dnsmasq-dns" Feb 16 21:37:19.128789 master-0 kubenswrapper[38936]: I0216 21:37:19.128785 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2af8c69-6de4-46fb-a872-e2ae1ad49f8b" containerName="dnsmasq-dns" Feb 16 21:37:19.128979 master-0 kubenswrapper[38936]: E0216 21:37:19.128816 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2af8c69-6de4-46fb-a872-e2ae1ad49f8b" containerName="init" Feb 16 21:37:19.128979 master-0 kubenswrapper[38936]: I0216 21:37:19.128824 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2af8c69-6de4-46fb-a872-e2ae1ad49f8b" containerName="init" Feb 16 21:37:19.129337 
master-0 kubenswrapper[38936]: I0216 21:37:19.129312 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2af8c69-6de4-46fb-a872-e2ae1ad49f8b" containerName="dnsmasq-dns" Feb 16 21:37:19.131492 master-0 kubenswrapper[38936]: I0216 21:37:19.131465 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:19.136111 master-0 kubenswrapper[38936]: I0216 21:37:19.136046 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 16 21:37:19.136288 master-0 kubenswrapper[38936]: I0216 21:37:19.136103 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 21:37:19.136459 master-0 kubenswrapper[38936]: I0216 21:37:19.136430 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-1d7ec-default-internal-config-data" Feb 16 21:37:19.164752 master-0 kubenswrapper[38936]: I0216 21:37:19.144193 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1d7ec-default-internal-api-0"] Feb 16 21:37:19.164752 master-0 kubenswrapper[38936]: I0216 21:37:19.161455 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78bc59585f-clvzn"] Feb 16 21:37:19.169673 master-0 kubenswrapper[38936]: I0216 21:37:19.167819 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9c692-db-sync-r9pqq"] Feb 16 21:37:19.314521 master-0 kubenswrapper[38936]: I0216 21:37:19.310967 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52974d79-df13-4786-9815-8c0689c7b8a8-internal-tls-certs\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:19.314521 master-0 kubenswrapper[38936]: I0216 
21:37:19.311040 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7mc5\" (UniqueName: \"kubernetes.io/projected/52974d79-df13-4786-9815-8c0689c7b8a8-kube-api-access-r7mc5\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:19.314521 master-0 kubenswrapper[38936]: I0216 21:37:19.311076 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52974d79-df13-4786-9815-8c0689c7b8a8-httpd-run\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:19.314521 master-0 kubenswrapper[38936]: I0216 21:37:19.311144 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dc1039ee-37f6-4d30-bd0b-a1a70b8748c3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^229cca5d-3ad9-49c9-bdae-2f4292c3d0f6\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:19.314521 master-0 kubenswrapper[38936]: I0216 21:37:19.311170 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52974d79-df13-4786-9815-8c0689c7b8a8-scripts\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:19.314521 master-0 kubenswrapper[38936]: I0216 21:37:19.311227 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52974d79-df13-4786-9815-8c0689c7b8a8-config-data\") pod 
\"glance-1d7ec-default-internal-api-0\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:19.314521 master-0 kubenswrapper[38936]: I0216 21:37:19.311263 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52974d79-df13-4786-9815-8c0689c7b8a8-combined-ca-bundle\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:19.314521 master-0 kubenswrapper[38936]: I0216 21:37:19.311303 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52974d79-df13-4786-9815-8c0689c7b8a8-logs\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:19.417915 master-0 kubenswrapper[38936]: I0216 21:37:19.414048 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dc1039ee-37f6-4d30-bd0b-a1a70b8748c3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^229cca5d-3ad9-49c9-bdae-2f4292c3d0f6\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:19.417915 master-0 kubenswrapper[38936]: I0216 21:37:19.414112 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52974d79-df13-4786-9815-8c0689c7b8a8-scripts\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:19.417915 master-0 kubenswrapper[38936]: I0216 21:37:19.414174 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/52974d79-df13-4786-9815-8c0689c7b8a8-config-data\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:19.417915 master-0 kubenswrapper[38936]: I0216 21:37:19.415163 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52974d79-df13-4786-9815-8c0689c7b8a8-combined-ca-bundle\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:19.417915 master-0 kubenswrapper[38936]: I0216 21:37:19.415254 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52974d79-df13-4786-9815-8c0689c7b8a8-logs\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:19.417915 master-0 kubenswrapper[38936]: I0216 21:37:19.415360 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52974d79-df13-4786-9815-8c0689c7b8a8-internal-tls-certs\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:19.417915 master-0 kubenswrapper[38936]: I0216 21:37:19.415389 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7mc5\" (UniqueName: \"kubernetes.io/projected/52974d79-df13-4786-9815-8c0689c7b8a8-kube-api-access-r7mc5\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:19.417915 master-0 kubenswrapper[38936]: I0216 21:37:19.415434 
38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52974d79-df13-4786-9815-8c0689c7b8a8-httpd-run\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:19.417915 master-0 kubenswrapper[38936]: I0216 21:37:19.416037 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52974d79-df13-4786-9815-8c0689c7b8a8-httpd-run\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:19.417915 master-0 kubenswrapper[38936]: I0216 21:37:19.417057 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52974d79-df13-4786-9815-8c0689c7b8a8-logs\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:19.417915 master-0 kubenswrapper[38936]: I0216 21:37:19.417553 38936 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:37:19.417915 master-0 kubenswrapper[38936]: I0216 21:37:19.417577 38936 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dc1039ee-37f6-4d30-bd0b-a1a70b8748c3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^229cca5d-3ad9-49c9-bdae-2f4292c3d0f6\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/1b887ae194b0900377497cd58f52b4420ec6f7ec05c5eb1852be55020074fcad/globalmount\"" pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:19.418663 master-0 kubenswrapper[38936]: I0216 21:37:19.418532 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52974d79-df13-4786-9815-8c0689c7b8a8-scripts\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:19.429873 master-0 kubenswrapper[38936]: I0216 21:37:19.428742 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52974d79-df13-4786-9815-8c0689c7b8a8-config-data\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:19.436149 master-0 kubenswrapper[38936]: I0216 21:37:19.436088 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52974d79-df13-4786-9815-8c0689c7b8a8-combined-ca-bundle\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:19.436424 master-0 kubenswrapper[38936]: I0216 21:37:19.436272 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/52974d79-df13-4786-9815-8c0689c7b8a8-internal-tls-certs\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:19.444914 master-0 kubenswrapper[38936]: I0216 21:37:19.444840 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7mc5\" (UniqueName: \"kubernetes.io/projected/52974d79-df13-4786-9815-8c0689c7b8a8-kube-api-access-r7mc5\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:19.662810 master-0 kubenswrapper[38936]: I0216 21:37:19.661486 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1d7ec-default-external-api-0"] Feb 16 21:37:19.664293 master-0 kubenswrapper[38936]: I0216 21:37:19.664235 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:19.667285 master-0 kubenswrapper[38936]: I0216 21:37:19.667256 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-1d7ec-default-external-config-data" Feb 16 21:37:19.667401 master-0 kubenswrapper[38936]: I0216 21:37:19.667340 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78bc59585f-clvzn" event={"ID":"2de78a17-9736-4d1a-bd15-d021bf007026","Type":"ContainerStarted","Data":"6f3797b5486abbf6a114be52e91236a1f7182b5a56fd1bb1e9106cb0e2dcc0f3"} Feb 16 21:37:19.667680 master-0 kubenswrapper[38936]: I0216 21:37:19.667543 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 21:37:19.673440 master-0 kubenswrapper[38936]: I0216 21:37:19.673333 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1d7ec-default-external-api-0"] Feb 16 21:37:19.700623 master-0 kubenswrapper[38936]: 
I0216 21:37:19.697773 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-znszx" event={"ID":"ae3f7123-0f56-47f9-afdb-cc6bff73ecd3","Type":"ContainerStarted","Data":"8f1e32c71f9fe7c0f457db72104d2cdf117833a851a8986e227468c0679f9099"} Feb 16 21:37:19.700623 master-0 kubenswrapper[38936]: I0216 21:37:19.697841 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-znszx" event={"ID":"ae3f7123-0f56-47f9-afdb-cc6bff73ecd3","Type":"ContainerStarted","Data":"c554b4ca14823822942e1efee12e1b10c29526b8263d670acdf67079c36beeb7"} Feb 16 21:37:19.712690 master-0 kubenswrapper[38936]: I0216 21:37:19.711054 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7xpzq" event={"ID":"eca578f4-3095-4770-9cdf-5702cdf8540b","Type":"ContainerStarted","Data":"d47ee8cbdf7acec7e2797cac38a82ae59edf95b65b9d640590c9cd65c2f242bc"} Feb 16 21:37:19.724987 master-0 kubenswrapper[38936]: I0216 21:37:19.724926 38936 generic.go:334] "Generic (PLEG): container finished" podID="b54cd98b-c06e-486b-951e-93d5c8477416" containerID="50071de4534addfdafc6f2ac36fa56a059648fc1a46c2b0d5b91601165f57fcc" exitCode=0 Feb 16 21:37:19.725325 master-0 kubenswrapper[38936]: I0216 21:37:19.725303 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-whl9t" event={"ID":"b54cd98b-c06e-486b-951e-93d5c8477416","Type":"ContainerDied","Data":"50071de4534addfdafc6f2ac36fa56a059648fc1a46c2b0d5b91601165f57fcc"} Feb 16 21:37:19.725424 master-0 kubenswrapper[38936]: I0216 21:37:19.725410 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-whl9t" event={"ID":"b54cd98b-c06e-486b-951e-93d5c8477416","Type":"ContainerStarted","Data":"1906778febef598618b35ffa3a4c4cb0c0bb176bcb855a6d062b84e4ca197e22"} Feb 16 21:37:19.727362 master-0 kubenswrapper[38936]: I0216 21:37:19.727343 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-9c692-db-sync-r9pqq" event={"ID":"a7c56a35-a711-40a4-9428-031faf014af4","Type":"ContainerStarted","Data":"a3e4ca9fe6da125f8f9f0a76148c6cd69efad98ef78b306871d6a256f2deb710"} Feb 16 21:37:19.755000 master-0 kubenswrapper[38936]: I0216 21:37:19.733272 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1d7ec-default-internal-api-0"] Feb 16 21:37:19.755000 master-0 kubenswrapper[38936]: E0216 21:37:19.735611 38936 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-1d7ec-default-internal-api-0" podUID="52974d79-df13-4786-9815-8c0689c7b8a8" Feb 16 21:37:19.818915 master-0 kubenswrapper[38936]: I0216 21:37:19.790199 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-09d0-account-create-update-js9dq" event={"ID":"00b83769-f6e4-4403-aada-d460148bb289","Type":"ContainerStarted","Data":"c4c041c957179ef05bffa6d57bb9b4a1a610a593b499ac4f975781f3549ff304"} Feb 16 21:37:19.818915 master-0 kubenswrapper[38936]: I0216 21:37:19.790277 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-09d0-account-create-update-js9dq" event={"ID":"00b83769-f6e4-4403-aada-d460148bb289","Type":"ContainerStarted","Data":"660bb2daf410ca126c5f84cb52190a6603edd28ac98ced861ae1fac379ceda32"} Feb 16 21:37:19.818915 master-0 kubenswrapper[38936]: I0216 21:37:19.791864 38936 generic.go:334] "Generic (PLEG): container finished" podID="8165ae6f-f1ef-400e-aa27-f0dff9ed5a15" containerID="ba202dfaca5253d16d3d44e33c377c68de0562c3e23001732ed2bfcdaa264caa" exitCode=0 Feb 16 21:37:19.818915 master-0 kubenswrapper[38936]: I0216 21:37:19.791988 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77dfb8866c-gv2qv" event={"ID":"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15","Type":"ContainerDied","Data":"ba202dfaca5253d16d3d44e33c377c68de0562c3e23001732ed2bfcdaa264caa"} Feb 16 
21:37:19.818915 master-0 kubenswrapper[38936]: I0216 21:37:19.792079 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77dfb8866c-gv2qv" event={"ID":"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15","Type":"ContainerStarted","Data":"e7d406e8c24a3c382d780fea137419ebb90cae98159c452d4c83f863694f793d"} Feb 16 21:37:19.837758 master-0 kubenswrapper[38936]: I0216 21:37:19.837684 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bbf8\" (UniqueName: \"kubernetes.io/projected/8318eb20-824e-49c4-87b3-36784a1fc4db-kube-api-access-6bbf8\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:19.838032 master-0 kubenswrapper[38936]: I0216 21:37:19.837821 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8318eb20-824e-49c4-87b3-36784a1fc4db-config-data\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:19.838032 master-0 kubenswrapper[38936]: I0216 21:37:19.837853 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7f5b0490-0583-4eea-a2b2-b13dc71c83c1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e84d6f8d-3e6f-444e-b77b-01824a84b929\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:19.838032 master-0 kubenswrapper[38936]: I0216 21:37:19.837903 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8318eb20-824e-49c4-87b3-36784a1fc4db-logs\") pod \"glance-1d7ec-default-external-api-0\" (UID: 
\"8318eb20-824e-49c4-87b3-36784a1fc4db\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:19.838032 master-0 kubenswrapper[38936]: I0216 21:37:19.837922 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8318eb20-824e-49c4-87b3-36784a1fc4db-public-tls-certs\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:19.838399 master-0 kubenswrapper[38936]: I0216 21:37:19.838351 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8318eb20-824e-49c4-87b3-36784a1fc4db-httpd-run\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:19.838456 master-0 kubenswrapper[38936]: I0216 21:37:19.838440 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8318eb20-824e-49c4-87b3-36784a1fc4db-scripts\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:19.840178 master-0 kubenswrapper[38936]: I0216 21:37:19.840140 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8318eb20-824e-49c4-87b3-36784a1fc4db-combined-ca-bundle\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:19.902069 master-0 kubenswrapper[38936]: I0216 21:37:19.889258 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/neutron-db-sync-znszx" podStartSLOduration=2.889230678 podStartE2EDuration="2.889230678s" podCreationTimestamp="2026-02-16 21:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:37:19.881248223 +0000 UTC m=+870.233251585" watchObservedRunningTime="2026-02-16 21:37:19.889230678 +0000 UTC m=+870.241234040" Feb 16 21:37:19.941381 master-0 kubenswrapper[38936]: I0216 21:37:19.941178 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xxk4w" podStartSLOduration=3.941154771 podStartE2EDuration="3.941154771s" podCreationTimestamp="2026-02-16 21:37:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:37:19.939376173 +0000 UTC m=+870.291379535" watchObservedRunningTime="2026-02-16 21:37:19.941154771 +0000 UTC m=+870.293158133" Feb 16 21:37:19.941592 master-0 kubenswrapper[38936]: I0216 21:37:19.941468 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8318eb20-824e-49c4-87b3-36784a1fc4db-combined-ca-bundle\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:19.941592 master-0 kubenswrapper[38936]: I0216 21:37:19.941561 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbf8\" (UniqueName: \"kubernetes.io/projected/8318eb20-824e-49c4-87b3-36784a1fc4db-kube-api-access-6bbf8\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:19.941674 master-0 kubenswrapper[38936]: I0216 21:37:19.941611 38936 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8318eb20-824e-49c4-87b3-36784a1fc4db-config-data\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:19.942202 master-0 kubenswrapper[38936]: I0216 21:37:19.942179 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7f5b0490-0583-4eea-a2b2-b13dc71c83c1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e84d6f8d-3e6f-444e-b77b-01824a84b929\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:19.942520 master-0 kubenswrapper[38936]: I0216 21:37:19.942504 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8318eb20-824e-49c4-87b3-36784a1fc4db-logs\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:19.948938 master-0 kubenswrapper[38936]: I0216 21:37:19.948896 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8318eb20-824e-49c4-87b3-36784a1fc4db-public-tls-certs\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:19.949059 master-0 kubenswrapper[38936]: I0216 21:37:19.949046 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8318eb20-824e-49c4-87b3-36784a1fc4db-httpd-run\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:19.949174 master-0 kubenswrapper[38936]: I0216 21:37:19.949160 
38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8318eb20-824e-49c4-87b3-36784a1fc4db-scripts\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:19.955754 master-0 kubenswrapper[38936]: I0216 21:37:19.947014 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8318eb20-824e-49c4-87b3-36784a1fc4db-logs\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:19.955754 master-0 kubenswrapper[38936]: I0216 21:37:19.947235 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2af8c69-6de4-46fb-a872-e2ae1ad49f8b" path="/var/lib/kubelet/pods/d2af8c69-6de4-46fb-a872-e2ae1ad49f8b/volumes" Feb 16 21:37:19.955754 master-0 kubenswrapper[38936]: I0216 21:37:19.953701 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8318eb20-824e-49c4-87b3-36784a1fc4db-config-data\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:19.955754 master-0 kubenswrapper[38936]: I0216 21:37:19.955444 38936 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:37:19.955754 master-0 kubenswrapper[38936]: I0216 21:37:19.955503 38936 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7f5b0490-0583-4eea-a2b2-b13dc71c83c1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e84d6f8d-3e6f-444e-b77b-01824a84b929\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/1fe54bb4bfd47e48e9eb50fd4126dca5c82c0ada3f6db7cb95b93ce09b27a92c/globalmount\"" pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:19.956086 master-0 kubenswrapper[38936]: I0216 21:37:19.955879 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8318eb20-824e-49c4-87b3-36784a1fc4db-public-tls-certs\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:19.959062 master-0 kubenswrapper[38936]: I0216 21:37:19.959035 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8318eb20-824e-49c4-87b3-36784a1fc4db-httpd-run\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:19.964899 master-0 kubenswrapper[38936]: I0216 21:37:19.959976 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8318eb20-824e-49c4-87b3-36784a1fc4db-combined-ca-bundle\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:19.969356 master-0 kubenswrapper[38936]: I0216 21:37:19.968022 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bbf8\" (UniqueName: 
\"kubernetes.io/projected/8318eb20-824e-49c4-87b3-36784a1fc4db-kube-api-access-6bbf8\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:19.976460 master-0 kubenswrapper[38936]: I0216 21:37:19.976348 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8318eb20-824e-49c4-87b3-36784a1fc4db-scripts\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:20.410767 master-0 kubenswrapper[38936]: I0216 21:37:20.410728 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77dfb8866c-gv2qv" Feb 16 21:37:20.493821 master-0 kubenswrapper[38936]: I0216 21:37:20.493757 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-config\") pod \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\" (UID: \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\") " Feb 16 21:37:20.493821 master-0 kubenswrapper[38936]: I0216 21:37:20.493807 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-dns-svc\") pod \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\" (UID: \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\") " Feb 16 21:37:20.494094 master-0 kubenswrapper[38936]: I0216 21:37:20.493962 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5xjs\" (UniqueName: \"kubernetes.io/projected/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-kube-api-access-f5xjs\") pod \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\" (UID: \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\") " Feb 16 21:37:20.494177 master-0 kubenswrapper[38936]: I0216 21:37:20.494153 38936 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-ovsdbserver-nb\") pod \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\" (UID: \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\") " Feb 16 21:37:20.494242 master-0 kubenswrapper[38936]: I0216 21:37:20.494218 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-dns-swift-storage-0\") pod \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\" (UID: \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\") " Feb 16 21:37:20.494303 master-0 kubenswrapper[38936]: I0216 21:37:20.494287 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-ovsdbserver-sb\") pod \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\" (UID: \"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15\") " Feb 16 21:37:20.510757 master-0 kubenswrapper[38936]: I0216 21:37:20.509064 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-kube-api-access-f5xjs" (OuterVolumeSpecName: "kube-api-access-f5xjs") pod "8165ae6f-f1ef-400e-aa27-f0dff9ed5a15" (UID: "8165ae6f-f1ef-400e-aa27-f0dff9ed5a15"). InnerVolumeSpecName "kube-api-access-f5xjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:37:20.528790 master-0 kubenswrapper[38936]: I0216 21:37:20.527954 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-config" (OuterVolumeSpecName: "config") pod "8165ae6f-f1ef-400e-aa27-f0dff9ed5a15" (UID: "8165ae6f-f1ef-400e-aa27-f0dff9ed5a15"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:37:20.529687 master-0 kubenswrapper[38936]: I0216 21:37:20.529659 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8165ae6f-f1ef-400e-aa27-f0dff9ed5a15" (UID: "8165ae6f-f1ef-400e-aa27-f0dff9ed5a15"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:37:20.532746 master-0 kubenswrapper[38936]: I0216 21:37:20.532388 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8165ae6f-f1ef-400e-aa27-f0dff9ed5a15" (UID: "8165ae6f-f1ef-400e-aa27-f0dff9ed5a15"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:37:20.541690 master-0 kubenswrapper[38936]: I0216 21:37:20.541600 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8165ae6f-f1ef-400e-aa27-f0dff9ed5a15" (UID: "8165ae6f-f1ef-400e-aa27-f0dff9ed5a15"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:37:20.561686 master-0 kubenswrapper[38936]: I0216 21:37:20.561532 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8165ae6f-f1ef-400e-aa27-f0dff9ed5a15" (UID: "8165ae6f-f1ef-400e-aa27-f0dff9ed5a15"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:37:20.600164 master-0 kubenswrapper[38936]: I0216 21:37:20.600085 38936 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:20.600164 master-0 kubenswrapper[38936]: I0216 21:37:20.600146 38936 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:20.600164 master-0 kubenswrapper[38936]: I0216 21:37:20.600162 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5xjs\" (UniqueName: \"kubernetes.io/projected/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-kube-api-access-f5xjs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:20.600741 master-0 kubenswrapper[38936]: I0216 21:37:20.600180 38936 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:20.600741 master-0 kubenswrapper[38936]: I0216 21:37:20.600199 38936 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:20.600741 master-0 kubenswrapper[38936]: I0216 21:37:20.600213 38936 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:20.851762 master-0 kubenswrapper[38936]: I0216 21:37:20.847114 38936 generic.go:334] "Generic (PLEG): container finished" podID="00b83769-f6e4-4403-aada-d460148bb289" 
containerID="c4c041c957179ef05bffa6d57bb9b4a1a610a593b499ac4f975781f3549ff304" exitCode=0 Feb 16 21:37:20.851762 master-0 kubenswrapper[38936]: I0216 21:37:20.847186 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-09d0-account-create-update-js9dq" event={"ID":"00b83769-f6e4-4403-aada-d460148bb289","Type":"ContainerDied","Data":"c4c041c957179ef05bffa6d57bb9b4a1a610a593b499ac4f975781f3549ff304"} Feb 16 21:37:20.894807 master-0 kubenswrapper[38936]: I0216 21:37:20.894746 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77dfb8866c-gv2qv" Feb 16 21:37:20.895207 master-0 kubenswrapper[38936]: I0216 21:37:20.894977 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77dfb8866c-gv2qv" event={"ID":"8165ae6f-f1ef-400e-aa27-f0dff9ed5a15","Type":"ContainerDied","Data":"e7d406e8c24a3c382d780fea137419ebb90cae98159c452d4c83f863694f793d"} Feb 16 21:37:20.895207 master-0 kubenswrapper[38936]: I0216 21:37:20.895052 38936 scope.go:117] "RemoveContainer" containerID="ba202dfaca5253d16d3d44e33c377c68de0562c3e23001732ed2bfcdaa264caa" Feb 16 21:37:20.900699 master-0 kubenswrapper[38936]: I0216 21:37:20.900488 38936 generic.go:334] "Generic (PLEG): container finished" podID="2de78a17-9736-4d1a-bd15-d021bf007026" containerID="cb57f596593bc0aa7163e8d0c523a09ebfd69008c7569d4d99046feddc1deb40" exitCode=0 Feb 16 21:37:20.903576 master-0 kubenswrapper[38936]: I0216 21:37:20.903459 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78bc59585f-clvzn" event={"ID":"2de78a17-9736-4d1a-bd15-d021bf007026","Type":"ContainerDied","Data":"cb57f596593bc0aa7163e8d0c523a09ebfd69008c7569d4d99046feddc1deb40"} Feb 16 21:37:20.906809 master-0 kubenswrapper[38936]: I0216 21:37:20.906687 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:20.935585 master-0 kubenswrapper[38936]: I0216 21:37:20.935538 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:21.046680 master-0 kubenswrapper[38936]: I0216 21:37:21.042746 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52974d79-df13-4786-9815-8c0689c7b8a8-combined-ca-bundle\") pod \"52974d79-df13-4786-9815-8c0689c7b8a8\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " Feb 16 21:37:21.046680 master-0 kubenswrapper[38936]: I0216 21:37:21.042842 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52974d79-df13-4786-9815-8c0689c7b8a8-logs\") pod \"52974d79-df13-4786-9815-8c0689c7b8a8\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " Feb 16 21:37:21.046680 master-0 kubenswrapper[38936]: I0216 21:37:21.043063 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52974d79-df13-4786-9815-8c0689c7b8a8-internal-tls-certs\") pod \"52974d79-df13-4786-9815-8c0689c7b8a8\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " Feb 16 21:37:21.046680 master-0 kubenswrapper[38936]: I0216 21:37:21.043103 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7mc5\" (UniqueName: \"kubernetes.io/projected/52974d79-df13-4786-9815-8c0689c7b8a8-kube-api-access-r7mc5\") pod \"52974d79-df13-4786-9815-8c0689c7b8a8\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " Feb 16 21:37:21.046680 master-0 kubenswrapper[38936]: I0216 21:37:21.043164 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/52974d79-df13-4786-9815-8c0689c7b8a8-config-data\") pod \"52974d79-df13-4786-9815-8c0689c7b8a8\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " Feb 16 21:37:21.046680 master-0 kubenswrapper[38936]: I0216 21:37:21.043231 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52974d79-df13-4786-9815-8c0689c7b8a8-httpd-run\") pod \"52974d79-df13-4786-9815-8c0689c7b8a8\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " Feb 16 21:37:21.046680 master-0 kubenswrapper[38936]: I0216 21:37:21.046227 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52974d79-df13-4786-9815-8c0689c7b8a8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "52974d79-df13-4786-9815-8c0689c7b8a8" (UID: "52974d79-df13-4786-9815-8c0689c7b8a8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:37:21.051568 master-0 kubenswrapper[38936]: I0216 21:37:21.048926 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52974d79-df13-4786-9815-8c0689c7b8a8-logs" (OuterVolumeSpecName: "logs") pod "52974d79-df13-4786-9815-8c0689c7b8a8" (UID: "52974d79-df13-4786-9815-8c0689c7b8a8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:37:21.058886 master-0 kubenswrapper[38936]: I0216 21:37:21.056063 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52974d79-df13-4786-9815-8c0689c7b8a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52974d79-df13-4786-9815-8c0689c7b8a8" (UID: "52974d79-df13-4786-9815-8c0689c7b8a8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:21.062261 master-0 kubenswrapper[38936]: I0216 21:37:21.062187 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77dfb8866c-gv2qv"] Feb 16 21:37:21.069993 master-0 kubenswrapper[38936]: I0216 21:37:21.066638 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52974d79-df13-4786-9815-8c0689c7b8a8-config-data" (OuterVolumeSpecName: "config-data") pod "52974d79-df13-4786-9815-8c0689c7b8a8" (UID: "52974d79-df13-4786-9815-8c0689c7b8a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:21.087509 master-0 kubenswrapper[38936]: I0216 21:37:21.075260 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52974d79-df13-4786-9815-8c0689c7b8a8-kube-api-access-r7mc5" (OuterVolumeSpecName: "kube-api-access-r7mc5") pod "52974d79-df13-4786-9815-8c0689c7b8a8" (UID: "52974d79-df13-4786-9815-8c0689c7b8a8"). InnerVolumeSpecName "kube-api-access-r7mc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:37:21.087509 master-0 kubenswrapper[38936]: I0216 21:37:21.079521 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77dfb8866c-gv2qv"] Feb 16 21:37:21.100080 master-0 kubenswrapper[38936]: I0216 21:37:21.098861 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52974d79-df13-4786-9815-8c0689c7b8a8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "52974d79-df13-4786-9815-8c0689c7b8a8" (UID: "52974d79-df13-4786-9815-8c0689c7b8a8"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:21.119932 master-0 kubenswrapper[38936]: I0216 21:37:21.119750 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dc1039ee-37f6-4d30-bd0b-a1a70b8748c3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^229cca5d-3ad9-49c9-bdae-2f4292c3d0f6\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:21.154325 master-0 kubenswrapper[38936]: I0216 21:37:21.154251 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52974d79-df13-4786-9815-8c0689c7b8a8-scripts\") pod \"52974d79-df13-4786-9815-8c0689c7b8a8\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " Feb 16 21:37:21.154873 master-0 kubenswrapper[38936]: I0216 21:37:21.154808 38936 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52974d79-df13-4786-9815-8c0689c7b8a8-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:21.154873 master-0 kubenswrapper[38936]: I0216 21:37:21.154830 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7mc5\" (UniqueName: \"kubernetes.io/projected/52974d79-df13-4786-9815-8c0689c7b8a8-kube-api-access-r7mc5\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:21.154873 master-0 kubenswrapper[38936]: I0216 21:37:21.154844 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52974d79-df13-4786-9815-8c0689c7b8a8-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:21.154873 master-0 kubenswrapper[38936]: I0216 21:37:21.154853 38936 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/52974d79-df13-4786-9815-8c0689c7b8a8-httpd-run\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:21.154873 
master-0 kubenswrapper[38936]: I0216 21:37:21.154863 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52974d79-df13-4786-9815-8c0689c7b8a8-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:21.154873 master-0 kubenswrapper[38936]: I0216 21:37:21.154872 38936 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52974d79-df13-4786-9815-8c0689c7b8a8-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:21.159348 master-0 kubenswrapper[38936]: I0216 21:37:21.159200 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52974d79-df13-4786-9815-8c0689c7b8a8-scripts" (OuterVolumeSpecName: "scripts") pod "52974d79-df13-4786-9815-8c0689c7b8a8" (UID: "52974d79-df13-4786-9815-8c0689c7b8a8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:21.256766 master-0 kubenswrapper[38936]: I0216 21:37:21.256628 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^229cca5d-3ad9-49c9-bdae-2f4292c3d0f6\") pod \"52974d79-df13-4786-9815-8c0689c7b8a8\" (UID: \"52974d79-df13-4786-9815-8c0689c7b8a8\") " Feb 16 21:37:21.262741 master-0 kubenswrapper[38936]: I0216 21:37:21.257773 38936 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52974d79-df13-4786-9815-8c0689c7b8a8-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:21.633670 master-0 kubenswrapper[38936]: I0216 21:37:21.633253 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-09d0-account-create-update-js9dq" Feb 16 21:37:21.744674 master-0 kubenswrapper[38936]: I0216 21:37:21.741427 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-whl9t" Feb 16 21:37:21.775670 master-0 kubenswrapper[38936]: I0216 21:37:21.771456 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx79d\" (UniqueName: \"kubernetes.io/projected/00b83769-f6e4-4403-aada-d460148bb289-kube-api-access-kx79d\") pod \"00b83769-f6e4-4403-aada-d460148bb289\" (UID: \"00b83769-f6e4-4403-aada-d460148bb289\") " Feb 16 21:37:21.775670 master-0 kubenswrapper[38936]: I0216 21:37:21.771571 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00b83769-f6e4-4403-aada-d460148bb289-operator-scripts\") pod \"00b83769-f6e4-4403-aada-d460148bb289\" (UID: \"00b83769-f6e4-4403-aada-d460148bb289\") " Feb 16 21:37:21.775670 master-0 kubenswrapper[38936]: I0216 21:37:21.773676 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00b83769-f6e4-4403-aada-d460148bb289-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "00b83769-f6e4-4403-aada-d460148bb289" (UID: "00b83769-f6e4-4403-aada-d460148bb289"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:37:21.786671 master-0 kubenswrapper[38936]: I0216 21:37:21.777523 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00b83769-f6e4-4403-aada-d460148bb289-kube-api-access-kx79d" (OuterVolumeSpecName: "kube-api-access-kx79d") pod "00b83769-f6e4-4403-aada-d460148bb289" (UID: "00b83769-f6e4-4403-aada-d460148bb289"). InnerVolumeSpecName "kube-api-access-kx79d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:37:21.885833 master-0 kubenswrapper[38936]: I0216 21:37:21.875409 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b54cd98b-c06e-486b-951e-93d5c8477416-operator-scripts\") pod \"b54cd98b-c06e-486b-951e-93d5c8477416\" (UID: \"b54cd98b-c06e-486b-951e-93d5c8477416\") " Feb 16 21:37:21.885833 master-0 kubenswrapper[38936]: I0216 21:37:21.876155 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b54cd98b-c06e-486b-951e-93d5c8477416-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b54cd98b-c06e-486b-951e-93d5c8477416" (UID: "b54cd98b-c06e-486b-951e-93d5c8477416"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:37:21.885833 master-0 kubenswrapper[38936]: I0216 21:37:21.876933 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8bxs\" (UniqueName: \"kubernetes.io/projected/b54cd98b-c06e-486b-951e-93d5c8477416-kube-api-access-v8bxs\") pod \"b54cd98b-c06e-486b-951e-93d5c8477416\" (UID: \"b54cd98b-c06e-486b-951e-93d5c8477416\") " Feb 16 21:37:21.885833 master-0 kubenswrapper[38936]: I0216 21:37:21.878291 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx79d\" (UniqueName: \"kubernetes.io/projected/00b83769-f6e4-4403-aada-d460148bb289-kube-api-access-kx79d\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:21.885833 master-0 kubenswrapper[38936]: I0216 21:37:21.878327 38936 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00b83769-f6e4-4403-aada-d460148bb289-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:21.885833 master-0 kubenswrapper[38936]: I0216 21:37:21.878446 38936 reconciler_common.go:293] "Volume detached for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b54cd98b-c06e-486b-951e-93d5c8477416-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:21.891852 master-0 kubenswrapper[38936]: I0216 21:37:21.889035 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b54cd98b-c06e-486b-951e-93d5c8477416-kube-api-access-v8bxs" (OuterVolumeSpecName: "kube-api-access-v8bxs") pod "b54cd98b-c06e-486b-951e-93d5c8477416" (UID: "b54cd98b-c06e-486b-951e-93d5c8477416"). InnerVolumeSpecName "kube-api-access-v8bxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:37:21.913681 master-0 kubenswrapper[38936]: I0216 21:37:21.913481 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8165ae6f-f1ef-400e-aa27-f0dff9ed5a15" path="/var/lib/kubelet/pods/8165ae6f-f1ef-400e-aa27-f0dff9ed5a15/volumes" Feb 16 21:37:21.918531 master-0 kubenswrapper[38936]: I0216 21:37:21.918496 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-09d0-account-create-update-js9dq" Feb 16 21:37:21.929423 master-0 kubenswrapper[38936]: I0216 21:37:21.929375 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-09d0-account-create-update-js9dq" event={"ID":"00b83769-f6e4-4403-aada-d460148bb289","Type":"ContainerDied","Data":"660bb2daf410ca126c5f84cb52190a6603edd28ac98ced861ae1fac379ceda32"} Feb 16 21:37:21.929639 master-0 kubenswrapper[38936]: I0216 21:37:21.929624 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="660bb2daf410ca126c5f84cb52190a6603edd28ac98ced861ae1fac379ceda32" Feb 16 21:37:21.929730 master-0 kubenswrapper[38936]: I0216 21:37:21.929716 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78bc59585f-clvzn" event={"ID":"2de78a17-9736-4d1a-bd15-d021bf007026","Type":"ContainerStarted","Data":"76bf5cf1b16c4dc6ff22333b0bc595c99a65f23a7e8d70b1375aa04da32957c1"} Feb 16 21:37:21.929823 master-0 kubenswrapper[38936]: I0216 21:37:21.929811 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78bc59585f-clvzn" Feb 16 21:37:21.933698 master-0 kubenswrapper[38936]: I0216 21:37:21.933616 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:21.934767 master-0 kubenswrapper[38936]: I0216 21:37:21.934682 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-whl9t" Feb 16 21:37:21.935501 master-0 kubenswrapper[38936]: I0216 21:37:21.935447 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-whl9t" event={"ID":"b54cd98b-c06e-486b-951e-93d5c8477416","Type":"ContainerDied","Data":"1906778febef598618b35ffa3a4c4cb0c0bb176bcb855a6d062b84e4ca197e22"} Feb 16 21:37:21.935553 master-0 kubenswrapper[38936]: I0216 21:37:21.935505 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1906778febef598618b35ffa3a4c4cb0c0bb176bcb855a6d062b84e4ca197e22" Feb 16 21:37:21.975454 master-0 kubenswrapper[38936]: I0216 21:37:21.975331 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78bc59585f-clvzn" podStartSLOduration=4.975302728 podStartE2EDuration="4.975302728s" podCreationTimestamp="2026-02-16 21:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:37:21.958194656 +0000 UTC m=+872.310198028" watchObservedRunningTime="2026-02-16 21:37:21.975302728 +0000 UTC m=+872.327306090" Feb 16 21:37:21.986828 master-0 kubenswrapper[38936]: I0216 21:37:21.986770 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8bxs\" (UniqueName: \"kubernetes.io/projected/b54cd98b-c06e-486b-951e-93d5c8477416-kube-api-access-v8bxs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:22.422946 master-0 kubenswrapper[38936]: I0216 21:37:22.422878 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^229cca5d-3ad9-49c9-bdae-2f4292c3d0f6" (OuterVolumeSpecName: "glance") pod "52974d79-df13-4786-9815-8c0689c7b8a8" (UID: "52974d79-df13-4786-9815-8c0689c7b8a8"). InnerVolumeSpecName "pvc-dc1039ee-37f6-4d30-bd0b-a1a70b8748c3". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 21:37:22.433441 master-0 kubenswrapper[38936]: I0216 21:37:22.432708 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7f5b0490-0583-4eea-a2b2-b13dc71c83c1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e84d6f8d-3e6f-444e-b77b-01824a84b929\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:22.504536 master-0 kubenswrapper[38936]: I0216 21:37:22.504465 38936 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-dc1039ee-37f6-4d30-bd0b-a1a70b8748c3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^229cca5d-3ad9-49c9-bdae-2f4292c3d0f6\") on node \"master-0\" " Feb 16 21:37:22.549429 master-0 kubenswrapper[38936]: I0216 21:37:22.549377 38936 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 21:37:22.549727 master-0 kubenswrapper[38936]: I0216 21:37:22.549642 38936 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-dc1039ee-37f6-4d30-bd0b-a1a70b8748c3" (UniqueName: "kubernetes.io/csi/topolvm.io^229cca5d-3ad9-49c9-bdae-2f4292c3d0f6") on node "master-0" Feb 16 21:37:22.609003 master-0 kubenswrapper[38936]: I0216 21:37:22.607406 38936 reconciler_common.go:293] "Volume detached for volume \"pvc-dc1039ee-37f6-4d30-bd0b-a1a70b8748c3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^229cca5d-3ad9-49c9-bdae-2f4292c3d0f6\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:22.617067 master-0 kubenswrapper[38936]: I0216 21:37:22.614980 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1d7ec-default-internal-api-0"] Feb 16 21:37:22.638006 master-0 kubenswrapper[38936]: I0216 21:37:22.635472 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-1d7ec-default-internal-api-0"] Feb 16 21:37:22.661042 master-0 
kubenswrapper[38936]: I0216 21:37:22.657949 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:22.677751 master-0 kubenswrapper[38936]: I0216 21:37:22.677563 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1d7ec-default-internal-api-0"] Feb 16 21:37:22.678621 master-0 kubenswrapper[38936]: E0216 21:37:22.678384 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8165ae6f-f1ef-400e-aa27-f0dff9ed5a15" containerName="init" Feb 16 21:37:22.678621 master-0 kubenswrapper[38936]: I0216 21:37:22.678552 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="8165ae6f-f1ef-400e-aa27-f0dff9ed5a15" containerName="init" Feb 16 21:37:22.678621 master-0 kubenswrapper[38936]: E0216 21:37:22.678614 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b54cd98b-c06e-486b-951e-93d5c8477416" containerName="mariadb-database-create" Feb 16 21:37:22.678765 master-0 kubenswrapper[38936]: I0216 21:37:22.678623 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="b54cd98b-c06e-486b-951e-93d5c8477416" containerName="mariadb-database-create" Feb 16 21:37:22.678765 master-0 kubenswrapper[38936]: E0216 21:37:22.678638 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00b83769-f6e4-4403-aada-d460148bb289" containerName="mariadb-account-create-update" Feb 16 21:37:22.678765 master-0 kubenswrapper[38936]: I0216 21:37:22.678665 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="00b83769-f6e4-4403-aada-d460148bb289" containerName="mariadb-account-create-update" Feb 16 21:37:22.679068 master-0 kubenswrapper[38936]: I0216 21:37:22.679044 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="00b83769-f6e4-4403-aada-d460148bb289" containerName="mariadb-account-create-update" Feb 16 21:37:22.679108 master-0 kubenswrapper[38936]: I0216 21:37:22.679069 38936 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="b54cd98b-c06e-486b-951e-93d5c8477416" containerName="mariadb-database-create" Feb 16 21:37:22.679146 master-0 kubenswrapper[38936]: I0216 21:37:22.679108 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="8165ae6f-f1ef-400e-aa27-f0dff9ed5a15" containerName="init" Feb 16 21:37:22.680835 master-0 kubenswrapper[38936]: I0216 21:37:22.680804 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:22.683548 master-0 kubenswrapper[38936]: I0216 21:37:22.683447 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-1d7ec-default-internal-config-data" Feb 16 21:37:22.685821 master-0 kubenswrapper[38936]: I0216 21:37:22.685790 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 21:37:22.719250 master-0 kubenswrapper[38936]: I0216 21:37:22.715392 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1d7ec-default-internal-api-0"] Feb 16 21:37:22.822270 master-0 kubenswrapper[38936]: I0216 21:37:22.822189 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-internal-tls-certs\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:22.822270 master-0 kubenswrapper[38936]: I0216 21:37:22.822284 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-scripts\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:22.822609 master-0 
kubenswrapper[38936]: I0216 21:37:22.822314 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-combined-ca-bundle\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:22.822609 master-0 kubenswrapper[38936]: I0216 21:37:22.822405 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-config-data\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:22.822609 master-0 kubenswrapper[38936]: I0216 21:37:22.822433 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krrvp\" (UniqueName: \"kubernetes.io/projected/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-kube-api-access-krrvp\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:22.822609 master-0 kubenswrapper[38936]: I0216 21:37:22.822469 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dc1039ee-37f6-4d30-bd0b-a1a70b8748c3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^229cca5d-3ad9-49c9-bdae-2f4292c3d0f6\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:22.822609 master-0 kubenswrapper[38936]: I0216 21:37:22.822607 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-logs\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:22.823332 master-0 kubenswrapper[38936]: I0216 21:37:22.822686 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-httpd-run\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:22.925317 master-0 kubenswrapper[38936]: I0216 21:37:22.925214 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-logs\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:22.925317 master-0 kubenswrapper[38936]: I0216 21:37:22.925321 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-httpd-run\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:22.925579 master-0 kubenswrapper[38936]: I0216 21:37:22.925383 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-internal-tls-certs\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:22.925579 master-0 kubenswrapper[38936]: I0216 21:37:22.925424 38936 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-scripts\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:22.925579 master-0 kubenswrapper[38936]: I0216 21:37:22.925445 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-combined-ca-bundle\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:22.925579 master-0 kubenswrapper[38936]: I0216 21:37:22.925501 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-config-data\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:22.925579 master-0 kubenswrapper[38936]: I0216 21:37:22.925524 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krrvp\" (UniqueName: \"kubernetes.io/projected/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-kube-api-access-krrvp\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:22.925579 master-0 kubenswrapper[38936]: I0216 21:37:22.925551 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dc1039ee-37f6-4d30-bd0b-a1a70b8748c3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^229cca5d-3ad9-49c9-bdae-2f4292c3d0f6\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 
21:37:22.926554 master-0 kubenswrapper[38936]: I0216 21:37:22.926509 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-logs\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:22.928302 master-0 kubenswrapper[38936]: I0216 21:37:22.927733 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-httpd-run\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:22.932675 master-0 kubenswrapper[38936]: I0216 21:37:22.932063 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-scripts\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:22.933224 master-0 kubenswrapper[38936]: I0216 21:37:22.933192 38936 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 21:37:22.933290 master-0 kubenswrapper[38936]: I0216 21:37:22.933226 38936 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dc1039ee-37f6-4d30-bd0b-a1a70b8748c3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^229cca5d-3ad9-49c9-bdae-2f4292c3d0f6\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/1b887ae194b0900377497cd58f52b4420ec6f7ec05c5eb1852be55020074fcad/globalmount\"" pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:22.938499 master-0 kubenswrapper[38936]: I0216 21:37:22.937915 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-config-data\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:22.938499 master-0 kubenswrapper[38936]: I0216 21:37:22.938068 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-combined-ca-bundle\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:22.938625 master-0 kubenswrapper[38936]: I0216 21:37:22.938586 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-internal-tls-certs\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:22.951041 master-0 kubenswrapper[38936]: I0216 21:37:22.950982 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krrvp\" (UniqueName: 
\"kubernetes.io/projected/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-kube-api-access-krrvp\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:23.910621 master-0 kubenswrapper[38936]: I0216 21:37:23.910526 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52974d79-df13-4786-9815-8c0689c7b8a8" path="/var/lib/kubelet/pods/52974d79-df13-4786-9815-8c0689c7b8a8/volumes" Feb 16 21:37:23.963546 master-0 kubenswrapper[38936]: I0216 21:37:23.963474 38936 generic.go:334] "Generic (PLEG): container finished" podID="432e676b-bab7-4991-8605-0eed0c6a0b2c" containerID="00a4fc3bbf18bc3cb9537dd8d4ec9038d4f790fa7ba1dea9e33affee71ef2a28" exitCode=0 Feb 16 21:37:23.963546 master-0 kubenswrapper[38936]: I0216 21:37:23.963528 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xxk4w" event={"ID":"432e676b-bab7-4991-8605-0eed0c6a0b2c","Type":"ContainerDied","Data":"00a4fc3bbf18bc3cb9537dd8d4ec9038d4f790fa7ba1dea9e33affee71ef2a28"} Feb 16 21:37:24.302699 master-0 kubenswrapper[38936]: I0216 21:37:24.300550 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dc1039ee-37f6-4d30-bd0b-a1a70b8748c3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^229cca5d-3ad9-49c9-bdae-2f4292c3d0f6\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:24.514941 master-0 kubenswrapper[38936]: I0216 21:37:24.514879 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1d7ec-default-external-api-0"] Feb 16 21:37:24.524579 master-0 kubenswrapper[38936]: I0216 21:37:24.524530 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:24.990268 master-0 kubenswrapper[38936]: I0216 21:37:24.990137 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7xpzq" event={"ID":"eca578f4-3095-4770-9cdf-5702cdf8540b","Type":"ContainerStarted","Data":"9444f9aa2cbbf7b0f81da6a5fef4c6aa0d5757d030fe3d677ffbd067dd13ce8e"} Feb 16 21:37:24.993856 master-0 kubenswrapper[38936]: I0216 21:37:24.993748 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d7ec-default-external-api-0" event={"ID":"8318eb20-824e-49c4-87b3-36784a1fc4db","Type":"ContainerStarted","Data":"8a6dccdaf65fc9444bc3a4f5eb91f94e0ad72ed6e74bba96a27684b9e6f5378f"} Feb 16 21:37:25.118057 master-0 kubenswrapper[38936]: I0216 21:37:25.117807 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-7xpzq" podStartSLOduration=3.070221878 podStartE2EDuration="8.117746193s" podCreationTimestamp="2026-02-16 21:37:17 +0000 UTC" firstStartedPulling="2026-02-16 21:37:18.92157315 +0000 UTC m=+869.273576512" lastFinishedPulling="2026-02-16 21:37:23.969097465 +0000 UTC m=+874.321100827" observedRunningTime="2026-02-16 21:37:25.109250663 +0000 UTC m=+875.461254025" watchObservedRunningTime="2026-02-16 21:37:25.117746193 +0000 UTC m=+875.469749555" Feb 16 21:37:25.187152 master-0 kubenswrapper[38936]: I0216 21:37:25.185782 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1d7ec-default-internal-api-0"] Feb 16 21:37:25.197875 master-0 kubenswrapper[38936]: W0216 21:37:25.197796 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16c40a4d_e01e_40ac_bd7e_c7056d2392f4.slice/crio-bfcd4dea2b2198d69ca0e3731733e72688b4186acb304307c7b054053c5a6843 WatchSource:0}: Error finding container bfcd4dea2b2198d69ca0e3731733e72688b4186acb304307c7b054053c5a6843: Status 404 
returned error can't find the container with id bfcd4dea2b2198d69ca0e3731733e72688b4186acb304307c7b054053c5a6843 Feb 16 21:37:25.578564 master-0 kubenswrapper[38936]: I0216 21:37:25.578488 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xxk4w" Feb 16 21:37:25.730922 master-0 kubenswrapper[38936]: I0216 21:37:25.730873 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-credential-keys\") pod \"432e676b-bab7-4991-8605-0eed0c6a0b2c\" (UID: \"432e676b-bab7-4991-8605-0eed0c6a0b2c\") " Feb 16 21:37:25.731094 master-0 kubenswrapper[38936]: I0216 21:37:25.730999 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-combined-ca-bundle\") pod \"432e676b-bab7-4991-8605-0eed0c6a0b2c\" (UID: \"432e676b-bab7-4991-8605-0eed0c6a0b2c\") " Feb 16 21:37:25.731094 master-0 kubenswrapper[38936]: I0216 21:37:25.731025 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h86dc\" (UniqueName: \"kubernetes.io/projected/432e676b-bab7-4991-8605-0eed0c6a0b2c-kube-api-access-h86dc\") pod \"432e676b-bab7-4991-8605-0eed0c6a0b2c\" (UID: \"432e676b-bab7-4991-8605-0eed0c6a0b2c\") " Feb 16 21:37:25.731094 master-0 kubenswrapper[38936]: I0216 21:37:25.731068 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-config-data\") pod \"432e676b-bab7-4991-8605-0eed0c6a0b2c\" (UID: \"432e676b-bab7-4991-8605-0eed0c6a0b2c\") " Feb 16 21:37:25.731203 master-0 kubenswrapper[38936]: I0216 21:37:25.731195 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-scripts\") pod \"432e676b-bab7-4991-8605-0eed0c6a0b2c\" (UID: \"432e676b-bab7-4991-8605-0eed0c6a0b2c\") " Feb 16 21:37:25.731264 master-0 kubenswrapper[38936]: I0216 21:37:25.731249 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-fernet-keys\") pod \"432e676b-bab7-4991-8605-0eed0c6a0b2c\" (UID: \"432e676b-bab7-4991-8605-0eed0c6a0b2c\") " Feb 16 21:37:25.737056 master-0 kubenswrapper[38936]: I0216 21:37:25.736966 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/432e676b-bab7-4991-8605-0eed0c6a0b2c-kube-api-access-h86dc" (OuterVolumeSpecName: "kube-api-access-h86dc") pod "432e676b-bab7-4991-8605-0eed0c6a0b2c" (UID: "432e676b-bab7-4991-8605-0eed0c6a0b2c"). InnerVolumeSpecName "kube-api-access-h86dc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:37:25.737202 master-0 kubenswrapper[38936]: I0216 21:37:25.737105 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "432e676b-bab7-4991-8605-0eed0c6a0b2c" (UID: "432e676b-bab7-4991-8605-0eed0c6a0b2c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:25.737240 master-0 kubenswrapper[38936]: I0216 21:37:25.737218 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-scripts" (OuterVolumeSpecName: "scripts") pod "432e676b-bab7-4991-8605-0eed0c6a0b2c" (UID: "432e676b-bab7-4991-8605-0eed0c6a0b2c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:25.740149 master-0 kubenswrapper[38936]: I0216 21:37:25.740096 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "432e676b-bab7-4991-8605-0eed0c6a0b2c" (UID: "432e676b-bab7-4991-8605-0eed0c6a0b2c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:25.828180 master-0 kubenswrapper[38936]: I0216 21:37:25.828120 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "432e676b-bab7-4991-8605-0eed0c6a0b2c" (UID: "432e676b-bab7-4991-8605-0eed0c6a0b2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:25.833685 master-0 kubenswrapper[38936]: I0216 21:37:25.833528 38936 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-credential-keys\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:25.833685 master-0 kubenswrapper[38936]: I0216 21:37:25.833572 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:25.833685 master-0 kubenswrapper[38936]: I0216 21:37:25.833584 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h86dc\" (UniqueName: \"kubernetes.io/projected/432e676b-bab7-4991-8605-0eed0c6a0b2c-kube-api-access-h86dc\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:25.833685 master-0 kubenswrapper[38936]: I0216 21:37:25.833596 38936 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:25.833685 master-0 kubenswrapper[38936]: I0216 21:37:25.833605 38936 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-fernet-keys\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:25.848577 master-0 kubenswrapper[38936]: I0216 21:37:25.848516 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-config-data" (OuterVolumeSpecName: "config-data") pod "432e676b-bab7-4991-8605-0eed0c6a0b2c" (UID: "432e676b-bab7-4991-8605-0eed0c6a0b2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:25.937381 master-0 kubenswrapper[38936]: I0216 21:37:25.937319 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/432e676b-bab7-4991-8605-0eed0c6a0b2c-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:26.012070 master-0 kubenswrapper[38936]: I0216 21:37:26.011985 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d7ec-default-external-api-0" event={"ID":"8318eb20-824e-49c4-87b3-36784a1fc4db","Type":"ContainerStarted","Data":"157e64f8cf2685cad3ffbb0a0891c2fa817cc2ba34fb2b694cca8cb0f39044c0"} Feb 16 21:37:26.012769 master-0 kubenswrapper[38936]: I0216 21:37:26.012080 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d7ec-default-external-api-0" event={"ID":"8318eb20-824e-49c4-87b3-36784a1fc4db","Type":"ContainerStarted","Data":"6f4e8719f5527ad2ee86d1f241ab0cc69e1583c0ea0856d330823fb8bceaa9db"} Feb 16 21:37:26.014128 master-0 kubenswrapper[38936]: I0216 21:37:26.014093 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xxk4w" 
event={"ID":"432e676b-bab7-4991-8605-0eed0c6a0b2c","Type":"ContainerDied","Data":"cbd535731474689f65c86c54ac7d517e49df5ea47d09bd8efd076347a66352e3"} Feb 16 21:37:26.014196 master-0 kubenswrapper[38936]: I0216 21:37:26.014130 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xxk4w" Feb 16 21:37:26.014287 master-0 kubenswrapper[38936]: I0216 21:37:26.014138 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbd535731474689f65c86c54ac7d517e49df5ea47d09bd8efd076347a66352e3" Feb 16 21:37:26.016156 master-0 kubenswrapper[38936]: I0216 21:37:26.016101 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d7ec-default-internal-api-0" event={"ID":"16c40a4d-e01e-40ac-bd7e-c7056d2392f4","Type":"ContainerStarted","Data":"39234f0cc3943afbdb3f4dfd525bd2d427fe5085c005694387daf45e1d641373"} Feb 16 21:37:26.016240 master-0 kubenswrapper[38936]: I0216 21:37:26.016162 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d7ec-default-internal-api-0" event={"ID":"16c40a4d-e01e-40ac-bd7e-c7056d2392f4","Type":"ContainerStarted","Data":"bfcd4dea2b2198d69ca0e3731733e72688b4186acb304307c7b054053c5a6843"} Feb 16 21:37:26.057263 master-0 kubenswrapper[38936]: I0216 21:37:26.057075 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-1d7ec-default-external-api-0" podStartSLOduration=7.057055276 podStartE2EDuration="7.057055276s" podCreationTimestamp="2026-02-16 21:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:37:26.040633253 +0000 UTC m=+876.392636615" watchObservedRunningTime="2026-02-16 21:37:26.057055276 +0000 UTC m=+876.409058638" Feb 16 21:37:26.174004 master-0 kubenswrapper[38936]: I0216 21:37:26.172289 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-bootstrap-xxk4w"] Feb 16 21:37:26.182244 master-0 kubenswrapper[38936]: I0216 21:37:26.182155 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xxk4w"] Feb 16 21:37:26.298621 master-0 kubenswrapper[38936]: I0216 21:37:26.298560 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-t4jt7"] Feb 16 21:37:26.299180 master-0 kubenswrapper[38936]: E0216 21:37:26.299153 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="432e676b-bab7-4991-8605-0eed0c6a0b2c" containerName="keystone-bootstrap" Feb 16 21:37:26.299180 master-0 kubenswrapper[38936]: I0216 21:37:26.299174 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="432e676b-bab7-4991-8605-0eed0c6a0b2c" containerName="keystone-bootstrap" Feb 16 21:37:26.299472 master-0 kubenswrapper[38936]: I0216 21:37:26.299448 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="432e676b-bab7-4991-8605-0eed0c6a0b2c" containerName="keystone-bootstrap" Feb 16 21:37:26.300391 master-0 kubenswrapper[38936]: I0216 21:37:26.300356 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-t4jt7" Feb 16 21:37:26.302691 master-0 kubenswrapper[38936]: I0216 21:37:26.302638 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 21:37:26.302783 master-0 kubenswrapper[38936]: I0216 21:37:26.302686 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 21:37:26.302783 master-0 kubenswrapper[38936]: I0216 21:37:26.302659 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 21:37:26.302881 master-0 kubenswrapper[38936]: I0216 21:37:26.302852 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 21:37:26.341547 master-0 kubenswrapper[38936]: I0216 21:37:26.341475 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-t4jt7"] Feb 16 21:37:26.458042 master-0 kubenswrapper[38936]: I0216 21:37:26.457992 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-fernet-keys\") pod \"keystone-bootstrap-t4jt7\" (UID: \"b36f972a-247f-43a5-bf98-e27ab216ed04\") " pod="openstack/keystone-bootstrap-t4jt7" Feb 16 21:37:26.458245 master-0 kubenswrapper[38936]: I0216 21:37:26.458059 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-scripts\") pod \"keystone-bootstrap-t4jt7\" (UID: \"b36f972a-247f-43a5-bf98-e27ab216ed04\") " pod="openstack/keystone-bootstrap-t4jt7" Feb 16 21:37:26.458245 master-0 kubenswrapper[38936]: I0216 21:37:26.458119 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-credential-keys\") pod \"keystone-bootstrap-t4jt7\" (UID: \"b36f972a-247f-43a5-bf98-e27ab216ed04\") " pod="openstack/keystone-bootstrap-t4jt7" Feb 16 21:37:26.458245 master-0 kubenswrapper[38936]: I0216 21:37:26.458199 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-config-data\") pod \"keystone-bootstrap-t4jt7\" (UID: \"b36f972a-247f-43a5-bf98-e27ab216ed04\") " pod="openstack/keystone-bootstrap-t4jt7" Feb 16 21:37:26.458401 master-0 kubenswrapper[38936]: I0216 21:37:26.458287 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-combined-ca-bundle\") pod \"keystone-bootstrap-t4jt7\" (UID: \"b36f972a-247f-43a5-bf98-e27ab216ed04\") " pod="openstack/keystone-bootstrap-t4jt7" Feb 16 21:37:26.458401 master-0 kubenswrapper[38936]: I0216 21:37:26.458381 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnjck\" (UniqueName: \"kubernetes.io/projected/b36f972a-247f-43a5-bf98-e27ab216ed04-kube-api-access-fnjck\") pod \"keystone-bootstrap-t4jt7\" (UID: \"b36f972a-247f-43a5-bf98-e27ab216ed04\") " pod="openstack/keystone-bootstrap-t4jt7" Feb 16 21:37:26.561928 master-0 kubenswrapper[38936]: I0216 21:37:26.561673 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-credential-keys\") pod \"keystone-bootstrap-t4jt7\" (UID: \"b36f972a-247f-43a5-bf98-e27ab216ed04\") " pod="openstack/keystone-bootstrap-t4jt7" Feb 16 21:37:26.562145 master-0 kubenswrapper[38936]: I0216 21:37:26.561949 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-config-data\") pod \"keystone-bootstrap-t4jt7\" (UID: \"b36f972a-247f-43a5-bf98-e27ab216ed04\") " pod="openstack/keystone-bootstrap-t4jt7" Feb 16 21:37:26.562145 master-0 kubenswrapper[38936]: I0216 21:37:26.562022 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-combined-ca-bundle\") pod \"keystone-bootstrap-t4jt7\" (UID: \"b36f972a-247f-43a5-bf98-e27ab216ed04\") " pod="openstack/keystone-bootstrap-t4jt7" Feb 16 21:37:26.562145 master-0 kubenswrapper[38936]: I0216 21:37:26.562082 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnjck\" (UniqueName: \"kubernetes.io/projected/b36f972a-247f-43a5-bf98-e27ab216ed04-kube-api-access-fnjck\") pod \"keystone-bootstrap-t4jt7\" (UID: \"b36f972a-247f-43a5-bf98-e27ab216ed04\") " pod="openstack/keystone-bootstrap-t4jt7" Feb 16 21:37:26.562436 master-0 kubenswrapper[38936]: I0216 21:37:26.562401 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-fernet-keys\") pod \"keystone-bootstrap-t4jt7\" (UID: \"b36f972a-247f-43a5-bf98-e27ab216ed04\") " pod="openstack/keystone-bootstrap-t4jt7" Feb 16 21:37:26.562499 master-0 kubenswrapper[38936]: I0216 21:37:26.562479 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-scripts\") pod \"keystone-bootstrap-t4jt7\" (UID: \"b36f972a-247f-43a5-bf98-e27ab216ed04\") " pod="openstack/keystone-bootstrap-t4jt7" Feb 16 21:37:26.565724 master-0 kubenswrapper[38936]: I0216 21:37:26.565583 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-credential-keys\") pod \"keystone-bootstrap-t4jt7\" (UID: \"b36f972a-247f-43a5-bf98-e27ab216ed04\") " pod="openstack/keystone-bootstrap-t4jt7" Feb 16 21:37:26.567223 master-0 kubenswrapper[38936]: I0216 21:37:26.567184 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-config-data\") pod \"keystone-bootstrap-t4jt7\" (UID: \"b36f972a-247f-43a5-bf98-e27ab216ed04\") " pod="openstack/keystone-bootstrap-t4jt7" Feb 16 21:37:26.568088 master-0 kubenswrapper[38936]: I0216 21:37:26.568049 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-combined-ca-bundle\") pod \"keystone-bootstrap-t4jt7\" (UID: \"b36f972a-247f-43a5-bf98-e27ab216ed04\") " pod="openstack/keystone-bootstrap-t4jt7" Feb 16 21:37:26.570398 master-0 kubenswrapper[38936]: I0216 21:37:26.570354 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-scripts\") pod \"keystone-bootstrap-t4jt7\" (UID: \"b36f972a-247f-43a5-bf98-e27ab216ed04\") " pod="openstack/keystone-bootstrap-t4jt7" Feb 16 21:37:26.573308 master-0 kubenswrapper[38936]: I0216 21:37:26.573255 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-fernet-keys\") pod \"keystone-bootstrap-t4jt7\" (UID: \"b36f972a-247f-43a5-bf98-e27ab216ed04\") " pod="openstack/keystone-bootstrap-t4jt7" Feb 16 21:37:26.581671 master-0 kubenswrapper[38936]: I0216 21:37:26.581609 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnjck\" (UniqueName: \"kubernetes.io/projected/b36f972a-247f-43a5-bf98-e27ab216ed04-kube-api-access-fnjck\") 
pod \"keystone-bootstrap-t4jt7\" (UID: \"b36f972a-247f-43a5-bf98-e27ab216ed04\") " pod="openstack/keystone-bootstrap-t4jt7" Feb 16 21:37:26.623569 master-0 kubenswrapper[38936]: I0216 21:37:26.623500 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-t4jt7" Feb 16 21:37:27.034697 master-0 kubenswrapper[38936]: I0216 21:37:27.034600 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d7ec-default-internal-api-0" event={"ID":"16c40a4d-e01e-40ac-bd7e-c7056d2392f4","Type":"ContainerStarted","Data":"06bf1f0bf2a217af9bd9e3932057f9c868db7c343ac1afea3392c2c3484b6523"} Feb 16 21:37:27.080177 master-0 kubenswrapper[38936]: I0216 21:37:27.080086 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-1d7ec-default-internal-api-0" podStartSLOduration=5.08006023 podStartE2EDuration="5.08006023s" podCreationTimestamp="2026-02-16 21:37:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:37:27.066866883 +0000 UTC m=+877.418870255" watchObservedRunningTime="2026-02-16 21:37:27.08006023 +0000 UTC m=+877.432063592" Feb 16 21:37:27.112317 master-0 kubenswrapper[38936]: I0216 21:37:27.112243 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-t4jt7"] Feb 16 21:37:27.117530 master-0 kubenswrapper[38936]: W0216 21:37:27.117467 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb36f972a_247f_43a5_bf98_e27ab216ed04.slice/crio-eac6672232f007d62aaec1d2ed091ea7462ae6eb5c759e4f2786427b0af180d9 WatchSource:0}: Error finding container eac6672232f007d62aaec1d2ed091ea7462ae6eb5c759e4f2786427b0af180d9: Status 404 returned error can't find the container with id eac6672232f007d62aaec1d2ed091ea7462ae6eb5c759e4f2786427b0af180d9 Feb 16 21:37:27.382826 
master-0 kubenswrapper[38936]: I0216 21:37:27.382397 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-sync-nzcsn"] Feb 16 21:37:27.384705 master-0 kubenswrapper[38936]: I0216 21:37:27.384620 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-nzcsn" Feb 16 21:37:27.395371 master-0 kubenswrapper[38936]: I0216 21:37:27.392979 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-scripts" Feb 16 21:37:27.395371 master-0 kubenswrapper[38936]: I0216 21:37:27.393088 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Feb 16 21:37:27.405447 master-0 kubenswrapper[38936]: I0216 21:37:27.405379 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-nzcsn"] Feb 16 21:37:27.492753 master-0 kubenswrapper[38936]: I0216 21:37:27.489490 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5b1ea749-0e13-47db-bd37-4f269f872a0b-etc-podinfo\") pod \"ironic-db-sync-nzcsn\" (UID: \"5b1ea749-0e13-47db-bd37-4f269f872a0b\") " pod="openstack/ironic-db-sync-nzcsn" Feb 16 21:37:27.492753 master-0 kubenswrapper[38936]: I0216 21:37:27.489671 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5b1ea749-0e13-47db-bd37-4f269f872a0b-config-data-merged\") pod \"ironic-db-sync-nzcsn\" (UID: \"5b1ea749-0e13-47db-bd37-4f269f872a0b\") " pod="openstack/ironic-db-sync-nzcsn" Feb 16 21:37:27.492753 master-0 kubenswrapper[38936]: I0216 21:37:27.489714 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjg4f\" (UniqueName: \"kubernetes.io/projected/5b1ea749-0e13-47db-bd37-4f269f872a0b-kube-api-access-zjg4f\") pod 
\"ironic-db-sync-nzcsn\" (UID: \"5b1ea749-0e13-47db-bd37-4f269f872a0b\") " pod="openstack/ironic-db-sync-nzcsn" Feb 16 21:37:27.492753 master-0 kubenswrapper[38936]: I0216 21:37:27.489937 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b1ea749-0e13-47db-bd37-4f269f872a0b-config-data\") pod \"ironic-db-sync-nzcsn\" (UID: \"5b1ea749-0e13-47db-bd37-4f269f872a0b\") " pod="openstack/ironic-db-sync-nzcsn" Feb 16 21:37:27.492753 master-0 kubenswrapper[38936]: I0216 21:37:27.490234 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b1ea749-0e13-47db-bd37-4f269f872a0b-combined-ca-bundle\") pod \"ironic-db-sync-nzcsn\" (UID: \"5b1ea749-0e13-47db-bd37-4f269f872a0b\") " pod="openstack/ironic-db-sync-nzcsn" Feb 16 21:37:27.492753 master-0 kubenswrapper[38936]: I0216 21:37:27.490333 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b1ea749-0e13-47db-bd37-4f269f872a0b-scripts\") pod \"ironic-db-sync-nzcsn\" (UID: \"5b1ea749-0e13-47db-bd37-4f269f872a0b\") " pod="openstack/ironic-db-sync-nzcsn" Feb 16 21:37:27.592314 master-0 kubenswrapper[38936]: I0216 21:37:27.592153 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5b1ea749-0e13-47db-bd37-4f269f872a0b-config-data-merged\") pod \"ironic-db-sync-nzcsn\" (UID: \"5b1ea749-0e13-47db-bd37-4f269f872a0b\") " pod="openstack/ironic-db-sync-nzcsn" Feb 16 21:37:27.592314 master-0 kubenswrapper[38936]: I0216 21:37:27.592234 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjg4f\" (UniqueName: \"kubernetes.io/projected/5b1ea749-0e13-47db-bd37-4f269f872a0b-kube-api-access-zjg4f\") pod 
\"ironic-db-sync-nzcsn\" (UID: \"5b1ea749-0e13-47db-bd37-4f269f872a0b\") " pod="openstack/ironic-db-sync-nzcsn" Feb 16 21:37:27.592851 master-0 kubenswrapper[38936]: I0216 21:37:27.592804 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5b1ea749-0e13-47db-bd37-4f269f872a0b-config-data-merged\") pod \"ironic-db-sync-nzcsn\" (UID: \"5b1ea749-0e13-47db-bd37-4f269f872a0b\") " pod="openstack/ironic-db-sync-nzcsn" Feb 16 21:37:27.592937 master-0 kubenswrapper[38936]: I0216 21:37:27.592893 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b1ea749-0e13-47db-bd37-4f269f872a0b-config-data\") pod \"ironic-db-sync-nzcsn\" (UID: \"5b1ea749-0e13-47db-bd37-4f269f872a0b\") " pod="openstack/ironic-db-sync-nzcsn" Feb 16 21:37:27.593353 master-0 kubenswrapper[38936]: I0216 21:37:27.593255 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b1ea749-0e13-47db-bd37-4f269f872a0b-combined-ca-bundle\") pod \"ironic-db-sync-nzcsn\" (UID: \"5b1ea749-0e13-47db-bd37-4f269f872a0b\") " pod="openstack/ironic-db-sync-nzcsn" Feb 16 21:37:27.593430 master-0 kubenswrapper[38936]: I0216 21:37:27.593409 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b1ea749-0e13-47db-bd37-4f269f872a0b-scripts\") pod \"ironic-db-sync-nzcsn\" (UID: \"5b1ea749-0e13-47db-bd37-4f269f872a0b\") " pod="openstack/ironic-db-sync-nzcsn" Feb 16 21:37:27.593613 master-0 kubenswrapper[38936]: I0216 21:37:27.593592 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5b1ea749-0e13-47db-bd37-4f269f872a0b-etc-podinfo\") pod \"ironic-db-sync-nzcsn\" (UID: \"5b1ea749-0e13-47db-bd37-4f269f872a0b\") " 
pod="openstack/ironic-db-sync-nzcsn" Feb 16 21:37:27.596965 master-0 kubenswrapper[38936]: I0216 21:37:27.596935 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b1ea749-0e13-47db-bd37-4f269f872a0b-combined-ca-bundle\") pod \"ironic-db-sync-nzcsn\" (UID: \"5b1ea749-0e13-47db-bd37-4f269f872a0b\") " pod="openstack/ironic-db-sync-nzcsn" Feb 16 21:37:27.597425 master-0 kubenswrapper[38936]: I0216 21:37:27.597388 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b1ea749-0e13-47db-bd37-4f269f872a0b-scripts\") pod \"ironic-db-sync-nzcsn\" (UID: \"5b1ea749-0e13-47db-bd37-4f269f872a0b\") " pod="openstack/ironic-db-sync-nzcsn" Feb 16 21:37:27.597644 master-0 kubenswrapper[38936]: I0216 21:37:27.597622 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5b1ea749-0e13-47db-bd37-4f269f872a0b-etc-podinfo\") pod \"ironic-db-sync-nzcsn\" (UID: \"5b1ea749-0e13-47db-bd37-4f269f872a0b\") " pod="openstack/ironic-db-sync-nzcsn" Feb 16 21:37:27.601447 master-0 kubenswrapper[38936]: I0216 21:37:27.599501 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b1ea749-0e13-47db-bd37-4f269f872a0b-config-data\") pod \"ironic-db-sync-nzcsn\" (UID: \"5b1ea749-0e13-47db-bd37-4f269f872a0b\") " pod="openstack/ironic-db-sync-nzcsn" Feb 16 21:37:27.612808 master-0 kubenswrapper[38936]: I0216 21:37:27.612764 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjg4f\" (UniqueName: \"kubernetes.io/projected/5b1ea749-0e13-47db-bd37-4f269f872a0b-kube-api-access-zjg4f\") pod \"ironic-db-sync-nzcsn\" (UID: \"5b1ea749-0e13-47db-bd37-4f269f872a0b\") " pod="openstack/ironic-db-sync-nzcsn" Feb 16 21:37:27.726754 master-0 kubenswrapper[38936]: I0216 21:37:27.726696 
38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-nzcsn" Feb 16 21:37:27.898217 master-0 kubenswrapper[38936]: I0216 21:37:27.896473 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="432e676b-bab7-4991-8605-0eed0c6a0b2c" path="/var/lib/kubelet/pods/432e676b-bab7-4991-8605-0eed0c6a0b2c/volumes" Feb 16 21:37:28.049751 master-0 kubenswrapper[38936]: I0216 21:37:28.049688 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t4jt7" event={"ID":"b36f972a-247f-43a5-bf98-e27ab216ed04","Type":"ContainerStarted","Data":"eceeec25839a3d4e7215ffc0901f59f65e65dddb4a33cf4900f6cbbe8f4d7b38"} Feb 16 21:37:28.049751 master-0 kubenswrapper[38936]: I0216 21:37:28.049748 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t4jt7" event={"ID":"b36f972a-247f-43a5-bf98-e27ab216ed04","Type":"ContainerStarted","Data":"eac6672232f007d62aaec1d2ed091ea7462ae6eb5c759e4f2786427b0af180d9"} Feb 16 21:37:28.053504 master-0 kubenswrapper[38936]: I0216 21:37:28.053452 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78bc59585f-clvzn" Feb 16 21:37:28.091909 master-0 kubenswrapper[38936]: I0216 21:37:28.091630 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-t4jt7" podStartSLOduration=2.091608464 podStartE2EDuration="2.091608464s" podCreationTimestamp="2026-02-16 21:37:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:37:28.077128833 +0000 UTC m=+878.429132345" watchObservedRunningTime="2026-02-16 21:37:28.091608464 +0000 UTC m=+878.443611826" Feb 16 21:37:28.379266 master-0 kubenswrapper[38936]: I0216 21:37:28.379222 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-nzcsn"] Feb 16 21:37:28.580762 
master-0 kubenswrapper[38936]: I0216 21:37:28.580678 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-665cc5d59f-ngldr"] Feb 16 21:37:28.581045 master-0 kubenswrapper[38936]: I0216 21:37:28.580992 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" podUID="0c96efb0-abf1-496e-adc5-bdef4f9a9d1b" containerName="dnsmasq-dns" containerID="cri-o://cd1aa9428bd03e10b081fbb04aaff88d0fd1bb75b067e1e5ddbd3a82235b968d" gracePeriod=10 Feb 16 21:37:29.076585 master-0 kubenswrapper[38936]: I0216 21:37:29.076513 38936 generic.go:334] "Generic (PLEG): container finished" podID="eca578f4-3095-4770-9cdf-5702cdf8540b" containerID="9444f9aa2cbbf7b0f81da6a5fef4c6aa0d5757d030fe3d677ffbd067dd13ce8e" exitCode=0 Feb 16 21:37:29.077225 master-0 kubenswrapper[38936]: I0216 21:37:29.076602 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7xpzq" event={"ID":"eca578f4-3095-4770-9cdf-5702cdf8540b","Type":"ContainerDied","Data":"9444f9aa2cbbf7b0f81da6a5fef4c6aa0d5757d030fe3d677ffbd067dd13ce8e"} Feb 16 21:37:29.081257 master-0 kubenswrapper[38936]: I0216 21:37:29.081172 38936 generic.go:334] "Generic (PLEG): container finished" podID="0c96efb0-abf1-496e-adc5-bdef4f9a9d1b" containerID="cd1aa9428bd03e10b081fbb04aaff88d0fd1bb75b067e1e5ddbd3a82235b968d" exitCode=0 Feb 16 21:37:29.081488 master-0 kubenswrapper[38936]: I0216 21:37:29.081368 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" event={"ID":"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b","Type":"ContainerDied","Data":"cd1aa9428bd03e10b081fbb04aaff88d0fd1bb75b067e1e5ddbd3a82235b968d"} Feb 16 21:37:29.867588 master-0 kubenswrapper[38936]: I0216 21:37:29.867503 38936 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" podUID="0c96efb0-abf1-496e-adc5-bdef4f9a9d1b" containerName="dnsmasq-dns" probeResult="failure" 
output="dial tcp 10.128.0.198:5353: connect: connection refused" Feb 16 21:37:31.115956 master-0 kubenswrapper[38936]: I0216 21:37:31.115887 38936 generic.go:334] "Generic (PLEG): container finished" podID="b36f972a-247f-43a5-bf98-e27ab216ed04" containerID="eceeec25839a3d4e7215ffc0901f59f65e65dddb4a33cf4900f6cbbe8f4d7b38" exitCode=0 Feb 16 21:37:31.115956 master-0 kubenswrapper[38936]: I0216 21:37:31.115951 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t4jt7" event={"ID":"b36f972a-247f-43a5-bf98-e27ab216ed04","Type":"ContainerDied","Data":"eceeec25839a3d4e7215ffc0901f59f65e65dddb4a33cf4900f6cbbe8f4d7b38"} Feb 16 21:37:32.660378 master-0 kubenswrapper[38936]: I0216 21:37:32.660304 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:32.661085 master-0 kubenswrapper[38936]: I0216 21:37:32.661070 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:32.693935 master-0 kubenswrapper[38936]: I0216 21:37:32.693418 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:32.712279 master-0 kubenswrapper[38936]: I0216 21:37:32.712220 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:33.138151 master-0 kubenswrapper[38936]: I0216 21:37:33.138077 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:33.138151 master-0 kubenswrapper[38936]: I0216 21:37:33.138150 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:34.525718 master-0 kubenswrapper[38936]: I0216 21:37:34.525661 38936 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:34.525718 master-0 kubenswrapper[38936]: I0216 21:37:34.525725 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:34.567798 master-0 kubenswrapper[38936]: I0216 21:37:34.567736 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:34.579013 master-0 kubenswrapper[38936]: I0216 21:37:34.578961 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:34.867879 master-0 kubenswrapper[38936]: I0216 21:37:34.867718 38936 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" podUID="0c96efb0-abf1-496e-adc5-bdef4f9a9d1b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.198:5353: connect: connection refused" Feb 16 21:37:35.160983 master-0 kubenswrapper[38936]: I0216 21:37:35.160828 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:35.161197 master-0 kubenswrapper[38936]: I0216 21:37:35.161105 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:35.164053 master-0 kubenswrapper[38936]: I0216 21:37:35.164006 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:37:35.164511 master-0 kubenswrapper[38936]: I0216 21:37:35.164109 38936 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 21:37:35.278517 master-0 kubenswrapper[38936]: I0216 21:37:35.278408 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-1d7ec-default-external-api-0" Feb 
16 21:37:35.925903 master-0 kubenswrapper[38936]: W0216 21:37:35.925857 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b1ea749_0e13_47db_bd37_4f269f872a0b.slice/crio-b800abc5d700ab672c6a9a9e70764e7efbb13bcb553114058cc9f5df34e3ba5e WatchSource:0}: Error finding container b800abc5d700ab672c6a9a9e70764e7efbb13bcb553114058cc9f5df34e3ba5e: Status 404 returned error can't find the container with id b800abc5d700ab672c6a9a9e70764e7efbb13bcb553114058cc9f5df34e3ba5e Feb 16 21:37:36.037832 master-0 kubenswrapper[38936]: I0216 21:37:36.037765 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7xpzq" Feb 16 21:37:36.048151 master-0 kubenswrapper[38936]: I0216 21:37:36.048094 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-t4jt7" Feb 16 21:37:36.082618 master-0 kubenswrapper[38936]: I0216 21:37:36.082557 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eca578f4-3095-4770-9cdf-5702cdf8540b-logs\") pod \"eca578f4-3095-4770-9cdf-5702cdf8540b\" (UID: \"eca578f4-3095-4770-9cdf-5702cdf8540b\") " Feb 16 21:37:36.082948 master-0 kubenswrapper[38936]: I0216 21:37:36.082828 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkqq4\" (UniqueName: \"kubernetes.io/projected/eca578f4-3095-4770-9cdf-5702cdf8540b-kube-api-access-nkqq4\") pod \"eca578f4-3095-4770-9cdf-5702cdf8540b\" (UID: \"eca578f4-3095-4770-9cdf-5702cdf8540b\") " Feb 16 21:37:36.082948 master-0 kubenswrapper[38936]: I0216 21:37:36.082883 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eca578f4-3095-4770-9cdf-5702cdf8540b-combined-ca-bundle\") pod 
\"eca578f4-3095-4770-9cdf-5702cdf8540b\" (UID: \"eca578f4-3095-4770-9cdf-5702cdf8540b\") " Feb 16 21:37:36.083050 master-0 kubenswrapper[38936]: I0216 21:37:36.082975 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eca578f4-3095-4770-9cdf-5702cdf8540b-scripts\") pod \"eca578f4-3095-4770-9cdf-5702cdf8540b\" (UID: \"eca578f4-3095-4770-9cdf-5702cdf8540b\") " Feb 16 21:37:36.083111 master-0 kubenswrapper[38936]: I0216 21:37:36.083077 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eca578f4-3095-4770-9cdf-5702cdf8540b-config-data\") pod \"eca578f4-3095-4770-9cdf-5702cdf8540b\" (UID: \"eca578f4-3095-4770-9cdf-5702cdf8540b\") " Feb 16 21:37:36.085151 master-0 kubenswrapper[38936]: I0216 21:37:36.084708 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eca578f4-3095-4770-9cdf-5702cdf8540b-logs" (OuterVolumeSpecName: "logs") pod "eca578f4-3095-4770-9cdf-5702cdf8540b" (UID: "eca578f4-3095-4770-9cdf-5702cdf8540b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:37:36.085151 master-0 kubenswrapper[38936]: I0216 21:37:36.085017 38936 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eca578f4-3095-4770-9cdf-5702cdf8540b-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:36.086636 master-0 kubenswrapper[38936]: I0216 21:37:36.086608 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eca578f4-3095-4770-9cdf-5702cdf8540b-kube-api-access-nkqq4" (OuterVolumeSpecName: "kube-api-access-nkqq4") pod "eca578f4-3095-4770-9cdf-5702cdf8540b" (UID: "eca578f4-3095-4770-9cdf-5702cdf8540b"). InnerVolumeSpecName "kube-api-access-nkqq4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:37:36.091884 master-0 kubenswrapper[38936]: I0216 21:37:36.091823 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eca578f4-3095-4770-9cdf-5702cdf8540b-scripts" (OuterVolumeSpecName: "scripts") pod "eca578f4-3095-4770-9cdf-5702cdf8540b" (UID: "eca578f4-3095-4770-9cdf-5702cdf8540b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:36.112274 master-0 kubenswrapper[38936]: I0216 21:37:36.112189 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eca578f4-3095-4770-9cdf-5702cdf8540b-config-data" (OuterVolumeSpecName: "config-data") pod "eca578f4-3095-4770-9cdf-5702cdf8540b" (UID: "eca578f4-3095-4770-9cdf-5702cdf8540b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:36.115063 master-0 kubenswrapper[38936]: I0216 21:37:36.114999 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eca578f4-3095-4770-9cdf-5702cdf8540b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eca578f4-3095-4770-9cdf-5702cdf8540b" (UID: "eca578f4-3095-4770-9cdf-5702cdf8540b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:36.186108 master-0 kubenswrapper[38936]: I0216 21:37:36.185893 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-nzcsn" event={"ID":"5b1ea749-0e13-47db-bd37-4f269f872a0b","Type":"ContainerStarted","Data":"b800abc5d700ab672c6a9a9e70764e7efbb13bcb553114058cc9f5df34e3ba5e"} Feb 16 21:37:36.187913 master-0 kubenswrapper[38936]: I0216 21:37:36.187870 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-scripts\") pod \"b36f972a-247f-43a5-bf98-e27ab216ed04\" (UID: \"b36f972a-247f-43a5-bf98-e27ab216ed04\") " Feb 16 21:37:36.188168 master-0 kubenswrapper[38936]: I0216 21:37:36.188122 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-t4jt7" Feb 16 21:37:36.189431 master-0 kubenswrapper[38936]: I0216 21:37:36.188052 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-t4jt7" event={"ID":"b36f972a-247f-43a5-bf98-e27ab216ed04","Type":"ContainerDied","Data":"eac6672232f007d62aaec1d2ed091ea7462ae6eb5c759e4f2786427b0af180d9"} Feb 16 21:37:36.189431 master-0 kubenswrapper[38936]: I0216 21:37:36.189341 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eac6672232f007d62aaec1d2ed091ea7462ae6eb5c759e4f2786427b0af180d9" Feb 16 21:37:36.192086 master-0 kubenswrapper[38936]: I0216 21:37:36.191830 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7xpzq" Feb 16 21:37:36.192086 master-0 kubenswrapper[38936]: I0216 21:37:36.191833 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7xpzq" event={"ID":"eca578f4-3095-4770-9cdf-5702cdf8540b","Type":"ContainerDied","Data":"d47ee8cbdf7acec7e2797cac38a82ae59edf95b65b9d640590c9cd65c2f242bc"} Feb 16 21:37:36.192086 master-0 kubenswrapper[38936]: I0216 21:37:36.191989 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d47ee8cbdf7acec7e2797cac38a82ae59edf95b65b9d640590c9cd65c2f242bc" Feb 16 21:37:36.195878 master-0 kubenswrapper[38936]: I0216 21:37:36.192683 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnjck\" (UniqueName: \"kubernetes.io/projected/b36f972a-247f-43a5-bf98-e27ab216ed04-kube-api-access-fnjck\") pod \"b36f972a-247f-43a5-bf98-e27ab216ed04\" (UID: \"b36f972a-247f-43a5-bf98-e27ab216ed04\") " Feb 16 21:37:36.195878 master-0 kubenswrapper[38936]: I0216 21:37:36.192723 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-combined-ca-bundle\") pod \"b36f972a-247f-43a5-bf98-e27ab216ed04\" (UID: \"b36f972a-247f-43a5-bf98-e27ab216ed04\") " Feb 16 21:37:36.195878 master-0 kubenswrapper[38936]: I0216 21:37:36.192762 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-fernet-keys\") pod \"b36f972a-247f-43a5-bf98-e27ab216ed04\" (UID: \"b36f972a-247f-43a5-bf98-e27ab216ed04\") " Feb 16 21:37:36.195878 master-0 kubenswrapper[38936]: I0216 21:37:36.192863 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-config-data\") pod 
\"b36f972a-247f-43a5-bf98-e27ab216ed04\" (UID: \"b36f972a-247f-43a5-bf98-e27ab216ed04\") " Feb 16 21:37:36.195878 master-0 kubenswrapper[38936]: I0216 21:37:36.192941 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-credential-keys\") pod \"b36f972a-247f-43a5-bf98-e27ab216ed04\" (UID: \"b36f972a-247f-43a5-bf98-e27ab216ed04\") " Feb 16 21:37:36.195878 master-0 kubenswrapper[38936]: I0216 21:37:36.193938 38936 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eca578f4-3095-4770-9cdf-5702cdf8540b-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:36.195878 master-0 kubenswrapper[38936]: I0216 21:37:36.193954 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eca578f4-3095-4770-9cdf-5702cdf8540b-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:36.195878 master-0 kubenswrapper[38936]: I0216 21:37:36.193966 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkqq4\" (UniqueName: \"kubernetes.io/projected/eca578f4-3095-4770-9cdf-5702cdf8540b-kube-api-access-nkqq4\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:36.195878 master-0 kubenswrapper[38936]: I0216 21:37:36.193977 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eca578f4-3095-4770-9cdf-5702cdf8540b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:36.233854 master-0 kubenswrapper[38936]: I0216 21:37:36.233640 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-scripts" (OuterVolumeSpecName: "scripts") pod "b36f972a-247f-43a5-bf98-e27ab216ed04" (UID: "b36f972a-247f-43a5-bf98-e27ab216ed04"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:36.233854 master-0 kubenswrapper[38936]: I0216 21:37:36.233640 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b36f972a-247f-43a5-bf98-e27ab216ed04-kube-api-access-fnjck" (OuterVolumeSpecName: "kube-api-access-fnjck") pod "b36f972a-247f-43a5-bf98-e27ab216ed04" (UID: "b36f972a-247f-43a5-bf98-e27ab216ed04"). InnerVolumeSpecName "kube-api-access-fnjck". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:37:36.234133 master-0 kubenswrapper[38936]: I0216 21:37:36.234036 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b36f972a-247f-43a5-bf98-e27ab216ed04" (UID: "b36f972a-247f-43a5-bf98-e27ab216ed04"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:36.257463 master-0 kubenswrapper[38936]: I0216 21:37:36.256765 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b36f972a-247f-43a5-bf98-e27ab216ed04" (UID: "b36f972a-247f-43a5-bf98-e27ab216ed04"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:36.276831 master-0 kubenswrapper[38936]: I0216 21:37:36.276620 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b36f972a-247f-43a5-bf98-e27ab216ed04" (UID: "b36f972a-247f-43a5-bf98-e27ab216ed04"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:36.285686 master-0 kubenswrapper[38936]: I0216 21:37:36.285451 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-config-data" (OuterVolumeSpecName: "config-data") pod "b36f972a-247f-43a5-bf98-e27ab216ed04" (UID: "b36f972a-247f-43a5-bf98-e27ab216ed04"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:36.300867 master-0 kubenswrapper[38936]: I0216 21:37:36.300807 38936 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-credential-keys\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:36.301791 master-0 kubenswrapper[38936]: I0216 21:37:36.301776 38936 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:36.302141 master-0 kubenswrapper[38936]: I0216 21:37:36.302123 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnjck\" (UniqueName: \"kubernetes.io/projected/b36f972a-247f-43a5-bf98-e27ab216ed04-kube-api-access-fnjck\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:36.302476 master-0 kubenswrapper[38936]: I0216 21:37:36.302459 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:36.302701 master-0 kubenswrapper[38936]: I0216 21:37:36.302683 38936 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-fernet-keys\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:36.309921 master-0 kubenswrapper[38936]: I0216 21:37:36.309887 
38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b36f972a-247f-43a5-bf98-e27ab216ed04-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:37.673669 master-0 kubenswrapper[38936]: I0216 21:37:37.673522 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:37.674306 master-0 kubenswrapper[38936]: I0216 21:37:37.673758 38936 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 21:37:37.678734 master-0 kubenswrapper[38936]: I0216 21:37:37.675718 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:37:37.919391 master-0 kubenswrapper[38936]: I0216 21:37:37.919289 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" Feb 16 21:37:37.974293 master-0 kubenswrapper[38936]: I0216 21:37:37.974225 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-ovsdbserver-sb\") pod \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\" (UID: \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\") " Feb 16 21:37:37.974473 master-0 kubenswrapper[38936]: I0216 21:37:37.974304 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-dns-swift-storage-0\") pod \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\" (UID: \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\") " Feb 16 21:37:37.974473 master-0 kubenswrapper[38936]: I0216 21:37:37.974351 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-ovsdbserver-nb\") pod 
\"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\" (UID: \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\") " Feb 16 21:37:37.974473 master-0 kubenswrapper[38936]: I0216 21:37:37.974380 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-config\") pod \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\" (UID: \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\") " Feb 16 21:37:37.974593 master-0 kubenswrapper[38936]: I0216 21:37:37.974491 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-dns-svc\") pod \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\" (UID: \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\") " Feb 16 21:37:37.974593 master-0 kubenswrapper[38936]: I0216 21:37:37.974551 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvm2m\" (UniqueName: \"kubernetes.io/projected/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-kube-api-access-bvm2m\") pod \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\" (UID: \"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b\") " Feb 16 21:37:37.985639 master-0 kubenswrapper[38936]: I0216 21:37:37.984352 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-kube-api-access-bvm2m" (OuterVolumeSpecName: "kube-api-access-bvm2m") pod "0c96efb0-abf1-496e-adc5-bdef4f9a9d1b" (UID: "0c96efb0-abf1-496e-adc5-bdef4f9a9d1b"). InnerVolumeSpecName "kube-api-access-bvm2m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:37:38.069036 master-0 kubenswrapper[38936]: I0216 21:37:38.067949 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0c96efb0-abf1-496e-adc5-bdef4f9a9d1b" (UID: "0c96efb0-abf1-496e-adc5-bdef4f9a9d1b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:37:38.077847 master-0 kubenswrapper[38936]: I0216 21:37:38.077277 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-config" (OuterVolumeSpecName: "config") pod "0c96efb0-abf1-496e-adc5-bdef4f9a9d1b" (UID: "0c96efb0-abf1-496e-adc5-bdef4f9a9d1b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:37:38.081858 master-0 kubenswrapper[38936]: I0216 21:37:38.081786 38936 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:38.081858 master-0 kubenswrapper[38936]: I0216 21:37:38.081836 38936 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:38.081858 master-0 kubenswrapper[38936]: I0216 21:37:38.081849 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvm2m\" (UniqueName: \"kubernetes.io/projected/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-kube-api-access-bvm2m\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:38.092412 master-0 kubenswrapper[38936]: I0216 21:37:38.092233 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-ovsdbserver-sb" 
(OuterVolumeSpecName: "ovsdbserver-sb") pod "0c96efb0-abf1-496e-adc5-bdef4f9a9d1b" (UID: "0c96efb0-abf1-496e-adc5-bdef4f9a9d1b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:37:38.093543 master-0 kubenswrapper[38936]: I0216 21:37:38.093485 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0c96efb0-abf1-496e-adc5-bdef4f9a9d1b" (UID: "0c96efb0-abf1-496e-adc5-bdef4f9a9d1b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:37:38.138446 master-0 kubenswrapper[38936]: I0216 21:37:38.138372 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0c96efb0-abf1-496e-adc5-bdef4f9a9d1b" (UID: "0c96efb0-abf1-496e-adc5-bdef4f9a9d1b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:37:38.185305 master-0 kubenswrapper[38936]: I0216 21:37:38.185172 38936 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:38.185305 master-0 kubenswrapper[38936]: I0216 21:37:38.185237 38936 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:38.185305 master-0 kubenswrapper[38936]: I0216 21:37:38.185252 38936 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:38.217315 master-0 kubenswrapper[38936]: I0216 21:37:38.217263 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-95b8b778-clhph"] Feb 16 21:37:38.218916 master-0 kubenswrapper[38936]: I0216 21:37:38.218042 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" Feb 16 21:37:38.218916 master-0 kubenswrapper[38936]: E0216 21:37:38.218258 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b36f972a-247f-43a5-bf98-e27ab216ed04" containerName="keystone-bootstrap" Feb 16 21:37:38.218916 master-0 kubenswrapper[38936]: I0216 21:37:38.218282 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="b36f972a-247f-43a5-bf98-e27ab216ed04" containerName="keystone-bootstrap" Feb 16 21:37:38.218916 master-0 kubenswrapper[38936]: E0216 21:37:38.218342 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eca578f4-3095-4770-9cdf-5702cdf8540b" containerName="placement-db-sync" Feb 16 21:37:38.218916 master-0 kubenswrapper[38936]: I0216 21:37:38.218354 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="eca578f4-3095-4770-9cdf-5702cdf8540b" containerName="placement-db-sync" Feb 16 21:37:38.218916 master-0 kubenswrapper[38936]: E0216 21:37:38.218375 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c96efb0-abf1-496e-adc5-bdef4f9a9d1b" containerName="dnsmasq-dns" Feb 16 21:37:38.218916 master-0 kubenswrapper[38936]: I0216 21:37:38.218383 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c96efb0-abf1-496e-adc5-bdef4f9a9d1b" containerName="dnsmasq-dns" Feb 16 21:37:38.218916 master-0 kubenswrapper[38936]: E0216 21:37:38.218435 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c96efb0-abf1-496e-adc5-bdef4f9a9d1b" containerName="init" Feb 16 21:37:38.218916 master-0 kubenswrapper[38936]: I0216 21:37:38.218445 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c96efb0-abf1-496e-adc5-bdef4f9a9d1b" containerName="init" Feb 16 21:37:38.219276 master-0 kubenswrapper[38936]: I0216 21:37:38.218949 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="eca578f4-3095-4770-9cdf-5702cdf8540b" containerName="placement-db-sync" Feb 16 21:37:38.219276 
master-0 kubenswrapper[38936]: I0216 21:37:38.219008 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="b36f972a-247f-43a5-bf98-e27ab216ed04" containerName="keystone-bootstrap" Feb 16 21:37:38.219276 master-0 kubenswrapper[38936]: I0216 21:37:38.219093 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c96efb0-abf1-496e-adc5-bdef4f9a9d1b" containerName="dnsmasq-dns" Feb 16 21:37:38.220686 master-0 kubenswrapper[38936]: I0216 21:37:38.220620 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7768cbd466-2k4r9"] Feb 16 21:37:38.221402 master-0 kubenswrapper[38936]: I0216 21:37:38.220847 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.223242 master-0 kubenswrapper[38936]: I0216 21:37:38.222950 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 21:37:38.223722 master-0 kubenswrapper[38936]: I0216 21:37:38.223689 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 16 21:37:38.224067 master-0 kubenswrapper[38936]: I0216 21:37:38.224038 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 21:37:38.224222 master-0 kubenswrapper[38936]: I0216 21:37:38.224196 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 21:37:38.224342 master-0 kubenswrapper[38936]: I0216 21:37:38.224321 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 16 21:37:38.228562 master-0 kubenswrapper[38936]: I0216 21:37:38.227955 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7768cbd466-2k4r9"] Feb 16 21:37:38.228562 master-0 kubenswrapper[38936]: I0216 21:37:38.228115 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7768cbd466-2k4r9" Feb 16 21:37:38.228562 master-0 kubenswrapper[38936]: I0216 21:37:38.228107 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-665cc5d59f-ngldr" event={"ID":"0c96efb0-abf1-496e-adc5-bdef4f9a9d1b","Type":"ContainerDied","Data":"6b9e4edd0600251ed1010926570a07fcf50fe8870c368298163c86858d773b29"} Feb 16 21:37:38.228562 master-0 kubenswrapper[38936]: I0216 21:37:38.228453 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-95b8b778-clhph"] Feb 16 21:37:38.228562 master-0 kubenswrapper[38936]: I0216 21:37:38.228484 38936 scope.go:117] "RemoveContainer" containerID="cd1aa9428bd03e10b081fbb04aaff88d0fd1bb75b067e1e5ddbd3a82235b968d" Feb 16 21:37:38.230458 master-0 kubenswrapper[38936]: I0216 21:37:38.230268 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 16 21:37:38.230458 master-0 kubenswrapper[38936]: I0216 21:37:38.230315 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 16 21:37:38.230458 master-0 kubenswrapper[38936]: I0216 21:37:38.230315 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 21:37:38.230608 master-0 kubenswrapper[38936]: I0216 21:37:38.230505 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 21:37:38.278502 master-0 kubenswrapper[38936]: I0216 21:37:38.278454 38936 scope.go:117] "RemoveContainer" containerID="eccc3bfa5f692357557d2616613174dd81ca2daec309409325d69539556fe983" Feb 16 21:37:38.305895 master-0 kubenswrapper[38936]: I0216 21:37:38.305818 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-665cc5d59f-ngldr"] Feb 16 21:37:38.318454 master-0 kubenswrapper[38936]: I0216 21:37:38.318402 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-665cc5d59f-ngldr"] Feb 16 21:37:38.391680 master-0 kubenswrapper[38936]: I0216 21:37:38.391595 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/503a8006-2dfc-4c9a-80b8-373ca3bf5734-public-tls-certs\") pod \"keystone-95b8b778-clhph\" (UID: \"503a8006-2dfc-4c9a-80b8-373ca3bf5734\") " pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.391964 master-0 kubenswrapper[38936]: I0216 21:37:38.391845 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-combined-ca-bundle\") pod \"placement-7768cbd466-2k4r9\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") " pod="openstack/placement-7768cbd466-2k4r9" Feb 16 21:37:38.391964 master-0 kubenswrapper[38936]: I0216 21:37:38.391893 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/503a8006-2dfc-4c9a-80b8-373ca3bf5734-scripts\") pod \"keystone-95b8b778-clhph\" (UID: \"503a8006-2dfc-4c9a-80b8-373ca3bf5734\") " pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.392044 master-0 kubenswrapper[38936]: I0216 21:37:38.391927 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz6fh\" (UniqueName: \"kubernetes.io/projected/503a8006-2dfc-4c9a-80b8-373ca3bf5734-kube-api-access-dz6fh\") pod \"keystone-95b8b778-clhph\" (UID: \"503a8006-2dfc-4c9a-80b8-373ca3bf5734\") " pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.392124 master-0 kubenswrapper[38936]: I0216 21:37:38.392091 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/503a8006-2dfc-4c9a-80b8-373ca3bf5734-fernet-keys\") pod 
\"keystone-95b8b778-clhph\" (UID: \"503a8006-2dfc-4c9a-80b8-373ca3bf5734\") " pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.392172 master-0 kubenswrapper[38936]: I0216 21:37:38.392127 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/503a8006-2dfc-4c9a-80b8-373ca3bf5734-credential-keys\") pod \"keystone-95b8b778-clhph\" (UID: \"503a8006-2dfc-4c9a-80b8-373ca3bf5734\") " pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.392219 master-0 kubenswrapper[38936]: I0216 21:37:38.392165 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-public-tls-certs\") pod \"placement-7768cbd466-2k4r9\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") " pod="openstack/placement-7768cbd466-2k4r9" Feb 16 21:37:38.392290 master-0 kubenswrapper[38936]: I0216 21:37:38.392265 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/503a8006-2dfc-4c9a-80b8-373ca3bf5734-combined-ca-bundle\") pod \"keystone-95b8b778-clhph\" (UID: \"503a8006-2dfc-4c9a-80b8-373ca3bf5734\") " pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.392353 master-0 kubenswrapper[38936]: I0216 21:37:38.392332 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-scripts\") pod \"placement-7768cbd466-2k4r9\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") " pod="openstack/placement-7768cbd466-2k4r9" Feb 16 21:37:38.392621 master-0 kubenswrapper[38936]: I0216 21:37:38.392389 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-internal-tls-certs\") pod \"placement-7768cbd466-2k4r9\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") " pod="openstack/placement-7768cbd466-2k4r9" Feb 16 21:37:38.392693 master-0 kubenswrapper[38936]: I0216 21:37:38.392636 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/503a8006-2dfc-4c9a-80b8-373ca3bf5734-internal-tls-certs\") pod \"keystone-95b8b778-clhph\" (UID: \"503a8006-2dfc-4c9a-80b8-373ca3bf5734\") " pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.392782 master-0 kubenswrapper[38936]: I0216 21:37:38.392756 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/503a8006-2dfc-4c9a-80b8-373ca3bf5734-config-data\") pod \"keystone-95b8b778-clhph\" (UID: \"503a8006-2dfc-4c9a-80b8-373ca3bf5734\") " pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.398383 master-0 kubenswrapper[38936]: I0216 21:37:38.392810 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-config-data\") pod \"placement-7768cbd466-2k4r9\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") " pod="openstack/placement-7768cbd466-2k4r9" Feb 16 21:37:38.399521 master-0 kubenswrapper[38936]: I0216 21:37:38.398743 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96c52859-2457-4148-b87b-c6d552a3be73-logs\") pod \"placement-7768cbd466-2k4r9\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") " pod="openstack/placement-7768cbd466-2k4r9" Feb 16 21:37:38.401632 master-0 kubenswrapper[38936]: I0216 21:37:38.401558 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-x6dvb\" (UniqueName: \"kubernetes.io/projected/96c52859-2457-4148-b87b-c6d552a3be73-kube-api-access-x6dvb\") pod \"placement-7768cbd466-2k4r9\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") " pod="openstack/placement-7768cbd466-2k4r9" Feb 16 21:37:38.505054 master-0 kubenswrapper[38936]: I0216 21:37:38.504958 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-scripts\") pod \"placement-7768cbd466-2k4r9\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") " pod="openstack/placement-7768cbd466-2k4r9" Feb 16 21:37:38.505548 master-0 kubenswrapper[38936]: I0216 21:37:38.505117 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-internal-tls-certs\") pod \"placement-7768cbd466-2k4r9\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") " pod="openstack/placement-7768cbd466-2k4r9" Feb 16 21:37:38.505600 master-0 kubenswrapper[38936]: I0216 21:37:38.505557 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/503a8006-2dfc-4c9a-80b8-373ca3bf5734-internal-tls-certs\") pod \"keystone-95b8b778-clhph\" (UID: \"503a8006-2dfc-4c9a-80b8-373ca3bf5734\") " pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.505637 master-0 kubenswrapper[38936]: I0216 21:37:38.505620 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/503a8006-2dfc-4c9a-80b8-373ca3bf5734-config-data\") pod \"keystone-95b8b778-clhph\" (UID: \"503a8006-2dfc-4c9a-80b8-373ca3bf5734\") " pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.505699 master-0 kubenswrapper[38936]: I0216 21:37:38.505665 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-config-data\") pod \"placement-7768cbd466-2k4r9\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") " pod="openstack/placement-7768cbd466-2k4r9" Feb 16 21:37:38.505758 master-0 kubenswrapper[38936]: I0216 21:37:38.505731 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96c52859-2457-4148-b87b-c6d552a3be73-logs\") pod \"placement-7768cbd466-2k4r9\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") " pod="openstack/placement-7768cbd466-2k4r9" Feb 16 21:37:38.506377 master-0 kubenswrapper[38936]: I0216 21:37:38.506346 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6dvb\" (UniqueName: \"kubernetes.io/projected/96c52859-2457-4148-b87b-c6d552a3be73-kube-api-access-x6dvb\") pod \"placement-7768cbd466-2k4r9\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") " pod="openstack/placement-7768cbd466-2k4r9" Feb 16 21:37:38.506438 master-0 kubenswrapper[38936]: I0216 21:37:38.506427 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/503a8006-2dfc-4c9a-80b8-373ca3bf5734-public-tls-certs\") pod \"keystone-95b8b778-clhph\" (UID: \"503a8006-2dfc-4c9a-80b8-373ca3bf5734\") " pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.506490 master-0 kubenswrapper[38936]: I0216 21:37:38.506465 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-combined-ca-bundle\") pod \"placement-7768cbd466-2k4r9\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") " pod="openstack/placement-7768cbd466-2k4r9" Feb 16 21:37:38.506490 master-0 kubenswrapper[38936]: I0216 21:37:38.506484 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/503a8006-2dfc-4c9a-80b8-373ca3bf5734-scripts\") pod \"keystone-95b8b778-clhph\" (UID: \"503a8006-2dfc-4c9a-80b8-373ca3bf5734\") " pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.506569 master-0 kubenswrapper[38936]: I0216 21:37:38.506511 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz6fh\" (UniqueName: \"kubernetes.io/projected/503a8006-2dfc-4c9a-80b8-373ca3bf5734-kube-api-access-dz6fh\") pod \"keystone-95b8b778-clhph\" (UID: \"503a8006-2dfc-4c9a-80b8-373ca3bf5734\") " pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.506569 master-0 kubenswrapper[38936]: I0216 21:37:38.506545 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/503a8006-2dfc-4c9a-80b8-373ca3bf5734-fernet-keys\") pod \"keystone-95b8b778-clhph\" (UID: \"503a8006-2dfc-4c9a-80b8-373ca3bf5734\") " pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.506569 master-0 kubenswrapper[38936]: I0216 21:37:38.506564 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/503a8006-2dfc-4c9a-80b8-373ca3bf5734-credential-keys\") pod \"keystone-95b8b778-clhph\" (UID: \"503a8006-2dfc-4c9a-80b8-373ca3bf5734\") " pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.506691 master-0 kubenswrapper[38936]: I0216 21:37:38.506593 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-public-tls-certs\") pod \"placement-7768cbd466-2k4r9\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") " pod="openstack/placement-7768cbd466-2k4r9" Feb 16 21:37:38.506691 master-0 kubenswrapper[38936]: I0216 21:37:38.506629 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/503a8006-2dfc-4c9a-80b8-373ca3bf5734-combined-ca-bundle\") pod \"keystone-95b8b778-clhph\" (UID: \"503a8006-2dfc-4c9a-80b8-373ca3bf5734\") " pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.509631 master-0 kubenswrapper[38936]: I0216 21:37:38.509601 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-scripts\") pod \"placement-7768cbd466-2k4r9\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") " pod="openstack/placement-7768cbd466-2k4r9" Feb 16 21:37:38.509874 master-0 kubenswrapper[38936]: I0216 21:37:38.509830 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-internal-tls-certs\") pod \"placement-7768cbd466-2k4r9\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") " pod="openstack/placement-7768cbd466-2k4r9" Feb 16 21:37:38.510405 master-0 kubenswrapper[38936]: I0216 21:37:38.510377 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/503a8006-2dfc-4c9a-80b8-373ca3bf5734-combined-ca-bundle\") pod \"keystone-95b8b778-clhph\" (UID: \"503a8006-2dfc-4c9a-80b8-373ca3bf5734\") " pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.510752 master-0 kubenswrapper[38936]: I0216 21:37:38.510729 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96c52859-2457-4148-b87b-c6d552a3be73-logs\") pod \"placement-7768cbd466-2k4r9\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") " pod="openstack/placement-7768cbd466-2k4r9" Feb 16 21:37:38.511701 master-0 kubenswrapper[38936]: I0216 21:37:38.511663 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/503a8006-2dfc-4c9a-80b8-373ca3bf5734-internal-tls-certs\") pod \"keystone-95b8b778-clhph\" (UID: \"503a8006-2dfc-4c9a-80b8-373ca3bf5734\") " pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.511762 master-0 kubenswrapper[38936]: I0216 21:37:38.511722 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-public-tls-certs\") pod \"placement-7768cbd466-2k4r9\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") " pod="openstack/placement-7768cbd466-2k4r9" Feb 16 21:37:38.512328 master-0 kubenswrapper[38936]: I0216 21:37:38.512305 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-config-data\") pod \"placement-7768cbd466-2k4r9\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") " pod="openstack/placement-7768cbd466-2k4r9" Feb 16 21:37:38.513380 master-0 kubenswrapper[38936]: I0216 21:37:38.513357 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/503a8006-2dfc-4c9a-80b8-373ca3bf5734-scripts\") pod \"keystone-95b8b778-clhph\" (UID: \"503a8006-2dfc-4c9a-80b8-373ca3bf5734\") " pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.514942 master-0 kubenswrapper[38936]: I0216 21:37:38.514919 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/503a8006-2dfc-4c9a-80b8-373ca3bf5734-public-tls-certs\") pod \"keystone-95b8b778-clhph\" (UID: \"503a8006-2dfc-4c9a-80b8-373ca3bf5734\") " pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.515789 master-0 kubenswrapper[38936]: I0216 21:37:38.515755 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/503a8006-2dfc-4c9a-80b8-373ca3bf5734-credential-keys\") pod 
\"keystone-95b8b778-clhph\" (UID: \"503a8006-2dfc-4c9a-80b8-373ca3bf5734\") " pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.520333 master-0 kubenswrapper[38936]: I0216 21:37:38.520299 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-combined-ca-bundle\") pod \"placement-7768cbd466-2k4r9\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") " pod="openstack/placement-7768cbd466-2k4r9" Feb 16 21:37:38.524512 master-0 kubenswrapper[38936]: I0216 21:37:38.524312 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz6fh\" (UniqueName: \"kubernetes.io/projected/503a8006-2dfc-4c9a-80b8-373ca3bf5734-kube-api-access-dz6fh\") pod \"keystone-95b8b778-clhph\" (UID: \"503a8006-2dfc-4c9a-80b8-373ca3bf5734\") " pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.524512 master-0 kubenswrapper[38936]: I0216 21:37:38.524324 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/503a8006-2dfc-4c9a-80b8-373ca3bf5734-fernet-keys\") pod \"keystone-95b8b778-clhph\" (UID: \"503a8006-2dfc-4c9a-80b8-373ca3bf5734\") " pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.524512 master-0 kubenswrapper[38936]: I0216 21:37:38.524317 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/503a8006-2dfc-4c9a-80b8-373ca3bf5734-config-data\") pod \"keystone-95b8b778-clhph\" (UID: \"503a8006-2dfc-4c9a-80b8-373ca3bf5734\") " pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.527996 master-0 kubenswrapper[38936]: I0216 21:37:38.527953 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6dvb\" (UniqueName: \"kubernetes.io/projected/96c52859-2457-4148-b87b-c6d552a3be73-kube-api-access-x6dvb\") pod \"placement-7768cbd466-2k4r9\" (UID: 
\"96c52859-2457-4148-b87b-c6d552a3be73\") " pod="openstack/placement-7768cbd466-2k4r9" Feb 16 21:37:38.558251 master-0 kubenswrapper[38936]: I0216 21:37:38.557701 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:38.573844 master-0 kubenswrapper[38936]: I0216 21:37:38.573790 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7768cbd466-2k4r9" Feb 16 21:37:39.180212 master-0 kubenswrapper[38936]: I0216 21:37:39.180144 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-95b8b778-clhph"] Feb 16 21:37:39.186041 master-0 kubenswrapper[38936]: W0216 21:37:39.185971 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod503a8006_2dfc_4c9a_80b8_373ca3bf5734.slice/crio-6c9cfb1111beb5bc5a58aa24642d68f0bf4e02582ee734d0c015cd9da4a2ba7b WatchSource:0}: Error finding container 6c9cfb1111beb5bc5a58aa24642d68f0bf4e02582ee734d0c015cd9da4a2ba7b: Status 404 returned error can't find the container with id 6c9cfb1111beb5bc5a58aa24642d68f0bf4e02582ee734d0c015cd9da4a2ba7b Feb 16 21:37:39.243104 master-0 kubenswrapper[38936]: I0216 21:37:39.243059 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-95b8b778-clhph" event={"ID":"503a8006-2dfc-4c9a-80b8-373ca3bf5734","Type":"ContainerStarted","Data":"6c9cfb1111beb5bc5a58aa24642d68f0bf4e02582ee734d0c015cd9da4a2ba7b"} Feb 16 21:37:39.246059 master-0 kubenswrapper[38936]: I0216 21:37:39.245094 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-db-sync-r9pqq" event={"ID":"a7c56a35-a711-40a4-9428-031faf014af4","Type":"ContainerStarted","Data":"08a59bdb5d4aefa117bebbfc965bdee02d689fff50e2eea2a412b937e493f40f"} Feb 16 21:37:39.274606 master-0 kubenswrapper[38936]: I0216 21:37:39.274500 38936 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/cinder-9c692-db-sync-r9pqq" podStartSLOduration=3.41609172 podStartE2EDuration="22.274480149s" podCreationTimestamp="2026-02-16 21:37:17 +0000 UTC" firstStartedPulling="2026-02-16 21:37:19.109482436 +0000 UTC m=+869.461485798" lastFinishedPulling="2026-02-16 21:37:37.967870865 +0000 UTC m=+888.319874227" observedRunningTime="2026-02-16 21:37:39.270565644 +0000 UTC m=+889.622569016" watchObservedRunningTime="2026-02-16 21:37:39.274480149 +0000 UTC m=+889.626483511" Feb 16 21:37:39.327993 master-0 kubenswrapper[38936]: W0216 21:37:39.327830 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96c52859_2457_4148_b87b_c6d552a3be73.slice/crio-a7a85301fa27089ba2f03e75f4bf929300af3a5d721fb07a4515c49970bc71f4 WatchSource:0}: Error finding container a7a85301fa27089ba2f03e75f4bf929300af3a5d721fb07a4515c49970bc71f4: Status 404 returned error can't find the container with id a7a85301fa27089ba2f03e75f4bf929300af3a5d721fb07a4515c49970bc71f4 Feb 16 21:37:39.333123 master-0 kubenswrapper[38936]: I0216 21:37:39.333050 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7768cbd466-2k4r9"] Feb 16 21:37:39.893048 master-0 kubenswrapper[38936]: I0216 21:37:39.892983 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c96efb0-abf1-496e-adc5-bdef4f9a9d1b" path="/var/lib/kubelet/pods/0c96efb0-abf1-496e-adc5-bdef4f9a9d1b/volumes" Feb 16 21:37:40.279002 master-0 kubenswrapper[38936]: I0216 21:37:40.278850 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7768cbd466-2k4r9" event={"ID":"96c52859-2457-4148-b87b-c6d552a3be73","Type":"ContainerStarted","Data":"9070f314e0da4d022890cb82f8d1df443922d28ef16530da5e5169cf658dd733"} Feb 16 21:37:40.279002 master-0 kubenswrapper[38936]: I0216 21:37:40.278975 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7768cbd466-2k4r9" 
event={"ID":"96c52859-2457-4148-b87b-c6d552a3be73","Type":"ContainerStarted","Data":"7bc450cd524acaa39955df9cfd366067ef7a1e147fc6658d37a1be50b8972e62"} Feb 16 21:37:40.279002 master-0 kubenswrapper[38936]: I0216 21:37:40.278990 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7768cbd466-2k4r9" event={"ID":"96c52859-2457-4148-b87b-c6d552a3be73","Type":"ContainerStarted","Data":"a7a85301fa27089ba2f03e75f4bf929300af3a5d721fb07a4515c49970bc71f4"} Feb 16 21:37:40.281676 master-0 kubenswrapper[38936]: I0216 21:37:40.279359 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7768cbd466-2k4r9" Feb 16 21:37:40.281676 master-0 kubenswrapper[38936]: I0216 21:37:40.280007 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7768cbd466-2k4r9" Feb 16 21:37:40.281865 master-0 kubenswrapper[38936]: I0216 21:37:40.281790 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-95b8b778-clhph" event={"ID":"503a8006-2dfc-4c9a-80b8-373ca3bf5734","Type":"ContainerStarted","Data":"ea8a55dcac0d1641727c78d5e1d68e9508739dc18cd70f3aa5adefb85ae377a9"} Feb 16 21:37:40.346692 master-0 kubenswrapper[38936]: I0216 21:37:40.342004 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7768cbd466-2k4r9" podStartSLOduration=3.341935034 podStartE2EDuration="3.341935034s" podCreationTimestamp="2026-02-16 21:37:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:37:40.318991104 +0000 UTC m=+890.670994466" watchObservedRunningTime="2026-02-16 21:37:40.341935034 +0000 UTC m=+890.693938396" Feb 16 21:37:40.364583 master-0 kubenswrapper[38936]: I0216 21:37:40.356637 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-95b8b778-clhph" podStartSLOduration=3.35661357 
podStartE2EDuration="3.35661357s" podCreationTimestamp="2026-02-16 21:37:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:37:40.347153154 +0000 UTC m=+890.699156526" watchObservedRunningTime="2026-02-16 21:37:40.35661357 +0000 UTC m=+890.708616932" Feb 16 21:37:41.295063 master-0 kubenswrapper[38936]: I0216 21:37:41.294988 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-95b8b778-clhph" Feb 16 21:37:44.326823 master-0 kubenswrapper[38936]: I0216 21:37:44.326595 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-nzcsn" event={"ID":"5b1ea749-0e13-47db-bd37-4f269f872a0b","Type":"ContainerStarted","Data":"71309986699e8d944d3ba16db4ba84da61f3cdee13e24b2d158b46d770092237"} Feb 16 21:37:45.338245 master-0 kubenswrapper[38936]: I0216 21:37:45.338182 38936 generic.go:334] "Generic (PLEG): container finished" podID="5b1ea749-0e13-47db-bd37-4f269f872a0b" containerID="71309986699e8d944d3ba16db4ba84da61f3cdee13e24b2d158b46d770092237" exitCode=0 Feb 16 21:37:45.338834 master-0 kubenswrapper[38936]: I0216 21:37:45.338245 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-nzcsn" event={"ID":"5b1ea749-0e13-47db-bd37-4f269f872a0b","Type":"ContainerDied","Data":"71309986699e8d944d3ba16db4ba84da61f3cdee13e24b2d158b46d770092237"} Feb 16 21:37:46.352450 master-0 kubenswrapper[38936]: I0216 21:37:46.352380 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-nzcsn" event={"ID":"5b1ea749-0e13-47db-bd37-4f269f872a0b","Type":"ContainerStarted","Data":"c1877eb7455255efbc803c552bf739892007e1d5651f37af1c6bdddd3a9edd33"} Feb 16 21:37:46.383043 master-0 kubenswrapper[38936]: I0216 21:37:46.382922 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-sync-nzcsn" podStartSLOduration=11.253159921 
podStartE2EDuration="19.382862713s" podCreationTimestamp="2026-02-16 21:37:27 +0000 UTC" firstStartedPulling="2026-02-16 21:37:35.929332439 +0000 UTC m=+886.281335801" lastFinishedPulling="2026-02-16 21:37:44.059035221 +0000 UTC m=+894.411038593" observedRunningTime="2026-02-16 21:37:46.373418048 +0000 UTC m=+896.725421430" watchObservedRunningTime="2026-02-16 21:37:46.382862713 +0000 UTC m=+896.734866065" Feb 16 21:37:49.392699 master-0 kubenswrapper[38936]: I0216 21:37:49.392222 38936 generic.go:334] "Generic (PLEG): container finished" podID="a7c56a35-a711-40a4-9428-031faf014af4" containerID="08a59bdb5d4aefa117bebbfc965bdee02d689fff50e2eea2a412b937e493f40f" exitCode=0 Feb 16 21:37:49.392699 master-0 kubenswrapper[38936]: I0216 21:37:49.392290 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-db-sync-r9pqq" event={"ID":"a7c56a35-a711-40a4-9428-031faf014af4","Type":"ContainerDied","Data":"08a59bdb5d4aefa117bebbfc965bdee02d689fff50e2eea2a412b937e493f40f"} Feb 16 21:37:50.878413 master-0 kubenswrapper[38936]: I0216 21:37:50.878356 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9c692-db-sync-r9pqq" Feb 16 21:37:51.060573 master-0 kubenswrapper[38936]: I0216 21:37:51.060488 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7c56a35-a711-40a4-9428-031faf014af4-config-data\") pod \"a7c56a35-a711-40a4-9428-031faf014af4\" (UID: \"a7c56a35-a711-40a4-9428-031faf014af4\") " Feb 16 21:37:51.060825 master-0 kubenswrapper[38936]: I0216 21:37:51.060810 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7c56a35-a711-40a4-9428-031faf014af4-db-sync-config-data\") pod \"a7c56a35-a711-40a4-9428-031faf014af4\" (UID: \"a7c56a35-a711-40a4-9428-031faf014af4\") " Feb 16 21:37:51.060906 master-0 kubenswrapper[38936]: I0216 21:37:51.060871 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c56a35-a711-40a4-9428-031faf014af4-combined-ca-bundle\") pod \"a7c56a35-a711-40a4-9428-031faf014af4\" (UID: \"a7c56a35-a711-40a4-9428-031faf014af4\") " Feb 16 21:37:51.060945 master-0 kubenswrapper[38936]: I0216 21:37:51.060931 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a7c56a35-a711-40a4-9428-031faf014af4-etc-machine-id\") pod \"a7c56a35-a711-40a4-9428-031faf014af4\" (UID: \"a7c56a35-a711-40a4-9428-031faf014af4\") " Feb 16 21:37:51.061320 master-0 kubenswrapper[38936]: I0216 21:37:51.061070 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7c56a35-a711-40a4-9428-031faf014af4-scripts\") pod \"a7c56a35-a711-40a4-9428-031faf014af4\" (UID: \"a7c56a35-a711-40a4-9428-031faf014af4\") " Feb 16 21:37:51.061320 master-0 kubenswrapper[38936]: I0216 21:37:51.061095 38936 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7c56a35-a711-40a4-9428-031faf014af4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a7c56a35-a711-40a4-9428-031faf014af4" (UID: "a7c56a35-a711-40a4-9428-031faf014af4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:37:51.061320 master-0 kubenswrapper[38936]: I0216 21:37:51.061164 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvrnj\" (UniqueName: \"kubernetes.io/projected/a7c56a35-a711-40a4-9428-031faf014af4-kube-api-access-lvrnj\") pod \"a7c56a35-a711-40a4-9428-031faf014af4\" (UID: \"a7c56a35-a711-40a4-9428-031faf014af4\") " Feb 16 21:37:51.062306 master-0 kubenswrapper[38936]: I0216 21:37:51.062260 38936 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a7c56a35-a711-40a4-9428-031faf014af4-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:51.064498 master-0 kubenswrapper[38936]: I0216 21:37:51.064456 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c56a35-a711-40a4-9428-031faf014af4-scripts" (OuterVolumeSpecName: "scripts") pod "a7c56a35-a711-40a4-9428-031faf014af4" (UID: "a7c56a35-a711-40a4-9428-031faf014af4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:51.065296 master-0 kubenswrapper[38936]: I0216 21:37:51.065232 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c56a35-a711-40a4-9428-031faf014af4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a7c56a35-a711-40a4-9428-031faf014af4" (UID: "a7c56a35-a711-40a4-9428-031faf014af4"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:51.065453 master-0 kubenswrapper[38936]: I0216 21:37:51.065338 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c56a35-a711-40a4-9428-031faf014af4-kube-api-access-lvrnj" (OuterVolumeSpecName: "kube-api-access-lvrnj") pod "a7c56a35-a711-40a4-9428-031faf014af4" (UID: "a7c56a35-a711-40a4-9428-031faf014af4"). InnerVolumeSpecName "kube-api-access-lvrnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:37:51.088514 master-0 kubenswrapper[38936]: I0216 21:37:51.088454 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c56a35-a711-40a4-9428-031faf014af4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7c56a35-a711-40a4-9428-031faf014af4" (UID: "a7c56a35-a711-40a4-9428-031faf014af4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:51.114226 master-0 kubenswrapper[38936]: I0216 21:37:51.114170 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c56a35-a711-40a4-9428-031faf014af4-config-data" (OuterVolumeSpecName: "config-data") pod "a7c56a35-a711-40a4-9428-031faf014af4" (UID: "a7c56a35-a711-40a4-9428-031faf014af4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:51.164882 master-0 kubenswrapper[38936]: I0216 21:37:51.164821 38936 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7c56a35-a711-40a4-9428-031faf014af4-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:51.164882 master-0 kubenswrapper[38936]: I0216 21:37:51.164872 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvrnj\" (UniqueName: \"kubernetes.io/projected/a7c56a35-a711-40a4-9428-031faf014af4-kube-api-access-lvrnj\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:51.164882 master-0 kubenswrapper[38936]: I0216 21:37:51.164888 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7c56a35-a711-40a4-9428-031faf014af4-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:51.165154 master-0 kubenswrapper[38936]: I0216 21:37:51.164901 38936 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7c56a35-a711-40a4-9428-031faf014af4-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:51.165154 master-0 kubenswrapper[38936]: I0216 21:37:51.164916 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c56a35-a711-40a4-9428-031faf014af4-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:51.416167 master-0 kubenswrapper[38936]: I0216 21:37:51.416026 38936 generic.go:334] "Generic (PLEG): container finished" podID="ae3f7123-0f56-47f9-afdb-cc6bff73ecd3" containerID="8f1e32c71f9fe7c0f457db72104d2cdf117833a851a8986e227468c0679f9099" exitCode=0 Feb 16 21:37:51.416382 master-0 kubenswrapper[38936]: I0216 21:37:51.416135 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-znszx" 
event={"ID":"ae3f7123-0f56-47f9-afdb-cc6bff73ecd3","Type":"ContainerDied","Data":"8f1e32c71f9fe7c0f457db72104d2cdf117833a851a8986e227468c0679f9099"} Feb 16 21:37:51.417701 master-0 kubenswrapper[38936]: I0216 21:37:51.417664 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-db-sync-r9pqq" event={"ID":"a7c56a35-a711-40a4-9428-031faf014af4","Type":"ContainerDied","Data":"a3e4ca9fe6da125f8f9f0a76148c6cd69efad98ef78b306871d6a256f2deb710"} Feb 16 21:37:51.417701 master-0 kubenswrapper[38936]: I0216 21:37:51.417691 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3e4ca9fe6da125f8f9f0a76148c6cd69efad98ef78b306871d6a256f2deb710" Feb 16 21:37:51.417840 master-0 kubenswrapper[38936]: I0216 21:37:51.417737 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9c692-db-sync-r9pqq" Feb 16 21:37:51.958613 master-0 kubenswrapper[38936]: I0216 21:37:51.958540 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-9c692-volume-lvm-iscsi-0"] Feb 16 21:37:51.960790 master-0 kubenswrapper[38936]: E0216 21:37:51.960762 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c56a35-a711-40a4-9428-031faf014af4" containerName="cinder-9c692-db-sync" Feb 16 21:37:51.960888 master-0 kubenswrapper[38936]: I0216 21:37:51.960876 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c56a35-a711-40a4-9428-031faf014af4" containerName="cinder-9c692-db-sync" Feb 16 21:37:51.961612 master-0 kubenswrapper[38936]: I0216 21:37:51.961593 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7c56a35-a711-40a4-9428-031faf014af4" containerName="cinder-9c692-db-sync" Feb 16 21:37:51.963967 master-0 kubenswrapper[38936]: I0216 21:37:51.963938 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:51.967986 master-0 kubenswrapper[38936]: I0216 21:37:51.967497 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-9c692-scripts" Feb 16 21:37:51.967986 master-0 kubenswrapper[38936]: I0216 21:37:51.967796 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-9c692-volume-lvm-iscsi-config-data" Feb 16 21:37:51.967986 master-0 kubenswrapper[38936]: I0216 21:37:51.967932 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-9c692-config-data" Feb 16 21:37:51.982240 master-0 kubenswrapper[38936]: I0216 21:37:51.982176 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-9c692-scheduler-0"] Feb 16 21:37:52.039924 master-0 kubenswrapper[38936]: I0216 21:37:52.039837 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:37:52.064074 master-0 kubenswrapper[38936]: I0216 21:37:52.063027 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-9c692-scheduler-config-data" Feb 16 21:37:52.109174 master-0 kubenswrapper[38936]: I0216 21:37:52.108699 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-sys\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:52.109174 master-0 kubenswrapper[38936]: I0216 21:37:52.109182 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9037d3ef-953a-4af9-9c81-d94587ee2d9d-config-data-custom\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " 
pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:52.109641 master-0 kubenswrapper[38936]: I0216 21:37:52.109263 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7snx8\" (UniqueName: \"kubernetes.io/projected/9037d3ef-953a-4af9-9c81-d94587ee2d9d-kube-api-access-7snx8\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:52.109641 master-0 kubenswrapper[38936]: I0216 21:37:52.109296 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-var-lib-cinder\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:52.109641 master-0 kubenswrapper[38936]: I0216 21:37:52.109319 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9037d3ef-953a-4af9-9c81-d94587ee2d9d-scripts\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:52.109641 master-0 kubenswrapper[38936]: I0216 21:37:52.109352 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-var-locks-brick\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:52.109641 master-0 kubenswrapper[38936]: I0216 21:37:52.109380 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-etc-machine-id\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:52.109641 master-0 kubenswrapper[38936]: I0216 21:37:52.109415 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-var-locks-cinder\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:52.109641 master-0 kubenswrapper[38936]: I0216 21:37:52.109451 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-lib-modules\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:52.109641 master-0 kubenswrapper[38936]: I0216 21:37:52.109518 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-etc-nvme\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:52.109641 master-0 kubenswrapper[38936]: I0216 21:37:52.109538 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-etc-iscsi\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:52.109641 master-0 kubenswrapper[38936]: I0216 21:37:52.109610 38936 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9037d3ef-953a-4af9-9c81-d94587ee2d9d-config-data\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:52.109641 master-0 kubenswrapper[38936]: I0216 21:37:52.109647 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-run\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:52.110210 master-0 kubenswrapper[38936]: I0216 21:37:52.109720 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9037d3ef-953a-4af9-9c81-d94587ee2d9d-combined-ca-bundle\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:52.110210 master-0 kubenswrapper[38936]: I0216 21:37:52.109782 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-dev\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.251802 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85d31c9-7303-4e30-ba85-3362b5828482-config-data\") pod \"cinder-9c692-scheduler-0\" (UID: \"f85d31c9-7303-4e30-ba85-3362b5828482\") " pod="openstack/cinder-9c692-scheduler-0" Feb 16 
21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.251886 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f85d31c9-7303-4e30-ba85-3362b5828482-config-data-custom\") pod \"cinder-9c692-scheduler-0\" (UID: \"f85d31c9-7303-4e30-ba85-3362b5828482\") " pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.251924 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9037d3ef-953a-4af9-9c81-d94587ee2d9d-config-data\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.251956 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-run\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.251978 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9037d3ef-953a-4af9-9c81-d94587ee2d9d-combined-ca-bundle\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.252023 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-dev\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" 
Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.252053 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxq7p\" (UniqueName: \"kubernetes.io/projected/f85d31c9-7303-4e30-ba85-3362b5828482-kube-api-access-kxq7p\") pod \"cinder-9c692-scheduler-0\" (UID: \"f85d31c9-7303-4e30-ba85-3362b5828482\") " pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.252075 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f85d31c9-7303-4e30-ba85-3362b5828482-etc-machine-id\") pod \"cinder-9c692-scheduler-0\" (UID: \"f85d31c9-7303-4e30-ba85-3362b5828482\") " pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.252102 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-sys\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.252122 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9037d3ef-953a-4af9-9c81-d94587ee2d9d-config-data-custom\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.252152 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7snx8\" (UniqueName: \"kubernetes.io/projected/9037d3ef-953a-4af9-9c81-d94587ee2d9d-kube-api-access-7snx8\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: 
\"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.252170 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-var-lib-cinder\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.252190 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9037d3ef-953a-4af9-9c81-d94587ee2d9d-scripts\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.252213 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85d31c9-7303-4e30-ba85-3362b5828482-scripts\") pod \"cinder-9c692-scheduler-0\" (UID: \"f85d31c9-7303-4e30-ba85-3362b5828482\") " pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.252242 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-dev\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.252271 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-sys\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " 
pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.252282 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-var-locks-brick\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.252307 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-etc-machine-id\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.252330 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-var-locks-cinder\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.252358 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-lib-modules\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.252383 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85d31c9-7303-4e30-ba85-3362b5828482-combined-ca-bundle\") pod \"cinder-9c692-scheduler-0\" (UID: \"f85d31c9-7303-4e30-ba85-3362b5828482\") " pod="openstack/cinder-9c692-scheduler-0"
Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.252434 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-etc-nvme\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.252452 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-etc-iscsi\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.252523 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-etc-iscsi\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.252794 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-var-locks-brick\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.252879 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-var-locks-cinder\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.252911 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-lib-modules\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.252983 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-etc-nvme\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.253066 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-var-lib-cinder\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.253822 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-run\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.255503 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-etc-machine-id\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:37:52.259684 master-0 kubenswrapper[38936]: I0216 21:37:52.257107 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9037d3ef-953a-4af9-9c81-d94587ee2d9d-config-data\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:37:52.281691 master-0 kubenswrapper[38936]: I0216 21:37:52.275505 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9037d3ef-953a-4af9-9c81-d94587ee2d9d-config-data-custom\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:37:52.300682 master-0 kubenswrapper[38936]: I0216 21:37:52.285121 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9037d3ef-953a-4af9-9c81-d94587ee2d9d-scripts\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:37:52.300682 master-0 kubenswrapper[38936]: I0216 21:37:52.299485 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9037d3ef-953a-4af9-9c81-d94587ee2d9d-combined-ca-bundle\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:37:52.300682 master-0 kubenswrapper[38936]: I0216 21:37:52.300054 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7snx8\" (UniqueName: \"kubernetes.io/projected/9037d3ef-953a-4af9-9c81-d94587ee2d9d-kube-api-access-7snx8\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:37:52.300682 master-0 kubenswrapper[38936]: I0216 21:37:52.300621 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:37:52.318699 master-0 kubenswrapper[38936]: I0216 21:37:52.311737 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9c692-scheduler-0"]
Feb 16 21:37:52.349698 master-0 kubenswrapper[38936]: I0216 21:37:52.348877 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9c692-volume-lvm-iscsi-0"]
Feb 16 21:37:52.355555 master-0 kubenswrapper[38936]: I0216 21:37:52.355483 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxq7p\" (UniqueName: \"kubernetes.io/projected/f85d31c9-7303-4e30-ba85-3362b5828482-kube-api-access-kxq7p\") pod \"cinder-9c692-scheduler-0\" (UID: \"f85d31c9-7303-4e30-ba85-3362b5828482\") " pod="openstack/cinder-9c692-scheduler-0"
Feb 16 21:37:52.355555 master-0 kubenswrapper[38936]: I0216 21:37:52.355548 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f85d31c9-7303-4e30-ba85-3362b5828482-etc-machine-id\") pod \"cinder-9c692-scheduler-0\" (UID: \"f85d31c9-7303-4e30-ba85-3362b5828482\") " pod="openstack/cinder-9c692-scheduler-0"
Feb 16 21:37:52.355778 master-0 kubenswrapper[38936]: I0216 21:37:52.355599 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85d31c9-7303-4e30-ba85-3362b5828482-scripts\") pod \"cinder-9c692-scheduler-0\" (UID: \"f85d31c9-7303-4e30-ba85-3362b5828482\") " pod="openstack/cinder-9c692-scheduler-0"
Feb 16 21:37:52.355778 master-0 kubenswrapper[38936]: I0216 21:37:52.355642 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85d31c9-7303-4e30-ba85-3362b5828482-combined-ca-bundle\") pod \"cinder-9c692-scheduler-0\" (UID: \"f85d31c9-7303-4e30-ba85-3362b5828482\") " pod="openstack/cinder-9c692-scheduler-0"
Feb 16 21:37:52.355778 master-0 kubenswrapper[38936]: I0216 21:37:52.355732 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85d31c9-7303-4e30-ba85-3362b5828482-config-data\") pod \"cinder-9c692-scheduler-0\" (UID: \"f85d31c9-7303-4e30-ba85-3362b5828482\") " pod="openstack/cinder-9c692-scheduler-0"
Feb 16 21:37:52.355778 master-0 kubenswrapper[38936]: I0216 21:37:52.355759 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f85d31c9-7303-4e30-ba85-3362b5828482-config-data-custom\") pod \"cinder-9c692-scheduler-0\" (UID: \"f85d31c9-7303-4e30-ba85-3362b5828482\") " pod="openstack/cinder-9c692-scheduler-0"
Feb 16 21:37:52.366227 master-0 kubenswrapper[38936]: I0216 21:37:52.363340 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85d31c9-7303-4e30-ba85-3362b5828482-combined-ca-bundle\") pod \"cinder-9c692-scheduler-0\" (UID: \"f85d31c9-7303-4e30-ba85-3362b5828482\") " pod="openstack/cinder-9c692-scheduler-0"
Feb 16 21:37:52.366227 master-0 kubenswrapper[38936]: I0216 21:37:52.363424 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f85d31c9-7303-4e30-ba85-3362b5828482-etc-machine-id\") pod \"cinder-9c692-scheduler-0\" (UID: \"f85d31c9-7303-4e30-ba85-3362b5828482\") " pod="openstack/cinder-9c692-scheduler-0"
Feb 16 21:37:52.368605 master-0 kubenswrapper[38936]: I0216 21:37:52.368572 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85d31c9-7303-4e30-ba85-3362b5828482-config-data\") pod \"cinder-9c692-scheduler-0\" (UID: \"f85d31c9-7303-4e30-ba85-3362b5828482\") " pod="openstack/cinder-9c692-scheduler-0"
Feb 16 21:37:52.375231 master-0 kubenswrapper[38936]: I0216 21:37:52.375184 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85d31c9-7303-4e30-ba85-3362b5828482-scripts\") pod \"cinder-9c692-scheduler-0\" (UID: \"f85d31c9-7303-4e30-ba85-3362b5828482\") " pod="openstack/cinder-9c692-scheduler-0"
Feb 16 21:37:52.377119 master-0 kubenswrapper[38936]: I0216 21:37:52.377084 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxq7p\" (UniqueName: \"kubernetes.io/projected/f85d31c9-7303-4e30-ba85-3362b5828482-kube-api-access-kxq7p\") pod \"cinder-9c692-scheduler-0\" (UID: \"f85d31c9-7303-4e30-ba85-3362b5828482\") " pod="openstack/cinder-9c692-scheduler-0"
Feb 16 21:37:52.377197 master-0 kubenswrapper[38936]: I0216 21:37:52.377118 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f85d31c9-7303-4e30-ba85-3362b5828482-config-data-custom\") pod \"cinder-9c692-scheduler-0\" (UID: \"f85d31c9-7303-4e30-ba85-3362b5828482\") " pod="openstack/cinder-9c692-scheduler-0"
Feb 16 21:37:52.390149 master-0 kubenswrapper[38936]: I0216 21:37:52.389759 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-9c692-backup-0"]
Feb 16 21:37:52.394070 master-0 kubenswrapper[38936]: I0216 21:37:52.394019 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.399792 master-0 kubenswrapper[38936]: I0216 21:37:52.398259 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-9c692-backup-config-data"
Feb 16 21:37:52.446075 master-0 kubenswrapper[38936]: I0216 21:37:52.443551 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c5d486cff-t8lst"]
Feb 16 21:37:52.452101 master-0 kubenswrapper[38936]: I0216 21:37:52.450531 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c5d486cff-t8lst"
Feb 16 21:37:52.458320 master-0 kubenswrapper[38936]: I0216 21:37:52.458024 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-ovsdbserver-sb\") pod \"dnsmasq-dns-7c5d486cff-t8lst\" (UID: \"5d30d26b-c68c-4ad5-b006-47338242fc62\") " pod="openstack/dnsmasq-dns-7c5d486cff-t8lst"
Feb 16 21:37:52.458320 master-0 kubenswrapper[38936]: I0216 21:37:52.458094 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-dns-svc\") pod \"dnsmasq-dns-7c5d486cff-t8lst\" (UID: \"5d30d26b-c68c-4ad5-b006-47338242fc62\") " pod="openstack/dnsmasq-dns-7c5d486cff-t8lst"
Feb 16 21:37:52.458320 master-0 kubenswrapper[38936]: I0216 21:37:52.458147 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-combined-ca-bundle\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.458320 master-0 kubenswrapper[38936]: I0216 21:37:52.458178 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-etc-nvme\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.458320 master-0 kubenswrapper[38936]: I0216 21:37:52.458194 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-etc-iscsi\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.458320 master-0 kubenswrapper[38936]: I0216 21:37:52.458210 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-var-locks-cinder\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.458320 master-0 kubenswrapper[38936]: I0216 21:37:52.458240 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-run\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.458320 master-0 kubenswrapper[38936]: I0216 21:37:52.458261 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-dev\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.458320 master-0 kubenswrapper[38936]: I0216 21:37:52.458295 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-config-data\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.458922 master-0 kubenswrapper[38936]: I0216 21:37:52.458346 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-ovsdbserver-nb\") pod \"dnsmasq-dns-7c5d486cff-t8lst\" (UID: \"5d30d26b-c68c-4ad5-b006-47338242fc62\") " pod="openstack/dnsmasq-dns-7c5d486cff-t8lst"
Feb 16 21:37:52.458922 master-0 kubenswrapper[38936]: I0216 21:37:52.458380 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-scripts\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.458922 master-0 kubenswrapper[38936]: I0216 21:37:52.458420 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-var-locks-brick\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.458922 master-0 kubenswrapper[38936]: I0216 21:37:52.458441 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-config-data-custom\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.458922 master-0 kubenswrapper[38936]: I0216 21:37:52.458468 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khlsz\" (UniqueName: \"kubernetes.io/projected/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-kube-api-access-khlsz\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.458922 master-0 kubenswrapper[38936]: I0216 21:37:52.458489 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbrjb\" (UniqueName: \"kubernetes.io/projected/5d30d26b-c68c-4ad5-b006-47338242fc62-kube-api-access-zbrjb\") pod \"dnsmasq-dns-7c5d486cff-t8lst\" (UID: \"5d30d26b-c68c-4ad5-b006-47338242fc62\") " pod="openstack/dnsmasq-dns-7c5d486cff-t8lst"
Feb 16 21:37:52.458922 master-0 kubenswrapper[38936]: I0216 21:37:52.458519 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-sys\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.458922 master-0 kubenswrapper[38936]: I0216 21:37:52.458543 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-lib-modules\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.458922 master-0 kubenswrapper[38936]: I0216 21:37:52.458559 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-config\") pod \"dnsmasq-dns-7c5d486cff-t8lst\" (UID: \"5d30d26b-c68c-4ad5-b006-47338242fc62\") " pod="openstack/dnsmasq-dns-7c5d486cff-t8lst"
Feb 16 21:37:52.458922 master-0 kubenswrapper[38936]: I0216 21:37:52.458582 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-dns-swift-storage-0\") pod \"dnsmasq-dns-7c5d486cff-t8lst\" (UID: \"5d30d26b-c68c-4ad5-b006-47338242fc62\") " pod="openstack/dnsmasq-dns-7c5d486cff-t8lst"
Feb 16 21:37:52.458922 master-0 kubenswrapper[38936]: I0216 21:37:52.458602 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-etc-machine-id\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.458922 master-0 kubenswrapper[38936]: I0216 21:37:52.458619 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-var-lib-cinder\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.460536 master-0 kubenswrapper[38936]: I0216 21:37:52.460427 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9c692-scheduler-0"
Feb 16 21:37:52.465839 master-0 kubenswrapper[38936]: I0216 21:37:52.465741 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9c692-backup-0"]
Feb 16 21:37:52.547729 master-0 kubenswrapper[38936]: I0216 21:37:52.543278 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c5d486cff-t8lst"]
Feb 16 21:37:52.563958 master-0 kubenswrapper[38936]: I0216 21:37:52.560694 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-ovsdbserver-nb\") pod \"dnsmasq-dns-7c5d486cff-t8lst\" (UID: \"5d30d26b-c68c-4ad5-b006-47338242fc62\") " pod="openstack/dnsmasq-dns-7c5d486cff-t8lst"
Feb 16 21:37:52.563958 master-0 kubenswrapper[38936]: I0216 21:37:52.560765 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-scripts\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.563958 master-0 kubenswrapper[38936]: I0216 21:37:52.560831 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-var-locks-brick\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.563958 master-0 kubenswrapper[38936]: I0216 21:37:52.560857 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-config-data-custom\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.563958 master-0 kubenswrapper[38936]: I0216 21:37:52.560886 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khlsz\" (UniqueName: \"kubernetes.io/projected/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-kube-api-access-khlsz\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.563958 master-0 kubenswrapper[38936]: I0216 21:37:52.560922 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbrjb\" (UniqueName: \"kubernetes.io/projected/5d30d26b-c68c-4ad5-b006-47338242fc62-kube-api-access-zbrjb\") pod \"dnsmasq-dns-7c5d486cff-t8lst\" (UID: \"5d30d26b-c68c-4ad5-b006-47338242fc62\") " pod="openstack/dnsmasq-dns-7c5d486cff-t8lst"
Feb 16 21:37:52.563958 master-0 kubenswrapper[38936]: I0216 21:37:52.560959 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-sys\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.563958 master-0 kubenswrapper[38936]: I0216 21:37:52.560996 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-lib-modules\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.563958 master-0 kubenswrapper[38936]: I0216 21:37:52.561024 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-config\") pod \"dnsmasq-dns-7c5d486cff-t8lst\" (UID: \"5d30d26b-c68c-4ad5-b006-47338242fc62\") " pod="openstack/dnsmasq-dns-7c5d486cff-t8lst"
Feb 16 21:37:52.563958 master-0 kubenswrapper[38936]: I0216 21:37:52.561060 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-dns-swift-storage-0\") pod \"dnsmasq-dns-7c5d486cff-t8lst\" (UID: \"5d30d26b-c68c-4ad5-b006-47338242fc62\") " pod="openstack/dnsmasq-dns-7c5d486cff-t8lst"
Feb 16 21:37:52.563958 master-0 kubenswrapper[38936]: I0216 21:37:52.561098 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-etc-machine-id\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.563958 master-0 kubenswrapper[38936]: I0216 21:37:52.561126 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-var-lib-cinder\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.563958 master-0 kubenswrapper[38936]: I0216 21:37:52.561173 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-ovsdbserver-sb\") pod \"dnsmasq-dns-7c5d486cff-t8lst\" (UID: \"5d30d26b-c68c-4ad5-b006-47338242fc62\") " pod="openstack/dnsmasq-dns-7c5d486cff-t8lst"
Feb 16 21:37:52.563958 master-0 kubenswrapper[38936]: I0216 21:37:52.561202 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-dns-svc\") pod \"dnsmasq-dns-7c5d486cff-t8lst\" (UID: \"5d30d26b-c68c-4ad5-b006-47338242fc62\") " pod="openstack/dnsmasq-dns-7c5d486cff-t8lst"
Feb 16 21:37:52.563958 master-0 kubenswrapper[38936]: I0216 21:37:52.561258 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-combined-ca-bundle\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.563958 master-0 kubenswrapper[38936]: I0216 21:37:52.561290 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-etc-nvme\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.563958 master-0 kubenswrapper[38936]: I0216 21:37:52.561312 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-etc-iscsi\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.563958 master-0 kubenswrapper[38936]: I0216 21:37:52.561337 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-var-locks-cinder\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.563958 master-0 kubenswrapper[38936]: I0216 21:37:52.561386 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-run\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.563958 master-0 kubenswrapper[38936]: I0216 21:37:52.561415 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-dev\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.563958 master-0 kubenswrapper[38936]: I0216 21:37:52.561485 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-config-data\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.563958 master-0 kubenswrapper[38936]: I0216 21:37:52.562581 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-dns-swift-storage-0\") pod \"dnsmasq-dns-7c5d486cff-t8lst\" (UID: \"5d30d26b-c68c-4ad5-b006-47338242fc62\") " pod="openstack/dnsmasq-dns-7c5d486cff-t8lst"
Feb 16 21:37:52.563958 master-0 kubenswrapper[38936]: I0216 21:37:52.562681 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-lib-modules\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.563958 master-0 kubenswrapper[38936]: I0216 21:37:52.562719 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-var-lib-cinder\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.563958 master-0 kubenswrapper[38936]: I0216 21:37:52.563765 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-ovsdbserver-nb\") pod \"dnsmasq-dns-7c5d486cff-t8lst\" (UID: \"5d30d26b-c68c-4ad5-b006-47338242fc62\") " pod="openstack/dnsmasq-dns-7c5d486cff-t8lst"
Feb 16 21:37:52.566856 master-0 kubenswrapper[38936]: I0216 21:37:52.566817 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-config-data\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.567183 master-0 kubenswrapper[38936]: I0216 21:37:52.567143 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-config\") pod \"dnsmasq-dns-7c5d486cff-t8lst\" (UID: \"5d30d26b-c68c-4ad5-b006-47338242fc62\") " pod="openstack/dnsmasq-dns-7c5d486cff-t8lst"
Feb 16 21:37:52.567424 master-0 kubenswrapper[38936]: I0216 21:37:52.567378 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-combined-ca-bundle\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.567517 master-0 kubenswrapper[38936]: I0216 21:37:52.567494 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-run\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.567577 master-0 kubenswrapper[38936]: I0216 21:37:52.567521 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-dev\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.567876 master-0 kubenswrapper[38936]: I0216 21:37:52.567544 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-etc-nvme\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.567876 master-0 kubenswrapper[38936]: I0216 21:37:52.567570 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-var-locks-cinder\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.567876 master-0 kubenswrapper[38936]: I0216 21:37:52.567581 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-etc-iscsi\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.568441 master-0 kubenswrapper[38936]: I0216 21:37:52.568408 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-var-locks-brick\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.568631 master-0 kubenswrapper[38936]: I0216 21:37:52.568593 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-etc-machine-id\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.569057 master-0 kubenswrapper[38936]: I0216 21:37:52.569021 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-sys\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.569703 master-0 kubenswrapper[38936]: I0216 21:37:52.569635 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-ovsdbserver-sb\") pod \"dnsmasq-dns-7c5d486cff-t8lst\" (UID: \"5d30d26b-c68c-4ad5-b006-47338242fc62\") " pod="openstack/dnsmasq-dns-7c5d486cff-t8lst"
Feb 16 21:37:52.570060 master-0 kubenswrapper[38936]: I0216 21:37:52.570024 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-dns-svc\") pod \"dnsmasq-dns-7c5d486cff-t8lst\" (UID: \"5d30d26b-c68c-4ad5-b006-47338242fc62\") " pod="openstack/dnsmasq-dns-7c5d486cff-t8lst"
Feb 16 21:37:52.571292 master-0 kubenswrapper[38936]: I0216 21:37:52.571111 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-config-data-custom\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.575154 master-0 kubenswrapper[38936]: I0216 21:37:52.575109 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-scripts\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.576586 master-0 kubenswrapper[38936]: I0216 21:37:52.576537 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-9c692-api-0"]
Feb 16 21:37:52.578713 master-0 kubenswrapper[38936]: I0216 21:37:52.578623 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9c692-api-0"
Feb 16 21:37:52.582391 master-0 kubenswrapper[38936]: I0216 21:37:52.582328 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-9c692-api-config-data"
Feb 16 21:37:52.601960 master-0 kubenswrapper[38936]: I0216 21:37:52.586633 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khlsz\" (UniqueName: \"kubernetes.io/projected/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-kube-api-access-khlsz\") pod \"cinder-9c692-backup-0\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:37:52.601960 master-0 kubenswrapper[38936]: I0216 21:37:52.590260 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbrjb\" (UniqueName: \"kubernetes.io/projected/5d30d26b-c68c-4ad5-b006-47338242fc62-kube-api-access-zbrjb\") pod \"dnsmasq-dns-7c5d486cff-t8lst\" (UID: \"5d30d26b-c68c-4ad5-b006-47338242fc62\") " pod="openstack/dnsmasq-dns-7c5d486cff-t8lst"
Feb 16 21:37:52.601960 master-0 kubenswrapper[38936]: I0216 21:37:52.600131 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9c692-api-0"]
Feb 16 21:37:52.774974 master-0 kubenswrapper[38936]: I0216 21:37:52.774902 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/185cbfbd-402e-4012-9c97-0a8f3a579e74-config-data-custom\") pod \"cinder-9c692-api-0\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " pod="openstack/cinder-9c692-api-0"
Feb 16 21:37:52.775203 master-0 kubenswrapper[38936]: I0216 21:37:52.775001 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName:
\"kubernetes.io/secret/185cbfbd-402e-4012-9c97-0a8f3a579e74-scripts\") pod \"cinder-9c692-api-0\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:37:52.775203 master-0 kubenswrapper[38936]: I0216 21:37:52.775040 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/185cbfbd-402e-4012-9c97-0a8f3a579e74-config-data\") pod \"cinder-9c692-api-0\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:37:52.775203 master-0 kubenswrapper[38936]: I0216 21:37:52.775066 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/185cbfbd-402e-4012-9c97-0a8f3a579e74-etc-machine-id\") pod \"cinder-9c692-api-0\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:37:52.775203 master-0 kubenswrapper[38936]: I0216 21:37:52.775086 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/185cbfbd-402e-4012-9c97-0a8f3a579e74-combined-ca-bundle\") pod \"cinder-9c692-api-0\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:37:52.775203 master-0 kubenswrapper[38936]: I0216 21:37:52.775118 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/185cbfbd-402e-4012-9c97-0a8f3a579e74-logs\") pod \"cinder-9c692-api-0\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:37:52.775203 master-0 kubenswrapper[38936]: I0216 21:37:52.775165 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7nfw\" (UniqueName: 
\"kubernetes.io/projected/185cbfbd-402e-4012-9c97-0a8f3a579e74-kube-api-access-g7nfw\") pod \"cinder-9c692-api-0\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:37:52.874007 master-0 kubenswrapper[38936]: I0216 21:37:52.873793 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9c692-backup-0" Feb 16 21:37:52.881995 master-0 kubenswrapper[38936]: I0216 21:37:52.881486 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/185cbfbd-402e-4012-9c97-0a8f3a579e74-config-data-custom\") pod \"cinder-9c692-api-0\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:37:52.881995 master-0 kubenswrapper[38936]: I0216 21:37:52.881569 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/185cbfbd-402e-4012-9c97-0a8f3a579e74-scripts\") pod \"cinder-9c692-api-0\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:37:52.881995 master-0 kubenswrapper[38936]: I0216 21:37:52.881596 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/185cbfbd-402e-4012-9c97-0a8f3a579e74-config-data\") pod \"cinder-9c692-api-0\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:37:52.881995 master-0 kubenswrapper[38936]: I0216 21:37:52.881630 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/185cbfbd-402e-4012-9c97-0a8f3a579e74-etc-machine-id\") pod \"cinder-9c692-api-0\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:37:52.881995 master-0 kubenswrapper[38936]: I0216 21:37:52.881660 38936 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/185cbfbd-402e-4012-9c97-0a8f3a579e74-combined-ca-bundle\") pod \"cinder-9c692-api-0\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:37:52.881995 master-0 kubenswrapper[38936]: I0216 21:37:52.881690 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/185cbfbd-402e-4012-9c97-0a8f3a579e74-logs\") pod \"cinder-9c692-api-0\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:37:52.881995 master-0 kubenswrapper[38936]: I0216 21:37:52.881735 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7nfw\" (UniqueName: \"kubernetes.io/projected/185cbfbd-402e-4012-9c97-0a8f3a579e74-kube-api-access-g7nfw\") pod \"cinder-9c692-api-0\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:37:52.883261 master-0 kubenswrapper[38936]: I0216 21:37:52.882826 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/185cbfbd-402e-4012-9c97-0a8f3a579e74-etc-machine-id\") pod \"cinder-9c692-api-0\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:37:52.885158 master-0 kubenswrapper[38936]: I0216 21:37:52.885058 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/185cbfbd-402e-4012-9c97-0a8f3a579e74-logs\") pod \"cinder-9c692-api-0\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:37:52.892230 master-0 kubenswrapper[38936]: I0216 21:37:52.887562 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/185cbfbd-402e-4012-9c97-0a8f3a579e74-combined-ca-bundle\") pod \"cinder-9c692-api-0\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:37:52.892230 master-0 kubenswrapper[38936]: I0216 21:37:52.888531 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/185cbfbd-402e-4012-9c97-0a8f3a579e74-config-data\") pod \"cinder-9c692-api-0\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:37:52.892230 master-0 kubenswrapper[38936]: I0216 21:37:52.890240 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/185cbfbd-402e-4012-9c97-0a8f3a579e74-config-data-custom\") pod \"cinder-9c692-api-0\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:37:52.892230 master-0 kubenswrapper[38936]: I0216 21:37:52.890802 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c5d486cff-t8lst" Feb 16 21:37:52.915809 master-0 kubenswrapper[38936]: I0216 21:37:52.915274 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/185cbfbd-402e-4012-9c97-0a8f3a579e74-scripts\") pod \"cinder-9c692-api-0\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:37:52.925175 master-0 kubenswrapper[38936]: I0216 21:37:52.925122 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7nfw\" (UniqueName: \"kubernetes.io/projected/185cbfbd-402e-4012-9c97-0a8f3a579e74-kube-api-access-g7nfw\") pod \"cinder-9c692-api-0\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:37:52.949481 master-0 kubenswrapper[38936]: I0216 21:37:52.926818 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9c692-api-0" Feb 16 21:37:52.949481 master-0 kubenswrapper[38936]: I0216 21:37:52.937856 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9c692-volume-lvm-iscsi-0"] Feb 16 21:37:52.992059 master-0 kubenswrapper[38936]: W0216 21:37:52.990983 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9037d3ef_953a_4af9_9c81_d94587ee2d9d.slice/crio-abfd1e0e907ef04369a8a7966dc5afff24f887cc8c61e8429c90f4a11887f4af WatchSource:0}: Error finding container abfd1e0e907ef04369a8a7966dc5afff24f887cc8c61e8429c90f4a11887f4af: Status 404 returned error can't find the container with id abfd1e0e907ef04369a8a7966dc5afff24f887cc8c61e8429c90f4a11887f4af Feb 16 21:37:53.037515 master-0 kubenswrapper[38936]: I0216 21:37:53.037459 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-znszx" Feb 16 21:37:53.192395 master-0 kubenswrapper[38936]: I0216 21:37:53.192305 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ae3f7123-0f56-47f9-afdb-cc6bff73ecd3-config\") pod \"ae3f7123-0f56-47f9-afdb-cc6bff73ecd3\" (UID: \"ae3f7123-0f56-47f9-afdb-cc6bff73ecd3\") " Feb 16 21:37:53.192483 master-0 kubenswrapper[38936]: I0216 21:37:53.192425 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3f7123-0f56-47f9-afdb-cc6bff73ecd3-combined-ca-bundle\") pod \"ae3f7123-0f56-47f9-afdb-cc6bff73ecd3\" (UID: \"ae3f7123-0f56-47f9-afdb-cc6bff73ecd3\") " Feb 16 21:37:53.192626 master-0 kubenswrapper[38936]: I0216 21:37:53.192601 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptd55\" (UniqueName: \"kubernetes.io/projected/ae3f7123-0f56-47f9-afdb-cc6bff73ecd3-kube-api-access-ptd55\") pod \"ae3f7123-0f56-47f9-afdb-cc6bff73ecd3\" (UID: \"ae3f7123-0f56-47f9-afdb-cc6bff73ecd3\") " Feb 16 21:37:53.202117 master-0 kubenswrapper[38936]: I0216 21:37:53.202054 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae3f7123-0f56-47f9-afdb-cc6bff73ecd3-kube-api-access-ptd55" (OuterVolumeSpecName: "kube-api-access-ptd55") pod "ae3f7123-0f56-47f9-afdb-cc6bff73ecd3" (UID: "ae3f7123-0f56-47f9-afdb-cc6bff73ecd3"). InnerVolumeSpecName "kube-api-access-ptd55". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:37:53.236024 master-0 kubenswrapper[38936]: I0216 21:37:53.234476 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9c692-scheduler-0"] Feb 16 21:37:53.238266 master-0 kubenswrapper[38936]: I0216 21:37:53.237866 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3f7123-0f56-47f9-afdb-cc6bff73ecd3-config" (OuterVolumeSpecName: "config") pod "ae3f7123-0f56-47f9-afdb-cc6bff73ecd3" (UID: "ae3f7123-0f56-47f9-afdb-cc6bff73ecd3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:53.260857 master-0 kubenswrapper[38936]: I0216 21:37:53.241015 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae3f7123-0f56-47f9-afdb-cc6bff73ecd3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae3f7123-0f56-47f9-afdb-cc6bff73ecd3" (UID: "ae3f7123-0f56-47f9-afdb-cc6bff73ecd3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:37:53.260857 master-0 kubenswrapper[38936]: W0216 21:37:53.241530 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf85d31c9_7303_4e30_ba85_3362b5828482.slice/crio-f293490ada0dee53010f28c8d3e54f9dfb1b428fddbb3ea0ae23abff0ed6e021 WatchSource:0}: Error finding container f293490ada0dee53010f28c8d3e54f9dfb1b428fddbb3ea0ae23abff0ed6e021: Status 404 returned error can't find the container with id f293490ada0dee53010f28c8d3e54f9dfb1b428fddbb3ea0ae23abff0ed6e021 Feb 16 21:37:53.303132 master-0 kubenswrapper[38936]: I0216 21:37:53.300934 38936 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ae3f7123-0f56-47f9-afdb-cc6bff73ecd3-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:53.303132 master-0 kubenswrapper[38936]: I0216 21:37:53.300993 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae3f7123-0f56-47f9-afdb-cc6bff73ecd3-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:53.303132 master-0 kubenswrapper[38936]: I0216 21:37:53.301008 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptd55\" (UniqueName: \"kubernetes.io/projected/ae3f7123-0f56-47f9-afdb-cc6bff73ecd3-kube-api-access-ptd55\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:53.458715 master-0 kubenswrapper[38936]: I0216 21:37:53.458632 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-scheduler-0" event={"ID":"f85d31c9-7303-4e30-ba85-3362b5828482","Type":"ContainerStarted","Data":"f293490ada0dee53010f28c8d3e54f9dfb1b428fddbb3ea0ae23abff0ed6e021"} Feb 16 21:37:53.462884 master-0 kubenswrapper[38936]: I0216 21:37:53.462833 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-volume-lvm-iscsi-0" 
event={"ID":"9037d3ef-953a-4af9-9c81-d94587ee2d9d","Type":"ContainerStarted","Data":"abfd1e0e907ef04369a8a7966dc5afff24f887cc8c61e8429c90f4a11887f4af"} Feb 16 21:37:53.473799 master-0 kubenswrapper[38936]: I0216 21:37:53.473618 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-znszx" event={"ID":"ae3f7123-0f56-47f9-afdb-cc6bff73ecd3","Type":"ContainerDied","Data":"c554b4ca14823822942e1efee12e1b10c29526b8263d670acdf67079c36beeb7"} Feb 16 21:37:53.474168 master-0 kubenswrapper[38936]: I0216 21:37:53.474146 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c554b4ca14823822942e1efee12e1b10c29526b8263d670acdf67079c36beeb7" Feb 16 21:37:53.474273 master-0 kubenswrapper[38936]: I0216 21:37:53.474111 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-znszx" Feb 16 21:37:53.649951 master-0 kubenswrapper[38936]: I0216 21:37:53.648829 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c5d486cff-t8lst"] Feb 16 21:37:53.850760 master-0 kubenswrapper[38936]: I0216 21:37:53.846291 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9c692-backup-0"] Feb 16 21:37:53.964485 master-0 kubenswrapper[38936]: I0216 21:37:53.955855 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9c692-api-0"] Feb 16 21:37:53.964485 master-0 kubenswrapper[38936]: I0216 21:37:53.955926 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c5d486cff-t8lst"] Feb 16 21:37:53.964485 master-0 kubenswrapper[38936]: I0216 21:37:53.955943 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b95d794ff-8msjt"] Feb 16 21:37:53.964485 master-0 kubenswrapper[38936]: E0216 21:37:53.956427 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae3f7123-0f56-47f9-afdb-cc6bff73ecd3" containerName="neutron-db-sync" Feb 16 
21:37:53.964485 master-0 kubenswrapper[38936]: I0216 21:37:53.956444 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae3f7123-0f56-47f9-afdb-cc6bff73ecd3" containerName="neutron-db-sync" Feb 16 21:37:53.964485 master-0 kubenswrapper[38936]: I0216 21:37:53.956898 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae3f7123-0f56-47f9-afdb-cc6bff73ecd3" containerName="neutron-db-sync" Feb 16 21:37:53.964485 master-0 kubenswrapper[38936]: I0216 21:37:53.958525 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b95d794ff-8msjt" Feb 16 21:37:53.984661 master-0 kubenswrapper[38936]: I0216 21:37:53.970599 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b95d794ff-8msjt"] Feb 16 21:37:54.003743 master-0 kubenswrapper[38936]: I0216 21:37:53.992792 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-64949f9d84-p7hqz"] Feb 16 21:37:54.003743 master-0 kubenswrapper[38936]: I0216 21:37:53.995437 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64949f9d84-p7hqz" Feb 16 21:37:54.018583 master-0 kubenswrapper[38936]: I0216 21:37:54.018501 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64949f9d84-p7hqz"] Feb 16 21:37:54.025901 master-0 kubenswrapper[38936]: I0216 21:37:54.025852 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 21:37:54.026271 master-0 kubenswrapper[38936]: I0216 21:37:54.026146 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 21:37:54.033459 master-0 kubenswrapper[38936]: I0216 21:37:54.027100 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 16 21:37:54.060845 master-0 kubenswrapper[38936]: I0216 21:37:54.060770 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c65zf\" (UniqueName: \"kubernetes.io/projected/b584e233-74f9-47f5-99e2-2fa42826ac27-kube-api-access-c65zf\") pod \"dnsmasq-dns-b95d794ff-8msjt\" (UID: \"b584e233-74f9-47f5-99e2-2fa42826ac27\") " pod="openstack/dnsmasq-dns-b95d794ff-8msjt" Feb 16 21:37:54.061117 master-0 kubenswrapper[38936]: I0216 21:37:54.060878 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-config\") pod \"dnsmasq-dns-b95d794ff-8msjt\" (UID: \"b584e233-74f9-47f5-99e2-2fa42826ac27\") " pod="openstack/dnsmasq-dns-b95d794ff-8msjt" Feb 16 21:37:54.061117 master-0 kubenswrapper[38936]: I0216 21:37:54.060902 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-ovsdbserver-nb\") pod \"dnsmasq-dns-b95d794ff-8msjt\" (UID: \"b584e233-74f9-47f5-99e2-2fa42826ac27\") " 
pod="openstack/dnsmasq-dns-b95d794ff-8msjt" Feb 16 21:37:54.061117 master-0 kubenswrapper[38936]: I0216 21:37:54.061025 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-ovsdbserver-sb\") pod \"dnsmasq-dns-b95d794ff-8msjt\" (UID: \"b584e233-74f9-47f5-99e2-2fa42826ac27\") " pod="openstack/dnsmasq-dns-b95d794ff-8msjt" Feb 16 21:37:54.061117 master-0 kubenswrapper[38936]: I0216 21:37:54.061096 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-dns-swift-storage-0\") pod \"dnsmasq-dns-b95d794ff-8msjt\" (UID: \"b584e233-74f9-47f5-99e2-2fa42826ac27\") " pod="openstack/dnsmasq-dns-b95d794ff-8msjt" Feb 16 21:37:54.061248 master-0 kubenswrapper[38936]: I0216 21:37:54.061134 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-dns-svc\") pod \"dnsmasq-dns-b95d794ff-8msjt\" (UID: \"b584e233-74f9-47f5-99e2-2fa42826ac27\") " pod="openstack/dnsmasq-dns-b95d794ff-8msjt" Feb 16 21:37:54.172103 master-0 kubenswrapper[38936]: I0216 21:37:54.171889 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-ovsdbserver-sb\") pod \"dnsmasq-dns-b95d794ff-8msjt\" (UID: \"b584e233-74f9-47f5-99e2-2fa42826ac27\") " pod="openstack/dnsmasq-dns-b95d794ff-8msjt" Feb 16 21:37:54.172103 master-0 kubenswrapper[38936]: I0216 21:37:54.172061 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d470e92-7826-4314-9ecb-7b37cd11b8e2-combined-ca-bundle\") pod 
\"neutron-64949f9d84-p7hqz\" (UID: \"6d470e92-7826-4314-9ecb-7b37cd11b8e2\") " pod="openstack/neutron-64949f9d84-p7hqz" Feb 16 21:37:54.172344 master-0 kubenswrapper[38936]: I0216 21:37:54.172132 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-dns-swift-storage-0\") pod \"dnsmasq-dns-b95d794ff-8msjt\" (UID: \"b584e233-74f9-47f5-99e2-2fa42826ac27\") " pod="openstack/dnsmasq-dns-b95d794ff-8msjt" Feb 16 21:37:54.172344 master-0 kubenswrapper[38936]: I0216 21:37:54.172158 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d470e92-7826-4314-9ecb-7b37cd11b8e2-config\") pod \"neutron-64949f9d84-p7hqz\" (UID: \"6d470e92-7826-4314-9ecb-7b37cd11b8e2\") " pod="openstack/neutron-64949f9d84-p7hqz" Feb 16 21:37:54.172344 master-0 kubenswrapper[38936]: I0216 21:37:54.172203 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-dns-svc\") pod \"dnsmasq-dns-b95d794ff-8msjt\" (UID: \"b584e233-74f9-47f5-99e2-2fa42826ac27\") " pod="openstack/dnsmasq-dns-b95d794ff-8msjt" Feb 16 21:37:54.173195 master-0 kubenswrapper[38936]: I0216 21:37:54.173139 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-ovsdbserver-sb\") pod \"dnsmasq-dns-b95d794ff-8msjt\" (UID: \"b584e233-74f9-47f5-99e2-2fa42826ac27\") " pod="openstack/dnsmasq-dns-b95d794ff-8msjt" Feb 16 21:37:54.173872 master-0 kubenswrapper[38936]: I0216 21:37:54.173810 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-dns-swift-storage-0\") pod 
\"dnsmasq-dns-b95d794ff-8msjt\" (UID: \"b584e233-74f9-47f5-99e2-2fa42826ac27\") " pod="openstack/dnsmasq-dns-b95d794ff-8msjt" Feb 16 21:37:54.174545 master-0 kubenswrapper[38936]: I0216 21:37:54.174104 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c65zf\" (UniqueName: \"kubernetes.io/projected/b584e233-74f9-47f5-99e2-2fa42826ac27-kube-api-access-c65zf\") pod \"dnsmasq-dns-b95d794ff-8msjt\" (UID: \"b584e233-74f9-47f5-99e2-2fa42826ac27\") " pod="openstack/dnsmasq-dns-b95d794ff-8msjt" Feb 16 21:37:54.174545 master-0 kubenswrapper[38936]: I0216 21:37:54.174183 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d470e92-7826-4314-9ecb-7b37cd11b8e2-ovndb-tls-certs\") pod \"neutron-64949f9d84-p7hqz\" (UID: \"6d470e92-7826-4314-9ecb-7b37cd11b8e2\") " pod="openstack/neutron-64949f9d84-p7hqz" Feb 16 21:37:54.174545 master-0 kubenswrapper[38936]: I0216 21:37:54.174207 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjxps\" (UniqueName: \"kubernetes.io/projected/6d470e92-7826-4314-9ecb-7b37cd11b8e2-kube-api-access-sjxps\") pod \"neutron-64949f9d84-p7hqz\" (UID: \"6d470e92-7826-4314-9ecb-7b37cd11b8e2\") " pod="openstack/neutron-64949f9d84-p7hqz" Feb 16 21:37:54.174545 master-0 kubenswrapper[38936]: I0216 21:37:54.174294 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-config\") pod \"dnsmasq-dns-b95d794ff-8msjt\" (UID: \"b584e233-74f9-47f5-99e2-2fa42826ac27\") " pod="openstack/dnsmasq-dns-b95d794ff-8msjt" Feb 16 21:37:54.174545 master-0 kubenswrapper[38936]: I0216 21:37:54.174317 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-ovsdbserver-nb\") pod \"dnsmasq-dns-b95d794ff-8msjt\" (UID: \"b584e233-74f9-47f5-99e2-2fa42826ac27\") " pod="openstack/dnsmasq-dns-b95d794ff-8msjt" Feb 16 21:37:54.174545 master-0 kubenswrapper[38936]: I0216 21:37:54.174358 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d470e92-7826-4314-9ecb-7b37cd11b8e2-httpd-config\") pod \"neutron-64949f9d84-p7hqz\" (UID: \"6d470e92-7826-4314-9ecb-7b37cd11b8e2\") " pod="openstack/neutron-64949f9d84-p7hqz" Feb 16 21:37:54.174545 master-0 kubenswrapper[38936]: I0216 21:37:54.174417 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-dns-svc\") pod \"dnsmasq-dns-b95d794ff-8msjt\" (UID: \"b584e233-74f9-47f5-99e2-2fa42826ac27\") " pod="openstack/dnsmasq-dns-b95d794ff-8msjt" Feb 16 21:37:54.175203 master-0 kubenswrapper[38936]: I0216 21:37:54.175040 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-config\") pod \"dnsmasq-dns-b95d794ff-8msjt\" (UID: \"b584e233-74f9-47f5-99e2-2fa42826ac27\") " pod="openstack/dnsmasq-dns-b95d794ff-8msjt" Feb 16 21:37:54.187916 master-0 kubenswrapper[38936]: I0216 21:37:54.179833 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-ovsdbserver-nb\") pod \"dnsmasq-dns-b95d794ff-8msjt\" (UID: \"b584e233-74f9-47f5-99e2-2fa42826ac27\") " pod="openstack/dnsmasq-dns-b95d794ff-8msjt" Feb 16 21:37:54.195236 master-0 kubenswrapper[38936]: I0216 21:37:54.194981 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c65zf\" (UniqueName: 
\"kubernetes.io/projected/b584e233-74f9-47f5-99e2-2fa42826ac27-kube-api-access-c65zf\") pod \"dnsmasq-dns-b95d794ff-8msjt\" (UID: \"b584e233-74f9-47f5-99e2-2fa42826ac27\") " pod="openstack/dnsmasq-dns-b95d794ff-8msjt" Feb 16 21:37:54.281017 master-0 kubenswrapper[38936]: I0216 21:37:54.280644 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d470e92-7826-4314-9ecb-7b37cd11b8e2-ovndb-tls-certs\") pod \"neutron-64949f9d84-p7hqz\" (UID: \"6d470e92-7826-4314-9ecb-7b37cd11b8e2\") " pod="openstack/neutron-64949f9d84-p7hqz" Feb 16 21:37:54.281017 master-0 kubenswrapper[38936]: I0216 21:37:54.280773 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjxps\" (UniqueName: \"kubernetes.io/projected/6d470e92-7826-4314-9ecb-7b37cd11b8e2-kube-api-access-sjxps\") pod \"neutron-64949f9d84-p7hqz\" (UID: \"6d470e92-7826-4314-9ecb-7b37cd11b8e2\") " pod="openstack/neutron-64949f9d84-p7hqz" Feb 16 21:37:54.281017 master-0 kubenswrapper[38936]: I0216 21:37:54.280855 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d470e92-7826-4314-9ecb-7b37cd11b8e2-httpd-config\") pod \"neutron-64949f9d84-p7hqz\" (UID: \"6d470e92-7826-4314-9ecb-7b37cd11b8e2\") " pod="openstack/neutron-64949f9d84-p7hqz" Feb 16 21:37:54.281017 master-0 kubenswrapper[38936]: I0216 21:37:54.280958 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d470e92-7826-4314-9ecb-7b37cd11b8e2-combined-ca-bundle\") pod \"neutron-64949f9d84-p7hqz\" (UID: \"6d470e92-7826-4314-9ecb-7b37cd11b8e2\") " pod="openstack/neutron-64949f9d84-p7hqz" Feb 16 21:37:54.281308 master-0 kubenswrapper[38936]: I0216 21:37:54.281162 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/6d470e92-7826-4314-9ecb-7b37cd11b8e2-config\") pod \"neutron-64949f9d84-p7hqz\" (UID: \"6d470e92-7826-4314-9ecb-7b37cd11b8e2\") " pod="openstack/neutron-64949f9d84-p7hqz" Feb 16 21:37:54.286310 master-0 kubenswrapper[38936]: I0216 21:37:54.286228 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b95d794ff-8msjt" Feb 16 21:37:54.288071 master-0 kubenswrapper[38936]: I0216 21:37:54.287278 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d470e92-7826-4314-9ecb-7b37cd11b8e2-combined-ca-bundle\") pod \"neutron-64949f9d84-p7hqz\" (UID: \"6d470e92-7826-4314-9ecb-7b37cd11b8e2\") " pod="openstack/neutron-64949f9d84-p7hqz" Feb 16 21:37:54.288071 master-0 kubenswrapper[38936]: I0216 21:37:54.287973 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d470e92-7826-4314-9ecb-7b37cd11b8e2-config\") pod \"neutron-64949f9d84-p7hqz\" (UID: \"6d470e92-7826-4314-9ecb-7b37cd11b8e2\") " pod="openstack/neutron-64949f9d84-p7hqz" Feb 16 21:37:54.298069 master-0 kubenswrapper[38936]: I0216 21:37:54.297994 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d470e92-7826-4314-9ecb-7b37cd11b8e2-ovndb-tls-certs\") pod \"neutron-64949f9d84-p7hqz\" (UID: \"6d470e92-7826-4314-9ecb-7b37cd11b8e2\") " pod="openstack/neutron-64949f9d84-p7hqz" Feb 16 21:37:54.300196 master-0 kubenswrapper[38936]: I0216 21:37:54.298754 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d470e92-7826-4314-9ecb-7b37cd11b8e2-httpd-config\") pod \"neutron-64949f9d84-p7hqz\" (UID: \"6d470e92-7826-4314-9ecb-7b37cd11b8e2\") " pod="openstack/neutron-64949f9d84-p7hqz" Feb 16 21:37:54.314405 master-0 kubenswrapper[38936]: I0216 21:37:54.314337 38936 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjxps\" (UniqueName: \"kubernetes.io/projected/6d470e92-7826-4314-9ecb-7b37cd11b8e2-kube-api-access-sjxps\") pod \"neutron-64949f9d84-p7hqz\" (UID: \"6d470e92-7826-4314-9ecb-7b37cd11b8e2\") " pod="openstack/neutron-64949f9d84-p7hqz" Feb 16 21:37:54.342244 master-0 kubenswrapper[38936]: I0216 21:37:54.342182 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64949f9d84-p7hqz" Feb 16 21:37:54.497114 master-0 kubenswrapper[38936]: I0216 21:37:54.496063 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-api-0" event={"ID":"185cbfbd-402e-4012-9c97-0a8f3a579e74","Type":"ContainerStarted","Data":"40353b6c5450546e487346f9c132c9e9c4cf6ab9a1e9d28af68dacc99cfc106e"} Feb 16 21:37:54.499166 master-0 kubenswrapper[38936]: I0216 21:37:54.498465 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-backup-0" event={"ID":"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c","Type":"ContainerStarted","Data":"ac1875bcb6ea2772142ff3144a05f3ee6bf1e33aaca6ed1e3b7abfd2ae857265"} Feb 16 21:37:54.500868 master-0 kubenswrapper[38936]: I0216 21:37:54.500831 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c5d486cff-t8lst" event={"ID":"5d30d26b-c68c-4ad5-b006-47338242fc62","Type":"ContainerStarted","Data":"dc85ccaa1d5620916efb4c72038161f760b970ccef6313205314280c926681de"} Feb 16 21:37:54.850609 master-0 kubenswrapper[38936]: I0216 21:37:54.850534 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b95d794ff-8msjt"] Feb 16 21:37:55.372030 master-0 kubenswrapper[38936]: I0216 21:37:55.371486 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-9c692-api-0"] Feb 16 21:37:55.462390 master-0 kubenswrapper[38936]: I0216 21:37:55.461971 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-64949f9d84-p7hqz"] Feb 16 21:37:55.521739 master-0 kubenswrapper[38936]: I0216 21:37:55.521557 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-volume-lvm-iscsi-0" event={"ID":"9037d3ef-953a-4af9-9c81-d94587ee2d9d","Type":"ContainerStarted","Data":"de58beeba240ff51ab416be9d35eb6bbffc3258b096e90d2d4f9ebb61a7b8240"} Feb 16 21:37:55.527488 master-0 kubenswrapper[38936]: I0216 21:37:55.527414 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-backup-0" event={"ID":"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c","Type":"ContainerStarted","Data":"d6e5959a5090ef60ef7f307ee014f74098c36a43f3d176058041b73da8ecee9b"} Feb 16 21:37:55.538205 master-0 kubenswrapper[38936]: I0216 21:37:55.534547 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64949f9d84-p7hqz" event={"ID":"6d470e92-7826-4314-9ecb-7b37cd11b8e2","Type":"ContainerStarted","Data":"675dda40a55f9fc235aacae4000beaf4e40c754cde0d54240c4ce0756b13ca6c"} Feb 16 21:37:55.543375 master-0 kubenswrapper[38936]: I0216 21:37:55.543312 38936 generic.go:334] "Generic (PLEG): container finished" podID="5d30d26b-c68c-4ad5-b006-47338242fc62" containerID="43b83a15c0cd8c8684d48bb8cb54e02a6a9238baad8ab7bb57dec2c75a799b6c" exitCode=0 Feb 16 21:37:55.543673 master-0 kubenswrapper[38936]: I0216 21:37:55.543596 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c5d486cff-t8lst" event={"ID":"5d30d26b-c68c-4ad5-b006-47338242fc62","Type":"ContainerDied","Data":"43b83a15c0cd8c8684d48bb8cb54e02a6a9238baad8ab7bb57dec2c75a799b6c"} Feb 16 21:37:55.559313 master-0 kubenswrapper[38936]: I0216 21:37:55.559241 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-api-0" event={"ID":"185cbfbd-402e-4012-9c97-0a8f3a579e74","Type":"ContainerStarted","Data":"d0d0184874ecb5fa7b62272d79af51318cdad8380341f19c4a14140dfab50e9f"} Feb 16 21:37:55.600361 master-0 kubenswrapper[38936]: I0216 
21:37:55.600179 38936 generic.go:334] "Generic (PLEG): container finished" podID="b584e233-74f9-47f5-99e2-2fa42826ac27" containerID="f5796dccbe40452a611853e55738018ab5202663551ff1db3862d38e0617ed50" exitCode=0 Feb 16 21:37:55.600361 master-0 kubenswrapper[38936]: I0216 21:37:55.600270 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b95d794ff-8msjt" event={"ID":"b584e233-74f9-47f5-99e2-2fa42826ac27","Type":"ContainerDied","Data":"f5796dccbe40452a611853e55738018ab5202663551ff1db3862d38e0617ed50"} Feb 16 21:37:55.600361 master-0 kubenswrapper[38936]: I0216 21:37:55.600306 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b95d794ff-8msjt" event={"ID":"b584e233-74f9-47f5-99e2-2fa42826ac27","Type":"ContainerStarted","Data":"c74f18d27e9ee726167514d2de19f0ccd4a0695dc5b77cc4d86dc5ba63cf6308"} Feb 16 21:37:56.063719 master-0 kubenswrapper[38936]: I0216 21:37:56.062922 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c5d486cff-t8lst" Feb 16 21:37:56.125478 master-0 kubenswrapper[38936]: I0216 21:37:56.123688 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-dns-svc\") pod \"5d30d26b-c68c-4ad5-b006-47338242fc62\" (UID: \"5d30d26b-c68c-4ad5-b006-47338242fc62\") " Feb 16 21:37:56.125478 master-0 kubenswrapper[38936]: I0216 21:37:56.123877 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-ovsdbserver-nb\") pod \"5d30d26b-c68c-4ad5-b006-47338242fc62\" (UID: \"5d30d26b-c68c-4ad5-b006-47338242fc62\") " Feb 16 21:37:56.125478 master-0 kubenswrapper[38936]: I0216 21:37:56.123995 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbrjb\" (UniqueName: 
\"kubernetes.io/projected/5d30d26b-c68c-4ad5-b006-47338242fc62-kube-api-access-zbrjb\") pod \"5d30d26b-c68c-4ad5-b006-47338242fc62\" (UID: \"5d30d26b-c68c-4ad5-b006-47338242fc62\") " Feb 16 21:37:56.125478 master-0 kubenswrapper[38936]: I0216 21:37:56.124043 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-config\") pod \"5d30d26b-c68c-4ad5-b006-47338242fc62\" (UID: \"5d30d26b-c68c-4ad5-b006-47338242fc62\") " Feb 16 21:37:56.125478 master-0 kubenswrapper[38936]: I0216 21:37:56.124132 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-dns-swift-storage-0\") pod \"5d30d26b-c68c-4ad5-b006-47338242fc62\" (UID: \"5d30d26b-c68c-4ad5-b006-47338242fc62\") " Feb 16 21:37:56.125478 master-0 kubenswrapper[38936]: I0216 21:37:56.124348 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-ovsdbserver-sb\") pod \"5d30d26b-c68c-4ad5-b006-47338242fc62\" (UID: \"5d30d26b-c68c-4ad5-b006-47338242fc62\") " Feb 16 21:37:56.144606 master-0 kubenswrapper[38936]: I0216 21:37:56.143517 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d30d26b-c68c-4ad5-b006-47338242fc62-kube-api-access-zbrjb" (OuterVolumeSpecName: "kube-api-access-zbrjb") pod "5d30d26b-c68c-4ad5-b006-47338242fc62" (UID: "5d30d26b-c68c-4ad5-b006-47338242fc62"). InnerVolumeSpecName "kube-api-access-zbrjb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:37:56.228524 master-0 kubenswrapper[38936]: I0216 21:37:56.228472 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbrjb\" (UniqueName: \"kubernetes.io/projected/5d30d26b-c68c-4ad5-b006-47338242fc62-kube-api-access-zbrjb\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:56.426877 master-0 kubenswrapper[38936]: I0216 21:37:56.426813 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5d30d26b-c68c-4ad5-b006-47338242fc62" (UID: "5d30d26b-c68c-4ad5-b006-47338242fc62"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:37:56.438047 master-0 kubenswrapper[38936]: I0216 21:37:56.436451 38936 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:56.446314 master-0 kubenswrapper[38936]: I0216 21:37:56.444085 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5d30d26b-c68c-4ad5-b006-47338242fc62" (UID: "5d30d26b-c68c-4ad5-b006-47338242fc62"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:37:56.446314 master-0 kubenswrapper[38936]: I0216 21:37:56.446176 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-config" (OuterVolumeSpecName: "config") pod "5d30d26b-c68c-4ad5-b006-47338242fc62" (UID: "5d30d26b-c68c-4ad5-b006-47338242fc62"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:37:56.451865 master-0 kubenswrapper[38936]: I0216 21:37:56.447165 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5d30d26b-c68c-4ad5-b006-47338242fc62" (UID: "5d30d26b-c68c-4ad5-b006-47338242fc62"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:37:56.474683 master-0 kubenswrapper[38936]: I0216 21:37:56.467576 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5d30d26b-c68c-4ad5-b006-47338242fc62" (UID: "5d30d26b-c68c-4ad5-b006-47338242fc62"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:37:56.540716 master-0 kubenswrapper[38936]: I0216 21:37:56.539901 38936 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:56.540716 master-0 kubenswrapper[38936]: I0216 21:37:56.539967 38936 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:56.540716 master-0 kubenswrapper[38936]: I0216 21:37:56.539983 38936 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:56.540716 master-0 kubenswrapper[38936]: I0216 21:37:56.539997 38936 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/5d30d26b-c68c-4ad5-b006-47338242fc62-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:37:56.633086 master-0 kubenswrapper[38936]: I0216 21:37:56.632889 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-volume-lvm-iscsi-0" event={"ID":"9037d3ef-953a-4af9-9c81-d94587ee2d9d","Type":"ContainerStarted","Data":"ba2624715ebf9efd2ea92e95d3e1e4f500aa54e8dddab12ef65d043f0dbca2d7"} Feb 16 21:37:56.638213 master-0 kubenswrapper[38936]: I0216 21:37:56.638112 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-scheduler-0" event={"ID":"f85d31c9-7303-4e30-ba85-3362b5828482","Type":"ContainerStarted","Data":"83e301f2617e8ccaa66a653151eeea709b4a60eac21c68ec1ab0323ee8fb54b7"} Feb 16 21:37:56.640173 master-0 kubenswrapper[38936]: I0216 21:37:56.639972 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-backup-0" event={"ID":"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c","Type":"ContainerStarted","Data":"cb66e7b9ff0103a2facbfd0ee9efd49c255aede2f96c75eea1f74248ee6db88d"} Feb 16 21:37:56.655317 master-0 kubenswrapper[38936]: I0216 21:37:56.653970 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64949f9d84-p7hqz" event={"ID":"6d470e92-7826-4314-9ecb-7b37cd11b8e2","Type":"ContainerStarted","Data":"47ea8ef4cdc91a083bbba85843b6f6710d5786053128103ed8cf484c75a6e412"} Feb 16 21:37:56.695685 master-0 kubenswrapper[38936]: I0216 21:37:56.689871 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c5d486cff-t8lst" event={"ID":"5d30d26b-c68c-4ad5-b006-47338242fc62","Type":"ContainerDied","Data":"dc85ccaa1d5620916efb4c72038161f760b970ccef6313205314280c926681de"} Feb 16 21:37:56.695685 master-0 kubenswrapper[38936]: I0216 21:37:56.689967 38936 scope.go:117] "RemoveContainer" containerID="43b83a15c0cd8c8684d48bb8cb54e02a6a9238baad8ab7bb57dec2c75a799b6c" Feb 16 21:37:56.695685 master-0 
kubenswrapper[38936]: I0216 21:37:56.690239 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c5d486cff-t8lst" Feb 16 21:37:56.708760 master-0 kubenswrapper[38936]: I0216 21:37:56.708214 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-api-0" event={"ID":"185cbfbd-402e-4012-9c97-0a8f3a579e74","Type":"ContainerStarted","Data":"4142046c75ddc9a8651c805744e802ebb8644b2edddd407e01c1c47a8f65783a"} Feb 16 21:37:56.708760 master-0 kubenswrapper[38936]: I0216 21:37:56.708499 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-9c692-api-0" podUID="185cbfbd-402e-4012-9c97-0a8f3a579e74" containerName="cinder-9c692-api-log" containerID="cri-o://d0d0184874ecb5fa7b62272d79af51318cdad8380341f19c4a14140dfab50e9f" gracePeriod=30 Feb 16 21:37:56.713274 master-0 kubenswrapper[38936]: I0216 21:37:56.710939 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-9c692-api-0" Feb 16 21:37:56.713274 master-0 kubenswrapper[38936]: I0216 21:37:56.711005 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-9c692-api-0" podUID="185cbfbd-402e-4012-9c97-0a8f3a579e74" containerName="cinder-api" containerID="cri-o://4142046c75ddc9a8651c805744e802ebb8644b2edddd407e01c1c47a8f65783a" gracePeriod=30 Feb 16 21:37:56.801285 master-0 kubenswrapper[38936]: I0216 21:37:56.798410 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-9c692-volume-lvm-iscsi-0" podStartSLOduration=4.519251566 podStartE2EDuration="5.798379278s" podCreationTimestamp="2026-02-16 21:37:51 +0000 UTC" firstStartedPulling="2026-02-16 21:37:53.003986195 +0000 UTC m=+903.355989557" lastFinishedPulling="2026-02-16 21:37:54.283113897 +0000 UTC m=+904.635117269" observedRunningTime="2026-02-16 21:37:56.759532269 +0000 UTC m=+907.111535631" watchObservedRunningTime="2026-02-16 
21:37:56.798379278 +0000 UTC m=+907.150382640" Feb 16 21:37:56.963514 master-0 kubenswrapper[38936]: I0216 21:37:56.962829 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-9c692-api-0" podStartSLOduration=4.962724457 podStartE2EDuration="4.962724457s" podCreationTimestamp="2026-02-16 21:37:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:37:56.950354114 +0000 UTC m=+907.302357476" watchObservedRunningTime="2026-02-16 21:37:56.962724457 +0000 UTC m=+907.314727819" Feb 16 21:37:56.994010 master-0 kubenswrapper[38936]: I0216 21:37:56.992263 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-9c692-backup-0" podStartSLOduration=4.716370423 podStartE2EDuration="5.992243236s" podCreationTimestamp="2026-02-16 21:37:51 +0000 UTC" firstStartedPulling="2026-02-16 21:37:53.666133501 +0000 UTC m=+904.018136863" lastFinishedPulling="2026-02-16 21:37:54.942006314 +0000 UTC m=+905.294009676" observedRunningTime="2026-02-16 21:37:56.9813021 +0000 UTC m=+907.333305462" watchObservedRunningTime="2026-02-16 21:37:56.992243236 +0000 UTC m=+907.344246598" Feb 16 21:37:57.066687 master-0 kubenswrapper[38936]: I0216 21:37:57.062751 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c5d486cff-t8lst"] Feb 16 21:37:57.078678 master-0 kubenswrapper[38936]: I0216 21:37:57.077561 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c5d486cff-t8lst"] Feb 16 21:37:57.301750 master-0 kubenswrapper[38936]: I0216 21:37:57.301561 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:37:57.724021 master-0 kubenswrapper[38936]: I0216 21:37:57.723969 38936 generic.go:334] "Generic (PLEG): container finished" podID="185cbfbd-402e-4012-9c97-0a8f3a579e74" 
containerID="d0d0184874ecb5fa7b62272d79af51318cdad8380341f19c4a14140dfab50e9f" exitCode=143 Feb 16 21:37:57.724564 master-0 kubenswrapper[38936]: I0216 21:37:57.724040 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-api-0" event={"ID":"185cbfbd-402e-4012-9c97-0a8f3a579e74","Type":"ContainerDied","Data":"d0d0184874ecb5fa7b62272d79af51318cdad8380341f19c4a14140dfab50e9f"} Feb 16 21:37:57.728214 master-0 kubenswrapper[38936]: I0216 21:37:57.728179 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b95d794ff-8msjt" event={"ID":"b584e233-74f9-47f5-99e2-2fa42826ac27","Type":"ContainerStarted","Data":"f126a8f06b14123d03aae72c077871af5e5543cf85cb9831b970c01a98e4b891"} Feb 16 21:37:57.730222 master-0 kubenswrapper[38936]: I0216 21:37:57.730181 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b95d794ff-8msjt" Feb 16 21:37:57.732886 master-0 kubenswrapper[38936]: I0216 21:37:57.732848 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-scheduler-0" event={"ID":"f85d31c9-7303-4e30-ba85-3362b5828482","Type":"ContainerStarted","Data":"5a346f4089f59d2f8cd1e264c949e7170e19b53d64da5e4cb9f5bb60bd2ba184"} Feb 16 21:37:57.742562 master-0 kubenswrapper[38936]: I0216 21:37:57.741295 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64949f9d84-p7hqz" event={"ID":"6d470e92-7826-4314-9ecb-7b37cd11b8e2","Type":"ContainerStarted","Data":"defb9a28af561a177f019316552118ccc95154f90eb18819e2620510b24eccd8"} Feb 16 21:37:57.742562 master-0 kubenswrapper[38936]: I0216 21:37:57.741341 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-64949f9d84-p7hqz" Feb 16 21:37:57.762373 master-0 kubenswrapper[38936]: I0216 21:37:57.761248 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b95d794ff-8msjt" podStartSLOduration=4.761226817 
podStartE2EDuration="4.761226817s" podCreationTimestamp="2026-02-16 21:37:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:37:57.758847293 +0000 UTC m=+908.110850655" watchObservedRunningTime="2026-02-16 21:37:57.761226817 +0000 UTC m=+908.113230179" Feb 16 21:37:57.794408 master-0 kubenswrapper[38936]: I0216 21:37:57.794018 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-9c692-scheduler-0" podStartSLOduration=5.766434076 podStartE2EDuration="6.793993072s" podCreationTimestamp="2026-02-16 21:37:51 +0000 UTC" firstStartedPulling="2026-02-16 21:37:53.250535725 +0000 UTC m=+903.602539077" lastFinishedPulling="2026-02-16 21:37:54.278094711 +0000 UTC m=+904.630098073" observedRunningTime="2026-02-16 21:37:57.790323603 +0000 UTC m=+908.142326965" watchObservedRunningTime="2026-02-16 21:37:57.793993072 +0000 UTC m=+908.145996434" Feb 16 21:37:57.834412 master-0 kubenswrapper[38936]: I0216 21:37:57.834277 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-64949f9d84-p7hqz" podStartSLOduration=4.834245899 podStartE2EDuration="4.834245899s" podCreationTimestamp="2026-02-16 21:37:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:37:57.823775216 +0000 UTC m=+908.175778568" watchObservedRunningTime="2026-02-16 21:37:57.834245899 +0000 UTC m=+908.186249261" Feb 16 21:37:57.893977 master-0 kubenswrapper[38936]: I0216 21:37:57.887817 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d30d26b-c68c-4ad5-b006-47338242fc62" path="/var/lib/kubelet/pods/5d30d26b-c68c-4ad5-b006-47338242fc62/volumes" Feb 16 21:37:57.893977 master-0 kubenswrapper[38936]: I0216 21:37:57.888468 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/cinder-9c692-backup-0" Feb 16 21:37:58.878164 master-0 kubenswrapper[38936]: I0216 21:37:58.877903 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-64f58d4d57-rmp7g"] Feb 16 21:37:58.878832 master-0 kubenswrapper[38936]: E0216 21:37:58.878725 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d30d26b-c68c-4ad5-b006-47338242fc62" containerName="init" Feb 16 21:37:58.878832 master-0 kubenswrapper[38936]: I0216 21:37:58.878743 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d30d26b-c68c-4ad5-b006-47338242fc62" containerName="init" Feb 16 21:37:58.880264 master-0 kubenswrapper[38936]: I0216 21:37:58.879077 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d30d26b-c68c-4ad5-b006-47338242fc62" containerName="init" Feb 16 21:37:58.880337 master-0 kubenswrapper[38936]: I0216 21:37:58.880325 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64f58d4d57-rmp7g" Feb 16 21:37:58.885708 master-0 kubenswrapper[38936]: I0216 21:37:58.882724 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 16 21:37:58.885708 master-0 kubenswrapper[38936]: I0216 21:37:58.882961 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 16 21:37:58.914968 master-0 kubenswrapper[38936]: I0216 21:37:58.914903 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64f58d4d57-rmp7g"] Feb 16 21:37:59.027703 master-0 kubenswrapper[38936]: I0216 21:37:59.027626 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/05c55455-d679-4237-82a3-9f1faea9119b-httpd-config\") pod \"neutron-64f58d4d57-rmp7g\" (UID: \"05c55455-d679-4237-82a3-9f1faea9119b\") " pod="openstack/neutron-64f58d4d57-rmp7g" Feb 16 21:37:59.027932 master-0 
kubenswrapper[38936]: I0216 21:37:59.027710 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c55455-d679-4237-82a3-9f1faea9119b-internal-tls-certs\") pod \"neutron-64f58d4d57-rmp7g\" (UID: \"05c55455-d679-4237-82a3-9f1faea9119b\") " pod="openstack/neutron-64f58d4d57-rmp7g" Feb 16 21:37:59.027932 master-0 kubenswrapper[38936]: I0216 21:37:59.027856 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c55455-d679-4237-82a3-9f1faea9119b-ovndb-tls-certs\") pod \"neutron-64f58d4d57-rmp7g\" (UID: \"05c55455-d679-4237-82a3-9f1faea9119b\") " pod="openstack/neutron-64f58d4d57-rmp7g" Feb 16 21:37:59.028009 master-0 kubenswrapper[38936]: I0216 21:37:59.027983 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/05c55455-d679-4237-82a3-9f1faea9119b-config\") pod \"neutron-64f58d4d57-rmp7g\" (UID: \"05c55455-d679-4237-82a3-9f1faea9119b\") " pod="openstack/neutron-64f58d4d57-rmp7g" Feb 16 21:37:59.028427 master-0 kubenswrapper[38936]: I0216 21:37:59.028380 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c55455-d679-4237-82a3-9f1faea9119b-public-tls-certs\") pod \"neutron-64f58d4d57-rmp7g\" (UID: \"05c55455-d679-4237-82a3-9f1faea9119b\") " pod="openstack/neutron-64f58d4d57-rmp7g" Feb 16 21:37:59.028617 master-0 kubenswrapper[38936]: I0216 21:37:59.028585 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdbd2\" (UniqueName: \"kubernetes.io/projected/05c55455-d679-4237-82a3-9f1faea9119b-kube-api-access-fdbd2\") pod \"neutron-64f58d4d57-rmp7g\" (UID: \"05c55455-d679-4237-82a3-9f1faea9119b\") " 
pod="openstack/neutron-64f58d4d57-rmp7g" Feb 16 21:37:59.028693 master-0 kubenswrapper[38936]: I0216 21:37:59.028645 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c55455-d679-4237-82a3-9f1faea9119b-combined-ca-bundle\") pod \"neutron-64f58d4d57-rmp7g\" (UID: \"05c55455-d679-4237-82a3-9f1faea9119b\") " pod="openstack/neutron-64f58d4d57-rmp7g" Feb 16 21:37:59.137910 master-0 kubenswrapper[38936]: I0216 21:37:59.133718 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c55455-d679-4237-82a3-9f1faea9119b-internal-tls-certs\") pod \"neutron-64f58d4d57-rmp7g\" (UID: \"05c55455-d679-4237-82a3-9f1faea9119b\") " pod="openstack/neutron-64f58d4d57-rmp7g" Feb 16 21:37:59.137910 master-0 kubenswrapper[38936]: I0216 21:37:59.133849 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c55455-d679-4237-82a3-9f1faea9119b-ovndb-tls-certs\") pod \"neutron-64f58d4d57-rmp7g\" (UID: \"05c55455-d679-4237-82a3-9f1faea9119b\") " pod="openstack/neutron-64f58d4d57-rmp7g" Feb 16 21:37:59.137910 master-0 kubenswrapper[38936]: I0216 21:37:59.133901 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/05c55455-d679-4237-82a3-9f1faea9119b-config\") pod \"neutron-64f58d4d57-rmp7g\" (UID: \"05c55455-d679-4237-82a3-9f1faea9119b\") " pod="openstack/neutron-64f58d4d57-rmp7g" Feb 16 21:37:59.137910 master-0 kubenswrapper[38936]: I0216 21:37:59.134029 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c55455-d679-4237-82a3-9f1faea9119b-public-tls-certs\") pod \"neutron-64f58d4d57-rmp7g\" (UID: \"05c55455-d679-4237-82a3-9f1faea9119b\") " 
pod="openstack/neutron-64f58d4d57-rmp7g" Feb 16 21:37:59.137910 master-0 kubenswrapper[38936]: I0216 21:37:59.134104 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdbd2\" (UniqueName: \"kubernetes.io/projected/05c55455-d679-4237-82a3-9f1faea9119b-kube-api-access-fdbd2\") pod \"neutron-64f58d4d57-rmp7g\" (UID: \"05c55455-d679-4237-82a3-9f1faea9119b\") " pod="openstack/neutron-64f58d4d57-rmp7g" Feb 16 21:37:59.137910 master-0 kubenswrapper[38936]: I0216 21:37:59.134146 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c55455-d679-4237-82a3-9f1faea9119b-combined-ca-bundle\") pod \"neutron-64f58d4d57-rmp7g\" (UID: \"05c55455-d679-4237-82a3-9f1faea9119b\") " pod="openstack/neutron-64f58d4d57-rmp7g" Feb 16 21:37:59.137910 master-0 kubenswrapper[38936]: I0216 21:37:59.134215 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/05c55455-d679-4237-82a3-9f1faea9119b-httpd-config\") pod \"neutron-64f58d4d57-rmp7g\" (UID: \"05c55455-d679-4237-82a3-9f1faea9119b\") " pod="openstack/neutron-64f58d4d57-rmp7g" Feb 16 21:37:59.149777 master-0 kubenswrapper[38936]: I0216 21:37:59.149712 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/05c55455-d679-4237-82a3-9f1faea9119b-httpd-config\") pod \"neutron-64f58d4d57-rmp7g\" (UID: \"05c55455-d679-4237-82a3-9f1faea9119b\") " pod="openstack/neutron-64f58d4d57-rmp7g" Feb 16 21:37:59.150806 master-0 kubenswrapper[38936]: I0216 21:37:59.150739 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c55455-d679-4237-82a3-9f1faea9119b-combined-ca-bundle\") pod \"neutron-64f58d4d57-rmp7g\" (UID: \"05c55455-d679-4237-82a3-9f1faea9119b\") " 
pod="openstack/neutron-64f58d4d57-rmp7g"
Feb 16 21:37:59.162674 master-0 kubenswrapper[38936]: I0216 21:37:59.157986 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c55455-d679-4237-82a3-9f1faea9119b-internal-tls-certs\") pod \"neutron-64f58d4d57-rmp7g\" (UID: \"05c55455-d679-4237-82a3-9f1faea9119b\") " pod="openstack/neutron-64f58d4d57-rmp7g"
Feb 16 21:37:59.162674 master-0 kubenswrapper[38936]: I0216 21:37:59.158671 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c55455-d679-4237-82a3-9f1faea9119b-public-tls-certs\") pod \"neutron-64f58d4d57-rmp7g\" (UID: \"05c55455-d679-4237-82a3-9f1faea9119b\") " pod="openstack/neutron-64f58d4d57-rmp7g"
Feb 16 21:37:59.162674 master-0 kubenswrapper[38936]: I0216 21:37:59.161418 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c55455-d679-4237-82a3-9f1faea9119b-ovndb-tls-certs\") pod \"neutron-64f58d4d57-rmp7g\" (UID: \"05c55455-d679-4237-82a3-9f1faea9119b\") " pod="openstack/neutron-64f58d4d57-rmp7g"
Feb 16 21:37:59.168663 master-0 kubenswrapper[38936]: I0216 21:37:59.164888 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/05c55455-d679-4237-82a3-9f1faea9119b-config\") pod \"neutron-64f58d4d57-rmp7g\" (UID: \"05c55455-d679-4237-82a3-9f1faea9119b\") " pod="openstack/neutron-64f58d4d57-rmp7g"
Feb 16 21:37:59.182738 master-0 kubenswrapper[38936]: I0216 21:37:59.181496 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdbd2\" (UniqueName: \"kubernetes.io/projected/05c55455-d679-4237-82a3-9f1faea9119b-kube-api-access-fdbd2\") pod \"neutron-64f58d4d57-rmp7g\" (UID: \"05c55455-d679-4237-82a3-9f1faea9119b\") " pod="openstack/neutron-64f58d4d57-rmp7g"
Feb 16 21:37:59.234681 master-0 kubenswrapper[38936]: I0216 21:37:59.230329 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64f58d4d57-rmp7g"
Feb 16 21:37:59.878489 master-0 kubenswrapper[38936]: I0216 21:37:59.878396 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64f58d4d57-rmp7g"]
Feb 16 21:38:00.811582 master-0 kubenswrapper[38936]: I0216 21:38:00.810116 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64f58d4d57-rmp7g" event={"ID":"05c55455-d679-4237-82a3-9f1faea9119b","Type":"ContainerStarted","Data":"798c9eceed44cd04070dddea34de28af21fb09f44c0a281dfe6617dde4d5032b"}
Feb 16 21:38:00.811582 master-0 kubenswrapper[38936]: I0216 21:38:00.810192 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64f58d4d57-rmp7g" event={"ID":"05c55455-d679-4237-82a3-9f1faea9119b","Type":"ContainerStarted","Data":"8eb2b295489e0201b4c748dbbe417746c1df1095a188157b38a13398ff95eac5"}
Feb 16 21:38:00.811582 master-0 kubenswrapper[38936]: I0216 21:38:00.810203 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64f58d4d57-rmp7g" event={"ID":"05c55455-d679-4237-82a3-9f1faea9119b","Type":"ContainerStarted","Data":"ce0da2cec78190b1556c36ee55466c55f3910518771164116cc6deb3b6552c50"}
Feb 16 21:38:00.811582 master-0 kubenswrapper[38936]: I0216 21:38:00.810237 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-64f58d4d57-rmp7g"
Feb 16 21:38:00.812626 master-0 kubenswrapper[38936]: I0216 21:38:00.812572 38936 generic.go:334] "Generic (PLEG): container finished" podID="5b1ea749-0e13-47db-bd37-4f269f872a0b" containerID="c1877eb7455255efbc803c552bf739892007e1d5651f37af1c6bdddd3a9edd33" exitCode=0
Feb 16 21:38:00.812728 master-0 kubenswrapper[38936]: I0216 21:38:00.812629 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-nzcsn" event={"ID":"5b1ea749-0e13-47db-bd37-4f269f872a0b","Type":"ContainerDied","Data":"c1877eb7455255efbc803c552bf739892007e1d5651f37af1c6bdddd3a9edd33"}
Feb 16 21:38:00.838064 master-0 kubenswrapper[38936]: I0216 21:38:00.837305 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-64f58d4d57-rmp7g" podStartSLOduration=2.837281189 podStartE2EDuration="2.837281189s" podCreationTimestamp="2026-02-16 21:37:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:38:00.831487922 +0000 UTC m=+911.183491294" watchObservedRunningTime="2026-02-16 21:38:00.837281189 +0000 UTC m=+911.189284571"
Feb 16 21:38:02.376076 master-0 kubenswrapper[38936]: I0216 21:38:02.375990 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-nzcsn"
Feb 16 21:38:02.461296 master-0 kubenswrapper[38936]: I0216 21:38:02.461235 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-9c692-scheduler-0"
Feb 16 21:38:02.555216 master-0 kubenswrapper[38936]: I0216 21:38:02.555121 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5b1ea749-0e13-47db-bd37-4f269f872a0b-etc-podinfo\") pod \"5b1ea749-0e13-47db-bd37-4f269f872a0b\" (UID: \"5b1ea749-0e13-47db-bd37-4f269f872a0b\") "
Feb 16 21:38:02.555216 master-0 kubenswrapper[38936]: I0216 21:38:02.555219 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b1ea749-0e13-47db-bd37-4f269f872a0b-combined-ca-bundle\") pod \"5b1ea749-0e13-47db-bd37-4f269f872a0b\" (UID: \"5b1ea749-0e13-47db-bd37-4f269f872a0b\") "
Feb 16 21:38:02.555490 master-0 kubenswrapper[38936]: I0216 21:38:02.555290 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjg4f\" (UniqueName: \"kubernetes.io/projected/5b1ea749-0e13-47db-bd37-4f269f872a0b-kube-api-access-zjg4f\") pod \"5b1ea749-0e13-47db-bd37-4f269f872a0b\" (UID: \"5b1ea749-0e13-47db-bd37-4f269f872a0b\") "
Feb 16 21:38:02.555490 master-0 kubenswrapper[38936]: I0216 21:38:02.555442 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b1ea749-0e13-47db-bd37-4f269f872a0b-config-data\") pod \"5b1ea749-0e13-47db-bd37-4f269f872a0b\" (UID: \"5b1ea749-0e13-47db-bd37-4f269f872a0b\") "
Feb 16 21:38:02.555600 master-0 kubenswrapper[38936]: I0216 21:38:02.555578 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5b1ea749-0e13-47db-bd37-4f269f872a0b-config-data-merged\") pod \"5b1ea749-0e13-47db-bd37-4f269f872a0b\" (UID: \"5b1ea749-0e13-47db-bd37-4f269f872a0b\") "
Feb 16 21:38:02.555723 master-0 kubenswrapper[38936]: I0216 21:38:02.555698 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b1ea749-0e13-47db-bd37-4f269f872a0b-scripts\") pod \"5b1ea749-0e13-47db-bd37-4f269f872a0b\" (UID: \"5b1ea749-0e13-47db-bd37-4f269f872a0b\") "
Feb 16 21:38:02.557172 master-0 kubenswrapper[38936]: I0216 21:38:02.557135 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:02.557497 master-0 kubenswrapper[38936]: I0216 21:38:02.557440 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b1ea749-0e13-47db-bd37-4f269f872a0b-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "5b1ea749-0e13-47db-bd37-4f269f872a0b" (UID: "5b1ea749-0e13-47db-bd37-4f269f872a0b"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:38:02.561990 master-0 kubenswrapper[38936]: I0216 21:38:02.561838 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b1ea749-0e13-47db-bd37-4f269f872a0b-kube-api-access-zjg4f" (OuterVolumeSpecName: "kube-api-access-zjg4f") pod "5b1ea749-0e13-47db-bd37-4f269f872a0b" (UID: "5b1ea749-0e13-47db-bd37-4f269f872a0b"). InnerVolumeSpecName "kube-api-access-zjg4f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:38:02.562456 master-0 kubenswrapper[38936]: I0216 21:38:02.562378 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/5b1ea749-0e13-47db-bd37-4f269f872a0b-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "5b1ea749-0e13-47db-bd37-4f269f872a0b" (UID: "5b1ea749-0e13-47db-bd37-4f269f872a0b"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 16 21:38:02.563983 master-0 kubenswrapper[38936]: I0216 21:38:02.563940 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b1ea749-0e13-47db-bd37-4f269f872a0b-scripts" (OuterVolumeSpecName: "scripts") pod "5b1ea749-0e13-47db-bd37-4f269f872a0b" (UID: "5b1ea749-0e13-47db-bd37-4f269f872a0b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:38:02.632845 master-0 kubenswrapper[38936]: I0216 21:38:02.629616 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-9c692-volume-lvm-iscsi-0"]
Feb 16 21:38:02.632845 master-0 kubenswrapper[38936]: I0216 21:38:02.632223 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b1ea749-0e13-47db-bd37-4f269f872a0b-config-data" (OuterVolumeSpecName: "config-data") pod "5b1ea749-0e13-47db-bd37-4f269f872a0b" (UID: "5b1ea749-0e13-47db-bd37-4f269f872a0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:38:02.647678 master-0 kubenswrapper[38936]: I0216 21:38:02.636896 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b1ea749-0e13-47db-bd37-4f269f872a0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b1ea749-0e13-47db-bd37-4f269f872a0b" (UID: "5b1ea749-0e13-47db-bd37-4f269f872a0b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:38:02.663751 master-0 kubenswrapper[38936]: I0216 21:38:02.660571 38936 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/5b1ea749-0e13-47db-bd37-4f269f872a0b-etc-podinfo\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:02.663751 master-0 kubenswrapper[38936]: I0216 21:38:02.660637 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b1ea749-0e13-47db-bd37-4f269f872a0b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:02.663751 master-0 kubenswrapper[38936]: I0216 21:38:02.660663 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjg4f\" (UniqueName: \"kubernetes.io/projected/5b1ea749-0e13-47db-bd37-4f269f872a0b-kube-api-access-zjg4f\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:02.663751 master-0 kubenswrapper[38936]: I0216 21:38:02.660675 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b1ea749-0e13-47db-bd37-4f269f872a0b-config-data\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:02.663751 master-0 kubenswrapper[38936]: I0216 21:38:02.660684 38936 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5b1ea749-0e13-47db-bd37-4f269f872a0b-config-data-merged\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:02.663751 master-0 kubenswrapper[38936]: I0216 21:38:02.660693 38936 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b1ea749-0e13-47db-bd37-4f269f872a0b-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:02.797602 master-0 kubenswrapper[38936]: I0216 21:38:02.797540 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-9c692-scheduler-0"
Feb 16 21:38:02.842723 master-0 kubenswrapper[38936]: I0216 21:38:02.842673 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-nzcsn"
Feb 16 21:38:02.843030 master-0 kubenswrapper[38936]: I0216 21:38:02.842728 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-nzcsn" event={"ID":"5b1ea749-0e13-47db-bd37-4f269f872a0b","Type":"ContainerDied","Data":"b800abc5d700ab672c6a9a9e70764e7efbb13bcb553114058cc9f5df34e3ba5e"}
Feb 16 21:38:02.843030 master-0 kubenswrapper[38936]: I0216 21:38:02.842786 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b800abc5d700ab672c6a9a9e70764e7efbb13bcb553114058cc9f5df34e3ba5e"
Feb 16 21:38:02.843136 master-0 kubenswrapper[38936]: I0216 21:38:02.843039 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-9c692-volume-lvm-iscsi-0" podUID="9037d3ef-953a-4af9-9c81-d94587ee2d9d" containerName="cinder-volume" containerID="cri-o://de58beeba240ff51ab416be9d35eb6bbffc3258b096e90d2d4f9ebb61a7b8240" gracePeriod=30
Feb 16 21:38:02.843386 master-0 kubenswrapper[38936]: I0216 21:38:02.843104 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-9c692-volume-lvm-iscsi-0" podUID="9037d3ef-953a-4af9-9c81-d94587ee2d9d" containerName="probe" containerID="cri-o://ba2624715ebf9efd2ea92e95d3e1e4f500aa54e8dddab12ef65d043f0dbca2d7" gracePeriod=30
Feb 16 21:38:02.971152 master-0 kubenswrapper[38936]: I0216 21:38:02.971069 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-9c692-scheduler-0"]
Feb 16 21:38:03.157013 master-0 kubenswrapper[38936]: I0216 21:38:03.156950 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:03.252089 master-0 kubenswrapper[38936]: I0216 21:38:03.252007 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-9c692-backup-0"]
Feb 16 21:38:03.523987 master-0 kubenswrapper[38936]: I0216 21:38:03.523554 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-create-q98pv"]
Feb 16 21:38:03.524702 master-0 kubenswrapper[38936]: E0216 21:38:03.524207 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b1ea749-0e13-47db-bd37-4f269f872a0b" containerName="ironic-db-sync"
Feb 16 21:38:03.524702 master-0 kubenswrapper[38936]: I0216 21:38:03.524225 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b1ea749-0e13-47db-bd37-4f269f872a0b" containerName="ironic-db-sync"
Feb 16 21:38:03.524702 master-0 kubenswrapper[38936]: E0216 21:38:03.524251 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b1ea749-0e13-47db-bd37-4f269f872a0b" containerName="init"
Feb 16 21:38:03.524702 master-0 kubenswrapper[38936]: I0216 21:38:03.524259 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b1ea749-0e13-47db-bd37-4f269f872a0b" containerName="init"
Feb 16 21:38:03.524702 master-0 kubenswrapper[38936]: I0216 21:38:03.524514 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b1ea749-0e13-47db-bd37-4f269f872a0b" containerName="ironic-db-sync"
Feb 16 21:38:03.578567 master-0 kubenswrapper[38936]: I0216 21:38:03.578503 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-q98pv"
Feb 16 21:38:03.612600 master-0 kubenswrapper[38936]: I0216 21:38:03.608148 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-q98pv"]
Feb 16 21:38:03.682040 master-0 kubenswrapper[38936]: I0216 21:38:03.675928 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-neutron-agent-57f476567b-fwqws"]
Feb 16 21:38:03.754354 master-0 kubenswrapper[38936]: I0216 21:38:03.752852 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plr62\" (UniqueName: \"kubernetes.io/projected/53ca02e3-b979-4ed3-82e5-ce0850aa85f3-kube-api-access-plr62\") pod \"ironic-inspector-db-create-q98pv\" (UID: \"53ca02e3-b979-4ed3-82e5-ce0850aa85f3\") " pod="openstack/ironic-inspector-db-create-q98pv"
Feb 16 21:38:03.762674 master-0 kubenswrapper[38936]: I0216 21:38:03.758971 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-57f476567b-fwqws"
Feb 16 21:38:03.762878 master-0 kubenswrapper[38936]: I0216 21:38:03.762640 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53ca02e3-b979-4ed3-82e5-ce0850aa85f3-operator-scripts\") pod \"ironic-inspector-db-create-q98pv\" (UID: \"53ca02e3-b979-4ed3-82e5-ce0850aa85f3\") " pod="openstack/ironic-inspector-db-create-q98pv"
Feb 16 21:38:03.766182 master-0 kubenswrapper[38936]: I0216 21:38:03.766142 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-neutron-agent-config-data"
Feb 16 21:38:03.776346 master-0 kubenswrapper[38936]: I0216 21:38:03.767049 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-57f476567b-fwqws"]
Feb 16 21:38:03.808859 master-0 kubenswrapper[38936]: I0216 21:38:03.806478 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-1991-account-create-update-vb2d9"]
Feb 16 21:38:03.811806 master-0 kubenswrapper[38936]: I0216 21:38:03.811774 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-1991-account-create-update-vb2d9"
Feb 16 21:38:03.825934 master-0 kubenswrapper[38936]: I0216 21:38:03.822546 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-db-secret"
Feb 16 21:38:03.852748 master-0 kubenswrapper[38936]: I0216 21:38:03.851707 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-1991-account-create-update-vb2d9"]
Feb 16 21:38:03.902570 master-0 kubenswrapper[38936]: I0216 21:38:03.901781 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4fwb\" (UniqueName: \"kubernetes.io/projected/cfcdcd18-dd01-45c8-afd4-ec72a986d582-kube-api-access-j4fwb\") pod \"ironic-neutron-agent-57f476567b-fwqws\" (UID: \"cfcdcd18-dd01-45c8-afd4-ec72a986d582\") " pod="openstack/ironic-neutron-agent-57f476567b-fwqws"
Feb 16 21:38:03.902570 master-0 kubenswrapper[38936]: I0216 21:38:03.901875 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53ca02e3-b979-4ed3-82e5-ce0850aa85f3-operator-scripts\") pod \"ironic-inspector-db-create-q98pv\" (UID: \"53ca02e3-b979-4ed3-82e5-ce0850aa85f3\") " pod="openstack/ironic-inspector-db-create-q98pv"
Feb 16 21:38:03.902570 master-0 kubenswrapper[38936]: I0216 21:38:03.901934 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfcdcd18-dd01-45c8-afd4-ec72a986d582-combined-ca-bundle\") pod \"ironic-neutron-agent-57f476567b-fwqws\" (UID: \"cfcdcd18-dd01-45c8-afd4-ec72a986d582\") " pod="openstack/ironic-neutron-agent-57f476567b-fwqws"
Feb 16 21:38:03.902905 master-0 kubenswrapper[38936]: I0216 21:38:03.902606 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53ca02e3-b979-4ed3-82e5-ce0850aa85f3-operator-scripts\") pod \"ironic-inspector-db-create-q98pv\" (UID: \"53ca02e3-b979-4ed3-82e5-ce0850aa85f3\") " pod="openstack/ironic-inspector-db-create-q98pv"
Feb 16 21:38:03.902905 master-0 kubenswrapper[38936]: I0216 21:38:03.902761 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plr62\" (UniqueName: \"kubernetes.io/projected/53ca02e3-b979-4ed3-82e5-ce0850aa85f3-kube-api-access-plr62\") pod \"ironic-inspector-db-create-q98pv\" (UID: \"53ca02e3-b979-4ed3-82e5-ce0850aa85f3\") " pod="openstack/ironic-inspector-db-create-q98pv"
Feb 16 21:38:03.903528 master-0 kubenswrapper[38936]: I0216 21:38:03.903069 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cfcdcd18-dd01-45c8-afd4-ec72a986d582-config\") pod \"ironic-neutron-agent-57f476567b-fwqws\" (UID: \"cfcdcd18-dd01-45c8-afd4-ec72a986d582\") " pod="openstack/ironic-neutron-agent-57f476567b-fwqws"
Feb 16 21:38:03.941481 master-0 kubenswrapper[38936]: I0216 21:38:03.941427 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plr62\" (UniqueName: \"kubernetes.io/projected/53ca02e3-b979-4ed3-82e5-ce0850aa85f3-kube-api-access-plr62\") pod \"ironic-inspector-db-create-q98pv\" (UID: \"53ca02e3-b979-4ed3-82e5-ce0850aa85f3\") " pod="openstack/ironic-inspector-db-create-q98pv"
Feb 16 21:38:03.957596 master-0 kubenswrapper[38936]: I0216 21:38:03.956515 38936 generic.go:334] "Generic (PLEG): container finished" podID="9037d3ef-953a-4af9-9c81-d94587ee2d9d" containerID="de58beeba240ff51ab416be9d35eb6bbffc3258b096e90d2d4f9ebb61a7b8240" exitCode=0
Feb 16 21:38:03.957596 master-0 kubenswrapper[38936]: I0216 21:38:03.956783 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-9c692-backup-0" podUID="0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" containerName="cinder-backup" containerID="cri-o://d6e5959a5090ef60ef7f307ee014f74098c36a43f3d176058041b73da8ecee9b" gracePeriod=30
Feb 16 21:38:03.957596 master-0 kubenswrapper[38936]: I0216 21:38:03.957130 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-9c692-scheduler-0" podUID="f85d31c9-7303-4e30-ba85-3362b5828482" containerName="cinder-scheduler" containerID="cri-o://83e301f2617e8ccaa66a653151eeea709b4a60eac21c68ec1ab0323ee8fb54b7" gracePeriod=30
Feb 16 21:38:03.957596 master-0 kubenswrapper[38936]: I0216 21:38:03.957470 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-9c692-backup-0" podUID="0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" containerName="probe" containerID="cri-o://cb66e7b9ff0103a2facbfd0ee9efd49c255aede2f96c75eea1f74248ee6db88d" gracePeriod=30
Feb 16 21:38:03.957596 master-0 kubenswrapper[38936]: I0216 21:38:03.957532 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-9c692-scheduler-0" podUID="f85d31c9-7303-4e30-ba85-3362b5828482" containerName="probe" containerID="cri-o://5a346f4089f59d2f8cd1e264c949e7170e19b53d64da5e4cb9f5bb60bd2ba184" gracePeriod=30
Feb 16 21:38:03.961079 master-0 kubenswrapper[38936]: I0216 21:38:03.961034 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-volume-lvm-iscsi-0" event={"ID":"9037d3ef-953a-4af9-9c81-d94587ee2d9d","Type":"ContainerDied","Data":"de58beeba240ff51ab416be9d35eb6bbffc3258b096e90d2d4f9ebb61a7b8240"}
Feb 16 21:38:03.961159 master-0 kubenswrapper[38936]: I0216 21:38:03.961081 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b95d794ff-8msjt"]
Feb 16 21:38:03.961159 master-0 kubenswrapper[38936]: I0216 21:38:03.961099 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-596cdf67df-snjb9"]
Feb 16 21:38:03.961946 master-0 kubenswrapper[38936]: I0216 21:38:03.961806 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b95d794ff-8msjt" podUID="b584e233-74f9-47f5-99e2-2fa42826ac27" containerName="dnsmasq-dns" containerID="cri-o://f126a8f06b14123d03aae72c077871af5e5543cf85cb9831b970c01a98e4b891" gracePeriod=10
Feb 16 21:38:03.963778 master-0 kubenswrapper[38936]: I0216 21:38:03.963749 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b95d794ff-8msjt"
Feb 16 21:38:03.963842 master-0 kubenswrapper[38936]: I0216 21:38:03.963822 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-596cdf67df-snjb9"
Feb 16 21:38:03.976852 master-0 kubenswrapper[38936]: I0216 21:38:03.976680 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-596cdf67df-snjb9"]
Feb 16 21:38:03.989516 master-0 kubenswrapper[38936]: I0216 21:38:03.989456 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-q98pv"
Feb 16 21:38:03.997804 master-0 kubenswrapper[38936]: I0216 21:38:03.995255 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-85df85647b-4lmvj"]
Feb 16 21:38:03.998310 master-0 kubenswrapper[38936]: I0216 21:38:03.998288 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-85df85647b-4lmvj"
Feb 16 21:38:04.011186 master-0 kubenswrapper[38936]: I0216 21:38:04.002755 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-scripts"
Feb 16 21:38:04.011186 master-0 kubenswrapper[38936]: I0216 21:38:04.003307 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-config-data"
Feb 16 21:38:04.011186 master-0 kubenswrapper[38936]: I0216 21:38:04.003503 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-transport"
Feb 16 21:38:04.011186 master-0 kubenswrapper[38936]: I0216 21:38:04.003638 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data"
Feb 16 21:38:04.011186 master-0 kubenswrapper[38936]: I0216 21:38:04.005217 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-ovsdbserver-nb\") pod \"dnsmasq-dns-596cdf67df-snjb9\" (UID: \"3182998b-e6c3-4733-a374-23e11d68c55a\") " pod="openstack/dnsmasq-dns-596cdf67df-snjb9"
Feb 16 21:38:04.011186 master-0 kubenswrapper[38936]: I0216 21:38:04.005295 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-ovsdbserver-sb\") pod \"dnsmasq-dns-596cdf67df-snjb9\" (UID: \"3182998b-e6c3-4733-a374-23e11d68c55a\") " pod="openstack/dnsmasq-dns-596cdf67df-snjb9"
Feb 16 21:38:04.011186 master-0 kubenswrapper[38936]: I0216 21:38:04.005334 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4fwb\" (UniqueName: \"kubernetes.io/projected/cfcdcd18-dd01-45c8-afd4-ec72a986d582-kube-api-access-j4fwb\") pod \"ironic-neutron-agent-57f476567b-fwqws\" (UID: \"cfcdcd18-dd01-45c8-afd4-ec72a986d582\") " pod="openstack/ironic-neutron-agent-57f476567b-fwqws"
Feb 16 21:38:04.011186 master-0 kubenswrapper[38936]: I0216 21:38:04.005358 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk65f\" (UniqueName: \"kubernetes.io/projected/3182998b-e6c3-4733-a374-23e11d68c55a-kube-api-access-xk65f\") pod \"dnsmasq-dns-596cdf67df-snjb9\" (UID: \"3182998b-e6c3-4733-a374-23e11d68c55a\") " pod="openstack/dnsmasq-dns-596cdf67df-snjb9"
Feb 16 21:38:04.011186 master-0 kubenswrapper[38936]: I0216 21:38:04.005379 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfcdcd18-dd01-45c8-afd4-ec72a986d582-combined-ca-bundle\") pod \"ironic-neutron-agent-57f476567b-fwqws\" (UID: \"cfcdcd18-dd01-45c8-afd4-ec72a986d582\") " pod="openstack/ironic-neutron-agent-57f476567b-fwqws"
Feb 16 21:38:04.011186 master-0 kubenswrapper[38936]: I0216 21:38:04.005425 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-config\") pod \"dnsmasq-dns-596cdf67df-snjb9\" (UID: \"3182998b-e6c3-4733-a374-23e11d68c55a\") " pod="openstack/dnsmasq-dns-596cdf67df-snjb9"
Feb 16 21:38:04.011186 master-0 kubenswrapper[38936]: I0216 21:38:04.005449 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkgd2\" (UniqueName: \"kubernetes.io/projected/f3fc7857-f230-4a40-8fb6-9b01dd29c502-kube-api-access-qkgd2\") pod \"ironic-inspector-1991-account-create-update-vb2d9\" (UID: \"f3fc7857-f230-4a40-8fb6-9b01dd29c502\") " pod="openstack/ironic-inspector-1991-account-create-update-vb2d9"
Feb 16 21:38:04.011186 master-0 kubenswrapper[38936]: I0216 21:38:04.005486 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-dns-svc\") pod \"dnsmasq-dns-596cdf67df-snjb9\" (UID: \"3182998b-e6c3-4733-a374-23e11d68c55a\") " pod="openstack/dnsmasq-dns-596cdf67df-snjb9"
Feb 16 21:38:04.011186 master-0 kubenswrapper[38936]: I0216 21:38:04.005535 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3fc7857-f230-4a40-8fb6-9b01dd29c502-operator-scripts\") pod \"ironic-inspector-1991-account-create-update-vb2d9\" (UID: \"f3fc7857-f230-4a40-8fb6-9b01dd29c502\") " pod="openstack/ironic-inspector-1991-account-create-update-vb2d9"
Feb 16 21:38:04.011186 master-0 kubenswrapper[38936]: I0216 21:38:04.005555 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cfcdcd18-dd01-45c8-afd4-ec72a986d582-config\") pod \"ironic-neutron-agent-57f476567b-fwqws\" (UID: \"cfcdcd18-dd01-45c8-afd4-ec72a986d582\") " pod="openstack/ironic-neutron-agent-57f476567b-fwqws"
Feb 16 21:38:04.011186 master-0 kubenswrapper[38936]: I0216 21:38:04.005579 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-dns-swift-storage-0\") pod \"dnsmasq-dns-596cdf67df-snjb9\" (UID: \"3182998b-e6c3-4733-a374-23e11d68c55a\") " pod="openstack/dnsmasq-dns-596cdf67df-snjb9"
Feb 16 21:38:04.011186 master-0 kubenswrapper[38936]: I0216 21:38:04.006988 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 16 21:38:04.017627 master-0 kubenswrapper[38936]: I0216 21:38:04.017583 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfcdcd18-dd01-45c8-afd4-ec72a986d582-combined-ca-bundle\") pod \"ironic-neutron-agent-57f476567b-fwqws\" (UID: \"cfcdcd18-dd01-45c8-afd4-ec72a986d582\") " pod="openstack/ironic-neutron-agent-57f476567b-fwqws"
Feb 16 21:38:04.019317 master-0 kubenswrapper[38936]: I0216 21:38:04.019267 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cfcdcd18-dd01-45c8-afd4-ec72a986d582-config\") pod \"ironic-neutron-agent-57f476567b-fwqws\" (UID: \"cfcdcd18-dd01-45c8-afd4-ec72a986d582\") " pod="openstack/ironic-neutron-agent-57f476567b-fwqws"
Feb 16 21:38:04.032540 master-0 kubenswrapper[38936]: I0216 21:38:04.032385 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-85df85647b-4lmvj"]
Feb 16 21:38:04.050954 master-0 kubenswrapper[38936]: I0216 21:38:04.050359 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4fwb\" (UniqueName: \"kubernetes.io/projected/cfcdcd18-dd01-45c8-afd4-ec72a986d582-kube-api-access-j4fwb\") pod \"ironic-neutron-agent-57f476567b-fwqws\" (UID: \"cfcdcd18-dd01-45c8-afd4-ec72a986d582\") " pod="openstack/ironic-neutron-agent-57f476567b-fwqws"
Feb 16 21:38:04.120720 master-0 kubenswrapper[38936]: I0216 21:38:04.115272 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk65f\" (UniqueName: \"kubernetes.io/projected/3182998b-e6c3-4733-a374-23e11d68c55a-kube-api-access-xk65f\") pod \"dnsmasq-dns-596cdf67df-snjb9\" (UID: \"3182998b-e6c3-4733-a374-23e11d68c55a\") " pod="openstack/dnsmasq-dns-596cdf67df-snjb9"
Feb 16 21:38:04.120720 master-0 kubenswrapper[38936]: I0216 21:38:04.115386 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-config\") pod \"dnsmasq-dns-596cdf67df-snjb9\" (UID: \"3182998b-e6c3-4733-a374-23e11d68c55a\") " pod="openstack/dnsmasq-dns-596cdf67df-snjb9"
Feb 16 21:38:04.120720 master-0 kubenswrapper[38936]: I0216 21:38:04.115416 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkgd2\" (UniqueName: \"kubernetes.io/projected/f3fc7857-f230-4a40-8fb6-9b01dd29c502-kube-api-access-qkgd2\") pod \"ironic-inspector-1991-account-create-update-vb2d9\" (UID: \"f3fc7857-f230-4a40-8fb6-9b01dd29c502\") " pod="openstack/ironic-inspector-1991-account-create-update-vb2d9"
Feb 16 21:38:04.120720 master-0 kubenswrapper[38936]: I0216 21:38:04.115459 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-dns-svc\") pod \"dnsmasq-dns-596cdf67df-snjb9\" (UID: \"3182998b-e6c3-4733-a374-23e11d68c55a\") " pod="openstack/dnsmasq-dns-596cdf67df-snjb9"
Feb 16 21:38:04.120720 master-0 kubenswrapper[38936]: I0216 21:38:04.115489 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/28720828-7566-4fb7-a4ff-ac6e548d9408-etc-podinfo\") pod \"ironic-85df85647b-4lmvj\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") " pod="openstack/ironic-85df85647b-4lmvj"
Feb 16 21:38:04.120720 master-0 kubenswrapper[38936]: I0216 21:38:04.115541 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3fc7857-f230-4a40-8fb6-9b01dd29c502-operator-scripts\") pod \"ironic-inspector-1991-account-create-update-vb2d9\" (UID: \"f3fc7857-f230-4a40-8fb6-9b01dd29c502\") " pod="openstack/ironic-inspector-1991-account-create-update-vb2d9"
Feb 16 21:38:04.120720 master-0 kubenswrapper[38936]: I0216 21:38:04.115566 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28720828-7566-4fb7-a4ff-ac6e548d9408-combined-ca-bundle\") pod \"ironic-85df85647b-4lmvj\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") " pod="openstack/ironic-85df85647b-4lmvj"
Feb 16 21:38:04.120720 master-0 kubenswrapper[38936]: I0216 21:38:04.115589 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-dns-swift-storage-0\") pod \"dnsmasq-dns-596cdf67df-snjb9\" (UID: \"3182998b-e6c3-4733-a374-23e11d68c55a\") " pod="openstack/dnsmasq-dns-596cdf67df-snjb9"
Feb 16 21:38:04.120720 master-0 kubenswrapper[38936]: I0216 21:38:04.115609 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/28720828-7566-4fb7-a4ff-ac6e548d9408-config-data-merged\") pod \"ironic-85df85647b-4lmvj\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") " pod="openstack/ironic-85df85647b-4lmvj"
Feb 16 21:38:04.120720 master-0 kubenswrapper[38936]: I0216 21:38:04.115627 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28720828-7566-4fb7-a4ff-ac6e548d9408-config-data\") pod \"ironic-85df85647b-4lmvj\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") " pod="openstack/ironic-85df85647b-4lmvj"
Feb 16 21:38:04.120720 master-0 kubenswrapper[38936]: I0216 21:38:04.115702 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-ovsdbserver-nb\") pod \"dnsmasq-dns-596cdf67df-snjb9\" (UID: \"3182998b-e6c3-4733-a374-23e11d68c55a\") " pod="openstack/dnsmasq-dns-596cdf67df-snjb9"
Feb 16 21:38:04.120720 master-0 kubenswrapper[38936]: I0216 21:38:04.115730 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28720828-7566-4fb7-a4ff-ac6e548d9408-config-data-custom\") pod \"ironic-85df85647b-4lmvj\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") " pod="openstack/ironic-85df85647b-4lmvj"
Feb 16 21:38:04.120720 master-0 kubenswrapper[38936]: I0216 21:38:04.115784 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm5k7\" (UniqueName: \"kubernetes.io/projected/28720828-7566-4fb7-a4ff-ac6e548d9408-kube-api-access-sm5k7\") pod \"ironic-85df85647b-4lmvj\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") " pod="openstack/ironic-85df85647b-4lmvj"
Feb 16 21:38:04.120720 master-0 kubenswrapper[38936]: I0216 21:38:04.115804 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28720828-7566-4fb7-a4ff-ac6e548d9408-scripts\") pod \"ironic-85df85647b-4lmvj\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") " pod="openstack/ironic-85df85647b-4lmvj"
Feb 16 21:38:04.120720 master-0 kubenswrapper[38936]: I0216 21:38:04.115836 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-ovsdbserver-sb\") pod \"dnsmasq-dns-596cdf67df-snjb9\" (UID: \"3182998b-e6c3-4733-a374-23e11d68c55a\") " pod="openstack/dnsmasq-dns-596cdf67df-snjb9"
Feb 16 21:38:04.120720 master-0 kubenswrapper[38936]: I0216 21:38:04.115855 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28720828-7566-4fb7-a4ff-ac6e548d9408-logs\") pod \"ironic-85df85647b-4lmvj\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") " pod="openstack/ironic-85df85647b-4lmvj"
Feb 16 21:38:04.120720 master-0 kubenswrapper[38936]: I0216
21:38:04.116511 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3fc7857-f230-4a40-8fb6-9b01dd29c502-operator-scripts\") pod \"ironic-inspector-1991-account-create-update-vb2d9\" (UID: \"f3fc7857-f230-4a40-8fb6-9b01dd29c502\") " pod="openstack/ironic-inspector-1991-account-create-update-vb2d9" Feb 16 21:38:04.120720 master-0 kubenswrapper[38936]: I0216 21:38:04.117473 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-ovsdbserver-nb\") pod \"dnsmasq-dns-596cdf67df-snjb9\" (UID: \"3182998b-e6c3-4733-a374-23e11d68c55a\") " pod="openstack/dnsmasq-dns-596cdf67df-snjb9" Feb 16 21:38:04.120720 master-0 kubenswrapper[38936]: I0216 21:38:04.117509 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-config\") pod \"dnsmasq-dns-596cdf67df-snjb9\" (UID: \"3182998b-e6c3-4733-a374-23e11d68c55a\") " pod="openstack/dnsmasq-dns-596cdf67df-snjb9" Feb 16 21:38:04.120720 master-0 kubenswrapper[38936]: I0216 21:38:04.118309 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-dns-svc\") pod \"dnsmasq-dns-596cdf67df-snjb9\" (UID: \"3182998b-e6c3-4733-a374-23e11d68c55a\") " pod="openstack/dnsmasq-dns-596cdf67df-snjb9" Feb 16 21:38:04.143067 master-0 kubenswrapper[38936]: I0216 21:38:04.139529 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-ovsdbserver-sb\") pod \"dnsmasq-dns-596cdf67df-snjb9\" (UID: \"3182998b-e6c3-4733-a374-23e11d68c55a\") " pod="openstack/dnsmasq-dns-596cdf67df-snjb9" Feb 16 21:38:04.153712 master-0 kubenswrapper[38936]: I0216 
21:38:04.144527 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-dns-swift-storage-0\") pod \"dnsmasq-dns-596cdf67df-snjb9\" (UID: \"3182998b-e6c3-4733-a374-23e11d68c55a\") " pod="openstack/dnsmasq-dns-596cdf67df-snjb9" Feb 16 21:38:04.153712 master-0 kubenswrapper[38936]: I0216 21:38:04.149585 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkgd2\" (UniqueName: \"kubernetes.io/projected/f3fc7857-f230-4a40-8fb6-9b01dd29c502-kube-api-access-qkgd2\") pod \"ironic-inspector-1991-account-create-update-vb2d9\" (UID: \"f3fc7857-f230-4a40-8fb6-9b01dd29c502\") " pod="openstack/ironic-inspector-1991-account-create-update-vb2d9" Feb 16 21:38:04.153712 master-0 kubenswrapper[38936]: I0216 21:38:04.151578 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk65f\" (UniqueName: \"kubernetes.io/projected/3182998b-e6c3-4733-a374-23e11d68c55a-kube-api-access-xk65f\") pod \"dnsmasq-dns-596cdf67df-snjb9\" (UID: \"3182998b-e6c3-4733-a374-23e11d68c55a\") " pod="openstack/dnsmasq-dns-596cdf67df-snjb9" Feb 16 21:38:04.153712 master-0 kubenswrapper[38936]: I0216 21:38:04.151693 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-57f476567b-fwqws" Feb 16 21:38:04.197031 master-0 kubenswrapper[38936]: I0216 21:38:04.196942 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-1991-account-create-update-vb2d9" Feb 16 21:38:04.218844 master-0 kubenswrapper[38936]: I0216 21:38:04.218486 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28720828-7566-4fb7-a4ff-ac6e548d9408-combined-ca-bundle\") pod \"ironic-85df85647b-4lmvj\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") " pod="openstack/ironic-85df85647b-4lmvj" Feb 16 21:38:04.218844 master-0 kubenswrapper[38936]: I0216 21:38:04.218560 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/28720828-7566-4fb7-a4ff-ac6e548d9408-config-data-merged\") pod \"ironic-85df85647b-4lmvj\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") " pod="openstack/ironic-85df85647b-4lmvj" Feb 16 21:38:04.218844 master-0 kubenswrapper[38936]: I0216 21:38:04.218579 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28720828-7566-4fb7-a4ff-ac6e548d9408-config-data\") pod \"ironic-85df85647b-4lmvj\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") " pod="openstack/ironic-85df85647b-4lmvj" Feb 16 21:38:04.218844 master-0 kubenswrapper[38936]: I0216 21:38:04.218635 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28720828-7566-4fb7-a4ff-ac6e548d9408-config-data-custom\") pod \"ironic-85df85647b-4lmvj\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") " pod="openstack/ironic-85df85647b-4lmvj" Feb 16 21:38:04.224743 master-0 kubenswrapper[38936]: I0216 21:38:04.219510 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/28720828-7566-4fb7-a4ff-ac6e548d9408-config-data-merged\") pod \"ironic-85df85647b-4lmvj\" (UID: 
\"28720828-7566-4fb7-a4ff-ac6e548d9408\") " pod="openstack/ironic-85df85647b-4lmvj" Feb 16 21:38:04.224743 master-0 kubenswrapper[38936]: I0216 21:38:04.219603 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm5k7\" (UniqueName: \"kubernetes.io/projected/28720828-7566-4fb7-a4ff-ac6e548d9408-kube-api-access-sm5k7\") pod \"ironic-85df85647b-4lmvj\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") " pod="openstack/ironic-85df85647b-4lmvj" Feb 16 21:38:04.224743 master-0 kubenswrapper[38936]: I0216 21:38:04.219665 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28720828-7566-4fb7-a4ff-ac6e548d9408-scripts\") pod \"ironic-85df85647b-4lmvj\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") " pod="openstack/ironic-85df85647b-4lmvj" Feb 16 21:38:04.224743 master-0 kubenswrapper[38936]: I0216 21:38:04.219764 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28720828-7566-4fb7-a4ff-ac6e548d9408-logs\") pod \"ironic-85df85647b-4lmvj\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") " pod="openstack/ironic-85df85647b-4lmvj" Feb 16 21:38:04.224743 master-0 kubenswrapper[38936]: I0216 21:38:04.220148 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/28720828-7566-4fb7-a4ff-ac6e548d9408-etc-podinfo\") pod \"ironic-85df85647b-4lmvj\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") " pod="openstack/ironic-85df85647b-4lmvj" Feb 16 21:38:04.224743 master-0 kubenswrapper[38936]: I0216 21:38:04.223809 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28720828-7566-4fb7-a4ff-ac6e548d9408-combined-ca-bundle\") pod \"ironic-85df85647b-4lmvj\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") " 
pod="openstack/ironic-85df85647b-4lmvj" Feb 16 21:38:04.240641 master-0 kubenswrapper[38936]: I0216 21:38:04.227081 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28720828-7566-4fb7-a4ff-ac6e548d9408-logs\") pod \"ironic-85df85647b-4lmvj\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") " pod="openstack/ironic-85df85647b-4lmvj" Feb 16 21:38:04.240641 master-0 kubenswrapper[38936]: I0216 21:38:04.233383 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28720828-7566-4fb7-a4ff-ac6e548d9408-scripts\") pod \"ironic-85df85647b-4lmvj\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") " pod="openstack/ironic-85df85647b-4lmvj" Feb 16 21:38:04.240641 master-0 kubenswrapper[38936]: I0216 21:38:04.233922 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28720828-7566-4fb7-a4ff-ac6e548d9408-config-data-custom\") pod \"ironic-85df85647b-4lmvj\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") " pod="openstack/ironic-85df85647b-4lmvj" Feb 16 21:38:04.240641 master-0 kubenswrapper[38936]: I0216 21:38:04.234467 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28720828-7566-4fb7-a4ff-ac6e548d9408-config-data\") pod \"ironic-85df85647b-4lmvj\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") " pod="openstack/ironic-85df85647b-4lmvj" Feb 16 21:38:04.240641 master-0 kubenswrapper[38936]: I0216 21:38:04.234859 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/28720828-7566-4fb7-a4ff-ac6e548d9408-etc-podinfo\") pod \"ironic-85df85647b-4lmvj\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") " pod="openstack/ironic-85df85647b-4lmvj" Feb 16 21:38:04.245639 master-0 kubenswrapper[38936]: I0216 21:38:04.245598 
38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm5k7\" (UniqueName: \"kubernetes.io/projected/28720828-7566-4fb7-a4ff-ac6e548d9408-kube-api-access-sm5k7\") pod \"ironic-85df85647b-4lmvj\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") " pod="openstack/ironic-85df85647b-4lmvj" Feb 16 21:38:04.290518 master-0 kubenswrapper[38936]: I0216 21:38:04.290112 38936 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b95d794ff-8msjt" podUID="b584e233-74f9-47f5-99e2-2fa42826ac27" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.226:5353: connect: connection refused" Feb 16 21:38:04.849664 master-0 kubenswrapper[38936]: I0216 21:38:04.849594 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-596cdf67df-snjb9" Feb 16 21:38:04.874834 master-0 kubenswrapper[38936]: I0216 21:38:04.874761 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-85df85647b-4lmvj" Feb 16 21:38:04.904288 master-0 kubenswrapper[38936]: I0216 21:38:04.904231 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b95d794ff-8msjt" Feb 16 21:38:05.039076 master-0 kubenswrapper[38936]: I0216 21:38:05.037746 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-ovsdbserver-sb\") pod \"b584e233-74f9-47f5-99e2-2fa42826ac27\" (UID: \"b584e233-74f9-47f5-99e2-2fa42826ac27\") " Feb 16 21:38:05.039076 master-0 kubenswrapper[38936]: I0216 21:38:05.037966 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-config\") pod \"b584e233-74f9-47f5-99e2-2fa42826ac27\" (UID: \"b584e233-74f9-47f5-99e2-2fa42826ac27\") " Feb 16 21:38:05.039076 master-0 kubenswrapper[38936]: I0216 21:38:05.038085 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-ovsdbserver-nb\") pod \"b584e233-74f9-47f5-99e2-2fa42826ac27\" (UID: \"b584e233-74f9-47f5-99e2-2fa42826ac27\") " Feb 16 21:38:05.039076 master-0 kubenswrapper[38936]: I0216 21:38:05.038216 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-dns-swift-storage-0\") pod \"b584e233-74f9-47f5-99e2-2fa42826ac27\" (UID: \"b584e233-74f9-47f5-99e2-2fa42826ac27\") " Feb 16 21:38:05.039076 master-0 kubenswrapper[38936]: I0216 21:38:05.038271 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-dns-svc\") pod \"b584e233-74f9-47f5-99e2-2fa42826ac27\" (UID: \"b584e233-74f9-47f5-99e2-2fa42826ac27\") " Feb 16 21:38:05.039076 master-0 kubenswrapper[38936]: I0216 21:38:05.038343 38936 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-c65zf\" (UniqueName: \"kubernetes.io/projected/b584e233-74f9-47f5-99e2-2fa42826ac27-kube-api-access-c65zf\") pod \"b584e233-74f9-47f5-99e2-2fa42826ac27\" (UID: \"b584e233-74f9-47f5-99e2-2fa42826ac27\") " Feb 16 21:38:05.135337 master-0 kubenswrapper[38936]: I0216 21:38:05.120834 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b584e233-74f9-47f5-99e2-2fa42826ac27-kube-api-access-c65zf" (OuterVolumeSpecName: "kube-api-access-c65zf") pod "b584e233-74f9-47f5-99e2-2fa42826ac27" (UID: "b584e233-74f9-47f5-99e2-2fa42826ac27"). InnerVolumeSpecName "kube-api-access-c65zf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:05.135337 master-0 kubenswrapper[38936]: I0216 21:38:05.121710 38936 generic.go:334] "Generic (PLEG): container finished" podID="b584e233-74f9-47f5-99e2-2fa42826ac27" containerID="f126a8f06b14123d03aae72c077871af5e5543cf85cb9831b970c01a98e4b891" exitCode=0 Feb 16 21:38:05.135337 master-0 kubenswrapper[38936]: I0216 21:38:05.121810 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b95d794ff-8msjt" event={"ID":"b584e233-74f9-47f5-99e2-2fa42826ac27","Type":"ContainerDied","Data":"f126a8f06b14123d03aae72c077871af5e5543cf85cb9831b970c01a98e4b891"} Feb 16 21:38:05.135337 master-0 kubenswrapper[38936]: I0216 21:38:05.121840 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b95d794ff-8msjt" event={"ID":"b584e233-74f9-47f5-99e2-2fa42826ac27","Type":"ContainerDied","Data":"c74f18d27e9ee726167514d2de19f0ccd4a0695dc5b77cc4d86dc5ba63cf6308"} Feb 16 21:38:05.135337 master-0 kubenswrapper[38936]: I0216 21:38:05.121860 38936 scope.go:117] "RemoveContainer" containerID="f126a8f06b14123d03aae72c077871af5e5543cf85cb9831b970c01a98e4b891" Feb 16 21:38:05.135337 master-0 kubenswrapper[38936]: I0216 21:38:05.121915 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b95d794ff-8msjt" Feb 16 21:38:05.152603 master-0 kubenswrapper[38936]: I0216 21:38:05.136949 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b584e233-74f9-47f5-99e2-2fa42826ac27" (UID: "b584e233-74f9-47f5-99e2-2fa42826ac27"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:05.152603 master-0 kubenswrapper[38936]: I0216 21:38:05.139323 38936 generic.go:334] "Generic (PLEG): container finished" podID="9037d3ef-953a-4af9-9c81-d94587ee2d9d" containerID="ba2624715ebf9efd2ea92e95d3e1e4f500aa54e8dddab12ef65d043f0dbca2d7" exitCode=0 Feb 16 21:38:05.152603 master-0 kubenswrapper[38936]: I0216 21:38:05.139377 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-volume-lvm-iscsi-0" event={"ID":"9037d3ef-953a-4af9-9c81-d94587ee2d9d","Type":"ContainerDied","Data":"ba2624715ebf9efd2ea92e95d3e1e4f500aa54e8dddab12ef65d043f0dbca2d7"} Feb 16 21:38:05.152603 master-0 kubenswrapper[38936]: I0216 21:38:05.142527 38936 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:05.152603 master-0 kubenswrapper[38936]: I0216 21:38:05.142778 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c65zf\" (UniqueName: \"kubernetes.io/projected/b584e233-74f9-47f5-99e2-2fa42826ac27-kube-api-access-c65zf\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:05.217784 master-0 kubenswrapper[38936]: I0216 21:38:05.217610 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-config" (OuterVolumeSpecName: "config") pod 
"b584e233-74f9-47f5-99e2-2fa42826ac27" (UID: "b584e233-74f9-47f5-99e2-2fa42826ac27"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:05.233457 master-0 kubenswrapper[38936]: I0216 21:38:05.233310 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b584e233-74f9-47f5-99e2-2fa42826ac27" (UID: "b584e233-74f9-47f5-99e2-2fa42826ac27"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:05.247778 master-0 kubenswrapper[38936]: I0216 21:38:05.247695 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b584e233-74f9-47f5-99e2-2fa42826ac27" (UID: "b584e233-74f9-47f5-99e2-2fa42826ac27"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:05.248416 master-0 kubenswrapper[38936]: I0216 21:38:05.248341 38936 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:05.248494 master-0 kubenswrapper[38936]: I0216 21:38:05.248426 38936 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:05.248494 master-0 kubenswrapper[38936]: I0216 21:38:05.248444 38936 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:05.286145 master-0 kubenswrapper[38936]: W0216 21:38:05.286028 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53ca02e3_b979_4ed3_82e5_ce0850aa85f3.slice/crio-cfa3bc03cae78d0ffe7bb0f75032bc00bb3d74a44b15dc62c5c9cf40b132ff45 WatchSource:0}: Error finding container cfa3bc03cae78d0ffe7bb0f75032bc00bb3d74a44b15dc62c5c9cf40b132ff45: Status 404 returned error can't find the container with id cfa3bc03cae78d0ffe7bb0f75032bc00bb3d74a44b15dc62c5c9cf40b132ff45 Feb 16 21:38:05.336849 master-0 kubenswrapper[38936]: I0216 21:38:05.336506 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-q98pv"] Feb 16 21:38:05.345328 master-0 kubenswrapper[38936]: I0216 21:38:05.345274 38936 scope.go:117] "RemoveContainer" containerID="f5796dccbe40452a611853e55738018ab5202663551ff1db3862d38e0617ed50" Feb 16 21:38:05.385806 master-0 kubenswrapper[38936]: I0216 21:38:05.385587 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b584e233-74f9-47f5-99e2-2fa42826ac27" (UID: "b584e233-74f9-47f5-99e2-2fa42826ac27"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:05.403410 master-0 kubenswrapper[38936]: I0216 21:38:05.403131 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-57f476567b-fwqws"] Feb 16 21:38:05.448301 master-0 kubenswrapper[38936]: I0216 21:38:05.448223 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-1991-account-create-update-vb2d9"] Feb 16 21:38:05.455797 master-0 kubenswrapper[38936]: I0216 21:38:05.455602 38936 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b584e233-74f9-47f5-99e2-2fa42826ac27-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:05.470098 master-0 kubenswrapper[38936]: W0216 21:38:05.470028 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfcdcd18_dd01_45c8_afd4_ec72a986d582.slice/crio-763690b73967d597a5166973b56740806a488465bc1b37e6f47fb75c8e333e74 WatchSource:0}: Error finding container 763690b73967d597a5166973b56740806a488465bc1b37e6f47fb75c8e333e74: Status 404 returned error can't find the container with id 763690b73967d597a5166973b56740806a488465bc1b37e6f47fb75c8e333e74 Feb 16 21:38:05.675759 master-0 kubenswrapper[38936]: I0216 21:38:05.674780 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-conductor-0"] Feb 16 21:38:05.675759 master-0 kubenswrapper[38936]: E0216 21:38:05.675300 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b584e233-74f9-47f5-99e2-2fa42826ac27" containerName="init" Feb 16 21:38:05.675759 master-0 kubenswrapper[38936]: I0216 
21:38:05.675314 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="b584e233-74f9-47f5-99e2-2fa42826ac27" containerName="init" Feb 16 21:38:05.675759 master-0 kubenswrapper[38936]: E0216 21:38:05.675389 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b584e233-74f9-47f5-99e2-2fa42826ac27" containerName="dnsmasq-dns" Feb 16 21:38:05.675759 master-0 kubenswrapper[38936]: I0216 21:38:05.675395 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="b584e233-74f9-47f5-99e2-2fa42826ac27" containerName="dnsmasq-dns" Feb 16 21:38:05.675759 master-0 kubenswrapper[38936]: I0216 21:38:05.675618 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="b584e233-74f9-47f5-99e2-2fa42826ac27" containerName="dnsmasq-dns" Feb 16 21:38:05.679211 master-0 kubenswrapper[38936]: I0216 21:38:05.679184 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0" Feb 16 21:38:05.683934 master-0 kubenswrapper[38936]: I0216 21:38:05.682940 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-config-data" Feb 16 21:38:05.683934 master-0 kubenswrapper[38936]: I0216 21:38:05.683254 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-scripts" Feb 16 21:38:05.708689 master-0 kubenswrapper[38936]: I0216 21:38:05.705774 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Feb 16 21:38:05.774669 master-0 kubenswrapper[38936]: I0216 21:38:05.769398 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-44512dbe-c790-4488-972f-62c15620e662\" (UniqueName: \"kubernetes.io/csi/topolvm.io^150e29e3-d4ae-4987-ad7e-f808e7829436\") pod \"ironic-conductor-0\" (UID: \"37c815ef-1c3d-4b2a-b748-de04b8c4412c\") " pod="openstack/ironic-conductor-0" Feb 16 21:38:05.774669 master-0 kubenswrapper[38936]: I0216 21:38:05.772230 38936 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/37c815ef-1c3d-4b2a-b748-de04b8c4412c-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"37c815ef-1c3d-4b2a-b748-de04b8c4412c\") " pod="openstack/ironic-conductor-0" Feb 16 21:38:05.774669 master-0 kubenswrapper[38936]: I0216 21:38:05.772339 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdw8m\" (UniqueName: \"kubernetes.io/projected/37c815ef-1c3d-4b2a-b748-de04b8c4412c-kube-api-access-rdw8m\") pod \"ironic-conductor-0\" (UID: \"37c815ef-1c3d-4b2a-b748-de04b8c4412c\") " pod="openstack/ironic-conductor-0" Feb 16 21:38:05.774669 master-0 kubenswrapper[38936]: I0216 21:38:05.772373 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37c815ef-1c3d-4b2a-b748-de04b8c4412c-scripts\") pod \"ironic-conductor-0\" (UID: \"37c815ef-1c3d-4b2a-b748-de04b8c4412c\") " pod="openstack/ironic-conductor-0" Feb 16 21:38:05.774669 master-0 kubenswrapper[38936]: I0216 21:38:05.772445 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c815ef-1c3d-4b2a-b748-de04b8c4412c-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"37c815ef-1c3d-4b2a-b748-de04b8c4412c\") " pod="openstack/ironic-conductor-0" Feb 16 21:38:05.774669 master-0 kubenswrapper[38936]: I0216 21:38:05.772582 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37c815ef-1c3d-4b2a-b748-de04b8c4412c-config-data\") pod \"ironic-conductor-0\" (UID: \"37c815ef-1c3d-4b2a-b748-de04b8c4412c\") " pod="openstack/ironic-conductor-0" Feb 16 21:38:05.774669 master-0 kubenswrapper[38936]: I0216 
21:38:05.772622 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/37c815ef-1c3d-4b2a-b748-de04b8c4412c-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"37c815ef-1c3d-4b2a-b748-de04b8c4412c\") " pod="openstack/ironic-conductor-0"
Feb 16 21:38:05.774669 master-0 kubenswrapper[38936]: I0216 21:38:05.772836 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37c815ef-1c3d-4b2a-b748-de04b8c4412c-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"37c815ef-1c3d-4b2a-b748-de04b8c4412c\") " pod="openstack/ironic-conductor-0"
Feb 16 21:38:05.814615 master-0 kubenswrapper[38936]: I0216 21:38:05.814545 38936 scope.go:117] "RemoveContainer" containerID="f126a8f06b14123d03aae72c077871af5e5543cf85cb9831b970c01a98e4b891"
Feb 16 21:38:05.819915 master-0 kubenswrapper[38936]: E0216 21:38:05.819837 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f126a8f06b14123d03aae72c077871af5e5543cf85cb9831b970c01a98e4b891\": container with ID starting with f126a8f06b14123d03aae72c077871af5e5543cf85cb9831b970c01a98e4b891 not found: ID does not exist" containerID="f126a8f06b14123d03aae72c077871af5e5543cf85cb9831b970c01a98e4b891"
Feb 16 21:38:05.820094 master-0 kubenswrapper[38936]: I0216 21:38:05.819909 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f126a8f06b14123d03aae72c077871af5e5543cf85cb9831b970c01a98e4b891"} err="failed to get container status \"f126a8f06b14123d03aae72c077871af5e5543cf85cb9831b970c01a98e4b891\": rpc error: code = NotFound desc = could not find container \"f126a8f06b14123d03aae72c077871af5e5543cf85cb9831b970c01a98e4b891\": container with ID starting with f126a8f06b14123d03aae72c077871af5e5543cf85cb9831b970c01a98e4b891 not found: ID does not exist"
Feb 16 21:38:05.820094 master-0 kubenswrapper[38936]: I0216 21:38:05.819946 38936 scope.go:117] "RemoveContainer" containerID="f5796dccbe40452a611853e55738018ab5202663551ff1db3862d38e0617ed50"
Feb 16 21:38:05.822460 master-0 kubenswrapper[38936]: E0216 21:38:05.822389 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5796dccbe40452a611853e55738018ab5202663551ff1db3862d38e0617ed50\": container with ID starting with f5796dccbe40452a611853e55738018ab5202663551ff1db3862d38e0617ed50 not found: ID does not exist" containerID="f5796dccbe40452a611853e55738018ab5202663551ff1db3862d38e0617ed50"
Feb 16 21:38:05.822528 master-0 kubenswrapper[38936]: I0216 21:38:05.822467 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5796dccbe40452a611853e55738018ab5202663551ff1db3862d38e0617ed50"} err="failed to get container status \"f5796dccbe40452a611853e55738018ab5202663551ff1db3862d38e0617ed50\": rpc error: code = NotFound desc = could not find container \"f5796dccbe40452a611853e55738018ab5202663551ff1db3862d38e0617ed50\": container with ID starting with f5796dccbe40452a611853e55738018ab5202663551ff1db3862d38e0617ed50 not found: ID does not exist"
Feb 16 21:38:05.845072 master-0 kubenswrapper[38936]: I0216 21:38:05.843958 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-596cdf67df-snjb9"]
Feb 16 21:38:05.860922 master-0 kubenswrapper[38936]: I0216 21:38:05.860881 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:05.875695 master-0 kubenswrapper[38936]: I0216 21:38:05.870800 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-85df85647b-4lmvj"]
Feb 16 21:38:05.882901 master-0 kubenswrapper[38936]: I0216 21:38:05.882621 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37c815ef-1c3d-4b2a-b748-de04b8c4412c-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"37c815ef-1c3d-4b2a-b748-de04b8c4412c\") " pod="openstack/ironic-conductor-0"
Feb 16 21:38:05.882901 master-0 kubenswrapper[38936]: I0216 21:38:05.882755 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-44512dbe-c790-4488-972f-62c15620e662\" (UniqueName: \"kubernetes.io/csi/topolvm.io^150e29e3-d4ae-4987-ad7e-f808e7829436\") pod \"ironic-conductor-0\" (UID: \"37c815ef-1c3d-4b2a-b748-de04b8c4412c\") " pod="openstack/ironic-conductor-0"
Feb 16 21:38:05.882901 master-0 kubenswrapper[38936]: I0216 21:38:05.882806 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/37c815ef-1c3d-4b2a-b748-de04b8c4412c-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"37c815ef-1c3d-4b2a-b748-de04b8c4412c\") " pod="openstack/ironic-conductor-0"
Feb 16 21:38:05.882901 master-0 kubenswrapper[38936]: I0216 21:38:05.882848 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdw8m\" (UniqueName: \"kubernetes.io/projected/37c815ef-1c3d-4b2a-b748-de04b8c4412c-kube-api-access-rdw8m\") pod \"ironic-conductor-0\" (UID: \"37c815ef-1c3d-4b2a-b748-de04b8c4412c\") " pod="openstack/ironic-conductor-0"
Feb 16 21:38:05.882901 master-0 kubenswrapper[38936]: I0216 21:38:05.882866 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37c815ef-1c3d-4b2a-b748-de04b8c4412c-scripts\") pod \"ironic-conductor-0\" (UID: \"37c815ef-1c3d-4b2a-b748-de04b8c4412c\") " pod="openstack/ironic-conductor-0"
Feb 16 21:38:05.882901 master-0 kubenswrapper[38936]: I0216 21:38:05.882903 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c815ef-1c3d-4b2a-b748-de04b8c4412c-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"37c815ef-1c3d-4b2a-b748-de04b8c4412c\") " pod="openstack/ironic-conductor-0"
Feb 16 21:38:05.883193 master-0 kubenswrapper[38936]: I0216 21:38:05.882971 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37c815ef-1c3d-4b2a-b748-de04b8c4412c-config-data\") pod \"ironic-conductor-0\" (UID: \"37c815ef-1c3d-4b2a-b748-de04b8c4412c\") " pod="openstack/ironic-conductor-0"
Feb 16 21:38:05.883193 master-0 kubenswrapper[38936]: I0216 21:38:05.882996 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/37c815ef-1c3d-4b2a-b748-de04b8c4412c-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"37c815ef-1c3d-4b2a-b748-de04b8c4412c\") " pod="openstack/ironic-conductor-0"
Feb 16 21:38:05.889499 master-0 kubenswrapper[38936]: I0216 21:38:05.889447 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/37c815ef-1c3d-4b2a-b748-de04b8c4412c-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"37c815ef-1c3d-4b2a-b748-de04b8c4412c\") " pod="openstack/ironic-conductor-0"
Feb 16 21:38:05.890114 master-0 kubenswrapper[38936]: I0216 21:38:05.890066 38936 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 21:38:05.890178 master-0 kubenswrapper[38936]: I0216 21:38:05.890130 38936 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-44512dbe-c790-4488-972f-62c15620e662\" (UniqueName: \"kubernetes.io/csi/topolvm.io^150e29e3-d4ae-4987-ad7e-f808e7829436\") pod \"ironic-conductor-0\" (UID: \"37c815ef-1c3d-4b2a-b748-de04b8c4412c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/9e09aca92bd42831e4bfd1ac4c4c9cc9d91a92c0b9bf712c7e3d055e81e20b1e/globalmount\"" pod="openstack/ironic-conductor-0"
Feb 16 21:38:05.903943 master-0 kubenswrapper[38936]: I0216 21:38:05.899205 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/37c815ef-1c3d-4b2a-b748-de04b8c4412c-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"37c815ef-1c3d-4b2a-b748-de04b8c4412c\") " pod="openstack/ironic-conductor-0"
Feb 16 21:38:05.910456 master-0 kubenswrapper[38936]: I0216 21:38:05.910419 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37c815ef-1c3d-4b2a-b748-de04b8c4412c-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"37c815ef-1c3d-4b2a-b748-de04b8c4412c\") " pod="openstack/ironic-conductor-0"
Feb 16 21:38:05.912692 master-0 kubenswrapper[38936]: I0216 21:38:05.912626 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c815ef-1c3d-4b2a-b748-de04b8c4412c-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"37c815ef-1c3d-4b2a-b748-de04b8c4412c\") " pod="openstack/ironic-conductor-0"
Feb 16 21:38:05.931955 master-0 kubenswrapper[38936]: I0216 21:38:05.928592 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37c815ef-1c3d-4b2a-b748-de04b8c4412c-config-data\") pod \"ironic-conductor-0\" (UID: \"37c815ef-1c3d-4b2a-b748-de04b8c4412c\") " pod="openstack/ironic-conductor-0"
Feb 16 21:38:05.940550 master-0 kubenswrapper[38936]: I0216 21:38:05.940521 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37c815ef-1c3d-4b2a-b748-de04b8c4412c-scripts\") pod \"ironic-conductor-0\" (UID: \"37c815ef-1c3d-4b2a-b748-de04b8c4412c\") " pod="openstack/ironic-conductor-0"
Feb 16 21:38:05.941240 master-0 kubenswrapper[38936]: I0216 21:38:05.941192 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdw8m\" (UniqueName: \"kubernetes.io/projected/37c815ef-1c3d-4b2a-b748-de04b8c4412c-kube-api-access-rdw8m\") pod \"ironic-conductor-0\" (UID: \"37c815ef-1c3d-4b2a-b748-de04b8c4412c\") " pod="openstack/ironic-conductor-0"
Feb 16 21:38:05.967368 master-0 kubenswrapper[38936]: I0216 21:38:05.967324 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b95d794ff-8msjt"]
Feb 16 21:38:05.988739 master-0 kubenswrapper[38936]: I0216 21:38:05.988641 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-etc-iscsi\") pod \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") "
Feb 16 21:38:05.988994 master-0 kubenswrapper[38936]: I0216 21:38:05.988806 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-run\") pod \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") "
Feb 16 21:38:05.988994 master-0 kubenswrapper[38936]: I0216 21:38:05.988835 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9037d3ef-953a-4af9-9c81-d94587ee2d9d-combined-ca-bundle\") pod \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") "
Feb 16 21:38:05.988994 master-0 kubenswrapper[38936]: I0216 21:38:05.988956 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-etc-machine-id\") pod \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") "
Feb 16 21:38:05.989127 master-0 kubenswrapper[38936]: I0216 21:38:05.989003 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9037d3ef-953a-4af9-9c81-d94587ee2d9d-scripts\") pod \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") "
Feb 16 21:38:05.989127 master-0 kubenswrapper[38936]: I0216 21:38:05.989047 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9037d3ef-953a-4af9-9c81-d94587ee2d9d-config-data-custom\") pod \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") "
Feb 16 21:38:05.989127 master-0 kubenswrapper[38936]: I0216 21:38:05.989065 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-var-lib-cinder\") pod \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") "
Feb 16 21:38:05.989257 master-0 kubenswrapper[38936]: I0216 21:38:05.989165 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-sys\") pod \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") "
Feb 16 21:38:05.989257 master-0 kubenswrapper[38936]: I0216 21:38:05.989184 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-var-locks-cinder\") pod \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") "
Feb 16 21:38:05.989257 master-0 kubenswrapper[38936]: I0216 21:38:05.989256 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9037d3ef-953a-4af9-9c81-d94587ee2d9d-config-data\") pod \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") "
Feb 16 21:38:05.989380 master-0 kubenswrapper[38936]: I0216 21:38:05.989284 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-var-locks-brick\") pod \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") "
Feb 16 21:38:05.989424 master-0 kubenswrapper[38936]: I0216 21:38:05.989416 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-dev\") pod \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") "
Feb 16 21:38:05.989482 master-0 kubenswrapper[38936]: I0216 21:38:05.989443 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7snx8\" (UniqueName: \"kubernetes.io/projected/9037d3ef-953a-4af9-9c81-d94587ee2d9d-kube-api-access-7snx8\") pod \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") "
Feb 16 21:38:05.989524 master-0 kubenswrapper[38936]: I0216 21:38:05.989493 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-lib-modules\") pod \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") "
Feb 16 21:38:05.989568 master-0 kubenswrapper[38936]: I0216 21:38:05.989552 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-etc-nvme\") pod \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\" (UID: \"9037d3ef-953a-4af9-9c81-d94587ee2d9d\") "
Feb 16 21:38:05.990770 master-0 kubenswrapper[38936]: I0216 21:38:05.990748 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "9037d3ef-953a-4af9-9c81-d94587ee2d9d" (UID: "9037d3ef-953a-4af9-9c81-d94587ee2d9d"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:38:05.990853 master-0 kubenswrapper[38936]: I0216 21:38:05.990781 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-run" (OuterVolumeSpecName: "run") pod "9037d3ef-953a-4af9-9c81-d94587ee2d9d" (UID: "9037d3ef-953a-4af9-9c81-d94587ee2d9d"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:38:05.993030 master-0 kubenswrapper[38936]: I0216 21:38:05.992800 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "9037d3ef-953a-4af9-9c81-d94587ee2d9d" (UID: "9037d3ef-953a-4af9-9c81-d94587ee2d9d"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:38:05.993030 master-0 kubenswrapper[38936]: I0216 21:38:05.992808 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-dev" (OuterVolumeSpecName: "dev") pod "9037d3ef-953a-4af9-9c81-d94587ee2d9d" (UID: "9037d3ef-953a-4af9-9c81-d94587ee2d9d"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:38:05.993030 master-0 kubenswrapper[38936]: I0216 21:38:05.992891 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-sys" (OuterVolumeSpecName: "sys") pod "9037d3ef-953a-4af9-9c81-d94587ee2d9d" (UID: "9037d3ef-953a-4af9-9c81-d94587ee2d9d"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:38:05.993553 master-0 kubenswrapper[38936]: I0216 21:38:05.993533 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "9037d3ef-953a-4af9-9c81-d94587ee2d9d" (UID: "9037d3ef-953a-4af9-9c81-d94587ee2d9d"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:38:05.994214 master-0 kubenswrapper[38936]: I0216 21:38:05.994158 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9037d3ef-953a-4af9-9c81-d94587ee2d9d" (UID: "9037d3ef-953a-4af9-9c81-d94587ee2d9d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:38:05.994293 master-0 kubenswrapper[38936]: I0216 21:38:05.994215 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "9037d3ef-953a-4af9-9c81-d94587ee2d9d" (UID: "9037d3ef-953a-4af9-9c81-d94587ee2d9d"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:38:05.994293 master-0 kubenswrapper[38936]: I0216 21:38:05.994243 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "9037d3ef-953a-4af9-9c81-d94587ee2d9d" (UID: "9037d3ef-953a-4af9-9c81-d94587ee2d9d"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:38:05.994293 master-0 kubenswrapper[38936]: I0216 21:38:05.994268 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9037d3ef-953a-4af9-9c81-d94587ee2d9d" (UID: "9037d3ef-953a-4af9-9c81-d94587ee2d9d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:38:06.007293 master-0 kubenswrapper[38936]: I0216 21:38:06.007256 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b95d794ff-8msjt"]
Feb 16 21:38:06.032972 master-0 kubenswrapper[38936]: I0216 21:38:06.031465 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9037d3ef-953a-4af9-9c81-d94587ee2d9d-scripts" (OuterVolumeSpecName: "scripts") pod "9037d3ef-953a-4af9-9c81-d94587ee2d9d" (UID: "9037d3ef-953a-4af9-9c81-d94587ee2d9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:38:06.032972 master-0 kubenswrapper[38936]: I0216 21:38:06.032363 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9037d3ef-953a-4af9-9c81-d94587ee2d9d-kube-api-access-7snx8" (OuterVolumeSpecName: "kube-api-access-7snx8") pod "9037d3ef-953a-4af9-9c81-d94587ee2d9d" (UID: "9037d3ef-953a-4af9-9c81-d94587ee2d9d"). InnerVolumeSpecName "kube-api-access-7snx8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:38:06.043956 master-0 kubenswrapper[38936]: I0216 21:38:06.043884 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9037d3ef-953a-4af9-9c81-d94587ee2d9d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9037d3ef-953a-4af9-9c81-d94587ee2d9d" (UID: "9037d3ef-953a-4af9-9c81-d94587ee2d9d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:38:06.095836 master-0 kubenswrapper[38936]: I0216 21:38:06.095780 38936 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-var-locks-brick\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:06.095836 master-0 kubenswrapper[38936]: I0216 21:38:06.095835 38936 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-dev\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:06.096141 master-0 kubenswrapper[38936]: I0216 21:38:06.095854 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7snx8\" (UniqueName: \"kubernetes.io/projected/9037d3ef-953a-4af9-9c81-d94587ee2d9d-kube-api-access-7snx8\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:06.096141 master-0 kubenswrapper[38936]: I0216 21:38:06.095872 38936 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-lib-modules\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:06.096141 master-0 kubenswrapper[38936]: I0216 21:38:06.095887 38936 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-etc-nvme\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:06.096141 master-0 kubenswrapper[38936]: I0216 21:38:06.095903 38936 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-etc-iscsi\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:06.096141 master-0 kubenswrapper[38936]: I0216 21:38:06.095916 38936 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-run\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:06.096141 master-0 kubenswrapper[38936]: I0216 21:38:06.095928 38936 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:06.096141 master-0 kubenswrapper[38936]: I0216 21:38:06.095941 38936 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9037d3ef-953a-4af9-9c81-d94587ee2d9d-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:06.096141 master-0 kubenswrapper[38936]: I0216 21:38:06.095955 38936 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9037d3ef-953a-4af9-9c81-d94587ee2d9d-config-data-custom\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:06.096141 master-0 kubenswrapper[38936]: I0216 21:38:06.095970 38936 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-var-lib-cinder\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:06.096141 master-0 kubenswrapper[38936]: I0216 21:38:06.095985 38936 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-var-locks-cinder\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:06.096141 master-0 kubenswrapper[38936]: I0216 21:38:06.095999 38936 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9037d3ef-953a-4af9-9c81-d94587ee2d9d-sys\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:06.128697 master-0 kubenswrapper[38936]: I0216 21:38:06.128598 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:06.157916 master-0 kubenswrapper[38936]: I0216 21:38:06.157695 38936 generic.go:334] "Generic (PLEG): container finished" podID="f85d31c9-7303-4e30-ba85-3362b5828482" containerID="5a346f4089f59d2f8cd1e264c949e7170e19b53d64da5e4cb9f5bb60bd2ba184" exitCode=0
Feb 16 21:38:06.157916 master-0 kubenswrapper[38936]: I0216 21:38:06.157736 38936 generic.go:334] "Generic (PLEG): container finished" podID="f85d31c9-7303-4e30-ba85-3362b5828482" containerID="83e301f2617e8ccaa66a653151eeea709b4a60eac21c68ec1ab0323ee8fb54b7" exitCode=0
Feb 16 21:38:06.157916 master-0 kubenswrapper[38936]: I0216 21:38:06.157776 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-scheduler-0" event={"ID":"f85d31c9-7303-4e30-ba85-3362b5828482","Type":"ContainerDied","Data":"5a346f4089f59d2f8cd1e264c949e7170e19b53d64da5e4cb9f5bb60bd2ba184"}
Feb 16 21:38:06.157916 master-0 kubenswrapper[38936]: I0216 21:38:06.157807 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-scheduler-0" event={"ID":"f85d31c9-7303-4e30-ba85-3362b5828482","Type":"ContainerDied","Data":"83e301f2617e8ccaa66a653151eeea709b4a60eac21c68ec1ab0323ee8fb54b7"}
Feb 16 21:38:06.170952 master-0 kubenswrapper[38936]: I0216 21:38:06.169864 38936 generic.go:334] "Generic (PLEG): container finished" podID="0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" containerID="cb66e7b9ff0103a2facbfd0ee9efd49c255aede2f96c75eea1f74248ee6db88d" exitCode=0
Feb 16 21:38:06.170952 master-0 kubenswrapper[38936]: I0216 21:38:06.169907 38936 generic.go:334] "Generic (PLEG): container finished" podID="0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" containerID="d6e5959a5090ef60ef7f307ee014f74098c36a43f3d176058041b73da8ecee9b" exitCode=0
Feb 16 21:38:06.170952 master-0 kubenswrapper[38936]: I0216 21:38:06.169954 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-backup-0" event={"ID":"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c","Type":"ContainerDied","Data":"cb66e7b9ff0103a2facbfd0ee9efd49c255aede2f96c75eea1f74248ee6db88d"}
Feb 16 21:38:06.170952 master-0 kubenswrapper[38936]: I0216 21:38:06.169983 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-backup-0" event={"ID":"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c","Type":"ContainerDied","Data":"d6e5959a5090ef60ef7f307ee014f74098c36a43f3d176058041b73da8ecee9b"}
Feb 16 21:38:06.170952 master-0 kubenswrapper[38936]: I0216 21:38:06.169995 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-backup-0" event={"ID":"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c","Type":"ContainerDied","Data":"ac1875bcb6ea2772142ff3144a05f3ee6bf1e33aaca6ed1e3b7abfd2ae857265"}
Feb 16 21:38:06.170952 master-0 kubenswrapper[38936]: I0216 21:38:06.170010 38936 scope.go:117] "RemoveContainer" containerID="cb66e7b9ff0103a2facbfd0ee9efd49c255aede2f96c75eea1f74248ee6db88d"
Feb 16 21:38:06.170952 master-0 kubenswrapper[38936]: I0216 21:38:06.170138 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:06.175088 master-0 kubenswrapper[38936]: I0216 21:38:06.174635 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-volume-lvm-iscsi-0" event={"ID":"9037d3ef-953a-4af9-9c81-d94587ee2d9d","Type":"ContainerDied","Data":"abfd1e0e907ef04369a8a7966dc5afff24f887cc8c61e8429c90f4a11887f4af"}
Feb 16 21:38:06.175088 master-0 kubenswrapper[38936]: I0216 21:38:06.174722 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:06.195662 master-0 kubenswrapper[38936]: I0216 21:38:06.195571 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-85df85647b-4lmvj" event={"ID":"28720828-7566-4fb7-a4ff-ac6e548d9408","Type":"ContainerStarted","Data":"cf5de38d88f3ad6d7c59afb5c6c5fcc1bafd6fa43797f11bac7fb4e596126f68"}
Feb 16 21:38:06.203905 master-0 kubenswrapper[38936]: I0216 21:38:06.198826 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-lib-modules\") pod \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") "
Feb 16 21:38:06.203905 master-0 kubenswrapper[38936]: I0216 21:38:06.198895 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khlsz\" (UniqueName: \"kubernetes.io/projected/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-kube-api-access-khlsz\") pod \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") "
Feb 16 21:38:06.203905 master-0 kubenswrapper[38936]: I0216 21:38:06.198933 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-etc-machine-id\") pod \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") "
Feb 16 21:38:06.203905 master-0 kubenswrapper[38936]: I0216 21:38:06.198995 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-var-locks-brick\") pod \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") "
Feb 16 21:38:06.203905 master-0 kubenswrapper[38936]: I0216 21:38:06.199028 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-var-lib-cinder\") pod \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") "
Feb 16 21:38:06.203905 master-0 kubenswrapper[38936]: I0216 21:38:06.199140 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-etc-iscsi\") pod \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") "
Feb 16 21:38:06.203905 master-0 kubenswrapper[38936]: I0216 21:38:06.199217 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-scripts\") pod \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") "
Feb 16 21:38:06.203905 master-0 kubenswrapper[38936]: I0216 21:38:06.199244 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-sys\") pod \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") "
Feb 16 21:38:06.203905 master-0 kubenswrapper[38936]: I0216 21:38:06.199309 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-combined-ca-bundle\") pod \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") "
Feb 16 21:38:06.203905 master-0 kubenswrapper[38936]: I0216 21:38:06.199332 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-run\") pod \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") "
Feb 16 21:38:06.203905 master-0 kubenswrapper[38936]: I0216 21:38:06.199376 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-dev\") pod \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") "
Feb 16 21:38:06.203905 master-0 kubenswrapper[38936]: I0216 21:38:06.199419 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-var-locks-cinder\") pod \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") "
Feb 16 21:38:06.203905 master-0 kubenswrapper[38936]: I0216 21:38:06.199453 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-config-data\") pod \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") "
Feb 16 21:38:06.203905 master-0 kubenswrapper[38936]: I0216 21:38:06.199494 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-config-data-custom\") pod \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") "
Feb 16 21:38:06.203905 master-0 kubenswrapper[38936]: I0216 21:38:06.199518 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-etc-nvme\") pod \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\" (UID: \"0e3c57b4-ab51-4b5b-b63b-393e16d23d9c\") "
Feb 16 21:38:06.203905 master-0 kubenswrapper[38936]: I0216 21:38:06.203044 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" (UID: "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:38:06.203905 master-0 kubenswrapper[38936]: I0216 21:38:06.203123 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" (UID: "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:38:06.210233 master-0 kubenswrapper[38936]: I0216 21:38:06.207892 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-sys" (OuterVolumeSpecName: "sys") pod "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" (UID: "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:38:06.210233 master-0 kubenswrapper[38936]: I0216 21:38:06.207951 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" (UID: "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:38:06.210233 master-0 kubenswrapper[38936]: I0216 21:38:06.207976 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" (UID: "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:38:06.210233 master-0 kubenswrapper[38936]: I0216 21:38:06.208002 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" (UID: "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:38:06.210233 master-0 kubenswrapper[38936]: I0216 21:38:06.208032 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" (UID: "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:38:06.210233 master-0 kubenswrapper[38936]: I0216 21:38:06.208724 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-dev" (OuterVolumeSpecName: "dev") pod "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" (UID: "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:38:06.214470 master-0 kubenswrapper[38936]: I0216 21:38:06.214385 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-596cdf67df-snjb9" event={"ID":"3182998b-e6c3-4733-a374-23e11d68c55a","Type":"ContainerStarted","Data":"70ffcc4a920e4f645cbe653881a0ac7da4a574d5649d6697114d5373a8762102"}
Feb 16 21:38:06.214690 master-0 kubenswrapper[38936]: I0216 21:38:06.214633 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-run" (OuterVolumeSpecName: "run") pod "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" (UID: "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 21:38:06.214852 master-0 kubenswrapper[38936]: I0216 21:38:06.214796 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" (UID: "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c"). InnerVolumeSpecName "var-locks-cinder".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:38:06.220487 master-0 kubenswrapper[38936]: I0216 21:38:06.220447 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-1991-account-create-update-vb2d9" event={"ID":"f3fc7857-f230-4a40-8fb6-9b01dd29c502","Type":"ContainerStarted","Data":"f5a7949b8b28c9fbb56bff868799d507d2d88e4ebe356c9d4ab71b43373faf0c"} Feb 16 21:38:06.226540 master-0 kubenswrapper[38936]: I0216 21:38:06.225927 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-57f476567b-fwqws" event={"ID":"cfcdcd18-dd01-45c8-afd4-ec72a986d582","Type":"ContainerStarted","Data":"763690b73967d597a5166973b56740806a488465bc1b37e6f47fb75c8e333e74"} Feb 16 21:38:06.229717 master-0 kubenswrapper[38936]: I0216 21:38:06.229638 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-q98pv" event={"ID":"53ca02e3-b979-4ed3-82e5-ce0850aa85f3","Type":"ContainerStarted","Data":"cfa3bc03cae78d0ffe7bb0f75032bc00bb3d74a44b15dc62c5c9cf40b132ff45"} Feb 16 21:38:06.234831 master-0 kubenswrapper[38936]: I0216 21:38:06.233198 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-9c692-api-0" Feb 16 21:38:06.266971 master-0 kubenswrapper[38936]: I0216 21:38:06.266783 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-create-q98pv" podStartSLOduration=3.266755442 podStartE2EDuration="3.266755442s" podCreationTimestamp="2026-02-16 21:38:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:38:06.254695965 +0000 UTC m=+916.606699327" watchObservedRunningTime="2026-02-16 21:38:06.266755442 +0000 UTC m=+916.618758804" Feb 16 21:38:06.274462 master-0 kubenswrapper[38936]: I0216 21:38:06.273807 38936 scope.go:117] "RemoveContainer" 
containerID="d6e5959a5090ef60ef7f307ee014f74098c36a43f3d176058041b73da8ecee9b" Feb 16 21:38:06.293735 master-0 kubenswrapper[38936]: I0216 21:38:06.290370 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-kube-api-access-khlsz" (OuterVolumeSpecName: "kube-api-access-khlsz") pod "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" (UID: "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c"). InnerVolumeSpecName "kube-api-access-khlsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:06.305954 master-0 kubenswrapper[38936]: I0216 21:38:06.305832 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" (UID: "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:06.332242 master-0 kubenswrapper[38936]: I0216 21:38:06.332157 38936 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-lib-modules\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:06.333824 master-0 kubenswrapper[38936]: I0216 21:38:06.333799 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khlsz\" (UniqueName: \"kubernetes.io/projected/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-kube-api-access-khlsz\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:06.333928 master-0 kubenswrapper[38936]: I0216 21:38:06.333916 38936 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:06.334015 master-0 kubenswrapper[38936]: I0216 21:38:06.334005 38936 reconciler_common.go:293] 
"Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-var-locks-brick\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:06.334105 master-0 kubenswrapper[38936]: I0216 21:38:06.334094 38936 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-var-lib-cinder\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:06.334263 master-0 kubenswrapper[38936]: I0216 21:38:06.334241 38936 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-etc-iscsi\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:06.334342 master-0 kubenswrapper[38936]: I0216 21:38:06.334330 38936 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-sys\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:06.334453 master-0 kubenswrapper[38936]: I0216 21:38:06.334440 38936 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-run\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:06.334543 master-0 kubenswrapper[38936]: I0216 21:38:06.334530 38936 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-dev\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:06.334632 master-0 kubenswrapper[38936]: I0216 21:38:06.334618 38936 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-var-locks-cinder\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:06.334719 master-0 kubenswrapper[38936]: I0216 21:38:06.334707 38936 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:06.334810 master-0 kubenswrapper[38936]: I0216 21:38:06.334798 38936 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-etc-nvme\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:06.336188 master-0 kubenswrapper[38936]: I0216 21:38:06.336111 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-scripts" (OuterVolumeSpecName: "scripts") pod "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" (UID: "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:06.454797 master-0 kubenswrapper[38936]: I0216 21:38:06.449793 38936 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:06.841358 master-0 kubenswrapper[38936]: I0216 21:38:06.841115 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" (UID: "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:06.896702 master-0 kubenswrapper[38936]: I0216 21:38:06.896618 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9037d3ef-953a-4af9-9c81-d94587ee2d9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9037d3ef-953a-4af9-9c81-d94587ee2d9d" (UID: "9037d3ef-953a-4af9-9c81-d94587ee2d9d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:06.919394 master-0 kubenswrapper[38936]: I0216 21:38:06.919321 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9037d3ef-953a-4af9-9c81-d94587ee2d9d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:06.919394 master-0 kubenswrapper[38936]: I0216 21:38:06.919379 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:06.991291 master-0 kubenswrapper[38936]: I0216 21:38:06.991112 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-config-data" (OuterVolumeSpecName: "config-data") pod "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" (UID: "0e3c57b4-ab51-4b5b-b63b-393e16d23d9c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:07.022555 master-0 kubenswrapper[38936]: I0216 21:38:07.022513 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:07.025715 master-0 kubenswrapper[38936]: I0216 21:38:07.024838 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9037d3ef-953a-4af9-9c81-d94587ee2d9d-config-data" (OuterVolumeSpecName: "config-data") pod "9037d3ef-953a-4af9-9c81-d94587ee2d9d" (UID: "9037d3ef-953a-4af9-9c81-d94587ee2d9d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:07.125087 master-0 kubenswrapper[38936]: I0216 21:38:07.124995 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9037d3ef-953a-4af9-9c81-d94587ee2d9d-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:07.256943 master-0 kubenswrapper[38936]: I0216 21:38:07.256878 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:38:07.259133 master-0 kubenswrapper[38936]: I0216 21:38:07.258961 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-1991-account-create-update-vb2d9" event={"ID":"f3fc7857-f230-4a40-8fb6-9b01dd29c502","Type":"ContainerStarted","Data":"1d34449ffd2482532e52f3621f11fbd435dae6703fb7224f529fcf752ed7e7bb"} Feb 16 21:38:07.262348 master-0 kubenswrapper[38936]: I0216 21:38:07.262300 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:38:07.262348 master-0 kubenswrapper[38936]: I0216 21:38:07.262310 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-scheduler-0" event={"ID":"f85d31c9-7303-4e30-ba85-3362b5828482","Type":"ContainerDied","Data":"f293490ada0dee53010f28c8d3e54f9dfb1b428fddbb3ea0ae23abff0ed6e021"} Feb 16 21:38:07.278201 master-0 kubenswrapper[38936]: I0216 21:38:07.278131 38936 generic.go:334] "Generic (PLEG): container finished" podID="53ca02e3-b979-4ed3-82e5-ce0850aa85f3" containerID="10a0a0181f28b207b613197e92cb8c759326c2803ec45064ba44ed084d153b2e" exitCode=0 Feb 16 21:38:07.278201 master-0 kubenswrapper[38936]: I0216 21:38:07.278198 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-q98pv" event={"ID":"53ca02e3-b979-4ed3-82e5-ce0850aa85f3","Type":"ContainerDied","Data":"10a0a0181f28b207b613197e92cb8c759326c2803ec45064ba44ed084d153b2e"} Feb 16 21:38:07.371959 master-0 kubenswrapper[38936]: I0216 21:38:07.371871 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxq7p\" (UniqueName: \"kubernetes.io/projected/f85d31c9-7303-4e30-ba85-3362b5828482-kube-api-access-kxq7p\") pod \"f85d31c9-7303-4e30-ba85-3362b5828482\" (UID: \"f85d31c9-7303-4e30-ba85-3362b5828482\") " Feb 16 21:38:07.372350 master-0 kubenswrapper[38936]: I0216 21:38:07.372124 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85d31c9-7303-4e30-ba85-3362b5828482-scripts\") pod \"f85d31c9-7303-4e30-ba85-3362b5828482\" (UID: \"f85d31c9-7303-4e30-ba85-3362b5828482\") " Feb 16 21:38:07.372350 master-0 kubenswrapper[38936]: I0216 21:38:07.372247 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f85d31c9-7303-4e30-ba85-3362b5828482-etc-machine-id\") pod 
\"f85d31c9-7303-4e30-ba85-3362b5828482\" (UID: \"f85d31c9-7303-4e30-ba85-3362b5828482\") " Feb 16 21:38:07.372350 master-0 kubenswrapper[38936]: I0216 21:38:07.372293 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f85d31c9-7303-4e30-ba85-3362b5828482-config-data-custom\") pod \"f85d31c9-7303-4e30-ba85-3362b5828482\" (UID: \"f85d31c9-7303-4e30-ba85-3362b5828482\") " Feb 16 21:38:07.372350 master-0 kubenswrapper[38936]: I0216 21:38:07.372319 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85d31c9-7303-4e30-ba85-3362b5828482-config-data\") pod \"f85d31c9-7303-4e30-ba85-3362b5828482\" (UID: \"f85d31c9-7303-4e30-ba85-3362b5828482\") " Feb 16 21:38:07.374094 master-0 kubenswrapper[38936]: I0216 21:38:07.374019 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85d31c9-7303-4e30-ba85-3362b5828482-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f85d31c9-7303-4e30-ba85-3362b5828482" (UID: "f85d31c9-7303-4e30-ba85-3362b5828482"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:38:07.391693 master-0 kubenswrapper[38936]: I0216 21:38:07.384409 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85d31c9-7303-4e30-ba85-3362b5828482-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f85d31c9-7303-4e30-ba85-3362b5828482" (UID: "f85d31c9-7303-4e30-ba85-3362b5828482"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:07.392570 master-0 kubenswrapper[38936]: I0216 21:38:07.392472 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85d31c9-7303-4e30-ba85-3362b5828482-scripts" (OuterVolumeSpecName: "scripts") pod "f85d31c9-7303-4e30-ba85-3362b5828482" (UID: "f85d31c9-7303-4e30-ba85-3362b5828482"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:07.410988 master-0 kubenswrapper[38936]: I0216 21:38:07.408819 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f85d31c9-7303-4e30-ba85-3362b5828482-kube-api-access-kxq7p" (OuterVolumeSpecName: "kube-api-access-kxq7p") pod "f85d31c9-7303-4e30-ba85-3362b5828482" (UID: "f85d31c9-7303-4e30-ba85-3362b5828482"). InnerVolumeSpecName "kube-api-access-kxq7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:07.410988 master-0 kubenswrapper[38936]: I0216 21:38:07.409816 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-1991-account-create-update-vb2d9" podStartSLOduration=4.409793997 podStartE2EDuration="4.409793997s" podCreationTimestamp="2026-02-16 21:38:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:38:07.306352023 +0000 UTC m=+917.658355385" watchObservedRunningTime="2026-02-16 21:38:07.409793997 +0000 UTC m=+917.761797359" Feb 16 21:38:07.507114 master-0 kubenswrapper[38936]: I0216 21:38:07.504622 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-9c692-backup-0"] Feb 16 21:38:07.531844 master-0 kubenswrapper[38936]: I0216 21:38:07.530511 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f85d31c9-7303-4e30-ba85-3362b5828482-combined-ca-bundle\") pod \"f85d31c9-7303-4e30-ba85-3362b5828482\" (UID: \"f85d31c9-7303-4e30-ba85-3362b5828482\") " Feb 16 21:38:07.537700 master-0 kubenswrapper[38936]: I0216 21:38:07.534336 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxq7p\" (UniqueName: \"kubernetes.io/projected/f85d31c9-7303-4e30-ba85-3362b5828482-kube-api-access-kxq7p\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:07.537700 master-0 kubenswrapper[38936]: I0216 21:38:07.535069 38936 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f85d31c9-7303-4e30-ba85-3362b5828482-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:07.537700 master-0 kubenswrapper[38936]: I0216 21:38:07.535085 38936 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f85d31c9-7303-4e30-ba85-3362b5828482-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:07.537700 master-0 kubenswrapper[38936]: I0216 21:38:07.535096 38936 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f85d31c9-7303-4e30-ba85-3362b5828482-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:07.569136 master-0 kubenswrapper[38936]: I0216 21:38:07.569032 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-9c692-backup-0"] Feb 16 21:38:07.601063 master-0 kubenswrapper[38936]: I0216 21:38:07.600937 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85d31c9-7303-4e30-ba85-3362b5828482-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f85d31c9-7303-4e30-ba85-3362b5828482" (UID: "f85d31c9-7303-4e30-ba85-3362b5828482"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:07.610329 master-0 kubenswrapper[38936]: I0216 21:38:07.609862 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-9c692-volume-lvm-iscsi-0"] Feb 16 21:38:07.629691 master-0 kubenswrapper[38936]: I0216 21:38:07.629611 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-9c692-backup-0"] Feb 16 21:38:07.632355 master-0 kubenswrapper[38936]: E0216 21:38:07.632034 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85d31c9-7303-4e30-ba85-3362b5828482" containerName="probe" Feb 16 21:38:07.632355 master-0 kubenswrapper[38936]: I0216 21:38:07.632080 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85d31c9-7303-4e30-ba85-3362b5828482" containerName="probe" Feb 16 21:38:07.632355 master-0 kubenswrapper[38936]: E0216 21:38:07.632160 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9037d3ef-953a-4af9-9c81-d94587ee2d9d" containerName="cinder-volume" Feb 16 21:38:07.632355 master-0 kubenswrapper[38936]: I0216 21:38:07.632185 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="9037d3ef-953a-4af9-9c81-d94587ee2d9d" containerName="cinder-volume" Feb 16 21:38:07.632355 master-0 kubenswrapper[38936]: E0216 21:38:07.632195 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85d31c9-7303-4e30-ba85-3362b5828482" containerName="cinder-scheduler" Feb 16 21:38:07.632355 master-0 kubenswrapper[38936]: I0216 21:38:07.632204 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85d31c9-7303-4e30-ba85-3362b5828482" containerName="cinder-scheduler" Feb 16 21:38:07.632355 master-0 kubenswrapper[38936]: E0216 21:38:07.632237 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" containerName="cinder-backup" Feb 16 21:38:07.632355 master-0 kubenswrapper[38936]: I0216 21:38:07.632247 38936 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" containerName="cinder-backup" Feb 16 21:38:07.632355 master-0 kubenswrapper[38936]: E0216 21:38:07.632278 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" containerName="probe" Feb 16 21:38:07.632355 master-0 kubenswrapper[38936]: I0216 21:38:07.632290 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" containerName="probe" Feb 16 21:38:07.632355 master-0 kubenswrapper[38936]: E0216 21:38:07.632322 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9037d3ef-953a-4af9-9c81-d94587ee2d9d" containerName="probe" Feb 16 21:38:07.632355 master-0 kubenswrapper[38936]: I0216 21:38:07.632331 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="9037d3ef-953a-4af9-9c81-d94587ee2d9d" containerName="probe" Feb 16 21:38:07.632982 master-0 kubenswrapper[38936]: I0216 21:38:07.632944 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" containerName="probe" Feb 16 21:38:07.633043 master-0 kubenswrapper[38936]: I0216 21:38:07.633022 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85d31c9-7303-4e30-ba85-3362b5828482" containerName="probe" Feb 16 21:38:07.633082 master-0 kubenswrapper[38936]: I0216 21:38:07.633062 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="9037d3ef-953a-4af9-9c81-d94587ee2d9d" containerName="cinder-volume" Feb 16 21:38:07.633112 master-0 kubenswrapper[38936]: I0216 21:38:07.633090 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" containerName="cinder-backup" Feb 16 21:38:07.633112 master-0 kubenswrapper[38936]: I0216 21:38:07.633105 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85d31c9-7303-4e30-ba85-3362b5828482" containerName="cinder-scheduler" Feb 16 21:38:07.633171 master-0 
kubenswrapper[38936]: I0216 21:38:07.633127 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="9037d3ef-953a-4af9-9c81-d94587ee2d9d" containerName="probe" Feb 16 21:38:07.636090 master-0 kubenswrapper[38936]: I0216 21:38:07.635872 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9c692-backup-0" Feb 16 21:38:07.638007 master-0 kubenswrapper[38936]: I0216 21:38:07.637920 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85d31c9-7303-4e30-ba85-3362b5828482-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:07.642799 master-0 kubenswrapper[38936]: I0216 21:38:07.641206 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-9c692-backup-config-data" Feb 16 21:38:07.670569 master-0 kubenswrapper[38936]: I0216 21:38:07.670519 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-9c692-volume-lvm-iscsi-0"] Feb 16 21:38:07.701237 master-0 kubenswrapper[38936]: I0216 21:38:07.701159 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9c692-backup-0"] Feb 16 21:38:07.741685 master-0 kubenswrapper[38936]: I0216 21:38:07.740430 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-var-locks-cinder\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0" Feb 16 21:38:07.741685 master-0 kubenswrapper[38936]: I0216 21:38:07.740595 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c031566b-e048-44f7-9177-f1c1e05f4295-config-data\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " 
pod="openstack/cinder-9c692-backup-0" Feb 16 21:38:07.741685 master-0 kubenswrapper[38936]: I0216 21:38:07.740632 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-etc-machine-id\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0" Feb 16 21:38:07.741685 master-0 kubenswrapper[38936]: I0216 21:38:07.740814 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c031566b-e048-44f7-9177-f1c1e05f4295-config-data-custom\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0" Feb 16 21:38:07.741685 master-0 kubenswrapper[38936]: I0216 21:38:07.740894 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-lib-modules\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0" Feb 16 21:38:07.741685 master-0 kubenswrapper[38936]: I0216 21:38:07.740944 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-var-locks-brick\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0" Feb 16 21:38:07.741685 master-0 kubenswrapper[38936]: I0216 21:38:07.740966 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c031566b-e048-44f7-9177-f1c1e05f4295-combined-ca-bundle\") pod \"cinder-9c692-backup-0\" 
(UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.741685 master-0 kubenswrapper[38936]: I0216 21:38:07.741037 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-sys\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.741685 master-0 kubenswrapper[38936]: I0216 21:38:07.741189 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-etc-iscsi\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.741685 master-0 kubenswrapper[38936]: I0216 21:38:07.741249 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-etc-nvme\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.741685 master-0 kubenswrapper[38936]: I0216 21:38:07.741296 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c031566b-e048-44f7-9177-f1c1e05f4295-scripts\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.741685 master-0 kubenswrapper[38936]: I0216 21:38:07.741335 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-var-lib-cinder\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.741685 master-0 kubenswrapper[38936]: I0216 21:38:07.741354 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-dev\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.741685 master-0 kubenswrapper[38936]: I0216 21:38:07.741470 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk2zs\" (UniqueName: \"kubernetes.io/projected/c031566b-e048-44f7-9177-f1c1e05f4295-kube-api-access-bk2zs\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.741685 master-0 kubenswrapper[38936]: I0216 21:38:07.741616 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-run\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.753008 master-0 kubenswrapper[38936]: I0216 21:38:07.751549 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85d31c9-7303-4e30-ba85-3362b5828482-config-data" (OuterVolumeSpecName: "config-data") pod "f85d31c9-7303-4e30-ba85-3362b5828482" (UID: "f85d31c9-7303-4e30-ba85-3362b5828482"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:38:07.768585 master-0 kubenswrapper[38936]: I0216 21:38:07.768464 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-9c692-volume-lvm-iscsi-0"]
Feb 16 21:38:07.771675 master-0 kubenswrapper[38936]: I0216 21:38:07.771617 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.774063 master-0 kubenswrapper[38936]: I0216 21:38:07.773996 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-9c692-volume-lvm-iscsi-config-data"
Feb 16 21:38:07.800517 master-0 kubenswrapper[38936]: I0216 21:38:07.800473 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9c692-volume-lvm-iscsi-0"]
Feb 16 21:38:07.832178 master-0 kubenswrapper[38936]: I0216 21:38:07.832094 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-6d6dfb9f68-58l7d"]
Feb 16 21:38:07.834925 master-0 kubenswrapper[38936]: I0216 21:38:07.834876 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-6d6dfb9f68-58l7d"
Feb 16 21:38:07.838858 master-0 kubenswrapper[38936]: I0216 21:38:07.838820 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-internal-svc"
Feb 16 21:38:07.839187 master-0 kubenswrapper[38936]: I0216 21:38:07.839152 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-public-svc"
Feb 16 21:38:07.843807 master-0 kubenswrapper[38936]: I0216 21:38:07.843714 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c031566b-e048-44f7-9177-f1c1e05f4295-config-data\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.843807 master-0 kubenswrapper[38936]: I0216 21:38:07.843769 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-etc-machine-id\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.843807 master-0 kubenswrapper[38936]: I0216 21:38:07.843799 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-combined-ca-bundle\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.844098 master-0 kubenswrapper[38936]: I0216 21:38:07.843821 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-var-locks-brick\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.844098 master-0 kubenswrapper[38936]: I0216 21:38:07.843843 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c031566b-e048-44f7-9177-f1c1e05f4295-config-data-custom\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.844098 master-0 kubenswrapper[38936]: I0216 21:38:07.843864 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-dev\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.844098 master-0 kubenswrapper[38936]: I0216 21:38:07.843950 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-lib-modules\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.844098 master-0 kubenswrapper[38936]: I0216 21:38:07.843997 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-etc-machine-id\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.844098 master-0 kubenswrapper[38936]: I0216 21:38:07.844093 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-var-locks-brick\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.844369 master-0 kubenswrapper[38936]: I0216 21:38:07.844201 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c031566b-e048-44f7-9177-f1c1e05f4295-combined-ca-bundle\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.844416 master-0 kubenswrapper[38936]: I0216 21:38:07.844370 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-etc-machine-id\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.844474 master-0 kubenswrapper[38936]: I0216 21:38:07.844453 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-var-locks-brick\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.844609 master-0 kubenswrapper[38936]: I0216 21:38:07.844515 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-lib-modules\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.845483 master-0 kubenswrapper[38936]: I0216 21:38:07.845416 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-sys\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.845483 master-0 kubenswrapper[38936]: I0216 21:38:07.845466 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9v2v\" (UniqueName: \"kubernetes.io/projected/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-kube-api-access-t9v2v\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.845579 master-0 kubenswrapper[38936]: I0216 21:38:07.845531 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-sys\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.845579 master-0 kubenswrapper[38936]: I0216 21:38:07.845562 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-lib-modules\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.845686 master-0 kubenswrapper[38936]: I0216 21:38:07.845646 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-etc-iscsi\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.845728 master-0 kubenswrapper[38936]: I0216 21:38:07.845698 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-var-locks-cinder\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.845784 master-0 kubenswrapper[38936]: I0216 21:38:07.845763 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-etc-nvme\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.845922 master-0 kubenswrapper[38936]: I0216 21:38:07.845865 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c031566b-e048-44f7-9177-f1c1e05f4295-scripts\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.846003 master-0 kubenswrapper[38936]: I0216 21:38:07.845941 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-etc-iscsi\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.846003 master-0 kubenswrapper[38936]: I0216 21:38:07.845986 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-var-lib-cinder\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.846152 master-0 kubenswrapper[38936]: I0216 21:38:07.846010 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-config-data-custom\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.846152 master-0 kubenswrapper[38936]: I0216 21:38:07.846048 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-dev\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.846152 master-0 kubenswrapper[38936]: I0216 21:38:07.846096 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk2zs\" (UniqueName: \"kubernetes.io/projected/c031566b-e048-44f7-9177-f1c1e05f4295-kube-api-access-bk2zs\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.846152 master-0 kubenswrapper[38936]: I0216 21:38:07.846135 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-etc-nvme\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.846344 master-0 kubenswrapper[38936]: I0216 21:38:07.846180 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-config-data\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.846344 master-0 kubenswrapper[38936]: I0216 21:38:07.846205 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-run\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.846344 master-0 kubenswrapper[38936]: I0216 21:38:07.846233 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-var-lib-cinder\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.846753 master-0 kubenswrapper[38936]: I0216 21:38:07.846695 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-run\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.846842 master-0 kubenswrapper[38936]: I0216 21:38:07.846773 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-var-locks-cinder\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.846842 master-0 kubenswrapper[38936]: I0216 21:38:07.846796 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-scripts\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.846971 master-0 kubenswrapper[38936]: I0216 21:38:07.846922 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85d31c9-7303-4e30-ba85-3362b5828482-config-data\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:07.847047 master-0 kubenswrapper[38936]: I0216 21:38:07.847001 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-run\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.847142 master-0 kubenswrapper[38936]: I0216 21:38:07.847032 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-sys\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.847276 master-0 kubenswrapper[38936]: I0216 21:38:07.847245 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-etc-iscsi\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.847319 master-0 kubenswrapper[38936]: I0216 21:38:07.847273 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-dev\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.847319 master-0 kubenswrapper[38936]: I0216 21:38:07.847294 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-var-lib-cinder\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.847408 master-0 kubenswrapper[38936]: I0216 21:38:07.847331 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-var-locks-cinder\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.847408 master-0 kubenswrapper[38936]: I0216 21:38:07.847330 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c031566b-e048-44f7-9177-f1c1e05f4295-etc-nvme\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.849880 master-0 kubenswrapper[38936]: I0216 21:38:07.849817 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c031566b-e048-44f7-9177-f1c1e05f4295-config-data\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.850202 master-0 kubenswrapper[38936]: I0216 21:38:07.850131 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c031566b-e048-44f7-9177-f1c1e05f4295-combined-ca-bundle\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.861414 master-0 kubenswrapper[38936]: I0216 21:38:07.861304 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c031566b-e048-44f7-9177-f1c1e05f4295-config-data-custom\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.866792 master-0 kubenswrapper[38936]: I0216 21:38:07.866736 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c031566b-e048-44f7-9177-f1c1e05f4295-scripts\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.867012 master-0 kubenswrapper[38936]: I0216 21:38:07.866953 38936 scope.go:117] "RemoveContainer" containerID="cb66e7b9ff0103a2facbfd0ee9efd49c255aede2f96c75eea1f74248ee6db88d"
Feb 16 21:38:07.867236 master-0 kubenswrapper[38936]: I0216 21:38:07.867171 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-6d6dfb9f68-58l7d"]
Feb 16 21:38:07.867549 master-0 kubenswrapper[38936]: E0216 21:38:07.867518 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb66e7b9ff0103a2facbfd0ee9efd49c255aede2f96c75eea1f74248ee6db88d\": container with ID starting with cb66e7b9ff0103a2facbfd0ee9efd49c255aede2f96c75eea1f74248ee6db88d not found: ID does not exist" containerID="cb66e7b9ff0103a2facbfd0ee9efd49c255aede2f96c75eea1f74248ee6db88d"
Feb 16 21:38:07.867678 master-0 kubenswrapper[38936]: I0216 21:38:07.867557 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb66e7b9ff0103a2facbfd0ee9efd49c255aede2f96c75eea1f74248ee6db88d"} err="failed to get container status \"cb66e7b9ff0103a2facbfd0ee9efd49c255aede2f96c75eea1f74248ee6db88d\": rpc error: code = NotFound desc = could not find container \"cb66e7b9ff0103a2facbfd0ee9efd49c255aede2f96c75eea1f74248ee6db88d\": container with ID starting with cb66e7b9ff0103a2facbfd0ee9efd49c255aede2f96c75eea1f74248ee6db88d not found: ID does not exist"
Feb 16 21:38:07.867678 master-0 kubenswrapper[38936]: I0216 21:38:07.867583 38936 scope.go:117] "RemoveContainer" containerID="d6e5959a5090ef60ef7f307ee014f74098c36a43f3d176058041b73da8ecee9b"
Feb 16 21:38:07.869590 master-0 kubenswrapper[38936]: E0216 21:38:07.869564 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6e5959a5090ef60ef7f307ee014f74098c36a43f3d176058041b73da8ecee9b\": container with ID starting with d6e5959a5090ef60ef7f307ee014f74098c36a43f3d176058041b73da8ecee9b not found: ID does not exist" containerID="d6e5959a5090ef60ef7f307ee014f74098c36a43f3d176058041b73da8ecee9b"
Feb 16 21:38:07.869641 master-0 kubenswrapper[38936]: I0216 21:38:07.869593 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6e5959a5090ef60ef7f307ee014f74098c36a43f3d176058041b73da8ecee9b"} err="failed to get container status \"d6e5959a5090ef60ef7f307ee014f74098c36a43f3d176058041b73da8ecee9b\": rpc error: code = NotFound desc = could not find container \"d6e5959a5090ef60ef7f307ee014f74098c36a43f3d176058041b73da8ecee9b\": container with ID starting with d6e5959a5090ef60ef7f307ee014f74098c36a43f3d176058041b73da8ecee9b not found: ID does not exist"
Feb 16 21:38:07.869641 master-0 kubenswrapper[38936]: I0216 21:38:07.869607 38936 scope.go:117] "RemoveContainer" containerID="cb66e7b9ff0103a2facbfd0ee9efd49c255aede2f96c75eea1f74248ee6db88d"
Feb 16 21:38:07.870284 master-0 kubenswrapper[38936]: I0216 21:38:07.870200 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb66e7b9ff0103a2facbfd0ee9efd49c255aede2f96c75eea1f74248ee6db88d"} err="failed to get container status \"cb66e7b9ff0103a2facbfd0ee9efd49c255aede2f96c75eea1f74248ee6db88d\": rpc error: code = NotFound desc = could not find container \"cb66e7b9ff0103a2facbfd0ee9efd49c255aede2f96c75eea1f74248ee6db88d\": container with ID starting with cb66e7b9ff0103a2facbfd0ee9efd49c255aede2f96c75eea1f74248ee6db88d not found: ID does not exist"
Feb 16 21:38:07.870284 master-0 kubenswrapper[38936]: I0216 21:38:07.870268 38936 scope.go:117] "RemoveContainer" containerID="d6e5959a5090ef60ef7f307ee014f74098c36a43f3d176058041b73da8ecee9b"
Feb 16 21:38:07.872065 master-0 kubenswrapper[38936]: I0216 21:38:07.872003 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6e5959a5090ef60ef7f307ee014f74098c36a43f3d176058041b73da8ecee9b"} err="failed to get container status \"d6e5959a5090ef60ef7f307ee014f74098c36a43f3d176058041b73da8ecee9b\": rpc error: code = NotFound desc = could not find container \"d6e5959a5090ef60ef7f307ee014f74098c36a43f3d176058041b73da8ecee9b\": container with ID starting with d6e5959a5090ef60ef7f307ee014f74098c36a43f3d176058041b73da8ecee9b not found: ID does not exist"
Feb 16 21:38:07.872065 master-0 kubenswrapper[38936]: I0216 21:38:07.872056 38936 scope.go:117] "RemoveContainer" containerID="ba2624715ebf9efd2ea92e95d3e1e4f500aa54e8dddab12ef65d043f0dbca2d7"
Feb 16 21:38:07.872748 master-0 kubenswrapper[38936]: I0216 21:38:07.872628 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk2zs\" (UniqueName: \"kubernetes.io/projected/c031566b-e048-44f7-9177-f1c1e05f4295-kube-api-access-bk2zs\") pod \"cinder-9c692-backup-0\" (UID: \"c031566b-e048-44f7-9177-f1c1e05f4295\") " pod="openstack/cinder-9c692-backup-0"
Feb 16 21:38:07.895190 master-0 kubenswrapper[38936]: I0216 21:38:07.895110 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e3c57b4-ab51-4b5b-b63b-393e16d23d9c" path="/var/lib/kubelet/pods/0e3c57b4-ab51-4b5b-b63b-393e16d23d9c/volumes"
Feb 16 21:38:07.896028 master-0 kubenswrapper[38936]: I0216 21:38:07.895986 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9037d3ef-953a-4af9-9c81-d94587ee2d9d" path="/var/lib/kubelet/pods/9037d3ef-953a-4af9-9c81-d94587ee2d9d/volumes"
Feb 16 21:38:07.896671 master-0 kubenswrapper[38936]: I0216 21:38:07.896634 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b584e233-74f9-47f5-99e2-2fa42826ac27" path="/var/lib/kubelet/pods/b584e233-74f9-47f5-99e2-2fa42826ac27/volumes"
Feb 16 21:38:07.950033 master-0 kubenswrapper[38936]: I0216 21:38:07.949832 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-scripts\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.950033 master-0 kubenswrapper[38936]: I0216 21:38:07.949932 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df8afaf9-42b4-4639-855e-f6cf99c985bf-config-data-custom\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.950144 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df8afaf9-42b4-4639-855e-f6cf99c985bf-public-tls-certs\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.950182 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df8afaf9-42b4-4639-855e-f6cf99c985bf-config-data\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.950265 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-combined-ca-bundle\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.950307 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-var-locks-brick\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.950334 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-dev\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.950473 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-etc-machine-id\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.950514 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df8afaf9-42b4-4639-855e-f6cf99c985bf-scripts\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.950607 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-sys\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.950645 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9v2v\" (UniqueName: \"kubernetes.io/projected/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-kube-api-access-t9v2v\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.950698 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-lib-modules\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.950718 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df8afaf9-42b4-4639-855e-f6cf99c985bf-logs\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.950764 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdqwf\" (UniqueName: \"kubernetes.io/projected/df8afaf9-42b4-4639-855e-f6cf99c985bf-kube-api-access-fdqwf\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.950790 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-var-locks-cinder\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.950909 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-etc-iscsi\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.950938 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8afaf9-42b4-4639-855e-f6cf99c985bf-combined-ca-bundle\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.950959 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-config-data-custom\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.951013 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/df8afaf9-42b4-4639-855e-f6cf99c985bf-config-data-merged\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.951032 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/df8afaf9-42b4-4639-855e-f6cf99c985bf-etc-podinfo\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.951136 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-sys\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.951299 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-lib-modules\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.951619 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-etc-machine-id\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.952241 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-etc-nvme\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.952247 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-var-locks-brick\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.952329 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-etc-iscsi\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.952411 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-dev\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.952475 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-etc-nvme\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.952573 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-var-locks-cinder\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0"
Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.952633 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-config-data\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: 
\"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.952683 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-run\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.952707 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df8afaf9-42b4-4639-855e-f6cf99c985bf-internal-tls-certs\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d" Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.952731 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-var-lib-cinder\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.952858 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-run\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.952926 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-var-lib-cinder\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: 
\"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.956575 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-scripts\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.957163 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-combined-ca-bundle\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.957808 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-config-data-custom\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:38:07.972625 master-0 kubenswrapper[38936]: I0216 21:38:07.970883 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-config-data\") pod \"cinder-9c692-volume-lvm-iscsi-0\" (UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:38:07.980589 master-0 kubenswrapper[38936]: I0216 21:38:07.980556 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9v2v\" (UniqueName: \"kubernetes.io/projected/ef9f0999-cb38-43ec-98e8-c0ec09b4351b-kube-api-access-t9v2v\") pod \"cinder-9c692-volume-lvm-iscsi-0\" 
(UID: \"ef9f0999-cb38-43ec-98e8-c0ec09b4351b\") " pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:38:08.003063 master-0 kubenswrapper[38936]: I0216 21:38:08.002995 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-9c692-scheduler-0"] Feb 16 21:38:08.017089 master-0 kubenswrapper[38936]: I0216 21:38:08.017044 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-9c692-scheduler-0"] Feb 16 21:38:08.030982 master-0 kubenswrapper[38936]: I0216 21:38:08.030835 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-9c692-scheduler-0"] Feb 16 21:38:08.034157 master-0 kubenswrapper[38936]: I0216 21:38:08.034132 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:38:08.036359 master-0 kubenswrapper[38936]: I0216 21:38:08.036323 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-9c692-scheduler-config-data" Feb 16 21:38:08.047490 master-0 kubenswrapper[38936]: I0216 21:38:08.047117 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9c692-backup-0" Feb 16 21:38:08.047490 master-0 kubenswrapper[38936]: I0216 21:38:08.047303 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9c692-scheduler-0"] Feb 16 21:38:08.061785 master-0 kubenswrapper[38936]: I0216 21:38:08.061721 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df8afaf9-42b4-4639-855e-f6cf99c985bf-scripts\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d" Feb 16 21:38:08.062157 master-0 kubenswrapper[38936]: I0216 21:38:08.062115 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df8afaf9-42b4-4639-855e-f6cf99c985bf-logs\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d" Feb 16 21:38:08.062279 master-0 kubenswrapper[38936]: I0216 21:38:08.062262 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdqwf\" (UniqueName: \"kubernetes.io/projected/df8afaf9-42b4-4639-855e-f6cf99c985bf-kube-api-access-fdqwf\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d" Feb 16 21:38:08.062502 master-0 kubenswrapper[38936]: I0216 21:38:08.062488 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8afaf9-42b4-4639-855e-f6cf99c985bf-combined-ca-bundle\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d" Feb 16 21:38:08.062773 master-0 kubenswrapper[38936]: I0216 21:38:08.062629 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: 
\"kubernetes.io/downward-api/df8afaf9-42b4-4639-855e-f6cf99c985bf-etc-podinfo\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d" Feb 16 21:38:08.062887 master-0 kubenswrapper[38936]: I0216 21:38:08.062873 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/df8afaf9-42b4-4639-855e-f6cf99c985bf-config-data-merged\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d" Feb 16 21:38:08.063048 master-0 kubenswrapper[38936]: I0216 21:38:08.063031 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df8afaf9-42b4-4639-855e-f6cf99c985bf-internal-tls-certs\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d" Feb 16 21:38:08.063225 master-0 kubenswrapper[38936]: I0216 21:38:08.063186 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df8afaf9-42b4-4639-855e-f6cf99c985bf-logs\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d" Feb 16 21:38:08.063364 master-0 kubenswrapper[38936]: I0216 21:38:08.063212 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df8afaf9-42b4-4639-855e-f6cf99c985bf-config-data-custom\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d" Feb 16 21:38:08.063485 master-0 kubenswrapper[38936]: I0216 21:38:08.063471 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/df8afaf9-42b4-4639-855e-f6cf99c985bf-public-tls-certs\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d" Feb 16 21:38:08.063626 master-0 kubenswrapper[38936]: I0216 21:38:08.063612 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df8afaf9-42b4-4639-855e-f6cf99c985bf-config-data\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d" Feb 16 21:38:08.063876 master-0 kubenswrapper[38936]: I0216 21:38:08.063853 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/df8afaf9-42b4-4639-855e-f6cf99c985bf-config-data-merged\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d" Feb 16 21:38:08.067490 master-0 kubenswrapper[38936]: I0216 21:38:08.067471 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df8afaf9-42b4-4639-855e-f6cf99c985bf-config-data\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d" Feb 16 21:38:08.069148 master-0 kubenswrapper[38936]: I0216 21:38:08.069095 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8afaf9-42b4-4639-855e-f6cf99c985bf-combined-ca-bundle\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d" Feb 16 21:38:08.069340 master-0 kubenswrapper[38936]: I0216 21:38:08.069308 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df8afaf9-42b4-4639-855e-f6cf99c985bf-scripts\") pod 
\"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d" Feb 16 21:38:08.070737 master-0 kubenswrapper[38936]: I0216 21:38:08.070721 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df8afaf9-42b4-4639-855e-f6cf99c985bf-internal-tls-certs\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d" Feb 16 21:38:08.071385 master-0 kubenswrapper[38936]: I0216 21:38:08.071344 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/df8afaf9-42b4-4639-855e-f6cf99c985bf-etc-podinfo\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d" Feb 16 21:38:08.071952 master-0 kubenswrapper[38936]: I0216 21:38:08.071927 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df8afaf9-42b4-4639-855e-f6cf99c985bf-config-data-custom\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d" Feb 16 21:38:08.072551 master-0 kubenswrapper[38936]: I0216 21:38:08.072534 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df8afaf9-42b4-4639-855e-f6cf99c985bf-public-tls-certs\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: \"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d" Feb 16 21:38:08.080422 master-0 kubenswrapper[38936]: I0216 21:38:08.080364 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdqwf\" (UniqueName: \"kubernetes.io/projected/df8afaf9-42b4-4639-855e-f6cf99c985bf-kube-api-access-fdqwf\") pod \"ironic-6d6dfb9f68-58l7d\" (UID: 
\"df8afaf9-42b4-4639-855e-f6cf99c985bf\") " pod="openstack/ironic-6d6dfb9f68-58l7d" Feb 16 21:38:08.100541 master-0 kubenswrapper[38936]: I0216 21:38:08.100484 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:38:08.166681 master-0 kubenswrapper[38936]: I0216 21:38:08.166578 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaf5b21d-171c-46b8-bbc1-26f1af47aa0a-config-data\") pod \"cinder-9c692-scheduler-0\" (UID: \"eaf5b21d-171c-46b8-bbc1-26f1af47aa0a\") " pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:38:08.167394 master-0 kubenswrapper[38936]: I0216 21:38:08.167357 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eaf5b21d-171c-46b8-bbc1-26f1af47aa0a-etc-machine-id\") pod \"cinder-9c692-scheduler-0\" (UID: \"eaf5b21d-171c-46b8-bbc1-26f1af47aa0a\") " pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:38:08.167749 master-0 kubenswrapper[38936]: I0216 21:38:08.167727 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaf5b21d-171c-46b8-bbc1-26f1af47aa0a-scripts\") pod \"cinder-9c692-scheduler-0\" (UID: \"eaf5b21d-171c-46b8-bbc1-26f1af47aa0a\") " pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:38:08.167967 master-0 kubenswrapper[38936]: I0216 21:38:08.167951 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf5b21d-171c-46b8-bbc1-26f1af47aa0a-combined-ca-bundle\") pod \"cinder-9c692-scheduler-0\" (UID: \"eaf5b21d-171c-46b8-bbc1-26f1af47aa0a\") " pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:38:08.168244 master-0 kubenswrapper[38936]: I0216 21:38:08.168215 
38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f59w\" (UniqueName: \"kubernetes.io/projected/eaf5b21d-171c-46b8-bbc1-26f1af47aa0a-kube-api-access-9f59w\") pod \"cinder-9c692-scheduler-0\" (UID: \"eaf5b21d-171c-46b8-bbc1-26f1af47aa0a\") " pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:38:08.169158 master-0 kubenswrapper[38936]: I0216 21:38:08.168894 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaf5b21d-171c-46b8-bbc1-26f1af47aa0a-config-data-custom\") pod \"cinder-9c692-scheduler-0\" (UID: \"eaf5b21d-171c-46b8-bbc1-26f1af47aa0a\") " pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:38:08.177575 master-0 kubenswrapper[38936]: I0216 21:38:08.177495 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-44512dbe-c790-4488-972f-62c15620e662\" (UniqueName: \"kubernetes.io/csi/topolvm.io^150e29e3-d4ae-4987-ad7e-f808e7829436\") pod \"ironic-conductor-0\" (UID: \"37c815ef-1c3d-4b2a-b748-de04b8c4412c\") " pod="openstack/ironic-conductor-0" Feb 16 21:38:08.242699 master-0 kubenswrapper[38936]: I0216 21:38:08.242611 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-6d6dfb9f68-58l7d" Feb 16 21:38:08.271712 master-0 kubenswrapper[38936]: I0216 21:38:08.271618 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eaf5b21d-171c-46b8-bbc1-26f1af47aa0a-etc-machine-id\") pod \"cinder-9c692-scheduler-0\" (UID: \"eaf5b21d-171c-46b8-bbc1-26f1af47aa0a\") " pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:38:08.271941 master-0 kubenswrapper[38936]: I0216 21:38:08.271722 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eaf5b21d-171c-46b8-bbc1-26f1af47aa0a-etc-machine-id\") pod \"cinder-9c692-scheduler-0\" (UID: \"eaf5b21d-171c-46b8-bbc1-26f1af47aa0a\") " pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:38:08.271941 master-0 kubenswrapper[38936]: I0216 21:38:08.271782 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaf5b21d-171c-46b8-bbc1-26f1af47aa0a-scripts\") pod \"cinder-9c692-scheduler-0\" (UID: \"eaf5b21d-171c-46b8-bbc1-26f1af47aa0a\") " pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:38:08.271941 master-0 kubenswrapper[38936]: I0216 21:38:08.271846 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf5b21d-171c-46b8-bbc1-26f1af47aa0a-combined-ca-bundle\") pod \"cinder-9c692-scheduler-0\" (UID: \"eaf5b21d-171c-46b8-bbc1-26f1af47aa0a\") " pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:38:08.271941 master-0 kubenswrapper[38936]: I0216 21:38:08.271928 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f59w\" (UniqueName: \"kubernetes.io/projected/eaf5b21d-171c-46b8-bbc1-26f1af47aa0a-kube-api-access-9f59w\") pod \"cinder-9c692-scheduler-0\" (UID: \"eaf5b21d-171c-46b8-bbc1-26f1af47aa0a\") " 
pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:38:08.272084 master-0 kubenswrapper[38936]: I0216 21:38:08.271992 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaf5b21d-171c-46b8-bbc1-26f1af47aa0a-config-data-custom\") pod \"cinder-9c692-scheduler-0\" (UID: \"eaf5b21d-171c-46b8-bbc1-26f1af47aa0a\") " pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:38:08.272339 master-0 kubenswrapper[38936]: I0216 21:38:08.272192 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaf5b21d-171c-46b8-bbc1-26f1af47aa0a-config-data\") pod \"cinder-9c692-scheduler-0\" (UID: \"eaf5b21d-171c-46b8-bbc1-26f1af47aa0a\") " pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:38:08.276501 master-0 kubenswrapper[38936]: I0216 21:38:08.276457 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf5b21d-171c-46b8-bbc1-26f1af47aa0a-combined-ca-bundle\") pod \"cinder-9c692-scheduler-0\" (UID: \"eaf5b21d-171c-46b8-bbc1-26f1af47aa0a\") " pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:38:08.277199 master-0 kubenswrapper[38936]: I0216 21:38:08.277159 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaf5b21d-171c-46b8-bbc1-26f1af47aa0a-scripts\") pod \"cinder-9c692-scheduler-0\" (UID: \"eaf5b21d-171c-46b8-bbc1-26f1af47aa0a\") " pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:38:08.277922 master-0 kubenswrapper[38936]: I0216 21:38:08.277757 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaf5b21d-171c-46b8-bbc1-26f1af47aa0a-config-data\") pod \"cinder-9c692-scheduler-0\" (UID: \"eaf5b21d-171c-46b8-bbc1-26f1af47aa0a\") " pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:38:08.277922 master-0 
kubenswrapper[38936]: I0216 21:38:08.277861 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaf5b21d-171c-46b8-bbc1-26f1af47aa0a-config-data-custom\") pod \"cinder-9c692-scheduler-0\" (UID: \"eaf5b21d-171c-46b8-bbc1-26f1af47aa0a\") " pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:38:08.299221 master-0 kubenswrapper[38936]: I0216 21:38:08.299093 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f59w\" (UniqueName: \"kubernetes.io/projected/eaf5b21d-171c-46b8-bbc1-26f1af47aa0a-kube-api-access-9f59w\") pod \"cinder-9c692-scheduler-0\" (UID: \"eaf5b21d-171c-46b8-bbc1-26f1af47aa0a\") " pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:38:08.299799 master-0 kubenswrapper[38936]: I0216 21:38:08.299745 38936 generic.go:334] "Generic (PLEG): container finished" podID="3182998b-e6c3-4733-a374-23e11d68c55a" containerID="807885708c9d21aa88b6175c7663291a4b386500b4d34f938b664a8823312a2f" exitCode=0 Feb 16 21:38:08.301211 master-0 kubenswrapper[38936]: I0216 21:38:08.301008 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-596cdf67df-snjb9" event={"ID":"3182998b-e6c3-4733-a374-23e11d68c55a","Type":"ContainerDied","Data":"807885708c9d21aa88b6175c7663291a4b386500b4d34f938b664a8823312a2f"} Feb 16 21:38:08.306036 master-0 kubenswrapper[38936]: I0216 21:38:08.305986 38936 generic.go:334] "Generic (PLEG): container finished" podID="f3fc7857-f230-4a40-8fb6-9b01dd29c502" containerID="1d34449ffd2482532e52f3621f11fbd435dae6703fb7224f529fcf752ed7e7bb" exitCode=0 Feb 16 21:38:08.306117 master-0 kubenswrapper[38936]: I0216 21:38:08.306052 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-1991-account-create-update-vb2d9" event={"ID":"f3fc7857-f230-4a40-8fb6-9b01dd29c502","Type":"ContainerDied","Data":"1d34449ffd2482532e52f3621f11fbd435dae6703fb7224f529fcf752ed7e7bb"} Feb 16 21:38:08.375383 
master-0 kubenswrapper[38936]: I0216 21:38:08.375306 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0" Feb 16 21:38:08.376938 master-0 kubenswrapper[38936]: I0216 21:38:08.376903 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:38:08.933021 master-0 kubenswrapper[38936]: I0216 21:38:08.932951 38936 scope.go:117] "RemoveContainer" containerID="de58beeba240ff51ab416be9d35eb6bbffc3258b096e90d2d4f9ebb61a7b8240" Feb 16 21:38:09.009220 master-0 kubenswrapper[38936]: I0216 21:38:09.009151 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-q98pv" Feb 16 21:38:09.124610 master-0 kubenswrapper[38936]: I0216 21:38:09.124068 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plr62\" (UniqueName: \"kubernetes.io/projected/53ca02e3-b979-4ed3-82e5-ce0850aa85f3-kube-api-access-plr62\") pod \"53ca02e3-b979-4ed3-82e5-ce0850aa85f3\" (UID: \"53ca02e3-b979-4ed3-82e5-ce0850aa85f3\") " Feb 16 21:38:09.124610 master-0 kubenswrapper[38936]: I0216 21:38:09.124325 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53ca02e3-b979-4ed3-82e5-ce0850aa85f3-operator-scripts\") pod \"53ca02e3-b979-4ed3-82e5-ce0850aa85f3\" (UID: \"53ca02e3-b979-4ed3-82e5-ce0850aa85f3\") " Feb 16 21:38:09.130391 master-0 kubenswrapper[38936]: I0216 21:38:09.130289 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53ca02e3-b979-4ed3-82e5-ce0850aa85f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "53ca02e3-b979-4ed3-82e5-ce0850aa85f3" (UID: "53ca02e3-b979-4ed3-82e5-ce0850aa85f3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:38:09.136712 master-0 kubenswrapper[38936]: I0216 21:38:09.134974 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53ca02e3-b979-4ed3-82e5-ce0850aa85f3-kube-api-access-plr62" (OuterVolumeSpecName: "kube-api-access-plr62") pod "53ca02e3-b979-4ed3-82e5-ce0850aa85f3" (UID: "53ca02e3-b979-4ed3-82e5-ce0850aa85f3"). InnerVolumeSpecName "kube-api-access-plr62". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:38:09.149724 master-0 kubenswrapper[38936]: I0216 21:38:09.149591 38936 scope.go:117] "RemoveContainer" containerID="5a346f4089f59d2f8cd1e264c949e7170e19b53d64da5e4cb9f5bb60bd2ba184"
Feb 16 21:38:09.182422 master-0 kubenswrapper[38936]: I0216 21:38:09.182309 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plr62\" (UniqueName: \"kubernetes.io/projected/53ca02e3-b979-4ed3-82e5-ce0850aa85f3-kube-api-access-plr62\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:09.182422 master-0 kubenswrapper[38936]: I0216 21:38:09.182398 38936 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53ca02e3-b979-4ed3-82e5-ce0850aa85f3-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:09.291437 master-0 kubenswrapper[38936]: I0216 21:38:09.290850 38936 scope.go:117] "RemoveContainer" containerID="83e301f2617e8ccaa66a653151eeea709b4a60eac21c68ec1ab0323ee8fb54b7"
Feb 16 21:38:09.396928 master-0 kubenswrapper[38936]: I0216 21:38:09.396556 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-q98pv" event={"ID":"53ca02e3-b979-4ed3-82e5-ce0850aa85f3","Type":"ContainerDied","Data":"cfa3bc03cae78d0ffe7bb0f75032bc00bb3d74a44b15dc62c5c9cf40b132ff45"}
Feb 16 21:38:09.396928 master-0 kubenswrapper[38936]: I0216 21:38:09.396630 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfa3bc03cae78d0ffe7bb0f75032bc00bb3d74a44b15dc62c5c9cf40b132ff45"
Feb 16 21:38:09.402725 master-0 kubenswrapper[38936]: I0216 21:38:09.402089 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-q98pv"
Feb 16 21:38:09.908572 master-0 kubenswrapper[38936]: I0216 21:38:09.908408 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85d31c9-7303-4e30-ba85-3362b5828482" path="/var/lib/kubelet/pods/f85d31c9-7303-4e30-ba85-3362b5828482/volumes"
Feb 16 21:38:09.920211 master-0 kubenswrapper[38936]: I0216 21:38:09.920157 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7768cbd466-2k4r9"
Feb 16 21:38:09.927938 master-0 kubenswrapper[38936]: I0216 21:38:09.927873 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7768cbd466-2k4r9"
Feb 16 21:38:10.267817 master-0 kubenswrapper[38936]: I0216 21:38:10.267599 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-6d6dfb9f68-58l7d"]
Feb 16 21:38:10.299612 master-0 kubenswrapper[38936]: I0216 21:38:10.298879 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9c692-volume-lvm-iscsi-0"]
Feb 16 21:38:10.447731 master-0 kubenswrapper[38936]: I0216 21:38:10.447638 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5675994476-8qnnd"]
Feb 16 21:38:10.448323 master-0 kubenswrapper[38936]: E0216 21:38:10.448292 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ca02e3-b979-4ed3-82e5-ce0850aa85f3" containerName="mariadb-database-create"
Feb 16 21:38:10.448323 master-0 kubenswrapper[38936]: I0216 21:38:10.448317 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ca02e3-b979-4ed3-82e5-ce0850aa85f3" containerName="mariadb-database-create"
Feb 16 21:38:10.452673 master-0 kubenswrapper[38936]: I0216 21:38:10.448707 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ca02e3-b979-4ed3-82e5-ce0850aa85f3" containerName="mariadb-database-create"
Feb 16 21:38:10.452673 master-0 kubenswrapper[38936]: I0216 21:38:10.450212 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5675994476-8qnnd"
Feb 16 21:38:10.470103 master-0 kubenswrapper[38936]: I0216 21:38:10.469710 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-596cdf67df-snjb9" event={"ID":"3182998b-e6c3-4733-a374-23e11d68c55a","Type":"ContainerStarted","Data":"7170903cc1ded40ad20c722094c49391b2588dcbb4e36a259a46a5ad4dd802de"}
Feb 16 21:38:10.471073 master-0 kubenswrapper[38936]: I0216 21:38:10.471029 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-596cdf67df-snjb9"
Feb 16 21:38:10.476714 master-0 kubenswrapper[38936]: I0216 21:38:10.476218 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-1991-account-create-update-vb2d9" event={"ID":"f3fc7857-f230-4a40-8fb6-9b01dd29c502","Type":"ContainerDied","Data":"f5a7949b8b28c9fbb56bff868799d507d2d88e4ebe356c9d4ab71b43373faf0c"}
Feb 16 21:38:10.476714 master-0 kubenswrapper[38936]: I0216 21:38:10.476266 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5a7949b8b28c9fbb56bff868799d507d2d88e4ebe356c9d4ab71b43373faf0c"
Feb 16 21:38:10.483272 master-0 kubenswrapper[38936]: I0216 21:38:10.482470 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5675994476-8qnnd"]
Feb 16 21:38:10.483483 master-0 kubenswrapper[38936]: I0216 21:38:10.483333 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-volume-lvm-iscsi-0" event={"ID":"ef9f0999-cb38-43ec-98e8-c0ec09b4351b","Type":"ContainerStarted","Data":"63674eb9e38a14c1c2dfb22b87c970dea00745cfdc393e1bd486f4d656bbac57"}
Feb 16 21:38:10.499579 master-0 kubenswrapper[38936]: I0216 21:38:10.488846 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6d6dfb9f68-58l7d" event={"ID":"df8afaf9-42b4-4639-855e-f6cf99c985bf","Type":"ContainerStarted","Data":"b11851abff6e64dc5cc7ad86ff4c9b314f186789b167e0347a9e87a67de1d960"}
Feb 16 21:38:10.499579 master-0 kubenswrapper[38936]: I0216 21:38:10.491612 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-57f476567b-fwqws" event={"ID":"cfcdcd18-dd01-45c8-afd4-ec72a986d582","Type":"ContainerStarted","Data":"a4c91cb0a4d6848ff3de0abee9bdc57799d53d94f3e0f1ce1a072b2ecc0d134e"}
Feb 16 21:38:10.499579 master-0 kubenswrapper[38936]: I0216 21:38:10.493018 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-57f476567b-fwqws"
Feb 16 21:38:10.503513 master-0 kubenswrapper[38936]: I0216 21:38:10.501509 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-85df85647b-4lmvj" event={"ID":"28720828-7566-4fb7-a4ff-ac6e548d9408","Type":"ContainerStarted","Data":"eb29028f3c54d2a0e8cd40193476c5d4ad54902304945a2e128e9ce200884da7"}
Feb 16 21:38:10.534712 master-0 kubenswrapper[38936]: I0216 21:38:10.534579 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/beb6c329-ea6c-483c-9024-45d5e74f1a8b-logs\") pod \"placement-5675994476-8qnnd\" (UID: \"beb6c329-ea6c-483c-9024-45d5e74f1a8b\") " pod="openstack/placement-5675994476-8qnnd"
Feb 16 21:38:10.534969 master-0 kubenswrapper[38936]: I0216 21:38:10.534774 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcxmm\" (UniqueName: \"kubernetes.io/projected/beb6c329-ea6c-483c-9024-45d5e74f1a8b-kube-api-access-wcxmm\") pod \"placement-5675994476-8qnnd\" (UID: \"beb6c329-ea6c-483c-9024-45d5e74f1a8b\") " pod="openstack/placement-5675994476-8qnnd"
Feb 16 21:38:10.534969 master-0 kubenswrapper[38936]: I0216 21:38:10.534889 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/beb6c329-ea6c-483c-9024-45d5e74f1a8b-internal-tls-certs\") pod \"placement-5675994476-8qnnd\" (UID: \"beb6c329-ea6c-483c-9024-45d5e74f1a8b\") " pod="openstack/placement-5675994476-8qnnd"
Feb 16 21:38:10.534969 master-0 kubenswrapper[38936]: I0216 21:38:10.534956 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beb6c329-ea6c-483c-9024-45d5e74f1a8b-config-data\") pod \"placement-5675994476-8qnnd\" (UID: \"beb6c329-ea6c-483c-9024-45d5e74f1a8b\") " pod="openstack/placement-5675994476-8qnnd"
Feb 16 21:38:10.535150 master-0 kubenswrapper[38936]: I0216 21:38:10.535098 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beb6c329-ea6c-483c-9024-45d5e74f1a8b-combined-ca-bundle\") pod \"placement-5675994476-8qnnd\" (UID: \"beb6c329-ea6c-483c-9024-45d5e74f1a8b\") " pod="openstack/placement-5675994476-8qnnd"
Feb 16 21:38:10.535377 master-0 kubenswrapper[38936]: I0216 21:38:10.535355 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beb6c329-ea6c-483c-9024-45d5e74f1a8b-scripts\") pod \"placement-5675994476-8qnnd\" (UID: \"beb6c329-ea6c-483c-9024-45d5e74f1a8b\") " pod="openstack/placement-5675994476-8qnnd"
Feb 16 21:38:10.535893 master-0 kubenswrapper[38936]: I0216 21:38:10.535867 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/beb6c329-ea6c-483c-9024-45d5e74f1a8b-public-tls-certs\") pod \"placement-5675994476-8qnnd\" (UID: \"beb6c329-ea6c-483c-9024-45d5e74f1a8b\") " pod="openstack/placement-5675994476-8qnnd"
Feb 16 21:38:10.608097 master-0 kubenswrapper[38936]: I0216 21:38:10.607836 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-596cdf67df-snjb9" podStartSLOduration=7.607804623 podStartE2EDuration="7.607804623s" podCreationTimestamp="2026-02-16 21:38:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:38:10.530631718 +0000 UTC m=+920.882635090" watchObservedRunningTime="2026-02-16 21:38:10.607804623 +0000 UTC m=+920.959807985"
Feb 16 21:38:10.635132 master-0 kubenswrapper[38936]: I0216 21:38:10.634798 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-neutron-agent-57f476567b-fwqws" podStartSLOduration=4.115280701 podStartE2EDuration="7.634764351s" podCreationTimestamp="2026-02-16 21:38:03 +0000 UTC" firstStartedPulling="2026-02-16 21:38:05.475055685 +0000 UTC m=+915.827059047" lastFinishedPulling="2026-02-16 21:38:08.994539345 +0000 UTC m=+919.346542697" observedRunningTime="2026-02-16 21:38:10.569372074 +0000 UTC m=+920.921375436" watchObservedRunningTime="2026-02-16 21:38:10.634764351 +0000 UTC m=+920.986767713"
Feb 16 21:38:10.652971 master-0 kubenswrapper[38936]: I0216 21:38:10.652705 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beb6c329-ea6c-483c-9024-45d5e74f1a8b-scripts\") pod \"placement-5675994476-8qnnd\" (UID: \"beb6c329-ea6c-483c-9024-45d5e74f1a8b\") " pod="openstack/placement-5675994476-8qnnd"
Feb 16 21:38:10.653223 master-0 kubenswrapper[38936]: I0216 21:38:10.653118 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/beb6c329-ea6c-483c-9024-45d5e74f1a8b-public-tls-certs\") pod \"placement-5675994476-8qnnd\" (UID: \"beb6c329-ea6c-483c-9024-45d5e74f1a8b\") " pod="openstack/placement-5675994476-8qnnd"
Feb 16 21:38:10.653223 master-0 kubenswrapper[38936]: I0216 21:38:10.653167 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/beb6c329-ea6c-483c-9024-45d5e74f1a8b-logs\") pod \"placement-5675994476-8qnnd\" (UID: \"beb6c329-ea6c-483c-9024-45d5e74f1a8b\") " pod="openstack/placement-5675994476-8qnnd"
Feb 16 21:38:10.653310 master-0 kubenswrapper[38936]: I0216 21:38:10.653223 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcxmm\" (UniqueName: \"kubernetes.io/projected/beb6c329-ea6c-483c-9024-45d5e74f1a8b-kube-api-access-wcxmm\") pod \"placement-5675994476-8qnnd\" (UID: \"beb6c329-ea6c-483c-9024-45d5e74f1a8b\") " pod="openstack/placement-5675994476-8qnnd"
Feb 16 21:38:10.653353 master-0 kubenswrapper[38936]: I0216 21:38:10.653306 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/beb6c329-ea6c-483c-9024-45d5e74f1a8b-internal-tls-certs\") pod \"placement-5675994476-8qnnd\" (UID: \"beb6c329-ea6c-483c-9024-45d5e74f1a8b\") " pod="openstack/placement-5675994476-8qnnd"
Feb 16 21:38:10.653353 master-0 kubenswrapper[38936]: I0216 21:38:10.653341 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beb6c329-ea6c-483c-9024-45d5e74f1a8b-config-data\") pod \"placement-5675994476-8qnnd\" (UID: \"beb6c329-ea6c-483c-9024-45d5e74f1a8b\") " pod="openstack/placement-5675994476-8qnnd"
Feb 16 21:38:10.653436 master-0 kubenswrapper[38936]: I0216 21:38:10.653412 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beb6c329-ea6c-483c-9024-45d5e74f1a8b-combined-ca-bundle\") pod \"placement-5675994476-8qnnd\" (UID: \"beb6c329-ea6c-483c-9024-45d5e74f1a8b\") " pod="openstack/placement-5675994476-8qnnd"
Feb 16 21:38:10.659674 master-0 kubenswrapper[38936]: I0216 21:38:10.657967 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/beb6c329-ea6c-483c-9024-45d5e74f1a8b-internal-tls-certs\") pod \"placement-5675994476-8qnnd\" (UID: \"beb6c329-ea6c-483c-9024-45d5e74f1a8b\") " pod="openstack/placement-5675994476-8qnnd"
Feb 16 21:38:10.659674 master-0 kubenswrapper[38936]: I0216 21:38:10.657997 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/beb6c329-ea6c-483c-9024-45d5e74f1a8b-logs\") pod \"placement-5675994476-8qnnd\" (UID: \"beb6c329-ea6c-483c-9024-45d5e74f1a8b\") " pod="openstack/placement-5675994476-8qnnd"
Feb 16 21:38:10.659674 master-0 kubenswrapper[38936]: I0216 21:38:10.658493 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/beb6c329-ea6c-483c-9024-45d5e74f1a8b-scripts\") pod \"placement-5675994476-8qnnd\" (UID: \"beb6c329-ea6c-483c-9024-45d5e74f1a8b\") " pod="openstack/placement-5675994476-8qnnd"
Feb 16 21:38:10.661155 master-0 kubenswrapper[38936]: I0216 21:38:10.661063 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/beb6c329-ea6c-483c-9024-45d5e74f1a8b-combined-ca-bundle\") pod \"placement-5675994476-8qnnd\" (UID: \"beb6c329-ea6c-483c-9024-45d5e74f1a8b\") " pod="openstack/placement-5675994476-8qnnd"
Feb 16 21:38:10.675704 master-0 kubenswrapper[38936]: I0216 21:38:10.671791 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/beb6c329-ea6c-483c-9024-45d5e74f1a8b-config-data\") pod \"placement-5675994476-8qnnd\" (UID: \"beb6c329-ea6c-483c-9024-45d5e74f1a8b\") " pod="openstack/placement-5675994476-8qnnd"
Feb 16 21:38:10.675704 master-0 kubenswrapper[38936]: I0216 21:38:10.673282 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9c692-scheduler-0"]
Feb 16 21:38:10.675704 master-0 kubenswrapper[38936]: I0216 21:38:10.675469 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcxmm\" (UniqueName: \"kubernetes.io/projected/beb6c329-ea6c-483c-9024-45d5e74f1a8b-kube-api-access-wcxmm\") pod \"placement-5675994476-8qnnd\" (UID: \"beb6c329-ea6c-483c-9024-45d5e74f1a8b\") " pod="openstack/placement-5675994476-8qnnd"
Feb 16 21:38:10.681861 master-0 kubenswrapper[38936]: I0216 21:38:10.679158 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/beb6c329-ea6c-483c-9024-45d5e74f1a8b-public-tls-certs\") pod \"placement-5675994476-8qnnd\" (UID: \"beb6c329-ea6c-483c-9024-45d5e74f1a8b\") " pod="openstack/placement-5675994476-8qnnd"
Feb 16 21:38:10.747155 master-0 kubenswrapper[38936]: W0216 21:38:10.747071 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaf5b21d_171c_46b8_bbc1_26f1af47aa0a.slice/crio-8426e5eaa226ffacc06320b1ad77108b4a64e54b82a3c88ea936667cae5e9ac2 WatchSource:0}: Error finding container 8426e5eaa226ffacc06320b1ad77108b4a64e54b82a3c88ea936667cae5e9ac2: Status 404 returned error can't find the container with id 8426e5eaa226ffacc06320b1ad77108b4a64e54b82a3c88ea936667cae5e9ac2
Feb 16 21:38:10.754464 master-0 kubenswrapper[38936]: I0216 21:38:10.754354 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-1991-account-create-update-vb2d9"
Feb 16 21:38:10.764601 master-0 kubenswrapper[38936]: I0216 21:38:10.764517 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"]
Feb 16 21:38:10.764836 master-0 kubenswrapper[38936]: W0216 21:38:10.764793 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37c815ef_1c3d_4b2a_b748_de04b8c4412c.slice/crio-b2debf5e5844870aceb4e14a89c8be8377e9a86f3e7d062b283385f5d6b09ee3 WatchSource:0}: Error finding container b2debf5e5844870aceb4e14a89c8be8377e9a86f3e7d062b283385f5d6b09ee3: Status 404 returned error can't find the container with id b2debf5e5844870aceb4e14a89c8be8377e9a86f3e7d062b283385f5d6b09ee3
Feb 16 21:38:10.858963 master-0 kubenswrapper[38936]: I0216 21:38:10.858895 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3fc7857-f230-4a40-8fb6-9b01dd29c502-operator-scripts\") pod \"f3fc7857-f230-4a40-8fb6-9b01dd29c502\" (UID: \"f3fc7857-f230-4a40-8fb6-9b01dd29c502\") "
Feb 16 21:38:10.859207 master-0 kubenswrapper[38936]: I0216 21:38:10.859066 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkgd2\" (UniqueName: \"kubernetes.io/projected/f3fc7857-f230-4a40-8fb6-9b01dd29c502-kube-api-access-qkgd2\") pod \"f3fc7857-f230-4a40-8fb6-9b01dd29c502\" (UID: \"f3fc7857-f230-4a40-8fb6-9b01dd29c502\") "
Feb 16 21:38:10.859587 master-0 kubenswrapper[38936]: I0216 21:38:10.859530 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3fc7857-f230-4a40-8fb6-9b01dd29c502-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f3fc7857-f230-4a40-8fb6-9b01dd29c502" (UID: "f3fc7857-f230-4a40-8fb6-9b01dd29c502"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:38:10.863378 master-0 kubenswrapper[38936]: I0216 21:38:10.863328 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3fc7857-f230-4a40-8fb6-9b01dd29c502-kube-api-access-qkgd2" (OuterVolumeSpecName: "kube-api-access-qkgd2") pod "f3fc7857-f230-4a40-8fb6-9b01dd29c502" (UID: "f3fc7857-f230-4a40-8fb6-9b01dd29c502"). InnerVolumeSpecName "kube-api-access-qkgd2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:38:10.962842 master-0 kubenswrapper[38936]: I0216 21:38:10.962784 38936 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3fc7857-f230-4a40-8fb6-9b01dd29c502-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:10.962842 master-0 kubenswrapper[38936]: I0216 21:38:10.962829 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkgd2\" (UniqueName: \"kubernetes.io/projected/f3fc7857-f230-4a40-8fb6-9b01dd29c502-kube-api-access-qkgd2\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:11.008508 master-0 kubenswrapper[38936]: I0216 21:38:11.008337 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-95b8b778-clhph"
Feb 16 21:38:11.053893 master-0 kubenswrapper[38936]: I0216 21:38:11.053400 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5675994476-8qnnd"
Feb 16 21:38:11.540140 master-0 kubenswrapper[38936]: I0216 21:38:11.540020 38936 generic.go:334] "Generic (PLEG): container finished" podID="28720828-7566-4fb7-a4ff-ac6e548d9408" containerID="eb29028f3c54d2a0e8cd40193476c5d4ad54902304945a2e128e9ce200884da7" exitCode=0
Feb 16 21:38:11.540875 master-0 kubenswrapper[38936]: I0216 21:38:11.540149 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-85df85647b-4lmvj" event={"ID":"28720828-7566-4fb7-a4ff-ac6e548d9408","Type":"ContainerDied","Data":"eb29028f3c54d2a0e8cd40193476c5d4ad54902304945a2e128e9ce200884da7"}
Feb 16 21:38:11.543479 master-0 kubenswrapper[38936]: I0216 21:38:11.543411 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"37c815ef-1c3d-4b2a-b748-de04b8c4412c","Type":"ContainerStarted","Data":"b2debf5e5844870aceb4e14a89c8be8377e9a86f3e7d062b283385f5d6b09ee3"}
Feb 16 21:38:11.549441 master-0 kubenswrapper[38936]: I0216 21:38:11.549380 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-volume-lvm-iscsi-0" event={"ID":"ef9f0999-cb38-43ec-98e8-c0ec09b4351b","Type":"ContainerStarted","Data":"59726fbde0dbe468efc3fefb69586397235d13f78eafaefcb002ec743d586a50"}
Feb 16 21:38:11.555148 master-0 kubenswrapper[38936]: I0216 21:38:11.555064 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-scheduler-0" event={"ID":"eaf5b21d-171c-46b8-bbc1-26f1af47aa0a","Type":"ContainerStarted","Data":"8426e5eaa226ffacc06320b1ad77108b4a64e54b82a3c88ea936667cae5e9ac2"}
Feb 16 21:38:11.555280 master-0 kubenswrapper[38936]: I0216 21:38:11.555167 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-1991-account-create-update-vb2d9"
Feb 16 21:38:11.674268 master-0 kubenswrapper[38936]: I0216 21:38:11.674207 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5675994476-8qnnd"]
Feb 16 21:38:11.680630 master-0 kubenswrapper[38936]: W0216 21:38:11.680582 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbeb6c329_ea6c_483c_9024_45d5e74f1a8b.slice/crio-21f41765abdde52a688581ec83b90a98d655e6dc30ede7db17640dd54089cca7 WatchSource:0}: Error finding container 21f41765abdde52a688581ec83b90a98d655e6dc30ede7db17640dd54089cca7: Status 404 returned error can't find the container with id 21f41765abdde52a688581ec83b90a98d655e6dc30ede7db17640dd54089cca7
Feb 16 21:38:11.785849 master-0 kubenswrapper[38936]: I0216 21:38:11.785754 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9c692-backup-0"]
Feb 16 21:38:11.964674 master-0 kubenswrapper[38936]: I0216 21:38:11.963078 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 16 21:38:11.964674 master-0 kubenswrapper[38936]: E0216 21:38:11.963671 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3fc7857-f230-4a40-8fb6-9b01dd29c502" containerName="mariadb-account-create-update"
Feb 16 21:38:11.964674 master-0 kubenswrapper[38936]: I0216 21:38:11.963685 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3fc7857-f230-4a40-8fb6-9b01dd29c502" containerName="mariadb-account-create-update"
Feb 16 21:38:11.964674 master-0 kubenswrapper[38936]: I0216 21:38:11.963984 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3fc7857-f230-4a40-8fb6-9b01dd29c502" containerName="mariadb-account-create-update"
Feb 16 21:38:11.965069 master-0 kubenswrapper[38936]: I0216 21:38:11.964824 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 16 21:38:11.973677 master-0 kubenswrapper[38936]: I0216 21:38:11.967497 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Feb 16 21:38:11.973677 master-0 kubenswrapper[38936]: I0216 21:38:11.968093 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Feb 16 21:38:12.005712 master-0 kubenswrapper[38936]: I0216 21:38:11.990925 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 16 21:38:12.039261 master-0 kubenswrapper[38936]: I0216 21:38:12.039197 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"990aeedc-b2eb-4a75-b5bc-c76f0d18429c\") " pod="openstack/openstackclient"
Feb 16 21:38:12.039394 master-0 kubenswrapper[38936]: I0216 21:38:12.039374 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-openstack-config\") pod \"openstackclient\" (UID: \"990aeedc-b2eb-4a75-b5bc-c76f0d18429c\") " pod="openstack/openstackclient"
Feb 16 21:38:12.039454 master-0 kubenswrapper[38936]: I0216 21:38:12.039434 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-openstack-config-secret\") pod \"openstackclient\" (UID: \"990aeedc-b2eb-4a75-b5bc-c76f0d18429c\") " pod="openstack/openstackclient"
Feb 16 21:38:12.039676 master-0 kubenswrapper[38936]: I0216 21:38:12.039627 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2zhd\" (UniqueName: \"kubernetes.io/projected/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-kube-api-access-l2zhd\") pod \"openstackclient\" (UID: \"990aeedc-b2eb-4a75-b5bc-c76f0d18429c\") " pod="openstack/openstackclient"
Feb 16 21:38:12.145070 master-0 kubenswrapper[38936]: I0216 21:38:12.144606 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2zhd\" (UniqueName: \"kubernetes.io/projected/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-kube-api-access-l2zhd\") pod \"openstackclient\" (UID: \"990aeedc-b2eb-4a75-b5bc-c76f0d18429c\") " pod="openstack/openstackclient"
Feb 16 21:38:12.145070 master-0 kubenswrapper[38936]: I0216 21:38:12.144755 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"990aeedc-b2eb-4a75-b5bc-c76f0d18429c\") " pod="openstack/openstackclient"
Feb 16 21:38:12.145070 master-0 kubenswrapper[38936]: I0216 21:38:12.144826 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-openstack-config\") pod \"openstackclient\" (UID: \"990aeedc-b2eb-4a75-b5bc-c76f0d18429c\") " pod="openstack/openstackclient"
Feb 16 21:38:12.145070 master-0 kubenswrapper[38936]: I0216 21:38:12.144887 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-openstack-config-secret\") pod \"openstackclient\" (UID: \"990aeedc-b2eb-4a75-b5bc-c76f0d18429c\") " pod="openstack/openstackclient"
Feb 16 21:38:12.147104 master-0 kubenswrapper[38936]: I0216 21:38:12.146864 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-openstack-config\") pod \"openstackclient\" (UID: \"990aeedc-b2eb-4a75-b5bc-c76f0d18429c\") " pod="openstack/openstackclient"
Feb 16 21:38:12.155877 master-0 kubenswrapper[38936]: I0216 21:38:12.151086 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-openstack-config-secret\") pod \"openstackclient\" (UID: \"990aeedc-b2eb-4a75-b5bc-c76f0d18429c\") " pod="openstack/openstackclient"
Feb 16 21:38:12.155877 master-0 kubenswrapper[38936]: I0216 21:38:12.152068 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"990aeedc-b2eb-4a75-b5bc-c76f0d18429c\") " pod="openstack/openstackclient"
Feb 16 21:38:12.162959 master-0 kubenswrapper[38936]: I0216 21:38:12.162876 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Feb 16 21:38:12.164666 master-0 kubenswrapper[38936]: E0216 21:38:12.164334 38936 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-l2zhd], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="990aeedc-b2eb-4a75-b5bc-c76f0d18429c"
Feb 16 21:38:12.179204 master-0 kubenswrapper[38936]: I0216 21:38:12.176466 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Feb 16 21:38:12.190047 master-0 kubenswrapper[38936]: E0216 21:38:12.181963 38936 projected.go:194] Error preparing data for projected volume kube-api-access-l2zhd for pod openstack/openstackclient: failed to fetch token: pods "openstackclient" not found
Feb 16 21:38:12.190047 master-0 kubenswrapper[38936]: E0216 21:38:12.182080 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-kube-api-access-l2zhd podName:990aeedc-b2eb-4a75-b5bc-c76f0d18429c nodeName:}" failed. No retries permitted until 2026-02-16 21:38:12.682056157 +0000 UTC m=+923.034059519 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l2zhd" (UniqueName: "kubernetes.io/projected/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-kube-api-access-l2zhd") pod "openstackclient" (UID: "990aeedc-b2eb-4a75-b5bc-c76f0d18429c") : failed to fetch token: pods "openstackclient" not found
Feb 16 21:38:12.264800 master-0 kubenswrapper[38936]: I0216 21:38:12.264747 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 16 21:38:12.266953 master-0 kubenswrapper[38936]: I0216 21:38:12.266909 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 16 21:38:12.281927 master-0 kubenswrapper[38936]: I0216 21:38:12.281855 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 16 21:38:12.353632 master-0 kubenswrapper[38936]: I0216 21:38:12.353570 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glpm9\" (UniqueName: \"kubernetes.io/projected/57c3a511-13e6-460a-9912-7b5ec3ca97fd-kube-api-access-glpm9\") pod \"openstackclient\" (UID: \"57c3a511-13e6-460a-9912-7b5ec3ca97fd\") " pod="openstack/openstackclient"
Feb 16 21:38:12.353882 master-0 kubenswrapper[38936]: I0216 21:38:12.353741 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/57c3a511-13e6-460a-9912-7b5ec3ca97fd-openstack-config\") pod \"openstackclient\" (UID: \"57c3a511-13e6-460a-9912-7b5ec3ca97fd\") " pod="openstack/openstackclient"
Feb 16 21:38:12.353882 master-0 kubenswrapper[38936]: I0216 21:38:12.353848 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c3a511-13e6-460a-9912-7b5ec3ca97fd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"57c3a511-13e6-460a-9912-7b5ec3ca97fd\") " pod="openstack/openstackclient"
Feb 16 21:38:12.353972 master-0 kubenswrapper[38936]: I0216 21:38:12.353913 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/57c3a511-13e6-460a-9912-7b5ec3ca97fd-openstack-config-secret\") pod \"openstackclient\" (UID: \"57c3a511-13e6-460a-9912-7b5ec3ca97fd\") " pod="openstack/openstackclient"
Feb 16 21:38:12.456255 master-0 kubenswrapper[38936]: I0216 21:38:12.456204 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c3a511-13e6-460a-9912-7b5ec3ca97fd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"57c3a511-13e6-460a-9912-7b5ec3ca97fd\") " pod="openstack/openstackclient"
Feb 16 21:38:12.456367 master-0 kubenswrapper[38936]: I0216 21:38:12.456317 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/57c3a511-13e6-460a-9912-7b5ec3ca97fd-openstack-config-secret\") pod \"openstackclient\" (UID: \"57c3a511-13e6-460a-9912-7b5ec3ca97fd\") " pod="openstack/openstackclient"
Feb 16 21:38:12.456438 master-0 kubenswrapper[38936]: I0216 21:38:12.456405 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glpm9\" (UniqueName: \"kubernetes.io/projected/57c3a511-13e6-460a-9912-7b5ec3ca97fd-kube-api-access-glpm9\") pod \"openstackclient\" (UID: \"57c3a511-13e6-460a-9912-7b5ec3ca97fd\") " pod="openstack/openstackclient"
Feb 16 21:38:12.457401 master-0 kubenswrapper[38936]: I0216 21:38:12.456500 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/57c3a511-13e6-460a-9912-7b5ec3ca97fd-openstack-config\") pod \"openstackclient\" (UID: \"57c3a511-13e6-460a-9912-7b5ec3ca97fd\") " pod="openstack/openstackclient"
Feb 16 21:38:12.457677 master-0 kubenswrapper[38936]: I0216 21:38:12.457638 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/57c3a511-13e6-460a-9912-7b5ec3ca97fd-openstack-config\") pod \"openstackclient\" (UID: \"57c3a511-13e6-460a-9912-7b5ec3ca97fd\") " pod="openstack/openstackclient"
Feb 16 21:38:12.460623 master-0 kubenswrapper[38936]: I0216 21:38:12.460581 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c3a511-13e6-460a-9912-7b5ec3ca97fd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"57c3a511-13e6-460a-9912-7b5ec3ca97fd\") " pod="openstack/openstackclient"
Feb 16 21:38:12.462553 master-0 kubenswrapper[38936]: I0216 21:38:12.462470 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/57c3a511-13e6-460a-9912-7b5ec3ca97fd-openstack-config-secret\") pod \"openstackclient\" (UID: \"57c3a511-13e6-460a-9912-7b5ec3ca97fd\") " pod="openstack/openstackclient"
Feb 16 21:38:12.483754 master-0 kubenswrapper[38936]: I0216 21:38:12.483680 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glpm9\" (UniqueName: \"kubernetes.io/projected/57c3a511-13e6-460a-9912-7b5ec3ca97fd-kube-api-access-glpm9\") pod \"openstackclient\" (UID: \"57c3a511-13e6-460a-9912-7b5ec3ca97fd\") " pod="openstack/openstackclient"
Feb 16 21:38:12.598875 master-0 kubenswrapper[38936]: I0216 21:38:12.598819 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 16 21:38:12.645667 master-0 kubenswrapper[38936]: I0216 21:38:12.633446 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-backup-0" event={"ID":"c031566b-e048-44f7-9177-f1c1e05f4295","Type":"ContainerStarted","Data":"4372443064de38464bb5075ce85291eaa236bde80328081d9d6b7f3b193be400"}
Feb 16 21:38:12.645667 master-0 kubenswrapper[38936]: I0216 21:38:12.633516 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-backup-0" event={"ID":"c031566b-e048-44f7-9177-f1c1e05f4295","Type":"ContainerStarted","Data":"2fc858822a05552c99739f7030afada3a2221ccfe0389de69747686140b7d937"}
Feb 16 21:38:12.710669 master-0 kubenswrapper[38936]: I0216 21:38:12.703289 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-scheduler-0" event={"ID":"eaf5b21d-171c-46b8-bbc1-26f1af47aa0a","Type":"ContainerStarted","Data":"e93064ddd29a7e2dbc33729c0e3d3beae6fc0adc9fe6374819bb752790e99699"}
Feb 16 21:38:12.710669 master-0 kubenswrapper[38936]: I0216 21:38:12.705904 38936 generic.go:334] "Generic (PLEG): container finished" podID="df8afaf9-42b4-4639-855e-f6cf99c985bf" containerID="a39ce9c9349a01b157c3cd6408fd7bf95bfc82f86113d0b8b70425b87b949472" exitCode=0
Feb 16 21:38:12.710669 master-0 kubenswrapper[38936]: I0216 21:38:12.705957 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6d6dfb9f68-58l7d" event={"ID":"df8afaf9-42b4-4639-855e-f6cf99c985bf","Type":"ContainerDied","Data":"a39ce9c9349a01b157c3cd6408fd7bf95bfc82f86113d0b8b70425b87b949472"}
Feb 16 21:38:12.710669 master-0 kubenswrapper[38936]: I0216 21:38:12.710620 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5675994476-8qnnd" event={"ID":"beb6c329-ea6c-483c-9024-45d5e74f1a8b","Type":"ContainerStarted","Data":"53e52e0f90c02afd6b82e3f7bc2a9bc34e466fbceb5932318f67ebcba2f101a7"}
Feb 16 21:38:12.711005 master-0 kubenswrapper[38936]: I0216 21:38:12.710703 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5675994476-8qnnd" event={"ID":"beb6c329-ea6c-483c-9024-45d5e74f1a8b","Type":"ContainerStarted","Data":"21f41765abdde52a688581ec83b90a98d655e6dc30ede7db17640dd54089cca7"}
Feb 16 21:38:12.736439 master-0 kubenswrapper[38936]: I0216 21:38:12.736378 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-85df85647b-4lmvj" event={"ID":"28720828-7566-4fb7-a4ff-ac6e548d9408","Type":"ContainerStarted","Data":"e2b479992138ac47d435ec9a072aa32d0628cce1504f072683f7e43f4379f95a"}
Feb 16 21:38:12.759439 master-0 kubenswrapper[38936]: I0216 21:38:12.759211 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"37c815ef-1c3d-4b2a-b748-de04b8c4412c","Type":"ContainerStarted","Data":"3cd099264afe018e6cd94a440abe82cce8dbd27c1863785f0bab94a226f35a06"}
Feb 16 21:38:12.775373 master-0 kubenswrapper[38936]: I0216 21:38:12.763252 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2zhd\" (UniqueName: \"kubernetes.io/projected/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-kube-api-access-l2zhd\") pod \"openstackclient\" (UID: \"990aeedc-b2eb-4a75-b5bc-c76f0d18429c\") " pod="openstack/openstackclient"
Feb 16 21:38:12.775373 master-0 kubenswrapper[38936]: E0216 21:38:12.768175 38936 projected.go:194] Error preparing data for projected volume kube-api-access-l2zhd for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (990aeedc-b2eb-4a75-b5bc-c76f0d18429c) does not match the UID in record.
The object might have been deleted and then recreated Feb 16 21:38:12.775373 master-0 kubenswrapper[38936]: E0216 21:38:12.768239 38936 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-kube-api-access-l2zhd podName:990aeedc-b2eb-4a75-b5bc-c76f0d18429c nodeName:}" failed. No retries permitted until 2026-02-16 21:38:13.768220961 +0000 UTC m=+924.120224323 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l2zhd" (UniqueName: "kubernetes.io/projected/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-kube-api-access-l2zhd") pod "openstackclient" (UID: "990aeedc-b2eb-4a75-b5bc-c76f0d18429c") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (990aeedc-b2eb-4a75-b5bc-c76f0d18429c) does not match the UID in record. The object might have been deleted and then recreated Feb 16 21:38:12.792682 master-0 kubenswrapper[38936]: I0216 21:38:12.790035 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 16 21:38:12.792682 master-0 kubenswrapper[38936]: I0216 21:38:12.792407 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-volume-lvm-iscsi-0" event={"ID":"ef9f0999-cb38-43ec-98e8-c0ec09b4351b","Type":"ContainerStarted","Data":"1066f3dd1af513eee14bf8e51967a8b2e6130f97981ad442a65ed3f200bf5cee"} Feb 16 21:38:12.854851 master-0 kubenswrapper[38936]: I0216 21:38:12.853556 38936 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="990aeedc-b2eb-4a75-b5bc-c76f0d18429c" podUID="57c3a511-13e6-460a-9912-7b5ec3ca97fd" Feb 16 21:38:12.943970 master-0 kubenswrapper[38936]: I0216 21:38:12.943334 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-9c692-volume-lvm-iscsi-0" podStartSLOduration=5.94330876 podStartE2EDuration="5.94330876s" podCreationTimestamp="2026-02-16 21:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:38:12.919906588 +0000 UTC m=+923.271909960" watchObservedRunningTime="2026-02-16 21:38:12.94330876 +0000 UTC m=+923.295312122" Feb 16 21:38:13.101181 master-0 kubenswrapper[38936]: I0216 21:38:13.101124 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:38:13.153754 master-0 kubenswrapper[38936]: I0216 21:38:13.153355 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 16 21:38:13.213243 master-0 kubenswrapper[38936]: I0216 21:38:13.211416 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-openstack-config-secret\") pod \"990aeedc-b2eb-4a75-b5bc-c76f0d18429c\" (UID: \"990aeedc-b2eb-4a75-b5bc-c76f0d18429c\") " Feb 16 21:38:13.213243 master-0 kubenswrapper[38936]: I0216 21:38:13.211527 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-combined-ca-bundle\") pod \"990aeedc-b2eb-4a75-b5bc-c76f0d18429c\" (UID: \"990aeedc-b2eb-4a75-b5bc-c76f0d18429c\") " Feb 16 21:38:13.213243 master-0 kubenswrapper[38936]: I0216 21:38:13.211782 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-openstack-config\") pod \"990aeedc-b2eb-4a75-b5bc-c76f0d18429c\" (UID: \"990aeedc-b2eb-4a75-b5bc-c76f0d18429c\") " Feb 16 21:38:13.213243 master-0 kubenswrapper[38936]: I0216 21:38:13.212560 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2zhd\" (UniqueName: \"kubernetes.io/projected/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-kube-api-access-l2zhd\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:13.213243 master-0 kubenswrapper[38936]: I0216 21:38:13.213068 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "990aeedc-b2eb-4a75-b5bc-c76f0d18429c" (UID: "990aeedc-b2eb-4a75-b5bc-c76f0d18429c"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:13.218801 master-0 kubenswrapper[38936]: I0216 21:38:13.218567 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "990aeedc-b2eb-4a75-b5bc-c76f0d18429c" (UID: "990aeedc-b2eb-4a75-b5bc-c76f0d18429c"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:13.228780 master-0 kubenswrapper[38936]: I0216 21:38:13.227934 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "990aeedc-b2eb-4a75-b5bc-c76f0d18429c" (UID: "990aeedc-b2eb-4a75-b5bc-c76f0d18429c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:13.317732 master-0 kubenswrapper[38936]: I0216 21:38:13.317314 38936 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-openstack-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:13.317732 master-0 kubenswrapper[38936]: I0216 21:38:13.317357 38936 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-openstack-config-secret\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:13.317732 master-0 kubenswrapper[38936]: I0216 21:38:13.317373 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990aeedc-b2eb-4a75-b5bc-c76f0d18429c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:13.487390 master-0 kubenswrapper[38936]: I0216 21:38:13.487316 38936 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/openstackclient"] Feb 16 21:38:13.534440 master-0 kubenswrapper[38936]: W0216 21:38:13.533517 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57c3a511_13e6_460a_9912_7b5ec3ca97fd.slice/crio-3ea33171ac57c1fb82ffc9af7f9af6effb77919c5c1cd8e4528fa500312f7cf1 WatchSource:0}: Error finding container 3ea33171ac57c1fb82ffc9af7f9af6effb77919c5c1cd8e4528fa500312f7cf1: Status 404 returned error can't find the container with id 3ea33171ac57c1fb82ffc9af7f9af6effb77919c5c1cd8e4528fa500312f7cf1 Feb 16 21:38:13.829397 master-0 kubenswrapper[38936]: I0216 21:38:13.829257 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"57c3a511-13e6-460a-9912-7b5ec3ca97fd","Type":"ContainerStarted","Data":"3ea33171ac57c1fb82ffc9af7f9af6effb77919c5c1cd8e4528fa500312f7cf1"} Feb 16 21:38:13.859240 master-0 kubenswrapper[38936]: I0216 21:38:13.858239 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6d6dfb9f68-58l7d" event={"ID":"df8afaf9-42b4-4639-855e-f6cf99c985bf","Type":"ContainerStarted","Data":"2e20103a9d011bea98edf99dc001ed41c9cee65ead55c0e22b4c1aa0c17fb32b"} Feb 16 21:38:13.868887 master-0 kubenswrapper[38936]: I0216 21:38:13.868816 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5675994476-8qnnd" event={"ID":"beb6c329-ea6c-483c-9024-45d5e74f1a8b","Type":"ContainerStarted","Data":"ec27a04a287471ec1d7c6c15770939d5dc5b67636fb5ecd745d623a8bb64e447"} Feb 16 21:38:13.871784 master-0 kubenswrapper[38936]: I0216 21:38:13.871689 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5675994476-8qnnd" Feb 16 21:38:13.871784 master-0 kubenswrapper[38936]: I0216 21:38:13.871749 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5675994476-8qnnd" Feb 16 21:38:13.917892 master-0 
kubenswrapper[38936]: I0216 21:38:13.917834 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 16 21:38:13.928734 master-0 kubenswrapper[38936]: I0216 21:38:13.928536 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="990aeedc-b2eb-4a75-b5bc-c76f0d18429c" path="/var/lib/kubelet/pods/990aeedc-b2eb-4a75-b5bc-c76f0d18429c/volumes" Feb 16 21:38:13.929291 master-0 kubenswrapper[38936]: I0216 21:38:13.929256 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-85df85647b-4lmvj" Feb 16 21:38:13.929291 master-0 kubenswrapper[38936]: I0216 21:38:13.929283 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-85df85647b-4lmvj" event={"ID":"28720828-7566-4fb7-a4ff-ac6e548d9408","Type":"ContainerStarted","Data":"250f22f9f7840d34873f41be403054710da85818667bbe6a1c6fcd1610c5ab9b"} Feb 16 21:38:13.929430 master-0 kubenswrapper[38936]: I0216 21:38:13.929301 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-backup-0" event={"ID":"c031566b-e048-44f7-9177-f1c1e05f4295","Type":"ContainerStarted","Data":"ffa413aeffbded7d8f30b452b059b87c6d0840b0e0b883c06c46339b97a1eaac"} Feb 16 21:38:13.945262 master-0 kubenswrapper[38936]: I0216 21:38:13.941509 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5675994476-8qnnd" podStartSLOduration=3.941484633 podStartE2EDuration="3.941484633s" podCreationTimestamp="2026-02-16 21:38:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:38:13.935764369 +0000 UTC m=+924.287767741" watchObservedRunningTime="2026-02-16 21:38:13.941484633 +0000 UTC m=+924.293487995" Feb 16 21:38:13.997763 master-0 kubenswrapper[38936]: I0216 21:38:13.996841 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ironic-85df85647b-4lmvj" podStartSLOduration=7.902686497 podStartE2EDuration="10.996795447s" podCreationTimestamp="2026-02-16 21:38:03 +0000 UTC" firstStartedPulling="2026-02-16 21:38:05.926233012 +0000 UTC m=+916.278236384" lastFinishedPulling="2026-02-16 21:38:09.020341972 +0000 UTC m=+919.372345334" observedRunningTime="2026-02-16 21:38:13.976421437 +0000 UTC m=+924.328424799" watchObservedRunningTime="2026-02-16 21:38:13.996795447 +0000 UTC m=+924.348798809" Feb 16 21:38:14.035542 master-0 kubenswrapper[38936]: I0216 21:38:14.035410 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-9c692-backup-0" podStartSLOduration=7.0353865 podStartE2EDuration="7.0353865s" podCreationTimestamp="2026-02-16 21:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:38:14.004989358 +0000 UTC m=+924.356992720" watchObservedRunningTime="2026-02-16 21:38:14.0353865 +0000 UTC m=+924.387389872" Feb 16 21:38:14.198809 master-0 kubenswrapper[38936]: I0216 21:38:14.198088 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-57f476567b-fwqws" Feb 16 21:38:14.211886 master-0 kubenswrapper[38936]: E0216 21:38:14.210441 38936 log.go:32] "ExecSync cmd from runtime service failed" err=< Feb 16 21:38:14.211886 master-0 kubenswrapper[38936]: rpc error: code = Unknown desc = command error: read pipe failed Feb 16 21:38:14.211886 master-0 kubenswrapper[38936]: , stdout: , stderr: , exit code -1 Feb 16 21:38:14.211886 master-0 kubenswrapper[38936]: > containerID="a4c91cb0a4d6848ff3de0abee9bdc57799d53d94f3e0f1ce1a072b2ecc0d134e" cmd=["/bin/true"] Feb 16 21:38:14.215034 master-0 kubenswrapper[38936]: E0216 21:38:14.214918 38936 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
a4c91cb0a4d6848ff3de0abee9bdc57799d53d94f3e0f1ce1a072b2ecc0d134e is running failed: container process not found" containerID="a4c91cb0a4d6848ff3de0abee9bdc57799d53d94f3e0f1ce1a072b2ecc0d134e" cmd=["/bin/true"] Feb 16 21:38:14.216319 master-0 kubenswrapper[38936]: E0216 21:38:14.216261 38936 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a4c91cb0a4d6848ff3de0abee9bdc57799d53d94f3e0f1ce1a072b2ecc0d134e is running failed: container process not found" containerID="a4c91cb0a4d6848ff3de0abee9bdc57799d53d94f3e0f1ce1a072b2ecc0d134e" cmd=["/bin/true"] Feb 16 21:38:14.216404 master-0 kubenswrapper[38936]: E0216 21:38:14.216325 38936 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a4c91cb0a4d6848ff3de0abee9bdc57799d53d94f3e0f1ce1a072b2ecc0d134e is running failed: container process not found" probeType="Liveness" pod="openstack/ironic-neutron-agent-57f476567b-fwqws" podUID="cfcdcd18-dd01-45c8-afd4-ec72a986d582" containerName="ironic-neutron-agent" Feb 16 21:38:14.851741 master-0 kubenswrapper[38936]: I0216 21:38:14.850977 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-596cdf67df-snjb9" Feb 16 21:38:14.994894 master-0 kubenswrapper[38936]: I0216 21:38:14.994482 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78bc59585f-clvzn"] Feb 16 21:38:15.019678 master-0 kubenswrapper[38936]: I0216 21:38:15.015592 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78bc59585f-clvzn" podUID="2de78a17-9736-4d1a-bd15-d021bf007026" containerName="dnsmasq-dns" containerID="cri-o://76bf5cf1b16c4dc6ff22333b0bc595c99a65f23a7e8d70b1375aa04da32957c1" gracePeriod=10 Feb 16 21:38:15.042716 master-0 kubenswrapper[38936]: I0216 21:38:15.039716 38936 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ironic-6d6dfb9f68-58l7d" event={"ID":"df8afaf9-42b4-4639-855e-f6cf99c985bf","Type":"ContainerStarted","Data":"9598e05e173cddadfe8a0593cc2022de8ab16e7f6d05bfd5cb047d8819039cfb"} Feb 16 21:38:15.042716 master-0 kubenswrapper[38936]: I0216 21:38:15.040851 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-6d6dfb9f68-58l7d" Feb 16 21:38:15.056112 master-0 kubenswrapper[38936]: I0216 21:38:15.055116 38936 generic.go:334] "Generic (PLEG): container finished" podID="cfcdcd18-dd01-45c8-afd4-ec72a986d582" containerID="a4c91cb0a4d6848ff3de0abee9bdc57799d53d94f3e0f1ce1a072b2ecc0d134e" exitCode=1 Feb 16 21:38:15.056112 master-0 kubenswrapper[38936]: I0216 21:38:15.055193 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-57f476567b-fwqws" event={"ID":"cfcdcd18-dd01-45c8-afd4-ec72a986d582","Type":"ContainerDied","Data":"a4c91cb0a4d6848ff3de0abee9bdc57799d53d94f3e0f1ce1a072b2ecc0d134e"} Feb 16 21:38:15.056528 master-0 kubenswrapper[38936]: I0216 21:38:15.056136 38936 scope.go:117] "RemoveContainer" containerID="a4c91cb0a4d6848ff3de0abee9bdc57799d53d94f3e0f1ce1a072b2ecc0d134e" Feb 16 21:38:15.065737 master-0 kubenswrapper[38936]: I0216 21:38:15.065689 38936 generic.go:334] "Generic (PLEG): container finished" podID="28720828-7566-4fb7-a4ff-ac6e548d9408" containerID="250f22f9f7840d34873f41be403054710da85818667bbe6a1c6fcd1610c5ab9b" exitCode=1 Feb 16 21:38:15.065849 master-0 kubenswrapper[38936]: I0216 21:38:15.065788 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-85df85647b-4lmvj" event={"ID":"28720828-7566-4fb7-a4ff-ac6e548d9408","Type":"ContainerDied","Data":"250f22f9f7840d34873f41be403054710da85818667bbe6a1c6fcd1610c5ab9b"} Feb 16 21:38:15.066432 master-0 kubenswrapper[38936]: I0216 21:38:15.066399 38936 scope.go:117] "RemoveContainer" containerID="250f22f9f7840d34873f41be403054710da85818667bbe6a1c6fcd1610c5ab9b" Feb 16 21:38:15.088176 
master-0 kubenswrapper[38936]: I0216 21:38:15.086633 38936 generic.go:334] "Generic (PLEG): container finished" podID="37c815ef-1c3d-4b2a-b748-de04b8c4412c" containerID="3cd099264afe018e6cd94a440abe82cce8dbd27c1863785f0bab94a226f35a06" exitCode=0 Feb 16 21:38:15.088176 master-0 kubenswrapper[38936]: I0216 21:38:15.086727 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"37c815ef-1c3d-4b2a-b748-de04b8c4412c","Type":"ContainerDied","Data":"3cd099264afe018e6cd94a440abe82cce8dbd27c1863785f0bab94a226f35a06"} Feb 16 21:38:15.088176 master-0 kubenswrapper[38936]: I0216 21:38:15.087576 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-6d6dfb9f68-58l7d" podStartSLOduration=8.087564781 podStartE2EDuration="8.087564781s" podCreationTimestamp="2026-02-16 21:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:38:15.082397432 +0000 UTC m=+925.434400794" watchObservedRunningTime="2026-02-16 21:38:15.087564781 +0000 UTC m=+925.439568143" Feb 16 21:38:15.098425 master-0 kubenswrapper[38936]: I0216 21:38:15.095009 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-scheduler-0" event={"ID":"eaf5b21d-171c-46b8-bbc1-26f1af47aa0a","Type":"ContainerStarted","Data":"426d5f1407d58e4b2d4dc3d2d3401cadc8fce946fb63dd71ca08cc65b83c72e4"} Feb 16 21:38:15.268763 master-0 kubenswrapper[38936]: I0216 21:38:15.268501 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-9c692-scheduler-0" podStartSLOduration=8.267758034 podStartE2EDuration="8.267758034s" podCreationTimestamp="2026-02-16 21:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:38:15.166638787 +0000 UTC m=+925.518642149" watchObservedRunningTime="2026-02-16 
21:38:15.267758034 +0000 UTC m=+925.619761406" Feb 16 21:38:16.024849 master-0 kubenswrapper[38936]: I0216 21:38:16.024744 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7fd65686d6-7ht5b"] Feb 16 21:38:16.028035 master-0 kubenswrapper[38936]: I0216 21:38:16.028004 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.032246 master-0 kubenswrapper[38936]: I0216 21:38:16.031918 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 16 21:38:16.032246 master-0 kubenswrapper[38936]: I0216 21:38:16.032147 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 21:38:16.032593 master-0 kubenswrapper[38936]: I0216 21:38:16.032497 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 16 21:38:16.041288 master-0 kubenswrapper[38936]: I0216 21:38:16.041180 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7fd65686d6-7ht5b"] Feb 16 21:38:16.116532 master-0 kubenswrapper[38936]: I0216 21:38:16.116468 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78bc59585f-clvzn" Feb 16 21:38:16.123545 master-0 kubenswrapper[38936]: I0216 21:38:16.122867 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-85df85647b-4lmvj" event={"ID":"28720828-7566-4fb7-a4ff-ac6e548d9408","Type":"ContainerStarted","Data":"3f333dfc41a573efffb2f25b161cfbaac916d708857817df4fab44fd0d1e6f6c"} Feb 16 21:38:16.123545 master-0 kubenswrapper[38936]: I0216 21:38:16.123052 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-85df85647b-4lmvj" Feb 16 21:38:16.151924 master-0 kubenswrapper[38936]: I0216 21:38:16.151833 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef66440d-1b5d-4de9-a1c0-05f4def18451-log-httpd\") pod \"swift-proxy-7fd65686d6-7ht5b\" (UID: \"ef66440d-1b5d-4de9-a1c0-05f4def18451\") " pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.152403 master-0 kubenswrapper[38936]: I0216 21:38:16.152315 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef66440d-1b5d-4de9-a1c0-05f4def18451-public-tls-certs\") pod \"swift-proxy-7fd65686d6-7ht5b\" (UID: \"ef66440d-1b5d-4de9-a1c0-05f4def18451\") " pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.152688 master-0 kubenswrapper[38936]: I0216 21:38:16.152664 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef66440d-1b5d-4de9-a1c0-05f4def18451-internal-tls-certs\") pod \"swift-proxy-7fd65686d6-7ht5b\" (UID: \"ef66440d-1b5d-4de9-a1c0-05f4def18451\") " pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.152975 master-0 kubenswrapper[38936]: I0216 21:38:16.152929 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-pnr7v\" (UniqueName: \"kubernetes.io/projected/ef66440d-1b5d-4de9-a1c0-05f4def18451-kube-api-access-pnr7v\") pod \"swift-proxy-7fd65686d6-7ht5b\" (UID: \"ef66440d-1b5d-4de9-a1c0-05f4def18451\") " pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.153164 master-0 kubenswrapper[38936]: I0216 21:38:16.153117 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef66440d-1b5d-4de9-a1c0-05f4def18451-run-httpd\") pod \"swift-proxy-7fd65686d6-7ht5b\" (UID: \"ef66440d-1b5d-4de9-a1c0-05f4def18451\") " pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.153164 master-0 kubenswrapper[38936]: I0216 21:38:16.153154 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ef66440d-1b5d-4de9-a1c0-05f4def18451-etc-swift\") pod \"swift-proxy-7fd65686d6-7ht5b\" (UID: \"ef66440d-1b5d-4de9-a1c0-05f4def18451\") " pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.153566 master-0 kubenswrapper[38936]: I0216 21:38:16.153468 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef66440d-1b5d-4de9-a1c0-05f4def18451-config-data\") pod \"swift-proxy-7fd65686d6-7ht5b\" (UID: \"ef66440d-1b5d-4de9-a1c0-05f4def18451\") " pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.153566 master-0 kubenswrapper[38936]: I0216 21:38:16.153551 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef66440d-1b5d-4de9-a1c0-05f4def18451-combined-ca-bundle\") pod \"swift-proxy-7fd65686d6-7ht5b\" (UID: \"ef66440d-1b5d-4de9-a1c0-05f4def18451\") " pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.161774 master-0 kubenswrapper[38936]: 
I0216 21:38:16.161501 38936 generic.go:334] "Generic (PLEG): container finished" podID="2de78a17-9736-4d1a-bd15-d021bf007026" containerID="76bf5cf1b16c4dc6ff22333b0bc595c99a65f23a7e8d70b1375aa04da32957c1" exitCode=0 Feb 16 21:38:16.161774 master-0 kubenswrapper[38936]: I0216 21:38:16.161592 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78bc59585f-clvzn" event={"ID":"2de78a17-9736-4d1a-bd15-d021bf007026","Type":"ContainerDied","Data":"76bf5cf1b16c4dc6ff22333b0bc595c99a65f23a7e8d70b1375aa04da32957c1"} Feb 16 21:38:16.161774 master-0 kubenswrapper[38936]: I0216 21:38:16.161625 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78bc59585f-clvzn" event={"ID":"2de78a17-9736-4d1a-bd15-d021bf007026","Type":"ContainerDied","Data":"6f3797b5486abbf6a114be52e91236a1f7182b5a56fd1bb1e9106cb0e2dcc0f3"} Feb 16 21:38:16.161774 master-0 kubenswrapper[38936]: I0216 21:38:16.161661 38936 scope.go:117] "RemoveContainer" containerID="76bf5cf1b16c4dc6ff22333b0bc595c99a65f23a7e8d70b1375aa04da32957c1" Feb 16 21:38:16.161982 master-0 kubenswrapper[38936]: I0216 21:38:16.161829 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78bc59585f-clvzn" Feb 16 21:38:16.189599 master-0 kubenswrapper[38936]: I0216 21:38:16.189237 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-57f476567b-fwqws" event={"ID":"cfcdcd18-dd01-45c8-afd4-ec72a986d582","Type":"ContainerStarted","Data":"fb05b673f156b593e3b1b5a2aae5b398fe33f469a0be1c0338b6e9e0eaa7f21f"} Feb 16 21:38:16.190461 master-0 kubenswrapper[38936]: I0216 21:38:16.190319 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-57f476567b-fwqws" Feb 16 21:38:16.256260 master-0 kubenswrapper[38936]: I0216 21:38:16.255085 38936 scope.go:117] "RemoveContainer" containerID="cb57f596593bc0aa7163e8d0c523a09ebfd69008c7569d4d99046feddc1deb40" Feb 16 21:38:16.256260 master-0 kubenswrapper[38936]: I0216 21:38:16.255378 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-config\") pod \"2de78a17-9736-4d1a-bd15-d021bf007026\" (UID: \"2de78a17-9736-4d1a-bd15-d021bf007026\") " Feb 16 21:38:16.256260 master-0 kubenswrapper[38936]: I0216 21:38:16.255485 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-dns-swift-storage-0\") pod \"2de78a17-9736-4d1a-bd15-d021bf007026\" (UID: \"2de78a17-9736-4d1a-bd15-d021bf007026\") " Feb 16 21:38:16.256260 master-0 kubenswrapper[38936]: I0216 21:38:16.255549 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-dns-svc\") pod \"2de78a17-9736-4d1a-bd15-d021bf007026\" (UID: \"2de78a17-9736-4d1a-bd15-d021bf007026\") " Feb 16 21:38:16.256260 master-0 kubenswrapper[38936]: I0216 21:38:16.255626 38936 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-ovsdbserver-sb\") pod \"2de78a17-9736-4d1a-bd15-d021bf007026\" (UID: \"2de78a17-9736-4d1a-bd15-d021bf007026\") " Feb 16 21:38:16.256260 master-0 kubenswrapper[38936]: I0216 21:38:16.255809 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwdlz\" (UniqueName: \"kubernetes.io/projected/2de78a17-9736-4d1a-bd15-d021bf007026-kube-api-access-xwdlz\") pod \"2de78a17-9736-4d1a-bd15-d021bf007026\" (UID: \"2de78a17-9736-4d1a-bd15-d021bf007026\") " Feb 16 21:38:16.256260 master-0 kubenswrapper[38936]: I0216 21:38:16.255929 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-ovsdbserver-nb\") pod \"2de78a17-9736-4d1a-bd15-d021bf007026\" (UID: \"2de78a17-9736-4d1a-bd15-d021bf007026\") " Feb 16 21:38:16.257846 master-0 kubenswrapper[38936]: I0216 21:38:16.257297 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef66440d-1b5d-4de9-a1c0-05f4def18451-public-tls-certs\") pod \"swift-proxy-7fd65686d6-7ht5b\" (UID: \"ef66440d-1b5d-4de9-a1c0-05f4def18451\") " pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.257846 master-0 kubenswrapper[38936]: I0216 21:38:16.257358 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef66440d-1b5d-4de9-a1c0-05f4def18451-internal-tls-certs\") pod \"swift-proxy-7fd65686d6-7ht5b\" (UID: \"ef66440d-1b5d-4de9-a1c0-05f4def18451\") " pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.258908 master-0 kubenswrapper[38936]: I0216 21:38:16.258214 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pnr7v\" (UniqueName: \"kubernetes.io/projected/ef66440d-1b5d-4de9-a1c0-05f4def18451-kube-api-access-pnr7v\") pod \"swift-proxy-7fd65686d6-7ht5b\" (UID: \"ef66440d-1b5d-4de9-a1c0-05f4def18451\") " pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.258908 master-0 kubenswrapper[38936]: I0216 21:38:16.258342 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef66440d-1b5d-4de9-a1c0-05f4def18451-run-httpd\") pod \"swift-proxy-7fd65686d6-7ht5b\" (UID: \"ef66440d-1b5d-4de9-a1c0-05f4def18451\") " pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.258908 master-0 kubenswrapper[38936]: I0216 21:38:16.258370 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ef66440d-1b5d-4de9-a1c0-05f4def18451-etc-swift\") pod \"swift-proxy-7fd65686d6-7ht5b\" (UID: \"ef66440d-1b5d-4de9-a1c0-05f4def18451\") " pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.258908 master-0 kubenswrapper[38936]: I0216 21:38:16.258461 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef66440d-1b5d-4de9-a1c0-05f4def18451-config-data\") pod \"swift-proxy-7fd65686d6-7ht5b\" (UID: \"ef66440d-1b5d-4de9-a1c0-05f4def18451\") " pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.258908 master-0 kubenswrapper[38936]: I0216 21:38:16.258482 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef66440d-1b5d-4de9-a1c0-05f4def18451-combined-ca-bundle\") pod \"swift-proxy-7fd65686d6-7ht5b\" (UID: \"ef66440d-1b5d-4de9-a1c0-05f4def18451\") " pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.258908 master-0 kubenswrapper[38936]: I0216 21:38:16.258534 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef66440d-1b5d-4de9-a1c0-05f4def18451-log-httpd\") pod \"swift-proxy-7fd65686d6-7ht5b\" (UID: \"ef66440d-1b5d-4de9-a1c0-05f4def18451\") " pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.268325 master-0 kubenswrapper[38936]: I0216 21:38:16.268264 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2de78a17-9736-4d1a-bd15-d021bf007026-kube-api-access-xwdlz" (OuterVolumeSpecName: "kube-api-access-xwdlz") pod "2de78a17-9736-4d1a-bd15-d021bf007026" (UID: "2de78a17-9736-4d1a-bd15-d021bf007026"). InnerVolumeSpecName "kube-api-access-xwdlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:16.273537 master-0 kubenswrapper[38936]: I0216 21:38:16.273479 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef66440d-1b5d-4de9-a1c0-05f4def18451-combined-ca-bundle\") pod \"swift-proxy-7fd65686d6-7ht5b\" (UID: \"ef66440d-1b5d-4de9-a1c0-05f4def18451\") " pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.273803 master-0 kubenswrapper[38936]: I0216 21:38:16.273538 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef66440d-1b5d-4de9-a1c0-05f4def18451-run-httpd\") pod \"swift-proxy-7fd65686d6-7ht5b\" (UID: \"ef66440d-1b5d-4de9-a1c0-05f4def18451\") " pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.276711 master-0 kubenswrapper[38936]: I0216 21:38:16.276402 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef66440d-1b5d-4de9-a1c0-05f4def18451-log-httpd\") pod \"swift-proxy-7fd65686d6-7ht5b\" (UID: \"ef66440d-1b5d-4de9-a1c0-05f4def18451\") " pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.276711 master-0 kubenswrapper[38936]: I0216 21:38:16.276455 38936 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef66440d-1b5d-4de9-a1c0-05f4def18451-public-tls-certs\") pod \"swift-proxy-7fd65686d6-7ht5b\" (UID: \"ef66440d-1b5d-4de9-a1c0-05f4def18451\") " pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.288354 master-0 kubenswrapper[38936]: I0216 21:38:16.288260 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef66440d-1b5d-4de9-a1c0-05f4def18451-internal-tls-certs\") pod \"swift-proxy-7fd65686d6-7ht5b\" (UID: \"ef66440d-1b5d-4de9-a1c0-05f4def18451\") " pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.290017 master-0 kubenswrapper[38936]: I0216 21:38:16.289954 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef66440d-1b5d-4de9-a1c0-05f4def18451-config-data\") pod \"swift-proxy-7fd65686d6-7ht5b\" (UID: \"ef66440d-1b5d-4de9-a1c0-05f4def18451\") " pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.300280 master-0 kubenswrapper[38936]: I0216 21:38:16.300229 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ef66440d-1b5d-4de9-a1c0-05f4def18451-etc-swift\") pod \"swift-proxy-7fd65686d6-7ht5b\" (UID: \"ef66440d-1b5d-4de9-a1c0-05f4def18451\") " pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.342442 master-0 kubenswrapper[38936]: I0216 21:38:16.342356 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2de78a17-9736-4d1a-bd15-d021bf007026" (UID: "2de78a17-9736-4d1a-bd15-d021bf007026"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:16.366239 master-0 kubenswrapper[38936]: I0216 21:38:16.366148 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2de78a17-9736-4d1a-bd15-d021bf007026" (UID: "2de78a17-9736-4d1a-bd15-d021bf007026"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:16.368956 master-0 kubenswrapper[38936]: I0216 21:38:16.368888 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2de78a17-9736-4d1a-bd15-d021bf007026" (UID: "2de78a17-9736-4d1a-bd15-d021bf007026"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:16.373596 master-0 kubenswrapper[38936]: I0216 21:38:16.373533 38936 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:16.373720 master-0 kubenswrapper[38936]: I0216 21:38:16.373696 38936 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:16.373720 master-0 kubenswrapper[38936]: I0216 21:38:16.373719 38936 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:16.374443 master-0 kubenswrapper[38936]: I0216 21:38:16.373731 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwdlz\" (UniqueName: 
\"kubernetes.io/projected/2de78a17-9736-4d1a-bd15-d021bf007026-kube-api-access-xwdlz\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:16.378267 master-0 kubenswrapper[38936]: I0216 21:38:16.376419 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2de78a17-9736-4d1a-bd15-d021bf007026" (UID: "2de78a17-9736-4d1a-bd15-d021bf007026"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:16.383847 master-0 kubenswrapper[38936]: I0216 21:38:16.383787 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnr7v\" (UniqueName: \"kubernetes.io/projected/ef66440d-1b5d-4de9-a1c0-05f4def18451-kube-api-access-pnr7v\") pod \"swift-proxy-7fd65686d6-7ht5b\" (UID: \"ef66440d-1b5d-4de9-a1c0-05f4def18451\") " pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.393317 master-0 kubenswrapper[38936]: I0216 21:38:16.393254 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-config" (OuterVolumeSpecName: "config") pod "2de78a17-9736-4d1a-bd15-d021bf007026" (UID: "2de78a17-9736-4d1a-bd15-d021bf007026"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:16.459083 master-0 kubenswrapper[38936]: I0216 21:38:16.458987 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:16.477465 master-0 kubenswrapper[38936]: I0216 21:38:16.477384 38936 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:16.477465 master-0 kubenswrapper[38936]: I0216 21:38:16.477444 38936 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2de78a17-9736-4d1a-bd15-d021bf007026-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:16.643811 master-0 kubenswrapper[38936]: I0216 21:38:16.643734 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78bc59585f-clvzn"] Feb 16 21:38:16.648551 master-0 kubenswrapper[38936]: I0216 21:38:16.648158 38936 scope.go:117] "RemoveContainer" containerID="76bf5cf1b16c4dc6ff22333b0bc595c99a65f23a7e8d70b1375aa04da32957c1" Feb 16 21:38:16.649201 master-0 kubenswrapper[38936]: E0216 21:38:16.649141 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76bf5cf1b16c4dc6ff22333b0bc595c99a65f23a7e8d70b1375aa04da32957c1\": container with ID starting with 76bf5cf1b16c4dc6ff22333b0bc595c99a65f23a7e8d70b1375aa04da32957c1 not found: ID does not exist" containerID="76bf5cf1b16c4dc6ff22333b0bc595c99a65f23a7e8d70b1375aa04da32957c1" Feb 16 21:38:16.649448 master-0 kubenswrapper[38936]: I0216 21:38:16.649205 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76bf5cf1b16c4dc6ff22333b0bc595c99a65f23a7e8d70b1375aa04da32957c1"} err="failed to get container status \"76bf5cf1b16c4dc6ff22333b0bc595c99a65f23a7e8d70b1375aa04da32957c1\": rpc error: code = NotFound desc = could not find container \"76bf5cf1b16c4dc6ff22333b0bc595c99a65f23a7e8d70b1375aa04da32957c1\": container with ID starting with 
76bf5cf1b16c4dc6ff22333b0bc595c99a65f23a7e8d70b1375aa04da32957c1 not found: ID does not exist" Feb 16 21:38:16.649519 master-0 kubenswrapper[38936]: I0216 21:38:16.649507 38936 scope.go:117] "RemoveContainer" containerID="cb57f596593bc0aa7163e8d0c523a09ebfd69008c7569d4d99046feddc1deb40" Feb 16 21:38:16.650181 master-0 kubenswrapper[38936]: E0216 21:38:16.649945 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb57f596593bc0aa7163e8d0c523a09ebfd69008c7569d4d99046feddc1deb40\": container with ID starting with cb57f596593bc0aa7163e8d0c523a09ebfd69008c7569d4d99046feddc1deb40 not found: ID does not exist" containerID="cb57f596593bc0aa7163e8d0c523a09ebfd69008c7569d4d99046feddc1deb40" Feb 16 21:38:16.650181 master-0 kubenswrapper[38936]: I0216 21:38:16.649986 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb57f596593bc0aa7163e8d0c523a09ebfd69008c7569d4d99046feddc1deb40"} err="failed to get container status \"cb57f596593bc0aa7163e8d0c523a09ebfd69008c7569d4d99046feddc1deb40\": rpc error: code = NotFound desc = could not find container \"cb57f596593bc0aa7163e8d0c523a09ebfd69008c7569d4d99046feddc1deb40\": container with ID starting with cb57f596593bc0aa7163e8d0c523a09ebfd69008c7569d4d99046feddc1deb40 not found: ID does not exist" Feb 16 21:38:16.667845 master-0 kubenswrapper[38936]: I0216 21:38:16.667754 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78bc59585f-clvzn"] Feb 16 21:38:17.032511 master-0 kubenswrapper[38936]: I0216 21:38:17.031921 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7fd65686d6-7ht5b"] Feb 16 21:38:17.222166 master-0 kubenswrapper[38936]: I0216 21:38:17.221727 38936 generic.go:334] "Generic (PLEG): container finished" podID="28720828-7566-4fb7-a4ff-ac6e548d9408" containerID="3f333dfc41a573efffb2f25b161cfbaac916d708857817df4fab44fd0d1e6f6c" exitCode=1 
Feb 16 21:38:17.222166 master-0 kubenswrapper[38936]: I0216 21:38:17.221946 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-85df85647b-4lmvj" event={"ID":"28720828-7566-4fb7-a4ff-ac6e548d9408","Type":"ContainerDied","Data":"3f333dfc41a573efffb2f25b161cfbaac916d708857817df4fab44fd0d1e6f6c"} Feb 16 21:38:17.222166 master-0 kubenswrapper[38936]: I0216 21:38:17.222129 38936 scope.go:117] "RemoveContainer" containerID="250f22f9f7840d34873f41be403054710da85818667bbe6a1c6fcd1610c5ab9b" Feb 16 21:38:17.223184 master-0 kubenswrapper[38936]: I0216 21:38:17.223113 38936 scope.go:117] "RemoveContainer" containerID="3f333dfc41a573efffb2f25b161cfbaac916d708857817df4fab44fd0d1e6f6c" Feb 16 21:38:17.223492 master-0 kubenswrapper[38936]: E0216 21:38:17.223459 38936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-85df85647b-4lmvj_openstack(28720828-7566-4fb7-a4ff-ac6e548d9408)\"" pod="openstack/ironic-85df85647b-4lmvj" podUID="28720828-7566-4fb7-a4ff-ac6e548d9408" Feb 16 21:38:17.225069 master-0 kubenswrapper[38936]: I0216 21:38:17.225010 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7fd65686d6-7ht5b" event={"ID":"ef66440d-1b5d-4de9-a1c0-05f4def18451","Type":"ContainerStarted","Data":"eef714670e149a82af2cad21202a4fbc9ca9754515faff783da17459949e0a0e"} Feb 16 21:38:17.910587 master-0 kubenswrapper[38936]: I0216 21:38:17.909870 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2de78a17-9736-4d1a-bd15-d021bf007026" path="/var/lib/kubelet/pods/2de78a17-9736-4d1a-bd15-d021bf007026/volumes" Feb 16 21:38:18.047760 master-0 kubenswrapper[38936]: I0216 21:38:18.047675 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-9c692-backup-0" Feb 16 21:38:18.255394 master-0 kubenswrapper[38936]: I0216 21:38:18.254930 
38936 scope.go:117] "RemoveContainer" containerID="3f333dfc41a573efffb2f25b161cfbaac916d708857817df4fab44fd0d1e6f6c" Feb 16 21:38:18.255394 master-0 kubenswrapper[38936]: E0216 21:38:18.255286 38936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-85df85647b-4lmvj_openstack(28720828-7566-4fb7-a4ff-ac6e548d9408)\"" pod="openstack/ironic-85df85647b-4lmvj" podUID="28720828-7566-4fb7-a4ff-ac6e548d9408" Feb 16 21:38:18.259532 master-0 kubenswrapper[38936]: I0216 21:38:18.259452 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7fd65686d6-7ht5b" event={"ID":"ef66440d-1b5d-4de9-a1c0-05f4def18451","Type":"ContainerStarted","Data":"c340450334be7224841885d36ca0e9062cc416738b28efbd37f8ab0e26091d40"} Feb 16 21:38:18.259532 master-0 kubenswrapper[38936]: I0216 21:38:18.259537 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7fd65686d6-7ht5b" event={"ID":"ef66440d-1b5d-4de9-a1c0-05f4def18451","Type":"ContainerStarted","Data":"238e7db89ea7a31f5e71687427df98cb9bce93477ba6fa66ba89b72b44b0bbb8"} Feb 16 21:38:18.259921 master-0 kubenswrapper[38936]: I0216 21:38:18.259773 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:18.259921 master-0 kubenswrapper[38936]: I0216 21:38:18.259900 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7fd65686d6-7ht5b" Feb 16 21:38:18.313320 master-0 kubenswrapper[38936]: I0216 21:38:18.313242 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-9c692-backup-0" Feb 16 21:38:18.337814 master-0 kubenswrapper[38936]: I0216 21:38:18.337562 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7fd65686d6-7ht5b" 
podStartSLOduration=3.337531499 podStartE2EDuration="3.337531499s" podCreationTimestamp="2026-02-16 21:38:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:38:18.330565343 +0000 UTC m=+928.682568705" watchObservedRunningTime="2026-02-16 21:38:18.337531499 +0000 UTC m=+928.689534861" Feb 16 21:38:18.379300 master-0 kubenswrapper[38936]: I0216 21:38:18.379233 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:38:18.450640 master-0 kubenswrapper[38936]: I0216 21:38:18.442420 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-9c692-volume-lvm-iscsi-0" Feb 16 21:38:18.603505 master-0 kubenswrapper[38936]: I0216 21:38:18.602877 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-sync-87hwd"] Feb 16 21:38:18.604991 master-0 kubenswrapper[38936]: E0216 21:38:18.604949 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2de78a17-9736-4d1a-bd15-d021bf007026" containerName="init" Feb 16 21:38:18.605135 master-0 kubenswrapper[38936]: I0216 21:38:18.605111 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="2de78a17-9736-4d1a-bd15-d021bf007026" containerName="init" Feb 16 21:38:18.605340 master-0 kubenswrapper[38936]: E0216 21:38:18.605249 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2de78a17-9736-4d1a-bd15-d021bf007026" containerName="dnsmasq-dns" Feb 16 21:38:18.605340 master-0 kubenswrapper[38936]: I0216 21:38:18.605272 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="2de78a17-9736-4d1a-bd15-d021bf007026" containerName="dnsmasq-dns" Feb 16 21:38:18.606116 master-0 kubenswrapper[38936]: I0216 21:38:18.605997 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="2de78a17-9736-4d1a-bd15-d021bf007026" containerName="dnsmasq-dns" Feb 16 
21:38:18.607751 master-0 kubenswrapper[38936]: I0216 21:38:18.607708 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-87hwd" Feb 16 21:38:18.616820 master-0 kubenswrapper[38936]: I0216 21:38:18.616766 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Feb 16 21:38:18.617136 master-0 kubenswrapper[38936]: I0216 21:38:18.617076 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Feb 16 21:38:18.619098 master-0 kubenswrapper[38936]: I0216 21:38:18.619030 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-87hwd"] Feb 16 21:38:18.686311 master-0 kubenswrapper[38936]: I0216 21:38:18.686238 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/650c4ac6-fc3c-4a97-871d-65c399538b17-config\") pod \"ironic-inspector-db-sync-87hwd\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " pod="openstack/ironic-inspector-db-sync-87hwd" Feb 16 21:38:18.686679 master-0 kubenswrapper[38936]: I0216 21:38:18.686631 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650c4ac6-fc3c-4a97-871d-65c399538b17-scripts\") pod \"ironic-inspector-db-sync-87hwd\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " pod="openstack/ironic-inspector-db-sync-87hwd" Feb 16 21:38:18.687079 master-0 kubenswrapper[38936]: I0216 21:38:18.687052 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/650c4ac6-fc3c-4a97-871d-65c399538b17-var-lib-ironic\") pod \"ironic-inspector-db-sync-87hwd\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " pod="openstack/ironic-inspector-db-sync-87hwd" Feb 16 
21:38:18.687638 master-0 kubenswrapper[38936]: I0216 21:38:18.687616 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jl22\" (UniqueName: \"kubernetes.io/projected/650c4ac6-fc3c-4a97-871d-65c399538b17-kube-api-access-8jl22\") pod \"ironic-inspector-db-sync-87hwd\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " pod="openstack/ironic-inspector-db-sync-87hwd" Feb 16 21:38:18.687864 master-0 kubenswrapper[38936]: I0216 21:38:18.687802 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650c4ac6-fc3c-4a97-871d-65c399538b17-combined-ca-bundle\") pod \"ironic-inspector-db-sync-87hwd\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " pod="openstack/ironic-inspector-db-sync-87hwd" Feb 16 21:38:18.688055 master-0 kubenswrapper[38936]: I0216 21:38:18.688033 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/650c4ac6-fc3c-4a97-871d-65c399538b17-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-87hwd\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " pod="openstack/ironic-inspector-db-sync-87hwd" Feb 16 21:38:18.688309 master-0 kubenswrapper[38936]: I0216 21:38:18.688277 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/650c4ac6-fc3c-4a97-871d-65c399538b17-etc-podinfo\") pod \"ironic-inspector-db-sync-87hwd\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " pod="openstack/ironic-inspector-db-sync-87hwd" Feb 16 21:38:18.776354 master-0 kubenswrapper[38936]: I0216 21:38:18.776277 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-9c692-scheduler-0" Feb 16 21:38:18.791401 master-0 
kubenswrapper[38936]: I0216 21:38:18.790863 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/650c4ac6-fc3c-4a97-871d-65c399538b17-config\") pod \"ironic-inspector-db-sync-87hwd\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " pod="openstack/ironic-inspector-db-sync-87hwd" Feb 16 21:38:18.791767 master-0 kubenswrapper[38936]: I0216 21:38:18.791419 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650c4ac6-fc3c-4a97-871d-65c399538b17-scripts\") pod \"ironic-inspector-db-sync-87hwd\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " pod="openstack/ironic-inspector-db-sync-87hwd" Feb 16 21:38:18.791815 master-0 kubenswrapper[38936]: I0216 21:38:18.791637 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/650c4ac6-fc3c-4a97-871d-65c399538b17-var-lib-ironic\") pod \"ironic-inspector-db-sync-87hwd\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " pod="openstack/ironic-inspector-db-sync-87hwd" Feb 16 21:38:18.792791 master-0 kubenswrapper[38936]: I0216 21:38:18.791877 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jl22\" (UniqueName: \"kubernetes.io/projected/650c4ac6-fc3c-4a97-871d-65c399538b17-kube-api-access-8jl22\") pod \"ironic-inspector-db-sync-87hwd\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " pod="openstack/ironic-inspector-db-sync-87hwd" Feb 16 21:38:18.792791 master-0 kubenswrapper[38936]: I0216 21:38:18.792169 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650c4ac6-fc3c-4a97-871d-65c399538b17-combined-ca-bundle\") pod \"ironic-inspector-db-sync-87hwd\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " pod="openstack/ironic-inspector-db-sync-87hwd" Feb 16 
21:38:18.792791 master-0 kubenswrapper[38936]: I0216 21:38:18.792377 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/650c4ac6-fc3c-4a97-871d-65c399538b17-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-87hwd\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " pod="openstack/ironic-inspector-db-sync-87hwd" Feb 16 21:38:18.792791 master-0 kubenswrapper[38936]: I0216 21:38:18.792597 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/650c4ac6-fc3c-4a97-871d-65c399538b17-etc-podinfo\") pod \"ironic-inspector-db-sync-87hwd\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " pod="openstack/ironic-inspector-db-sync-87hwd" Feb 16 21:38:18.795756 master-0 kubenswrapper[38936]: I0216 21:38:18.794916 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/650c4ac6-fc3c-4a97-871d-65c399538b17-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-87hwd\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " pod="openstack/ironic-inspector-db-sync-87hwd" Feb 16 21:38:18.795756 master-0 kubenswrapper[38936]: I0216 21:38:18.794957 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/650c4ac6-fc3c-4a97-871d-65c399538b17-var-lib-ironic\") pod \"ironic-inspector-db-sync-87hwd\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " pod="openstack/ironic-inspector-db-sync-87hwd" Feb 16 21:38:18.798315 master-0 kubenswrapper[38936]: I0216 21:38:18.798110 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/650c4ac6-fc3c-4a97-871d-65c399538b17-etc-podinfo\") pod \"ironic-inspector-db-sync-87hwd\" 
(UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " pod="openstack/ironic-inspector-db-sync-87hwd" Feb 16 21:38:18.799387 master-0 kubenswrapper[38936]: I0216 21:38:18.799272 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650c4ac6-fc3c-4a97-871d-65c399538b17-combined-ca-bundle\") pod \"ironic-inspector-db-sync-87hwd\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " pod="openstack/ironic-inspector-db-sync-87hwd" Feb 16 21:38:18.800798 master-0 kubenswrapper[38936]: I0216 21:38:18.799736 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/650c4ac6-fc3c-4a97-871d-65c399538b17-config\") pod \"ironic-inspector-db-sync-87hwd\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " pod="openstack/ironic-inspector-db-sync-87hwd" Feb 16 21:38:18.811484 master-0 kubenswrapper[38936]: I0216 21:38:18.808682 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650c4ac6-fc3c-4a97-871d-65c399538b17-scripts\") pod \"ironic-inspector-db-sync-87hwd\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " pod="openstack/ironic-inspector-db-sync-87hwd" Feb 16 21:38:18.888830 master-0 kubenswrapper[38936]: I0216 21:38:18.888517 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jl22\" (UniqueName: \"kubernetes.io/projected/650c4ac6-fc3c-4a97-871d-65c399538b17-kube-api-access-8jl22\") pod \"ironic-inspector-db-sync-87hwd\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " pod="openstack/ironic-inspector-db-sync-87hwd" Feb 16 21:38:18.947699 master-0 kubenswrapper[38936]: I0216 21:38:18.943353 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-sync-87hwd"
Feb 16 21:38:19.024099 master-0 kubenswrapper[38936]: I0216 21:38:19.021473 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1d7ec-default-external-api-0"]
Feb 16 21:38:19.024099 master-0 kubenswrapper[38936]: I0216 21:38:19.021760 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-1d7ec-default-external-api-0" podUID="8318eb20-824e-49c4-87b3-36784a1fc4db" containerName="glance-log" containerID="cri-o://6f4e8719f5527ad2ee86d1f241ab0cc69e1583c0ea0856d330823fb8bceaa9db" gracePeriod=30
Feb 16 21:38:19.024099 master-0 kubenswrapper[38936]: I0216 21:38:19.021956 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-1d7ec-default-external-api-0" podUID="8318eb20-824e-49c4-87b3-36784a1fc4db" containerName="glance-httpd" containerID="cri-o://157e64f8cf2685cad3ffbb0a0891c2fa817cc2ba34fb2b694cca8cb0f39044c0" gracePeriod=30
Feb 16 21:38:19.248782 master-0 kubenswrapper[38936]: I0216 21:38:19.247042 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-57f476567b-fwqws"
Feb 16 21:38:19.298171 master-0 kubenswrapper[38936]: I0216 21:38:19.298059 38936 generic.go:334] "Generic (PLEG): container finished" podID="8318eb20-824e-49c4-87b3-36784a1fc4db" containerID="6f4e8719f5527ad2ee86d1f241ab0cc69e1583c0ea0856d330823fb8bceaa9db" exitCode=143
Feb 16 21:38:19.302711 master-0 kubenswrapper[38936]: I0216 21:38:19.299292 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d7ec-default-external-api-0" event={"ID":"8318eb20-824e-49c4-87b3-36784a1fc4db","Type":"ContainerDied","Data":"6f4e8719f5527ad2ee86d1f241ab0cc69e1583c0ea0856d330823fb8bceaa9db"}
Feb 16 21:38:19.919638 master-0 kubenswrapper[38936]: I0216 21:38:19.919202 38936 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-85df85647b-4lmvj"
Feb 16 21:38:19.920657 master-0 kubenswrapper[38936]: I0216 21:38:19.920389 38936 scope.go:117] "RemoveContainer" containerID="3f333dfc41a573efffb2f25b161cfbaac916d708857817df4fab44fd0d1e6f6c"
Feb 16 21:38:19.920774 master-0 kubenswrapper[38936]: E0216 21:38:19.920667 38936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-85df85647b-4lmvj_openstack(28720828-7566-4fb7-a4ff-ac6e548d9408)\"" pod="openstack/ironic-85df85647b-4lmvj" podUID="28720828-7566-4fb7-a4ff-ac6e548d9408"
Feb 16 21:38:20.233095 master-0 kubenswrapper[38936]: I0216 21:38:20.231511 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-6d6dfb9f68-58l7d"
Feb 16 21:38:20.344578 master-0 kubenswrapper[38936]: I0216 21:38:20.344203 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-fntqx"]
Feb 16 21:38:20.368134 master-0 kubenswrapper[38936]: I0216 21:38:20.366855 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-85df85647b-4lmvj"]
Feb 16 21:38:20.368372 master-0 kubenswrapper[38936]: I0216 21:38:20.368322 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-fntqx"]
Feb 16 21:38:20.368372 master-0 kubenswrapper[38936]: I0216 21:38:20.367049 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fntqx"
Feb 16 21:38:20.409834 master-0 kubenswrapper[38936]: I0216 21:38:20.408201 38936 generic.go:334] "Generic (PLEG): container finished" podID="cfcdcd18-dd01-45c8-afd4-ec72a986d582" containerID="fb05b673f156b593e3b1b5a2aae5b398fe33f469a0be1c0338b6e9e0eaa7f21f" exitCode=1
Feb 16 21:38:20.409834 master-0 kubenswrapper[38936]: I0216 21:38:20.408475 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-85df85647b-4lmvj" podUID="28720828-7566-4fb7-a4ff-ac6e548d9408" containerName="ironic-api-log" containerID="cri-o://e2b479992138ac47d435ec9a072aa32d0628cce1504f072683f7e43f4379f95a" gracePeriod=60
Feb 16 21:38:20.409834 master-0 kubenswrapper[38936]: I0216 21:38:20.408919 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-57f476567b-fwqws" event={"ID":"cfcdcd18-dd01-45c8-afd4-ec72a986d582","Type":"ContainerDied","Data":"fb05b673f156b593e3b1b5a2aae5b398fe33f469a0be1c0338b6e9e0eaa7f21f"}
Feb 16 21:38:20.409834 master-0 kubenswrapper[38936]: I0216 21:38:20.408969 38936 scope.go:117] "RemoveContainer" containerID="a4c91cb0a4d6848ff3de0abee9bdc57799d53d94f3e0f1ce1a072b2ecc0d134e"
Feb 16 21:38:20.414616 master-0 kubenswrapper[38936]: I0216 21:38:20.413496 38936 scope.go:117] "RemoveContainer" containerID="fb05b673f156b593e3b1b5a2aae5b398fe33f469a0be1c0338b6e9e0eaa7f21f"
Feb 16 21:38:20.414616 master-0 kubenswrapper[38936]: E0216 21:38:20.414177 38936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-57f476567b-fwqws_openstack(cfcdcd18-dd01-45c8-afd4-ec72a986d582)\"" pod="openstack/ironic-neutron-agent-57f476567b-fwqws" podUID="cfcdcd18-dd01-45c8-afd4-ec72a986d582"
Feb 16 21:38:20.519231 master-0 kubenswrapper[38936]: I0216 21:38:20.517698 38936
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mswds\" (UniqueName: \"kubernetes.io/projected/aab575a9-488c-44b1-a7e0-3025fa81207e-kube-api-access-mswds\") pod \"nova-api-db-create-fntqx\" (UID: \"aab575a9-488c-44b1-a7e0-3025fa81207e\") " pod="openstack/nova-api-db-create-fntqx"
Feb 16 21:38:20.519231 master-0 kubenswrapper[38936]: I0216 21:38:20.517864 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aab575a9-488c-44b1-a7e0-3025fa81207e-operator-scripts\") pod \"nova-api-db-create-fntqx\" (UID: \"aab575a9-488c-44b1-a7e0-3025fa81207e\") " pod="openstack/nova-api-db-create-fntqx"
Feb 16 21:38:20.567364 master-0 kubenswrapper[38936]: I0216 21:38:20.567266 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-jb9gg"]
Feb 16 21:38:20.570965 master-0 kubenswrapper[38936]: I0216 21:38:20.570884 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jb9gg"
Feb 16 21:38:20.584039 master-0 kubenswrapper[38936]: I0216 21:38:20.583970 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jb9gg"]
Feb 16 21:38:20.634433 master-0 kubenswrapper[38936]: I0216 21:38:20.634318 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mswds\" (UniqueName: \"kubernetes.io/projected/aab575a9-488c-44b1-a7e0-3025fa81207e-kube-api-access-mswds\") pod \"nova-api-db-create-fntqx\" (UID: \"aab575a9-488c-44b1-a7e0-3025fa81207e\") " pod="openstack/nova-api-db-create-fntqx"
Feb 16 21:38:20.634643 master-0 kubenswrapper[38936]: I0216 21:38:20.634468 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aab575a9-488c-44b1-a7e0-3025fa81207e-operator-scripts\") pod \"nova-api-db-create-fntqx\" (UID: \"aab575a9-488c-44b1-a7e0-3025fa81207e\") " pod="openstack/nova-api-db-create-fntqx"
Feb 16 21:38:20.634643 master-0 kubenswrapper[38936]: I0216 21:38:20.634525 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq2bg\" (UniqueName: \"kubernetes.io/projected/a7f3ca2c-2ba6-4148-a4e8-843943926a5c-kube-api-access-nq2bg\") pod \"nova-cell0-db-create-jb9gg\" (UID: \"a7f3ca2c-2ba6-4148-a4e8-843943926a5c\") " pod="openstack/nova-cell0-db-create-jb9gg"
Feb 16 21:38:20.634793 master-0 kubenswrapper[38936]: I0216 21:38:20.634661 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7f3ca2c-2ba6-4148-a4e8-843943926a5c-operator-scripts\") pod \"nova-cell0-db-create-jb9gg\" (UID: \"a7f3ca2c-2ba6-4148-a4e8-843943926a5c\") " pod="openstack/nova-cell0-db-create-jb9gg"
Feb 16 21:38:20.635452 master-0 kubenswrapper[38936]: I0216 21:38:20.635403 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aab575a9-488c-44b1-a7e0-3025fa81207e-operator-scripts\") pod \"nova-api-db-create-fntqx\" (UID: \"aab575a9-488c-44b1-a7e0-3025fa81207e\") " pod="openstack/nova-api-db-create-fntqx"
Feb 16 21:38:20.739149 master-0 kubenswrapper[38936]: I0216 21:38:20.738475 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7f3ca2c-2ba6-4148-a4e8-843943926a5c-operator-scripts\") pod \"nova-cell0-db-create-jb9gg\" (UID: \"a7f3ca2c-2ba6-4148-a4e8-843943926a5c\") " pod="openstack/nova-cell0-db-create-jb9gg"
Feb 16 21:38:20.739149 master-0 kubenswrapper[38936]: I0216 21:38:20.738802 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq2bg\" (UniqueName: \"kubernetes.io/projected/a7f3ca2c-2ba6-4148-a4e8-843943926a5c-kube-api-access-nq2bg\") pod \"nova-cell0-db-create-jb9gg\" (UID: \"a7f3ca2c-2ba6-4148-a4e8-843943926a5c\") " pod="openstack/nova-cell0-db-create-jb9gg"
Feb 16 21:38:20.740188 master-0 kubenswrapper[38936]: I0216 21:38:20.739956 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7f3ca2c-2ba6-4148-a4e8-843943926a5c-operator-scripts\") pod \"nova-cell0-db-create-jb9gg\" (UID: \"a7f3ca2c-2ba6-4148-a4e8-843943926a5c\") " pod="openstack/nova-cell0-db-create-jb9gg"
Feb 16 21:38:20.761115 master-0 kubenswrapper[38936]: I0216 21:38:20.761072 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mswds\" (UniqueName: \"kubernetes.io/projected/aab575a9-488c-44b1-a7e0-3025fa81207e-kube-api-access-mswds\") pod \"nova-api-db-create-fntqx\" (UID: \"aab575a9-488c-44b1-a7e0-3025fa81207e\") " pod="openstack/nova-api-db-create-fntqx"
Feb 16 21:38:20.773007 master-0 kubenswrapper[38936]: I0216 21:38:20.772941 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq2bg\" (UniqueName: \"kubernetes.io/projected/a7f3ca2c-2ba6-4148-a4e8-843943926a5c-kube-api-access-nq2bg\") pod \"nova-cell0-db-create-jb9gg\" (UID: \"a7f3ca2c-2ba6-4148-a4e8-843943926a5c\") " pod="openstack/nova-cell0-db-create-jb9gg"
Feb 16 21:38:20.776227 master-0 kubenswrapper[38936]: I0216 21:38:20.776181 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fntqx"
Feb 16 21:38:20.792141 master-0 kubenswrapper[38936]: I0216 21:38:20.791740 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-z4z2j"]
Feb 16 21:38:20.793828 master-0 kubenswrapper[38936]: I0216 21:38:20.793800 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-z4z2j"
Feb 16 21:38:20.813911 master-0 kubenswrapper[38936]: I0216 21:38:20.813822 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-e2a2-account-create-update-t5ggp"]
Feb 16 21:38:20.817193 master-0 kubenswrapper[38936]: I0216 21:38:20.816691 38936 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-e2a2-account-create-update-t5ggp"
Feb 16 21:38:20.819042 master-0 kubenswrapper[38936]: I0216 21:38:20.819000 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 16 21:38:20.848574 master-0 kubenswrapper[38936]: I0216 21:38:20.842485 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtj56\" (UniqueName: \"kubernetes.io/projected/09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f-kube-api-access-mtj56\") pod \"nova-cell1-db-create-z4z2j\" (UID: \"09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f\") " pod="openstack/nova-cell1-db-create-z4z2j"
Feb 16 21:38:20.848574 master-0 kubenswrapper[38936]: I0216 21:38:20.842623 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f-operator-scripts\") pod \"nova-cell1-db-create-z4z2j\" (UID: \"09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f\") " pod="openstack/nova-cell1-db-create-z4z2j"
Feb 16 21:38:20.848574 master-0 kubenswrapper[38936]: I0216 21:38:20.842702 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64fgj\" (UniqueName: \"kubernetes.io/projected/6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a-kube-api-access-64fgj\") pod \"nova-api-e2a2-account-create-update-t5ggp\" (UID: \"6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a\") " pod="openstack/nova-api-e2a2-account-create-update-t5ggp"
Feb 16 21:38:20.848574 master-0 kubenswrapper[38936]: I0216 21:38:20.842772 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a-operator-scripts\") pod \"nova-api-e2a2-account-create-update-t5ggp\" (UID: \"6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a\") " pod="openstack/nova-api-e2a2-account-create-update-t5ggp"
Feb 16 21:38:20.877977 master-0 kubenswrapper[38936]: I0216 21:38:20.877846 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-z4z2j"]
Feb 16 21:38:20.901768 master-0 kubenswrapper[38936]: I0216 21:38:20.897322 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e2a2-account-create-update-t5ggp"]
Feb 16 21:38:20.935738 master-0 kubenswrapper[38936]: I0216 21:38:20.935392 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jb9gg"
Feb 16 21:38:20.975706 master-0 kubenswrapper[38936]: I0216 21:38:20.972562 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64fgj\" (UniqueName: \"kubernetes.io/projected/6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a-kube-api-access-64fgj\") pod \"nova-api-e2a2-account-create-update-t5ggp\" (UID: \"6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a\") " pod="openstack/nova-api-e2a2-account-create-update-t5ggp"
Feb 16 21:38:20.975706 master-0 kubenswrapper[38936]: I0216 21:38:20.972788 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a-operator-scripts\") pod \"nova-api-e2a2-account-create-update-t5ggp\" (UID: \"6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a\") " pod="openstack/nova-api-e2a2-account-create-update-t5ggp"
Feb 16 21:38:20.975706 master-0 kubenswrapper[38936]: I0216 21:38:20.973365 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtj56\" (UniqueName: \"kubernetes.io/projected/09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f-kube-api-access-mtj56\") pod \"nova-cell1-db-create-z4z2j\" (UID: \"09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f\") " pod="openstack/nova-cell1-db-create-z4z2j"
Feb 16 21:38:20.975706 master-0 kubenswrapper[38936]: I0216 21:38:20.973524 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f-operator-scripts\") pod \"nova-cell1-db-create-z4z2j\" (UID: \"09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f\") " pod="openstack/nova-cell1-db-create-z4z2j"
Feb 16 21:38:20.980711 master-0 kubenswrapper[38936]: I0216 21:38:20.979791 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a-operator-scripts\") pod \"nova-api-e2a2-account-create-update-t5ggp\" (UID: \"6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a\") " pod="openstack/nova-api-e2a2-account-create-update-t5ggp"
Feb 16 21:38:20.982817 master-0 kubenswrapper[38936]: I0216 21:38:20.981560 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f-operator-scripts\") pod \"nova-cell1-db-create-z4z2j\" (UID: \"09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f\") " pod="openstack/nova-cell1-db-create-z4z2j"
Feb 16 21:38:21.032708 master-0 kubenswrapper[38936]: I0216 21:38:21.023738 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-b871-account-create-update-96b65"]
Feb 16 21:38:21.032708 master-0 kubenswrapper[38936]: I0216 21:38:21.025476 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b871-account-create-update-96b65"
Feb 16 21:38:21.048934 master-0 kubenswrapper[38936]: I0216 21:38:21.036273 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 16 21:38:21.067702 master-0 kubenswrapper[38936]: I0216 21:38:21.057835 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64fgj\" (UniqueName: \"kubernetes.io/projected/6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a-kube-api-access-64fgj\") pod \"nova-api-e2a2-account-create-update-t5ggp\" (UID: \"6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a\") " pod="openstack/nova-api-e2a2-account-create-update-t5ggp"
Feb 16 21:38:21.079668 master-0 kubenswrapper[38936]: I0216 21:38:21.076439 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtj56\" (UniqueName: \"kubernetes.io/projected/09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f-kube-api-access-mtj56\") pod \"nova-cell1-db-create-z4z2j\" (UID: \"09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f\") " pod="openstack/nova-cell1-db-create-z4z2j"
Feb 16 21:38:21.111709 master-0 kubenswrapper[38936]: I0216 21:38:21.102020 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b871-account-create-update-96b65"]
Feb 16 21:38:21.194705 master-0 kubenswrapper[38936]: I0216 21:38:21.188292 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctmgd\" (UniqueName: \"kubernetes.io/projected/70c58d2c-4204-4d3b-9d2a-fdbf35ad8029-kube-api-access-ctmgd\") pod \"nova-cell0-b871-account-create-update-96b65\" (UID: \"70c58d2c-4204-4d3b-9d2a-fdbf35ad8029\") " pod="openstack/nova-cell0-b871-account-create-update-96b65"
Feb 16 21:38:21.194705 master-0 kubenswrapper[38936]: I0216 21:38:21.188576 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName:
\"kubernetes.io/configmap/70c58d2c-4204-4d3b-9d2a-fdbf35ad8029-operator-scripts\") pod \"nova-cell0-b871-account-create-update-96b65\" (UID: \"70c58d2c-4204-4d3b-9d2a-fdbf35ad8029\") " pod="openstack/nova-cell0-b871-account-create-update-96b65"
Feb 16 21:38:21.213476 master-0 kubenswrapper[38936]: I0216 21:38:21.211798 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-ded7-account-create-update-dv4vx"]
Feb 16 21:38:21.217701 master-0 kubenswrapper[38936]: I0216 21:38:21.213880 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ded7-account-create-update-dv4vx"
Feb 16 21:38:21.218728 master-0 kubenswrapper[38936]: I0216 21:38:21.217882 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 16 21:38:21.228422 master-0 kubenswrapper[38936]: I0216 21:38:21.226470 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-z4z2j"
Feb 16 21:38:21.242928 master-0 kubenswrapper[38936]: I0216 21:38:21.242858 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ded7-account-create-update-dv4vx"]
Feb 16 21:38:21.243422 master-0 kubenswrapper[38936]: I0216 21:38:21.243378 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e2a2-account-create-update-t5ggp"
Feb 16 21:38:21.292952 master-0 kubenswrapper[38936]: I0216 21:38:21.291247 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpcc6\" (UniqueName: \"kubernetes.io/projected/0db6a508-ec90-49da-867e-ada0192b7b35-kube-api-access-fpcc6\") pod \"nova-cell1-ded7-account-create-update-dv4vx\" (UID: \"0db6a508-ec90-49da-867e-ada0192b7b35\") " pod="openstack/nova-cell1-ded7-account-create-update-dv4vx"
Feb 16 21:38:21.292952 master-0 kubenswrapper[38936]: I0216 21:38:21.291442 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70c58d2c-4204-4d3b-9d2a-fdbf35ad8029-operator-scripts\") pod \"nova-cell0-b871-account-create-update-96b65\" (UID: \"70c58d2c-4204-4d3b-9d2a-fdbf35ad8029\") " pod="openstack/nova-cell0-b871-account-create-update-96b65"
Feb 16 21:38:21.292952 master-0 kubenswrapper[38936]: I0216 21:38:21.291660 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctmgd\" (UniqueName: \"kubernetes.io/projected/70c58d2c-4204-4d3b-9d2a-fdbf35ad8029-kube-api-access-ctmgd\") pod \"nova-cell0-b871-account-create-update-96b65\" (UID: \"70c58d2c-4204-4d3b-9d2a-fdbf35ad8029\") " pod="openstack/nova-cell0-b871-account-create-update-96b65"
Feb 16 21:38:21.292952 master-0 kubenswrapper[38936]: I0216 21:38:21.291712 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0db6a508-ec90-49da-867e-ada0192b7b35-operator-scripts\") pod \"nova-cell1-ded7-account-create-update-dv4vx\" (UID: \"0db6a508-ec90-49da-867e-ada0192b7b35\") " pod="openstack/nova-cell1-ded7-account-create-update-dv4vx"
Feb 16 21:38:21.297769 master-0 kubenswrapper[38936]: I0216 21:38:21.297720 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70c58d2c-4204-4d3b-9d2a-fdbf35ad8029-operator-scripts\") pod \"nova-cell0-b871-account-create-update-96b65\" (UID: \"70c58d2c-4204-4d3b-9d2a-fdbf35ad8029\") " pod="openstack/nova-cell0-b871-account-create-update-96b65"
Feb 16 21:38:21.315850 master-0 kubenswrapper[38936]: I0216 21:38:21.315803 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctmgd\" (UniqueName: \"kubernetes.io/projected/70c58d2c-4204-4d3b-9d2a-fdbf35ad8029-kube-api-access-ctmgd\") pod \"nova-cell0-b871-account-create-update-96b65\" (UID: \"70c58d2c-4204-4d3b-9d2a-fdbf35ad8029\") " pod="openstack/nova-cell0-b871-account-create-update-96b65"
Feb 16 21:38:21.396711 master-0 kubenswrapper[38936]: I0216 21:38:21.396495 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0db6a508-ec90-49da-867e-ada0192b7b35-operator-scripts\") pod \"nova-cell1-ded7-account-create-update-dv4vx\" (UID: \"0db6a508-ec90-49da-867e-ada0192b7b35\") " pod="openstack/nova-cell1-ded7-account-create-update-dv4vx"
Feb 16 21:38:21.397181 master-0 kubenswrapper[38936]: I0216 21:38:21.396857 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpcc6\" (UniqueName: \"kubernetes.io/projected/0db6a508-ec90-49da-867e-ada0192b7b35-kube-api-access-fpcc6\") pod \"nova-cell1-ded7-account-create-update-dv4vx\" (UID: \"0db6a508-ec90-49da-867e-ada0192b7b35\") " pod="openstack/nova-cell1-ded7-account-create-update-dv4vx"
Feb 16 21:38:21.397996 master-0 kubenswrapper[38936]: I0216 21:38:21.397959 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0db6a508-ec90-49da-867e-ada0192b7b35-operator-scripts\") pod \"nova-cell1-ded7-account-create-update-dv4vx\" (UID: \"0db6a508-ec90-49da-867e-ada0192b7b35\") " pod="openstack/nova-cell1-ded7-account-create-update-dv4vx"
Feb 16 21:38:21.421347 master-0 kubenswrapper[38936]: I0216 21:38:21.421291 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpcc6\" (UniqueName: \"kubernetes.io/projected/0db6a508-ec90-49da-867e-ada0192b7b35-kube-api-access-fpcc6\") pod \"nova-cell1-ded7-account-create-update-dv4vx\" (UID: \"0db6a508-ec90-49da-867e-ada0192b7b35\") " pod="openstack/nova-cell1-ded7-account-create-update-dv4vx"
Feb 16 21:38:21.440716 master-0 kubenswrapper[38936]: I0216 21:38:21.440606 38936 generic.go:334] "Generic (PLEG): container finished" podID="28720828-7566-4fb7-a4ff-ac6e548d9408" containerID="e2b479992138ac47d435ec9a072aa32d0628cce1504f072683f7e43f4379f95a" exitCode=143
Feb 16 21:38:21.440985 master-0 kubenswrapper[38936]: I0216 21:38:21.440742 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-85df85647b-4lmvj" event={"ID":"28720828-7566-4fb7-a4ff-ac6e548d9408","Type":"ContainerDied","Data":"e2b479992138ac47d435ec9a072aa32d0628cce1504f072683f7e43f4379f95a"}
Feb 16 21:38:21.563915 master-0 kubenswrapper[38936]: I0216 21:38:21.563729 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b871-account-create-update-96b65"
Feb 16 21:38:21.584330 master-0 kubenswrapper[38936]: I0216 21:38:21.584231 38936 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell1-ded7-account-create-update-dv4vx"
Feb 16 21:38:22.468317 master-0 kubenswrapper[38936]: I0216 21:38:22.468186 38936 generic.go:334] "Generic (PLEG): container finished" podID="8318eb20-824e-49c4-87b3-36784a1fc4db" containerID="157e64f8cf2685cad3ffbb0a0891c2fa817cc2ba34fb2b694cca8cb0f39044c0" exitCode=0
Feb 16 21:38:22.468995 master-0 kubenswrapper[38936]: I0216 21:38:22.468321 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d7ec-default-external-api-0" event={"ID":"8318eb20-824e-49c4-87b3-36784a1fc4db","Type":"ContainerDied","Data":"157e64f8cf2685cad3ffbb0a0891c2fa817cc2ba34fb2b694cca8cb0f39044c0"}
Feb 16 21:38:22.661519 master-0 kubenswrapper[38936]: I0216 21:38:22.661431 38936 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-1d7ec-default-external-api-0" podUID="8318eb20-824e-49c4-87b3-36784a1fc4db" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.128.0.215:9292/healthcheck\": dial tcp 10.128.0.215:9292: connect: connection refused"
Feb 16 21:38:22.662436 master-0 kubenswrapper[38936]: I0216 21:38:22.662334 38936 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-1d7ec-default-external-api-0" podUID="8318eb20-824e-49c4-87b3-36784a1fc4db" containerName="glance-log" probeResult="failure" output="Get \"https://10.128.0.215:9292/healthcheck\": dial tcp 10.128.0.215:9292: connect: connection refused"
Feb 16 21:38:24.154739 master-0 kubenswrapper[38936]: I0216 21:38:24.152445 38936 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-57f476567b-fwqws"
Feb 16 21:38:24.154739 master-0 kubenswrapper[38936]: I0216 21:38:24.153286 38936 scope.go:117] "RemoveContainer" containerID="fb05b673f156b593e3b1b5a2aae5b398fe33f469a0be1c0338b6e9e0eaa7f21f"
Feb 16 21:38:24.154739 master-0 kubenswrapper[38936]: E0216 21:38:24.153609 38936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-57f476567b-fwqws_openstack(cfcdcd18-dd01-45c8-afd4-ec72a986d582)\"" pod="openstack/ironic-neutron-agent-57f476567b-fwqws" podUID="cfcdcd18-dd01-45c8-afd4-ec72a986d582"
Feb 16 21:38:24.154739 master-0 kubenswrapper[38936]: I0216 21:38:24.154057 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-57f476567b-fwqws"
Feb 16 21:38:24.355688 master-0 kubenswrapper[38936]: I0216 21:38:24.355514 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-64949f9d84-p7hqz"
Feb 16 21:38:24.503502 master-0 kubenswrapper[38936]: I0216 21:38:24.503440 38936 scope.go:117] "RemoveContainer" containerID="fb05b673f156b593e3b1b5a2aae5b398fe33f469a0be1c0338b6e9e0eaa7f21f"
Feb 16 21:38:24.504362 master-0 kubenswrapper[38936]: E0216 21:38:24.503920 38936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-57f476567b-fwqws_openstack(cfcdcd18-dd01-45c8-afd4-ec72a986d582)\"" pod="openstack/ironic-neutron-agent-57f476567b-fwqws" podUID="cfcdcd18-dd01-45c8-afd4-ec72a986d582"
Feb 16 21:38:26.469519 master-0 kubenswrapper[38936]: I0216 21:38:26.468945 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7fd65686d6-7ht5b"
Feb 16 21:38:26.474919 master-0 kubenswrapper[38936]: I0216 21:38:26.470834 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7fd65686d6-7ht5b"
Feb 16 21:38:27.560624 master-0 kubenswrapper[38936]: I0216 21:38:27.560503 38936 generic.go:334] "Generic (PLEG): container finished" podID="185cbfbd-402e-4012-9c97-0a8f3a579e74" containerID="4142046c75ddc9a8651c805744e802ebb8644b2edddd407e01c1c47a8f65783a" exitCode=137
Feb 16 21:38:27.560624 master-0 kubenswrapper[38936]: I0216 21:38:27.560595 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-api-0" event={"ID":"185cbfbd-402e-4012-9c97-0a8f3a579e74","Type":"ContainerDied","Data":"4142046c75ddc9a8651c805744e802ebb8644b2edddd407e01c1c47a8f65783a"}
Feb 16 21:38:27.704398 master-0 kubenswrapper[38936]: I0216 21:38:27.704217 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1d7ec-default-internal-api-0"]
Feb 16 21:38:27.704623 master-0 kubenswrapper[38936]: I0216 21:38:27.704530 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-1d7ec-default-internal-api-0" podUID="16c40a4d-e01e-40ac-bd7e-c7056d2392f4" containerName="glance-log" containerID="cri-o://39234f0cc3943afbdb3f4dfd525bd2d427fe5085c005694387daf45e1d641373" gracePeriod=30
Feb 16 21:38:27.704925 master-0 kubenswrapper[38936]: I0216 21:38:27.704871 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-1d7ec-default-internal-api-0" podUID="16c40a4d-e01e-40ac-bd7e-c7056d2392f4" containerName="glance-httpd" containerID="cri-o://06bf1f0bf2a217af9bd9e3932057f9c868db7c343ac1afea3392c2c3484b6523" gracePeriod=30
Feb 16 21:38:27.931324 master-0 kubenswrapper[38936]: I0216 21:38:27.931250 38936 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-9c692-api-0" podUID="185cbfbd-402e-4012-9c97-0a8f3a579e74" containerName="cinder-api" probeResult="failure" output="Get \"http://10.128.0.225:8776/healthcheck\": dial tcp 10.128.0.225:8776: connect: connection refused"
Feb 16 21:38:28.586034 master-0 kubenswrapper[38936]: I0216 21:38:28.585303 38936 generic.go:334] "Generic (PLEG): container finished" podID="16c40a4d-e01e-40ac-bd7e-c7056d2392f4" containerID="39234f0cc3943afbdb3f4dfd525bd2d427fe5085c005694387daf45e1d641373" exitCode=143
Feb 16 21:38:28.586034 master-0 kubenswrapper[38936]: I0216 21:38:28.585397 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d7ec-default-internal-api-0" event={"ID":"16c40a4d-e01e-40ac-bd7e-c7056d2392f4","Type":"ContainerDied","Data":"39234f0cc3943afbdb3f4dfd525bd2d427fe5085c005694387daf45e1d641373"}
Feb 16 21:38:29.149168 master-0 kubenswrapper[38936]: I0216 21:38:29.148982 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-85df85647b-4lmvj"
Feb 16 21:38:29.265915 master-0 kubenswrapper[38936]: I0216 21:38:29.265038 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-64f58d4d57-rmp7g"
Feb 16 21:38:29.288402 master-0 kubenswrapper[38936]: I0216 21:38:29.288351 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/28720828-7566-4fb7-a4ff-ac6e548d9408-etc-podinfo\") pod \"28720828-7566-4fb7-a4ff-ac6e548d9408\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") "
Feb 16 21:38:29.288686 master-0 kubenswrapper[38936]: I0216 21:38:29.288431 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm5k7\" (UniqueName: \"kubernetes.io/projected/28720828-7566-4fb7-a4ff-ac6e548d9408-kube-api-access-sm5k7\") pod \"28720828-7566-4fb7-a4ff-ac6e548d9408\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") "
Feb 16 21:38:29.288686 master-0 kubenswrapper[38936]: I0216 21:38:29.288465 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28720828-7566-4fb7-a4ff-ac6e548d9408-config-data-custom\") pod \"28720828-7566-4fb7-a4ff-ac6e548d9408\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") "
Feb 16 21:38:29.288686 master-0 kubenswrapper[38936]: I0216 21:38:29.288598 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28720828-7566-4fb7-a4ff-ac6e548d9408-config-data\") pod \"28720828-7566-4fb7-a4ff-ac6e548d9408\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") "
Feb 16 21:38:29.288686 master-0 kubenswrapper[38936]: I0216 21:38:29.288641 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28720828-7566-4fb7-a4ff-ac6e548d9408-logs\") pod \"28720828-7566-4fb7-a4ff-ac6e548d9408\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") "
Feb 16 21:38:29.292394 master-0 kubenswrapper[38936]: I0216 21:38:29.288716 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/28720828-7566-4fb7-a4ff-ac6e548d9408-config-data-merged\") pod \"28720828-7566-4fb7-a4ff-ac6e548d9408\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") "
Feb 16 21:38:29.292394 master-0 kubenswrapper[38936]: I0216 21:38:29.288833 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28720828-7566-4fb7-a4ff-ac6e548d9408-combined-ca-bundle\") pod \"28720828-7566-4fb7-a4ff-ac6e548d9408\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") "
Feb 16 21:38:29.292394 master-0 kubenswrapper[38936]: I0216 21:38:29.288865 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28720828-7566-4fb7-a4ff-ac6e548d9408-scripts\") pod \"28720828-7566-4fb7-a4ff-ac6e548d9408\" (UID: \"28720828-7566-4fb7-a4ff-ac6e548d9408\") "
Feb 16 21:38:29.296001 master-0 kubenswrapper[38936]: I0216 21:38:29.295957 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28720828-7566-4fb7-a4ff-ac6e548d9408-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "28720828-7566-4fb7-a4ff-ac6e548d9408" (UID: "28720828-7566-4fb7-a4ff-ac6e548d9408"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:38:29.296507 master-0 kubenswrapper[38936]: I0216 21:38:29.296490 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28720828-7566-4fb7-a4ff-ac6e548d9408-logs" (OuterVolumeSpecName: "logs") pod "28720828-7566-4fb7-a4ff-ac6e548d9408" (UID: "28720828-7566-4fb7-a4ff-ac6e548d9408"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:38:29.326685 master-0 kubenswrapper[38936]: I0216 21:38:29.313056 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28720828-7566-4fb7-a4ff-ac6e548d9408-scripts" (OuterVolumeSpecName: "scripts") pod "28720828-7566-4fb7-a4ff-ac6e548d9408" (UID: "28720828-7566-4fb7-a4ff-ac6e548d9408"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:38:29.326685 master-0 kubenswrapper[38936]: I0216 21:38:29.314126 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28720828-7566-4fb7-a4ff-ac6e548d9408-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "28720828-7566-4fb7-a4ff-ac6e548d9408" (UID: "28720828-7566-4fb7-a4ff-ac6e548d9408"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:38:29.326685 master-0 kubenswrapper[38936]: I0216 21:38:29.315060 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/28720828-7566-4fb7-a4ff-ac6e548d9408-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "28720828-7566-4fb7-a4ff-ac6e548d9408" (UID: "28720828-7566-4fb7-a4ff-ac6e548d9408"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 16 21:38:29.326685 master-0 kubenswrapper[38936]: I0216 21:38:29.315534 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28720828-7566-4fb7-a4ff-ac6e548d9408-kube-api-access-sm5k7" (OuterVolumeSpecName: "kube-api-access-sm5k7") pod "28720828-7566-4fb7-a4ff-ac6e548d9408" (UID: "28720828-7566-4fb7-a4ff-ac6e548d9408"). InnerVolumeSpecName "kube-api-access-sm5k7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:38:29.336684 master-0 kubenswrapper[38936]: I0216 21:38:29.327902 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28720828-7566-4fb7-a4ff-ac6e548d9408-config-data" (OuterVolumeSpecName: "config-data") pod "28720828-7566-4fb7-a4ff-ac6e548d9408" (UID: "28720828-7566-4fb7-a4ff-ac6e548d9408"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:38:29.393292 master-0 kubenswrapper[38936]: I0216 21:38:29.388778 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64949f9d84-p7hqz"]
Feb 16 21:38:29.393292 master-0 kubenswrapper[38936]: I0216 21:38:29.389425 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-64949f9d84-p7hqz" podUID="6d470e92-7826-4314-9ecb-7b37cd11b8e2" containerName="neutron-api" containerID="cri-o://47ea8ef4cdc91a083bbba85843b6f6710d5786053128103ed8cf484c75a6e412" gracePeriod=30
Feb 16 21:38:29.393292 master-0 kubenswrapper[38936]: I0216 21:38:29.389591 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-64949f9d84-p7hqz" podUID="6d470e92-7826-4314-9ecb-7b37cd11b8e2" containerName="neutron-httpd" containerID="cri-o://defb9a28af561a177f019316552118ccc95154f90eb18819e2620510b24eccd8" gracePeriod=30
Feb 16 21:38:29.393821 master-0 kubenswrapper[38936]: I0216 21:38:29.393709 38936 reconciler_common.go:293]
"Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/28720828-7566-4fb7-a4ff-ac6e548d9408-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:29.393821 master-0 kubenswrapper[38936]: I0216 21:38:29.393750 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm5k7\" (UniqueName: \"kubernetes.io/projected/28720828-7566-4fb7-a4ff-ac6e548d9408-kube-api-access-sm5k7\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:29.393821 master-0 kubenswrapper[38936]: I0216 21:38:29.393765 38936 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28720828-7566-4fb7-a4ff-ac6e548d9408-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:29.393821 master-0 kubenswrapper[38936]: I0216 21:38:29.393776 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28720828-7566-4fb7-a4ff-ac6e548d9408-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:29.393821 master-0 kubenswrapper[38936]: I0216 21:38:29.393788 38936 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28720828-7566-4fb7-a4ff-ac6e548d9408-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:29.393821 master-0 kubenswrapper[38936]: I0216 21:38:29.393799 38936 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/28720828-7566-4fb7-a4ff-ac6e548d9408-config-data-merged\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:29.393821 master-0 kubenswrapper[38936]: I0216 21:38:29.393810 38936 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28720828-7566-4fb7-a4ff-ac6e548d9408-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:29.451758 master-0 kubenswrapper[38936]: I0216 21:38:29.451411 38936 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28720828-7566-4fb7-a4ff-ac6e548d9408-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28720828-7566-4fb7-a4ff-ac6e548d9408" (UID: "28720828-7566-4fb7-a4ff-ac6e548d9408"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:29.485436 master-0 kubenswrapper[38936]: I0216 21:38:29.485397 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9c692-api-0" Feb 16 21:38:29.496358 master-0 kubenswrapper[38936]: I0216 21:38:29.496293 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28720828-7566-4fb7-a4ff-ac6e548d9408-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:29.614016 master-0 kubenswrapper[38936]: I0216 21:38:29.597797 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/185cbfbd-402e-4012-9c97-0a8f3a579e74-combined-ca-bundle\") pod \"185cbfbd-402e-4012-9c97-0a8f3a579e74\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " Feb 16 21:38:29.614016 master-0 kubenswrapper[38936]: I0216 21:38:29.598050 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/185cbfbd-402e-4012-9c97-0a8f3a579e74-scripts\") pod \"185cbfbd-402e-4012-9c97-0a8f3a579e74\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " Feb 16 21:38:29.614016 master-0 kubenswrapper[38936]: I0216 21:38:29.598087 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/185cbfbd-402e-4012-9c97-0a8f3a579e74-etc-machine-id\") pod \"185cbfbd-402e-4012-9c97-0a8f3a579e74\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " Feb 16 21:38:29.614016 master-0 kubenswrapper[38936]: I0216 
21:38:29.598115 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7nfw\" (UniqueName: \"kubernetes.io/projected/185cbfbd-402e-4012-9c97-0a8f3a579e74-kube-api-access-g7nfw\") pod \"185cbfbd-402e-4012-9c97-0a8f3a579e74\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " Feb 16 21:38:29.614016 master-0 kubenswrapper[38936]: I0216 21:38:29.598160 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/185cbfbd-402e-4012-9c97-0a8f3a579e74-logs\") pod \"185cbfbd-402e-4012-9c97-0a8f3a579e74\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " Feb 16 21:38:29.614016 master-0 kubenswrapper[38936]: I0216 21:38:29.598758 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/185cbfbd-402e-4012-9c97-0a8f3a579e74-config-data-custom\") pod \"185cbfbd-402e-4012-9c97-0a8f3a579e74\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " Feb 16 21:38:29.614016 master-0 kubenswrapper[38936]: I0216 21:38:29.598797 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/185cbfbd-402e-4012-9c97-0a8f3a579e74-config-data\") pod \"185cbfbd-402e-4012-9c97-0a8f3a579e74\" (UID: \"185cbfbd-402e-4012-9c97-0a8f3a579e74\") " Feb 16 21:38:29.614016 master-0 kubenswrapper[38936]: I0216 21:38:29.605534 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/185cbfbd-402e-4012-9c97-0a8f3a579e74-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "185cbfbd-402e-4012-9c97-0a8f3a579e74" (UID: "185cbfbd-402e-4012-9c97-0a8f3a579e74"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 21:38:29.627055 master-0 kubenswrapper[38936]: I0216 21:38:29.624456 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/185cbfbd-402e-4012-9c97-0a8f3a579e74-logs" (OuterVolumeSpecName: "logs") pod "185cbfbd-402e-4012-9c97-0a8f3a579e74" (UID: "185cbfbd-402e-4012-9c97-0a8f3a579e74"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:38:29.627055 master-0 kubenswrapper[38936]: I0216 21:38:29.626170 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/185cbfbd-402e-4012-9c97-0a8f3a579e74-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "185cbfbd-402e-4012-9c97-0a8f3a579e74" (UID: "185cbfbd-402e-4012-9c97-0a8f3a579e74"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:29.636310 master-0 kubenswrapper[38936]: I0216 21:38:29.636254 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-85df85647b-4lmvj" event={"ID":"28720828-7566-4fb7-a4ff-ac6e548d9408","Type":"ContainerDied","Data":"cf5de38d88f3ad6d7c59afb5c6c5fcc1bafd6fa43797f11bac7fb4e596126f68"} Feb 16 21:38:29.636473 master-0 kubenswrapper[38936]: I0216 21:38:29.636457 38936 scope.go:117] "RemoveContainer" containerID="3f333dfc41a573efffb2f25b161cfbaac916d708857817df4fab44fd0d1e6f6c" Feb 16 21:38:29.636713 master-0 kubenswrapper[38936]: I0216 21:38:29.636700 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-85df85647b-4lmvj" Feb 16 21:38:29.642535 master-0 kubenswrapper[38936]: I0216 21:38:29.642492 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/185cbfbd-402e-4012-9c97-0a8f3a579e74-scripts" (OuterVolumeSpecName: "scripts") pod "185cbfbd-402e-4012-9c97-0a8f3a579e74" (UID: "185cbfbd-402e-4012-9c97-0a8f3a579e74"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:29.645550 master-0 kubenswrapper[38936]: I0216 21:38:29.645516 38936 generic.go:334] "Generic (PLEG): container finished" podID="6d470e92-7826-4314-9ecb-7b37cd11b8e2" containerID="defb9a28af561a177f019316552118ccc95154f90eb18819e2620510b24eccd8" exitCode=0 Feb 16 21:38:29.645654 master-0 kubenswrapper[38936]: I0216 21:38:29.645600 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64949f9d84-p7hqz" event={"ID":"6d470e92-7826-4314-9ecb-7b37cd11b8e2","Type":"ContainerDied","Data":"defb9a28af561a177f019316552118ccc95154f90eb18819e2620510b24eccd8"} Feb 16 21:38:29.648649 master-0 kubenswrapper[38936]: I0216 21:38:29.647400 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/185cbfbd-402e-4012-9c97-0a8f3a579e74-kube-api-access-g7nfw" (OuterVolumeSpecName: "kube-api-access-g7nfw") pod "185cbfbd-402e-4012-9c97-0a8f3a579e74" (UID: "185cbfbd-402e-4012-9c97-0a8f3a579e74"). InnerVolumeSpecName "kube-api-access-g7nfw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:29.659812 master-0 kubenswrapper[38936]: I0216 21:38:29.659685 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-api-0" event={"ID":"185cbfbd-402e-4012-9c97-0a8f3a579e74","Type":"ContainerDied","Data":"40353b6c5450546e487346f9c132c9e9c4cf6ab9a1e9d28af68dacc99cfc106e"} Feb 16 21:38:29.659812 master-0 kubenswrapper[38936]: I0216 21:38:29.659811 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9c692-api-0" Feb 16 21:38:29.676150 master-0 kubenswrapper[38936]: I0216 21:38:29.675212 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/185cbfbd-402e-4012-9c97-0a8f3a579e74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "185cbfbd-402e-4012-9c97-0a8f3a579e74" (UID: "185cbfbd-402e-4012-9c97-0a8f3a579e74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:29.700997 master-0 kubenswrapper[38936]: I0216 21:38:29.700921 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/185cbfbd-402e-4012-9c97-0a8f3a579e74-config-data" (OuterVolumeSpecName: "config-data") pod "185cbfbd-402e-4012-9c97-0a8f3a579e74" (UID: "185cbfbd-402e-4012-9c97-0a8f3a579e74"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:29.703504 master-0 kubenswrapper[38936]: I0216 21:38:29.701858 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/185cbfbd-402e-4012-9c97-0a8f3a579e74-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:29.703504 master-0 kubenswrapper[38936]: I0216 21:38:29.701914 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/185cbfbd-402e-4012-9c97-0a8f3a579e74-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:29.703504 master-0 kubenswrapper[38936]: I0216 21:38:29.701926 38936 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/185cbfbd-402e-4012-9c97-0a8f3a579e74-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:29.703504 master-0 kubenswrapper[38936]: I0216 21:38:29.701937 38936 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/185cbfbd-402e-4012-9c97-0a8f3a579e74-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:29.703504 master-0 kubenswrapper[38936]: I0216 21:38:29.701950 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7nfw\" (UniqueName: \"kubernetes.io/projected/185cbfbd-402e-4012-9c97-0a8f3a579e74-kube-api-access-g7nfw\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:29.703504 master-0 kubenswrapper[38936]: I0216 21:38:29.701964 38936 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/185cbfbd-402e-4012-9c97-0a8f3a579e74-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:29.703504 master-0 kubenswrapper[38936]: I0216 21:38:29.701975 38936 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/185cbfbd-402e-4012-9c97-0a8f3a579e74-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:29.742835 master-0 kubenswrapper[38936]: I0216 21:38:29.717744 38936 scope.go:117] "RemoveContainer" containerID="e2b479992138ac47d435ec9a072aa32d0628cce1504f072683f7e43f4379f95a" Feb 16 21:38:29.803082 master-0 kubenswrapper[38936]: I0216 21:38:29.801946 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-85df85647b-4lmvj"] Feb 16 21:38:29.828742 master-0 kubenswrapper[38936]: I0216 21:38:29.819266 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-85df85647b-4lmvj"] Feb 16 21:38:29.872196 master-0 kubenswrapper[38936]: I0216 21:38:29.872143 38936 scope.go:117] "RemoveContainer" containerID="eb29028f3c54d2a0e8cd40193476c5d4ad54902304945a2e128e9ce200884da7" Feb 16 21:38:29.989923 master-0 kubenswrapper[38936]: I0216 21:38:29.972219 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28720828-7566-4fb7-a4ff-ac6e548d9408" path="/var/lib/kubelet/pods/28720828-7566-4fb7-a4ff-ac6e548d9408/volumes" Feb 16 21:38:30.014168 master-0 kubenswrapper[38936]: I0216 21:38:29.999415 38936 scope.go:117] "RemoveContainer" containerID="4142046c75ddc9a8651c805744e802ebb8644b2edddd407e01c1c47a8f65783a" Feb 16 21:38:30.119998 master-0 kubenswrapper[38936]: I0216 21:38:30.117914 38936 scope.go:117] "RemoveContainer" containerID="d0d0184874ecb5fa7b62272d79af51318cdad8380341f19c4a14140dfab50e9f" Feb 16 21:38:30.183732 master-0 kubenswrapper[38936]: I0216 21:38:30.183497 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-9c692-api-0"] Feb 16 21:38:30.193894 master-0 kubenswrapper[38936]: I0216 21:38:30.193225 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:30.297673 master-0 kubenswrapper[38936]: I0216 21:38:30.294076 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-9c692-api-0"] Feb 16 21:38:30.335755 master-0 kubenswrapper[38936]: I0216 21:38:30.326781 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e84d6f8d-3e6f-444e-b77b-01824a84b929\") pod \"8318eb20-824e-49c4-87b3-36784a1fc4db\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " Feb 16 21:38:30.335755 master-0 kubenswrapper[38936]: I0216 21:38:30.326894 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8318eb20-824e-49c4-87b3-36784a1fc4db-config-data\") pod \"8318eb20-824e-49c4-87b3-36784a1fc4db\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " Feb 16 21:38:30.335755 master-0 kubenswrapper[38936]: I0216 21:38:30.326934 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8318eb20-824e-49c4-87b3-36784a1fc4db-public-tls-certs\") pod \"8318eb20-824e-49c4-87b3-36784a1fc4db\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " Feb 16 21:38:30.335755 master-0 kubenswrapper[38936]: I0216 21:38:30.327072 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bbf8\" (UniqueName: \"kubernetes.io/projected/8318eb20-824e-49c4-87b3-36784a1fc4db-kube-api-access-6bbf8\") pod \"8318eb20-824e-49c4-87b3-36784a1fc4db\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " Feb 16 21:38:30.335755 master-0 kubenswrapper[38936]: I0216 21:38:30.327144 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8318eb20-824e-49c4-87b3-36784a1fc4db-httpd-run\") pod 
\"8318eb20-824e-49c4-87b3-36784a1fc4db\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " Feb 16 21:38:30.335755 master-0 kubenswrapper[38936]: I0216 21:38:30.327244 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8318eb20-824e-49c4-87b3-36784a1fc4db-logs\") pod \"8318eb20-824e-49c4-87b3-36784a1fc4db\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " Feb 16 21:38:30.335755 master-0 kubenswrapper[38936]: I0216 21:38:30.327372 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8318eb20-824e-49c4-87b3-36784a1fc4db-combined-ca-bundle\") pod \"8318eb20-824e-49c4-87b3-36784a1fc4db\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " Feb 16 21:38:30.335755 master-0 kubenswrapper[38936]: I0216 21:38:30.327430 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8318eb20-824e-49c4-87b3-36784a1fc4db-scripts\") pod \"8318eb20-824e-49c4-87b3-36784a1fc4db\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") " Feb 16 21:38:30.335755 master-0 kubenswrapper[38936]: I0216 21:38:30.328377 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8318eb20-824e-49c4-87b3-36784a1fc4db-logs" (OuterVolumeSpecName: "logs") pod "8318eb20-824e-49c4-87b3-36784a1fc4db" (UID: "8318eb20-824e-49c4-87b3-36784a1fc4db"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:38:30.335755 master-0 kubenswrapper[38936]: I0216 21:38:30.329035 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8318eb20-824e-49c4-87b3-36784a1fc4db-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8318eb20-824e-49c4-87b3-36784a1fc4db" (UID: "8318eb20-824e-49c4-87b3-36784a1fc4db"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:38:30.393890 master-0 kubenswrapper[38936]: I0216 21:38:30.345367 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-9c692-api-0"] Feb 16 21:38:30.393890 master-0 kubenswrapper[38936]: E0216 21:38:30.346115 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28720828-7566-4fb7-a4ff-ac6e548d9408" containerName="ironic-api-log" Feb 16 21:38:30.393890 master-0 kubenswrapper[38936]: I0216 21:38:30.346141 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="28720828-7566-4fb7-a4ff-ac6e548d9408" containerName="ironic-api-log" Feb 16 21:38:30.393890 master-0 kubenswrapper[38936]: E0216 21:38:30.346185 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="185cbfbd-402e-4012-9c97-0a8f3a579e74" containerName="cinder-9c692-api-log" Feb 16 21:38:30.393890 master-0 kubenswrapper[38936]: I0216 21:38:30.346194 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="185cbfbd-402e-4012-9c97-0a8f3a579e74" containerName="cinder-9c692-api-log" Feb 16 21:38:30.393890 master-0 kubenswrapper[38936]: I0216 21:38:30.351897 38936 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8318eb20-824e-49c4-87b3-36784a1fc4db-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:30.393890 master-0 kubenswrapper[38936]: I0216 21:38:30.351939 38936 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8318eb20-824e-49c4-87b3-36784a1fc4db-httpd-run\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:30.393890 master-0 kubenswrapper[38936]: I0216 21:38:30.357261 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8318eb20-824e-49c4-87b3-36784a1fc4db-kube-api-access-6bbf8" (OuterVolumeSpecName: "kube-api-access-6bbf8") pod "8318eb20-824e-49c4-87b3-36784a1fc4db" (UID: "8318eb20-824e-49c4-87b3-36784a1fc4db"). 
InnerVolumeSpecName "kube-api-access-6bbf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:30.401879 master-0 kubenswrapper[38936]: E0216 21:38:30.400520 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28720828-7566-4fb7-a4ff-ac6e548d9408" containerName="ironic-api" Feb 16 21:38:30.401879 master-0 kubenswrapper[38936]: I0216 21:38:30.400572 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="28720828-7566-4fb7-a4ff-ac6e548d9408" containerName="ironic-api" Feb 16 21:38:30.401879 master-0 kubenswrapper[38936]: E0216 21:38:30.400621 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28720828-7566-4fb7-a4ff-ac6e548d9408" containerName="init" Feb 16 21:38:30.401879 master-0 kubenswrapper[38936]: I0216 21:38:30.400628 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="28720828-7566-4fb7-a4ff-ac6e548d9408" containerName="init" Feb 16 21:38:30.401879 master-0 kubenswrapper[38936]: E0216 21:38:30.400692 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8318eb20-824e-49c4-87b3-36784a1fc4db" containerName="glance-log" Feb 16 21:38:30.401879 master-0 kubenswrapper[38936]: I0216 21:38:30.400700 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="8318eb20-824e-49c4-87b3-36784a1fc4db" containerName="glance-log" Feb 16 21:38:30.401879 master-0 kubenswrapper[38936]: E0216 21:38:30.400746 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8318eb20-824e-49c4-87b3-36784a1fc4db" containerName="glance-httpd" Feb 16 21:38:30.401879 master-0 kubenswrapper[38936]: I0216 21:38:30.400755 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="8318eb20-824e-49c4-87b3-36784a1fc4db" containerName="glance-httpd" Feb 16 21:38:30.401879 master-0 kubenswrapper[38936]: E0216 21:38:30.400766 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="185cbfbd-402e-4012-9c97-0a8f3a579e74" containerName="cinder-api" Feb 16 21:38:30.401879 master-0 
kubenswrapper[38936]: I0216 21:38:30.400773 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="185cbfbd-402e-4012-9c97-0a8f3a579e74" containerName="cinder-api" Feb 16 21:38:30.401879 master-0 kubenswrapper[38936]: I0216 21:38:30.401523 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="185cbfbd-402e-4012-9c97-0a8f3a579e74" containerName="cinder-9c692-api-log" Feb 16 21:38:30.401879 master-0 kubenswrapper[38936]: I0216 21:38:30.401563 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="28720828-7566-4fb7-a4ff-ac6e548d9408" containerName="ironic-api" Feb 16 21:38:30.401879 master-0 kubenswrapper[38936]: I0216 21:38:30.401577 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="28720828-7566-4fb7-a4ff-ac6e548d9408" containerName="ironic-api-log" Feb 16 21:38:30.401879 master-0 kubenswrapper[38936]: I0216 21:38:30.401587 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="8318eb20-824e-49c4-87b3-36784a1fc4db" containerName="glance-httpd" Feb 16 21:38:30.401879 master-0 kubenswrapper[38936]: I0216 21:38:30.401600 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="185cbfbd-402e-4012-9c97-0a8f3a579e74" containerName="cinder-api" Feb 16 21:38:30.401879 master-0 kubenswrapper[38936]: I0216 21:38:30.401611 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="28720828-7566-4fb7-a4ff-ac6e548d9408" containerName="ironic-api" Feb 16 21:38:30.401879 master-0 kubenswrapper[38936]: I0216 21:38:30.401636 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="8318eb20-824e-49c4-87b3-36784a1fc4db" containerName="glance-log" Feb 16 21:38:30.402762 master-0 kubenswrapper[38936]: E0216 21:38:30.401927 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28720828-7566-4fb7-a4ff-ac6e548d9408" containerName="ironic-api" Feb 16 21:38:30.402762 master-0 kubenswrapper[38936]: I0216 21:38:30.401944 38936 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="28720828-7566-4fb7-a4ff-ac6e548d9408" containerName="ironic-api" Feb 16 21:38:30.403854 master-0 kubenswrapper[38936]: I0216 21:38:30.403204 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.412408 master-0 kubenswrapper[38936]: I0216 21:38:30.407408 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9c692-api-0"] Feb 16 21:38:30.412408 master-0 kubenswrapper[38936]: I0216 21:38:30.409481 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-9c692-api-config-data" Feb 16 21:38:30.412408 master-0 kubenswrapper[38936]: I0216 21:38:30.410083 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 16 21:38:30.412408 master-0 kubenswrapper[38936]: I0216 21:38:30.410289 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 16 21:38:30.442605 master-0 kubenswrapper[38936]: I0216 21:38:30.435550 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e2a2-account-create-update-t5ggp"] Feb 16 21:38:30.462843 master-0 kubenswrapper[38936]: I0216 21:38:30.455580 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ded7-account-create-update-dv4vx"] Feb 16 21:38:30.462843 master-0 kubenswrapper[38936]: I0216 21:38:30.455738 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^e84d6f8d-3e6f-444e-b77b-01824a84b929" (OuterVolumeSpecName: "glance") pod "8318eb20-824e-49c4-87b3-36784a1fc4db" (UID: "8318eb20-824e-49c4-87b3-36784a1fc4db"). InnerVolumeSpecName "pvc-7f5b0490-0583-4eea-a2b2-b13dc71c83c1". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 21:38:30.483122 master-0 kubenswrapper[38936]: E0216 21:38:30.467774 38936 reconciler_common.go:156] "operationExecutor.UnmountVolume failed (controllerAttachDetachEnabled true) for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e84d6f8d-3e6f-444e-b77b-01824a84b929\") pod \"8318eb20-824e-49c4-87b3-36784a1fc4db\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") : UnmountVolume.NewUnmounter failed for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e84d6f8d-3e6f-444e-b77b-01824a84b929\") pod \"8318eb20-824e-49c4-87b3-36784a1fc4db\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/8318eb20-824e-49c4-87b3-36784a1fc4db/volumes/kubernetes.io~csi/pvc-7f5b0490-0583-4eea-a2b2-b13dc71c83c1/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/8318eb20-824e-49c4-87b3-36784a1fc4db/volumes/kubernetes.io~csi/pvc-7f5b0490-0583-4eea-a2b2-b13dc71c83c1/vol_data.json]: open /var/lib/kubelet/pods/8318eb20-824e-49c4-87b3-36784a1fc4db/volumes/kubernetes.io~csi/pvc-7f5b0490-0583-4eea-a2b2-b13dc71c83c1/vol_data.json: no such file or directory" err="UnmountVolume.NewUnmounter failed for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e84d6f8d-3e6f-444e-b77b-01824a84b929\") pod \"8318eb20-824e-49c4-87b3-36784a1fc4db\" (UID: \"8318eb20-824e-49c4-87b3-36784a1fc4db\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/8318eb20-824e-49c4-87b3-36784a1fc4db/volumes/kubernetes.io~csi/pvc-7f5b0490-0583-4eea-a2b2-b13dc71c83c1/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/8318eb20-824e-49c4-87b3-36784a1fc4db/volumes/kubernetes.io~csi/pvc-7f5b0490-0583-4eea-a2b2-b13dc71c83c1/vol_data.json]: open 
/var/lib/kubelet/pods/8318eb20-824e-49c4-87b3-36784a1fc4db/volumes/kubernetes.io~csi/pvc-7f5b0490-0583-4eea-a2b2-b13dc71c83c1/vol_data.json: no such file or directory" Feb 16 21:38:30.483122 master-0 kubenswrapper[38936]: I0216 21:38:30.469037 38936 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-7f5b0490-0583-4eea-a2b2-b13dc71c83c1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e84d6f8d-3e6f-444e-b77b-01824a84b929\") on node \"master-0\" " Feb 16 21:38:30.483122 master-0 kubenswrapper[38936]: I0216 21:38:30.469062 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bbf8\" (UniqueName: \"kubernetes.io/projected/8318eb20-824e-49c4-87b3-36784a1fc4db-kube-api-access-6bbf8\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:30.483122 master-0 kubenswrapper[38936]: I0216 21:38:30.472059 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8318eb20-824e-49c4-87b3-36784a1fc4db-scripts" (OuterVolumeSpecName: "scripts") pod "8318eb20-824e-49c4-87b3-36784a1fc4db" (UID: "8318eb20-824e-49c4-87b3-36784a1fc4db"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:30.483122 master-0 kubenswrapper[38936]: W0216 21:38:30.474613 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaab575a9_488c_44b1_a7e0_3025fa81207e.slice/crio-73e0dee932269ba122c81708626642519e0aec420a4ec435e5b178795aaa3690 WatchSource:0}: Error finding container 73e0dee932269ba122c81708626642519e0aec420a4ec435e5b178795aaa3690: Status 404 returned error can't find the container with id 73e0dee932269ba122c81708626642519e0aec420a4ec435e5b178795aaa3690 Feb 16 21:38:30.483122 master-0 kubenswrapper[38936]: I0216 21:38:30.475752 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-fntqx"] Feb 16 21:38:30.570928 master-0 kubenswrapper[38936]: I0216 21:38:30.570864 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-config-data-custom\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.571135 master-0 kubenswrapper[38936]: I0216 21:38:30.570955 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-internal-tls-certs\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.571135 master-0 kubenswrapper[38936]: I0216 21:38:30.570978 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-etc-machine-id\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" 
Feb 16 21:38:30.571135 master-0 kubenswrapper[38936]: I0216 21:38:30.571018 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-public-tls-certs\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.571135 master-0 kubenswrapper[38936]: I0216 21:38:30.571055 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-config-data\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.571135 master-0 kubenswrapper[38936]: I0216 21:38:30.571081 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-scripts\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.571135 master-0 kubenswrapper[38936]: I0216 21:38:30.571099 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-logs\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.571135 master-0 kubenswrapper[38936]: I0216 21:38:30.571074 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jb9gg"] Feb 16 21:38:30.571397 master-0 kubenswrapper[38936]: I0216 21:38:30.571229 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m2g8\" (UniqueName: 
\"kubernetes.io/projected/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-kube-api-access-4m2g8\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.571397 master-0 kubenswrapper[38936]: I0216 21:38:30.571296 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-combined-ca-bundle\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.571397 master-0 kubenswrapper[38936]: I0216 21:38:30.571375 38936 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8318eb20-824e-49c4-87b3-36784a1fc4db-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:30.667699 master-0 kubenswrapper[38936]: I0216 21:38:30.667322 38936 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 21:38:30.667699 master-0 kubenswrapper[38936]: I0216 21:38:30.667549 38936 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-7f5b0490-0583-4eea-a2b2-b13dc71c83c1" (UniqueName: "kubernetes.io/csi/topolvm.io^e84d6f8d-3e6f-444e-b77b-01824a84b929") on node "master-0" Feb 16 21:38:30.675374 master-0 kubenswrapper[38936]: I0216 21:38:30.675299 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-public-tls-certs\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.675477 master-0 kubenswrapper[38936]: I0216 21:38:30.675393 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-config-data\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.675836 master-0 kubenswrapper[38936]: I0216 21:38:30.675756 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-scripts\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.675836 master-0 kubenswrapper[38936]: I0216 21:38:30.675799 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-logs\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.675987 master-0 kubenswrapper[38936]: I0216 21:38:30.675863 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m2g8\" (UniqueName: 
\"kubernetes.io/projected/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-kube-api-access-4m2g8\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.675987 master-0 kubenswrapper[38936]: I0216 21:38:30.675957 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-combined-ca-bundle\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.677638 master-0 kubenswrapper[38936]: I0216 21:38:30.676070 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-config-data-custom\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.677638 master-0 kubenswrapper[38936]: I0216 21:38:30.676136 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-internal-tls-certs\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.677638 master-0 kubenswrapper[38936]: I0216 21:38:30.676379 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-etc-machine-id\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.677638 master-0 kubenswrapper[38936]: I0216 21:38:30.676516 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-etc-machine-id\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.677638 master-0 kubenswrapper[38936]: I0216 21:38:30.676879 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-logs\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.677638 master-0 kubenswrapper[38936]: I0216 21:38:30.677520 38936 reconciler_common.go:293] "Volume detached for volume \"pvc-7f5b0490-0583-4eea-a2b2-b13dc71c83c1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e84d6f8d-3e6f-444e-b77b-01824a84b929\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:30.683187 master-0 kubenswrapper[38936]: I0216 21:38:30.682178 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-combined-ca-bundle\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.683334 master-0 kubenswrapper[38936]: I0216 21:38:30.683290 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-public-tls-certs\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.697014 master-0 kubenswrapper[38936]: I0216 21:38:30.696239 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-config-data-custom\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 
21:38:30.697014 master-0 kubenswrapper[38936]: I0216 21:38:30.696239 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-scripts\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.697014 master-0 kubenswrapper[38936]: I0216 21:38:30.696945 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-config-data\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.698019 master-0 kubenswrapper[38936]: I0216 21:38:30.697626 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fntqx" event={"ID":"aab575a9-488c-44b1-a7e0-3025fa81207e","Type":"ContainerStarted","Data":"73e0dee932269ba122c81708626642519e0aec420a4ec435e5b178795aaa3690"} Feb 16 21:38:30.701534 master-0 kubenswrapper[38936]: I0216 21:38:30.701477 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-internal-tls-certs\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.702298 master-0 kubenswrapper[38936]: I0216 21:38:30.701857 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m2g8\" (UniqueName: \"kubernetes.io/projected/4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c-kube-api-access-4m2g8\") pod \"cinder-9c692-api-0\" (UID: \"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c\") " pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.705458 master-0 kubenswrapper[38936]: I0216 21:38:30.705406 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/8318eb20-824e-49c4-87b3-36784a1fc4db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8318eb20-824e-49c4-87b3-36784a1fc4db" (UID: "8318eb20-824e-49c4-87b3-36784a1fc4db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:30.714350 master-0 kubenswrapper[38936]: I0216 21:38:30.713427 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d7ec-default-external-api-0" event={"ID":"8318eb20-824e-49c4-87b3-36784a1fc4db","Type":"ContainerDied","Data":"8a6dccdaf65fc9444bc3a4f5eb91f94e0ad72ed6e74bba96a27684b9e6f5378f"} Feb 16 21:38:30.714350 master-0 kubenswrapper[38936]: I0216 21:38:30.713500 38936 scope.go:117] "RemoveContainer" containerID="157e64f8cf2685cad3ffbb0a0891c2fa817cc2ba34fb2b694cca8cb0f39044c0" Feb 16 21:38:30.714350 master-0 kubenswrapper[38936]: I0216 21:38:30.713680 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:30.720680 master-0 kubenswrapper[38936]: I0216 21:38:30.720569 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8318eb20-824e-49c4-87b3-36784a1fc4db-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8318eb20-824e-49c4-87b3-36784a1fc4db" (UID: "8318eb20-824e-49c4-87b3-36784a1fc4db"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:30.720885 master-0 kubenswrapper[38936]: I0216 21:38:30.720847 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e2a2-account-create-update-t5ggp" event={"ID":"6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a","Type":"ContainerStarted","Data":"be1536b5918f43f8783c5cb159d4ab51297c0d0ce3c31f2783b8d464d4b8f360"} Feb 16 21:38:30.727881 master-0 kubenswrapper[38936]: I0216 21:38:30.727831 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ded7-account-create-update-dv4vx" event={"ID":"0db6a508-ec90-49da-867e-ada0192b7b35","Type":"ContainerStarted","Data":"23ddbbd4c566c822928566c3c1a46febc06872e79a2d5ce71c1032b0a8753fa2"} Feb 16 21:38:30.746944 master-0 kubenswrapper[38936]: I0216 21:38:30.746871 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jb9gg" event={"ID":"a7f3ca2c-2ba6-4148-a4e8-843943926a5c","Type":"ContainerStarted","Data":"e9da84d3ff3dc9cee2d04d554cd5090710383361497dac4d1665fee4c92e8be7"} Feb 16 21:38:30.769727 master-0 kubenswrapper[38936]: I0216 21:38:30.769104 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.869283702 podStartE2EDuration="18.769069744s" podCreationTimestamp="2026-02-16 21:38:12 +0000 UTC" firstStartedPulling="2026-02-16 21:38:13.536815012 +0000 UTC m=+923.888818374" lastFinishedPulling="2026-02-16 21:38:29.436601054 +0000 UTC m=+939.788604416" observedRunningTime="2026-02-16 21:38:30.749562893 +0000 UTC m=+941.101566255" watchObservedRunningTime="2026-02-16 21:38:30.769069744 +0000 UTC m=+941.121073106" Feb 16 21:38:30.779523 master-0 kubenswrapper[38936]: I0216 21:38:30.777468 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9c692-api-0" Feb 16 21:38:30.782606 master-0 kubenswrapper[38936]: I0216 21:38:30.781594 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8318eb20-824e-49c4-87b3-36784a1fc4db-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:30.782606 master-0 kubenswrapper[38936]: I0216 21:38:30.781667 38936 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8318eb20-824e-49c4-87b3-36784a1fc4db-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:30.793731 master-0 kubenswrapper[38936]: I0216 21:38:30.793682 38936 scope.go:117] "RemoveContainer" containerID="6f4e8719f5527ad2ee86d1f241ab0cc69e1583c0ea0856d330823fb8bceaa9db" Feb 16 21:38:30.826789 master-0 kubenswrapper[38936]: I0216 21:38:30.824519 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8318eb20-824e-49c4-87b3-36784a1fc4db-config-data" (OuterVolumeSpecName: "config-data") pod "8318eb20-824e-49c4-87b3-36784a1fc4db" (UID: "8318eb20-824e-49c4-87b3-36784a1fc4db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:30.885113 master-0 kubenswrapper[38936]: I0216 21:38:30.885038 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8318eb20-824e-49c4-87b3-36784a1fc4db-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:30.966085 master-0 kubenswrapper[38936]: I0216 21:38:30.966011 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-z4z2j"] Feb 16 21:38:30.995488 master-0 kubenswrapper[38936]: I0216 21:38:30.993584 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b871-account-create-update-96b65"] Feb 16 21:38:31.010159 master-0 kubenswrapper[38936]: I0216 21:38:31.010086 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-87hwd"] Feb 16 21:38:31.215051 master-0 kubenswrapper[38936]: I0216 21:38:31.214775 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1d7ec-default-external-api-0"] Feb 16 21:38:31.245554 master-0 kubenswrapper[38936]: I0216 21:38:31.245493 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-1d7ec-default-external-api-0"] Feb 16 21:38:31.282822 master-0 kubenswrapper[38936]: I0216 21:38:31.282742 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1d7ec-default-external-api-0"] Feb 16 21:38:31.290048 master-0 kubenswrapper[38936]: I0216 21:38:31.289795 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:31.294329 master-0 kubenswrapper[38936]: I0216 21:38:31.294276 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 21:38:31.297684 master-0 kubenswrapper[38936]: I0216 21:38:31.296552 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-1d7ec-default-external-config-data" Feb 16 21:38:31.313334 master-0 kubenswrapper[38936]: I0216 21:38:31.312710 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1d7ec-default-external-api-0"] Feb 16 21:38:31.410387 master-0 kubenswrapper[38936]: I0216 21:38:31.404324 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f970815-5d27-4567-bd44-9d6f9cf10774-config-data\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8f970815-5d27-4567-bd44-9d6f9cf10774\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:31.410387 master-0 kubenswrapper[38936]: I0216 21:38:31.406165 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7f5b0490-0583-4eea-a2b2-b13dc71c83c1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e84d6f8d-3e6f-444e-b77b-01824a84b929\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8f970815-5d27-4567-bd44-9d6f9cf10774\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:31.410387 master-0 kubenswrapper[38936]: I0216 21:38:31.406235 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f970815-5d27-4567-bd44-9d6f9cf10774-httpd-run\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8f970815-5d27-4567-bd44-9d6f9cf10774\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:31.410387 master-0 
kubenswrapper[38936]: I0216 21:38:31.406627 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f970815-5d27-4567-bd44-9d6f9cf10774-logs\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8f970815-5d27-4567-bd44-9d6f9cf10774\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:31.410387 master-0 kubenswrapper[38936]: I0216 21:38:31.407246 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f970815-5d27-4567-bd44-9d6f9cf10774-scripts\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8f970815-5d27-4567-bd44-9d6f9cf10774\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:31.410387 master-0 kubenswrapper[38936]: I0216 21:38:31.407469 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f970815-5d27-4567-bd44-9d6f9cf10774-combined-ca-bundle\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8f970815-5d27-4567-bd44-9d6f9cf10774\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:31.410387 master-0 kubenswrapper[38936]: I0216 21:38:31.407578 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x7pk\" (UniqueName: \"kubernetes.io/projected/8f970815-5d27-4567-bd44-9d6f9cf10774-kube-api-access-4x7pk\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8f970815-5d27-4567-bd44-9d6f9cf10774\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:31.410387 master-0 kubenswrapper[38936]: I0216 21:38:31.407680 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f970815-5d27-4567-bd44-9d6f9cf10774-public-tls-certs\") pod 
\"glance-1d7ec-default-external-api-0\" (UID: \"8f970815-5d27-4567-bd44-9d6f9cf10774\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:31.475949 master-0 kubenswrapper[38936]: I0216 21:38:31.474878 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9c692-api-0"] Feb 16 21:38:31.514071 master-0 kubenswrapper[38936]: I0216 21:38:31.513404 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7f5b0490-0583-4eea-a2b2-b13dc71c83c1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e84d6f8d-3e6f-444e-b77b-01824a84b929\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8f970815-5d27-4567-bd44-9d6f9cf10774\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:31.514071 master-0 kubenswrapper[38936]: I0216 21:38:31.513469 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f970815-5d27-4567-bd44-9d6f9cf10774-httpd-run\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8f970815-5d27-4567-bd44-9d6f9cf10774\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:31.514071 master-0 kubenswrapper[38936]: I0216 21:38:31.513508 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f970815-5d27-4567-bd44-9d6f9cf10774-logs\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8f970815-5d27-4567-bd44-9d6f9cf10774\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:31.514071 master-0 kubenswrapper[38936]: I0216 21:38:31.513551 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f970815-5d27-4567-bd44-9d6f9cf10774-scripts\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8f970815-5d27-4567-bd44-9d6f9cf10774\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:31.514071 master-0 
kubenswrapper[38936]: I0216 21:38:31.513578 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f970815-5d27-4567-bd44-9d6f9cf10774-combined-ca-bundle\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8f970815-5d27-4567-bd44-9d6f9cf10774\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:31.514071 master-0 kubenswrapper[38936]: I0216 21:38:31.513604 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x7pk\" (UniqueName: \"kubernetes.io/projected/8f970815-5d27-4567-bd44-9d6f9cf10774-kube-api-access-4x7pk\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8f970815-5d27-4567-bd44-9d6f9cf10774\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:31.514071 master-0 kubenswrapper[38936]: I0216 21:38:31.513620 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f970815-5d27-4567-bd44-9d6f9cf10774-public-tls-certs\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8f970815-5d27-4567-bd44-9d6f9cf10774\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:31.514071 master-0 kubenswrapper[38936]: I0216 21:38:31.513805 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f970815-5d27-4567-bd44-9d6f9cf10774-config-data\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8f970815-5d27-4567-bd44-9d6f9cf10774\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:31.518864 master-0 kubenswrapper[38936]: I0216 21:38:31.517141 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f970815-5d27-4567-bd44-9d6f9cf10774-httpd-run\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8f970815-5d27-4567-bd44-9d6f9cf10774\") " 
pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:31.518864 master-0 kubenswrapper[38936]: I0216 21:38:31.517549 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f970815-5d27-4567-bd44-9d6f9cf10774-logs\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8f970815-5d27-4567-bd44-9d6f9cf10774\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:31.519780 master-0 kubenswrapper[38936]: I0216 21:38:31.519632 38936 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:38:31.519780 master-0 kubenswrapper[38936]: I0216 21:38:31.519747 38936 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7f5b0490-0583-4eea-a2b2-b13dc71c83c1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e84d6f8d-3e6f-444e-b77b-01824a84b929\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8f970815-5d27-4567-bd44-9d6f9cf10774\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/1fe54bb4bfd47e48e9eb50fd4126dca5c82c0ada3f6db7cb95b93ce09b27a92c/globalmount\"" pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:31.520151 master-0 kubenswrapper[38936]: I0216 21:38:31.520113 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f970815-5d27-4567-bd44-9d6f9cf10774-scripts\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8f970815-5d27-4567-bd44-9d6f9cf10774\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:31.520788 master-0 kubenswrapper[38936]: I0216 21:38:31.520763 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f970815-5d27-4567-bd44-9d6f9cf10774-config-data\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8f970815-5d27-4567-bd44-9d6f9cf10774\") " 
pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:31.525539 master-0 kubenswrapper[38936]: I0216 21:38:31.524525 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f970815-5d27-4567-bd44-9d6f9cf10774-combined-ca-bundle\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8f970815-5d27-4567-bd44-9d6f9cf10774\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:31.530222 master-0 kubenswrapper[38936]: I0216 21:38:31.530175 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f970815-5d27-4567-bd44-9d6f9cf10774-public-tls-certs\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8f970815-5d27-4567-bd44-9d6f9cf10774\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:31.538975 master-0 kubenswrapper[38936]: I0216 21:38:31.538600 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x7pk\" (UniqueName: \"kubernetes.io/projected/8f970815-5d27-4567-bd44-9d6f9cf10774-kube-api-access-4x7pk\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8f970815-5d27-4567-bd44-9d6f9cf10774\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:31.812403 master-0 kubenswrapper[38936]: I0216 21:38:31.812300 38936 generic.go:334] "Generic (PLEG): container finished" podID="aab575a9-488c-44b1-a7e0-3025fa81207e" containerID="668d2c640bf4c89454b88017e17a004ac5e14dc8cd9345121af2bde3b20fbdda" exitCode=0 Feb 16 21:38:31.814068 master-0 kubenswrapper[38936]: I0216 21:38:31.812422 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fntqx" event={"ID":"aab575a9-488c-44b1-a7e0-3025fa81207e","Type":"ContainerDied","Data":"668d2c640bf4c89454b88017e17a004ac5e14dc8cd9345121af2bde3b20fbdda"} Feb 16 21:38:31.824796 master-0 kubenswrapper[38936]: I0216 21:38:31.822766 38936 generic.go:334] "Generic 
(PLEG): container finished" podID="16c40a4d-e01e-40ac-bd7e-c7056d2392f4" containerID="06bf1f0bf2a217af9bd9e3932057f9c868db7c343ac1afea3392c2c3484b6523" exitCode=0 Feb 16 21:38:31.824796 master-0 kubenswrapper[38936]: I0216 21:38:31.822863 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d7ec-default-internal-api-0" event={"ID":"16c40a4d-e01e-40ac-bd7e-c7056d2392f4","Type":"ContainerDied","Data":"06bf1f0bf2a217af9bd9e3932057f9c868db7c343ac1afea3392c2c3484b6523"} Feb 16 21:38:31.860257 master-0 kubenswrapper[38936]: I0216 21:38:31.860173 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z4z2j" event={"ID":"09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f","Type":"ContainerStarted","Data":"1dcbb750fb4d6c61c6558f56f8b6f8fd7f2512ae5bd77d4476574a037f983813"} Feb 16 21:38:31.889804 master-0 kubenswrapper[38936]: I0216 21:38:31.886178 38936 generic.go:334] "Generic (PLEG): container finished" podID="0db6a508-ec90-49da-867e-ada0192b7b35" containerID="a22ada46eda3717e4eb1f7a11a86e0b28b36147c28ba992710d1837b15423542" exitCode=0 Feb 16 21:38:31.925965 master-0 kubenswrapper[38936]: I0216 21:38:31.925885 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="185cbfbd-402e-4012-9c97-0a8f3a579e74" path="/var/lib/kubelet/pods/185cbfbd-402e-4012-9c97-0a8f3a579e74/volumes" Feb 16 21:38:31.933084 master-0 kubenswrapper[38936]: I0216 21:38:31.933008 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8318eb20-824e-49c4-87b3-36784a1fc4db" path="/var/lib/kubelet/pods/8318eb20-824e-49c4-87b3-36784a1fc4db/volumes" Feb 16 21:38:31.934017 master-0 kubenswrapper[38936]: I0216 21:38:31.933979 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e2a2-account-create-update-t5ggp" event={"ID":"6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a","Type":"ContainerStarted","Data":"d2822b18fa4a8af4c98959626419283766110698c9eaa4a873c43153b1bdfe43"} Feb 16 21:38:31.934087 master-0 
kubenswrapper[38936]: I0216 21:38:31.934020 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ded7-account-create-update-dv4vx" event={"ID":"0db6a508-ec90-49da-867e-ada0192b7b35","Type":"ContainerDied","Data":"a22ada46eda3717e4eb1f7a11a86e0b28b36147c28ba992710d1837b15423542"} Feb 16 21:38:31.934087 master-0 kubenswrapper[38936]: I0216 21:38:31.934041 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"57c3a511-13e6-460a-9912-7b5ec3ca97fd","Type":"ContainerStarted","Data":"bdb28172acc9e237dbce4d3900373efac09fb26bbcc17da1ab46b547962c1372"} Feb 16 21:38:31.945479 master-0 kubenswrapper[38936]: I0216 21:38:31.943372 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:31.945479 master-0 kubenswrapper[38936]: I0216 21:38:31.945070 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b871-account-create-update-96b65" event={"ID":"70c58d2c-4204-4d3b-9d2a-fdbf35ad8029","Type":"ContainerStarted","Data":"3a2fc8e4f70642dca00a5f04194593b8697e1fcaad6b330abe76095d35f17f34"} Feb 16 21:38:31.975145 master-0 kubenswrapper[38936]: I0216 21:38:31.974669 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-api-0" event={"ID":"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c","Type":"ContainerStarted","Data":"6efa9ae28b319bfdd488f615be720ca0854fac84572f25d193f260e6035f43b0"} Feb 16 21:38:31.985356 master-0 kubenswrapper[38936]: I0216 21:38:31.984611 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-87hwd" event={"ID":"650c4ac6-fc3c-4a97-871d-65c399538b17","Type":"ContainerStarted","Data":"5e99ff0549d7445c0f207252f2959a456b468f3c8a96b59cb1e7c8fbb62f775e"} Feb 16 21:38:31.999869 master-0 kubenswrapper[38936]: I0216 21:38:31.999781 38936 generic.go:334] "Generic (PLEG): container finished" 
podID="a7f3ca2c-2ba6-4148-a4e8-843943926a5c" containerID="19121314053b1bf064fccd4d8d9f4cbe141490f9bc0946e122e0dafd6025290a" exitCode=0 Feb 16 21:38:31.999869 master-0 kubenswrapper[38936]: I0216 21:38:31.999883 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jb9gg" event={"ID":"a7f3ca2c-2ba6-4148-a4e8-843943926a5c","Type":"ContainerDied","Data":"19121314053b1bf064fccd4d8d9f4cbe141490f9bc0946e122e0dafd6025290a"} Feb 16 21:38:32.035628 master-0 kubenswrapper[38936]: I0216 21:38:32.034053 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-internal-tls-certs\") pod \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " Feb 16 21:38:32.035628 master-0 kubenswrapper[38936]: I0216 21:38:32.034199 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-logs\") pod \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " Feb 16 21:38:32.035628 master-0 kubenswrapper[38936]: I0216 21:38:32.034290 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-httpd-run\") pod \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " Feb 16 21:38:32.035628 master-0 kubenswrapper[38936]: I0216 21:38:32.034419 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-combined-ca-bundle\") pod \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " Feb 16 21:38:32.035628 master-0 kubenswrapper[38936]: I0216 21:38:32.034582 38936 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-config-data\") pod \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " Feb 16 21:38:32.035628 master-0 kubenswrapper[38936]: I0216 21:38:32.034913 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^229cca5d-3ad9-49c9-bdae-2f4292c3d0f6\") pod \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " Feb 16 21:38:32.035628 master-0 kubenswrapper[38936]: I0216 21:38:32.035021 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krrvp\" (UniqueName: \"kubernetes.io/projected/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-kube-api-access-krrvp\") pod \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " Feb 16 21:38:32.035628 master-0 kubenswrapper[38936]: I0216 21:38:32.035073 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-scripts\") pod \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\" (UID: \"16c40a4d-e01e-40ac-bd7e-c7056d2392f4\") " Feb 16 21:38:32.050074 master-0 kubenswrapper[38936]: I0216 21:38:32.048271 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-logs" (OuterVolumeSpecName: "logs") pod "16c40a4d-e01e-40ac-bd7e-c7056d2392f4" (UID: "16c40a4d-e01e-40ac-bd7e-c7056d2392f4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:38:32.058594 master-0 kubenswrapper[38936]: I0216 21:38:32.058451 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-scripts" (OuterVolumeSpecName: "scripts") pod "16c40a4d-e01e-40ac-bd7e-c7056d2392f4" (UID: "16c40a4d-e01e-40ac-bd7e-c7056d2392f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:32.058594 master-0 kubenswrapper[38936]: I0216 21:38:32.058493 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "16c40a4d-e01e-40ac-bd7e-c7056d2392f4" (UID: "16c40a4d-e01e-40ac-bd7e-c7056d2392f4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:38:32.067406 master-0 kubenswrapper[38936]: I0216 21:38:32.067267 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-kube-api-access-krrvp" (OuterVolumeSpecName: "kube-api-access-krrvp") pod "16c40a4d-e01e-40ac-bd7e-c7056d2392f4" (UID: "16c40a4d-e01e-40ac-bd7e-c7056d2392f4"). InnerVolumeSpecName "kube-api-access-krrvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:32.078314 master-0 kubenswrapper[38936]: I0216 21:38:32.075241 38936 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:32.078314 master-0 kubenswrapper[38936]: I0216 21:38:32.075475 38936 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:32.078314 master-0 kubenswrapper[38936]: I0216 21:38:32.075489 38936 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-httpd-run\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:32.123385 master-0 kubenswrapper[38936]: I0216 21:38:32.122931 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16c40a4d-e01e-40ac-bd7e-c7056d2392f4" (UID: "16c40a4d-e01e-40ac-bd7e-c7056d2392f4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:32.158476 master-0 kubenswrapper[38936]: I0216 21:38:32.155051 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-b871-account-create-update-96b65" podStartSLOduration=12.155030413 podStartE2EDuration="12.155030413s" podCreationTimestamp="2026-02-16 21:38:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:38:32.011130489 +0000 UTC m=+942.363133851" watchObservedRunningTime="2026-02-16 21:38:32.155030413 +0000 UTC m=+942.507033775" Feb 16 21:38:32.165607 master-0 kubenswrapper[38936]: I0216 21:38:32.161939 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "16c40a4d-e01e-40ac-bd7e-c7056d2392f4" (UID: "16c40a4d-e01e-40ac-bd7e-c7056d2392f4"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:32.184514 master-0 kubenswrapper[38936]: I0216 21:38:32.178720 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:32.184514 master-0 kubenswrapper[38936]: I0216 21:38:32.178786 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krrvp\" (UniqueName: \"kubernetes.io/projected/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-kube-api-access-krrvp\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:32.184514 master-0 kubenswrapper[38936]: I0216 21:38:32.178799 38936 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:32.196391 master-0 kubenswrapper[38936]: I0216 21:38:32.188582 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-config-data" (OuterVolumeSpecName: "config-data") pod "16c40a4d-e01e-40ac-bd7e-c7056d2392f4" (UID: "16c40a4d-e01e-40ac-bd7e-c7056d2392f4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:32.281748 master-0 kubenswrapper[38936]: I0216 21:38:32.279870 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c40a4d-e01e-40ac-bd7e-c7056d2392f4-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:32.555715 master-0 kubenswrapper[38936]: I0216 21:38:32.555629 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^229cca5d-3ad9-49c9-bdae-2f4292c3d0f6" (OuterVolumeSpecName: "glance") pod "16c40a4d-e01e-40ac-bd7e-c7056d2392f4" (UID: "16c40a4d-e01e-40ac-bd7e-c7056d2392f4"). InnerVolumeSpecName "pvc-dc1039ee-37f6-4d30-bd0b-a1a70b8748c3". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 21:38:32.588595 master-0 kubenswrapper[38936]: I0216 21:38:32.588508 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7f5b0490-0583-4eea-a2b2-b13dc71c83c1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^e84d6f8d-3e6f-444e-b77b-01824a84b929\") pod \"glance-1d7ec-default-external-api-0\" (UID: \"8f970815-5d27-4567-bd44-9d6f9cf10774\") " pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:32.593287 master-0 kubenswrapper[38936]: I0216 21:38:32.593219 38936 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-dc1039ee-37f6-4d30-bd0b-a1a70b8748c3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^229cca5d-3ad9-49c9-bdae-2f4292c3d0f6\") on node \"master-0\" " Feb 16 21:38:32.638098 master-0 kubenswrapper[38936]: I0216 21:38:32.638042 38936 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 21:38:32.638333 master-0 kubenswrapper[38936]: I0216 21:38:32.638227 38936 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-dc1039ee-37f6-4d30-bd0b-a1a70b8748c3" (UniqueName: "kubernetes.io/csi/topolvm.io^229cca5d-3ad9-49c9-bdae-2f4292c3d0f6") on node "master-0" Feb 16 21:38:32.703542 master-0 kubenswrapper[38936]: I0216 21:38:32.701981 38936 reconciler_common.go:293] "Volume detached for volume \"pvc-dc1039ee-37f6-4d30-bd0b-a1a70b8748c3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^229cca5d-3ad9-49c9-bdae-2f4292c3d0f6\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:32.823084 master-0 kubenswrapper[38936]: I0216 21:38:32.822951 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1d7ec-default-external-api-0" Feb 16 21:38:33.028197 master-0 kubenswrapper[38936]: I0216 21:38:33.028123 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-api-0" event={"ID":"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c","Type":"ContainerStarted","Data":"714f37235ce39df9c2753370f94153c354c3f2a0c83e2dec852ed432d745b899"} Feb 16 21:38:33.031153 master-0 kubenswrapper[38936]: I0216 21:38:33.031119 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d7ec-default-internal-api-0" event={"ID":"16c40a4d-e01e-40ac-bd7e-c7056d2392f4","Type":"ContainerDied","Data":"bfcd4dea2b2198d69ca0e3731733e72688b4186acb304307c7b054053c5a6843"} Feb 16 21:38:33.031219 master-0 kubenswrapper[38936]: I0216 21:38:33.031169 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:33.031333 master-0 kubenswrapper[38936]: I0216 21:38:33.031178 38936 scope.go:117] "RemoveContainer" containerID="06bf1f0bf2a217af9bd9e3932057f9c868db7c343ac1afea3392c2c3484b6523" Feb 16 21:38:33.034149 master-0 kubenswrapper[38936]: I0216 21:38:33.034111 38936 generic.go:334] "Generic (PLEG): container finished" podID="09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f" containerID="8eacb69ce9cfc0c612bb68d0b19df03049b3b02dc559ababa03f91b997ecfcdb" exitCode=0 Feb 16 21:38:33.034230 master-0 kubenswrapper[38936]: I0216 21:38:33.034193 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z4z2j" event={"ID":"09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f","Type":"ContainerDied","Data":"8eacb69ce9cfc0c612bb68d0b19df03049b3b02dc559ababa03f91b997ecfcdb"} Feb 16 21:38:33.037402 master-0 kubenswrapper[38936]: I0216 21:38:33.037307 38936 generic.go:334] "Generic (PLEG): container finished" podID="6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a" containerID="d2822b18fa4a8af4c98959626419283766110698c9eaa4a873c43153b1bdfe43" exitCode=0 Feb 16 21:38:33.037402 master-0 kubenswrapper[38936]: I0216 21:38:33.037382 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e2a2-account-create-update-t5ggp" event={"ID":"6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a","Type":"ContainerDied","Data":"d2822b18fa4a8af4c98959626419283766110698c9eaa4a873c43153b1bdfe43"} Feb 16 21:38:33.046919 master-0 kubenswrapper[38936]: I0216 21:38:33.046788 38936 generic.go:334] "Generic (PLEG): container finished" podID="6d470e92-7826-4314-9ecb-7b37cd11b8e2" containerID="47ea8ef4cdc91a083bbba85843b6f6710d5786053128103ed8cf484c75a6e412" exitCode=0 Feb 16 21:38:33.046919 master-0 kubenswrapper[38936]: I0216 21:38:33.046866 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64949f9d84-p7hqz" 
event={"ID":"6d470e92-7826-4314-9ecb-7b37cd11b8e2","Type":"ContainerDied","Data":"47ea8ef4cdc91a083bbba85843b6f6710d5786053128103ed8cf484c75a6e412"} Feb 16 21:38:33.049844 master-0 kubenswrapper[38936]: I0216 21:38:33.049774 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b871-account-create-update-96b65" event={"ID":"70c58d2c-4204-4d3b-9d2a-fdbf35ad8029","Type":"ContainerDied","Data":"878599ba466651c7a04306169732c48cd9785ae3fed4557b72a98947e9a87676"} Feb 16 21:38:33.050001 master-0 kubenswrapper[38936]: I0216 21:38:33.049927 38936 generic.go:334] "Generic (PLEG): container finished" podID="70c58d2c-4204-4d3b-9d2a-fdbf35ad8029" containerID="878599ba466651c7a04306169732c48cd9785ae3fed4557b72a98947e9a87676" exitCode=0 Feb 16 21:38:33.446131 master-0 kubenswrapper[38936]: I0216 21:38:33.438970 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1d7ec-default-internal-api-0"] Feb 16 21:38:33.483775 master-0 kubenswrapper[38936]: I0216 21:38:33.483684 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-1d7ec-default-internal-api-0"] Feb 16 21:38:33.890147 master-0 kubenswrapper[38936]: I0216 21:38:33.889991 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16c40a4d-e01e-40ac-bd7e-c7056d2392f4" path="/var/lib/kubelet/pods/16c40a4d-e01e-40ac-bd7e-c7056d2392f4/volumes" Feb 16 21:38:34.094524 master-0 kubenswrapper[38936]: I0216 21:38:34.094461 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jb9gg" event={"ID":"a7f3ca2c-2ba6-4148-a4e8-843943926a5c","Type":"ContainerDied","Data":"e9da84d3ff3dc9cee2d04d554cd5090710383361497dac4d1665fee4c92e8be7"} Feb 16 21:38:34.094524 master-0 kubenswrapper[38936]: I0216 21:38:34.094522 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9da84d3ff3dc9cee2d04d554cd5090710383361497dac4d1665fee4c92e8be7" Feb 16 21:38:34.094810 master-0 
kubenswrapper[38936]: I0216 21:38:34.094544 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1d7ec-default-internal-api-0"] Feb 16 21:38:34.095498 master-0 kubenswrapper[38936]: E0216 21:38:34.095466 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16c40a4d-e01e-40ac-bd7e-c7056d2392f4" containerName="glance-log" Feb 16 21:38:34.095572 master-0 kubenswrapper[38936]: I0216 21:38:34.095496 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c40a4d-e01e-40ac-bd7e-c7056d2392f4" containerName="glance-log" Feb 16 21:38:34.095609 master-0 kubenswrapper[38936]: E0216 21:38:34.095594 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16c40a4d-e01e-40ac-bd7e-c7056d2392f4" containerName="glance-httpd" Feb 16 21:38:34.095609 master-0 kubenswrapper[38936]: I0216 21:38:34.095606 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c40a4d-e01e-40ac-bd7e-c7056d2392f4" containerName="glance-httpd" Feb 16 21:38:34.096072 master-0 kubenswrapper[38936]: I0216 21:38:34.096049 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="16c40a4d-e01e-40ac-bd7e-c7056d2392f4" containerName="glance-log" Feb 16 21:38:34.096139 master-0 kubenswrapper[38936]: I0216 21:38:34.096099 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="16c40a4d-e01e-40ac-bd7e-c7056d2392f4" containerName="glance-httpd" Feb 16 21:38:34.097924 master-0 kubenswrapper[38936]: I0216 21:38:34.097882 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:34.101370 master-0 kubenswrapper[38936]: I0216 21:38:34.101335 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 21:38:34.102016 master-0 kubenswrapper[38936]: I0216 21:38:34.101730 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-1d7ec-default-internal-config-data" Feb 16 21:38:34.102016 master-0 kubenswrapper[38936]: I0216 21:38:34.101907 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64949f9d84-p7hqz" event={"ID":"6d470e92-7826-4314-9ecb-7b37cd11b8e2","Type":"ContainerDied","Data":"675dda40a55f9fc235aacae4000beaf4e40c754cde0d54240c4ce0756b13ca6c"} Feb 16 21:38:34.102016 master-0 kubenswrapper[38936]: I0216 21:38:34.101947 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="675dda40a55f9fc235aacae4000beaf4e40c754cde0d54240c4ce0756b13ca6c" Feb 16 21:38:34.114874 master-0 kubenswrapper[38936]: I0216 21:38:34.114787 38936 scope.go:117] "RemoveContainer" containerID="39234f0cc3943afbdb3f4dfd525bd2d427fe5085c005694387daf45e1d641373" Feb 16 21:38:34.266091 master-0 kubenswrapper[38936]: I0216 21:38:34.265761 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1d7ec-default-internal-api-0"] Feb 16 21:38:34.272124 master-0 kubenswrapper[38936]: I0216 21:38:34.272019 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jb9gg" Feb 16 21:38:34.297111 master-0 kubenswrapper[38936]: I0216 21:38:34.297025 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64949f9d84-p7hqz" Feb 16 21:38:34.328152 master-0 kubenswrapper[38936]: I0216 21:38:34.320393 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-e2a2-account-create-update-t5ggp" Feb 16 21:38:34.336103 master-0 kubenswrapper[38936]: I0216 21:38:34.333623 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fntqx" Feb 16 21:38:34.378278 master-0 kubenswrapper[38936]: I0216 21:38:34.370565 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ded7-account-create-update-dv4vx" Feb 16 21:38:34.378278 master-0 kubenswrapper[38936]: I0216 21:38:34.371242 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq2bg\" (UniqueName: \"kubernetes.io/projected/a7f3ca2c-2ba6-4148-a4e8-843943926a5c-kube-api-access-nq2bg\") pod \"a7f3ca2c-2ba6-4148-a4e8-843943926a5c\" (UID: \"a7f3ca2c-2ba6-4148-a4e8-843943926a5c\") " Feb 16 21:38:34.378278 master-0 kubenswrapper[38936]: I0216 21:38:34.371370 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7f3ca2c-2ba6-4148-a4e8-843943926a5c-operator-scripts\") pod \"a7f3ca2c-2ba6-4148-a4e8-843943926a5c\" (UID: \"a7f3ca2c-2ba6-4148-a4e8-843943926a5c\") " Feb 16 21:38:34.378278 master-0 kubenswrapper[38936]: I0216 21:38:34.372053 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dc1039ee-37f6-4d30-bd0b-a1a70b8748c3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^229cca5d-3ad9-49c9-bdae-2f4292c3d0f6\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"14ba8a70-283c-4aee-94ee-ea2a1f2c1781\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:34.378278 master-0 kubenswrapper[38936]: I0216 21:38:34.372208 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpb5x\" (UniqueName: 
\"kubernetes.io/projected/14ba8a70-283c-4aee-94ee-ea2a1f2c1781-kube-api-access-lpb5x\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"14ba8a70-283c-4aee-94ee-ea2a1f2c1781\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:34.378278 master-0 kubenswrapper[38936]: I0216 21:38:34.372254 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14ba8a70-283c-4aee-94ee-ea2a1f2c1781-httpd-run\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"14ba8a70-283c-4aee-94ee-ea2a1f2c1781\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:34.378278 master-0 kubenswrapper[38936]: I0216 21:38:34.372293 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14ba8a70-283c-4aee-94ee-ea2a1f2c1781-scripts\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"14ba8a70-283c-4aee-94ee-ea2a1f2c1781\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:34.378278 master-0 kubenswrapper[38936]: I0216 21:38:34.372319 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ba8a70-283c-4aee-94ee-ea2a1f2c1781-internal-tls-certs\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"14ba8a70-283c-4aee-94ee-ea2a1f2c1781\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:34.378278 master-0 kubenswrapper[38936]: I0216 21:38:34.372348 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ba8a70-283c-4aee-94ee-ea2a1f2c1781-config-data\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"14ba8a70-283c-4aee-94ee-ea2a1f2c1781\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:34.378278 master-0 
kubenswrapper[38936]: I0216 21:38:34.372438 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14ba8a70-283c-4aee-94ee-ea2a1f2c1781-logs\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"14ba8a70-283c-4aee-94ee-ea2a1f2c1781\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:34.378278 master-0 kubenswrapper[38936]: I0216 21:38:34.372468 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ba8a70-283c-4aee-94ee-ea2a1f2c1781-combined-ca-bundle\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"14ba8a70-283c-4aee-94ee-ea2a1f2c1781\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:34.378278 master-0 kubenswrapper[38936]: I0216 21:38:34.375094 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7f3ca2c-2ba6-4148-a4e8-843943926a5c-kube-api-access-nq2bg" (OuterVolumeSpecName: "kube-api-access-nq2bg") pod "a7f3ca2c-2ba6-4148-a4e8-843943926a5c" (UID: "a7f3ca2c-2ba6-4148-a4e8-843943926a5c"). InnerVolumeSpecName "kube-api-access-nq2bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:34.384236 master-0 kubenswrapper[38936]: I0216 21:38:34.384166 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7f3ca2c-2ba6-4148-a4e8-843943926a5c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a7f3ca2c-2ba6-4148-a4e8-843943926a5c" (UID: "a7f3ca2c-2ba6-4148-a4e8-843943926a5c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:34.475015 master-0 kubenswrapper[38936]: I0216 21:38:34.474904 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpcc6\" (UniqueName: \"kubernetes.io/projected/0db6a508-ec90-49da-867e-ada0192b7b35-kube-api-access-fpcc6\") pod \"0db6a508-ec90-49da-867e-ada0192b7b35\" (UID: \"0db6a508-ec90-49da-867e-ada0192b7b35\") " Feb 16 21:38:34.478597 master-0 kubenswrapper[38936]: I0216 21:38:34.475086 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d470e92-7826-4314-9ecb-7b37cd11b8e2-config\") pod \"6d470e92-7826-4314-9ecb-7b37cd11b8e2\" (UID: \"6d470e92-7826-4314-9ecb-7b37cd11b8e2\") " Feb 16 21:38:34.478597 master-0 kubenswrapper[38936]: I0216 21:38:34.475145 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64fgj\" (UniqueName: \"kubernetes.io/projected/6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a-kube-api-access-64fgj\") pod \"6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a\" (UID: \"6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a\") " Feb 16 21:38:34.478597 master-0 kubenswrapper[38936]: I0216 21:38:34.475253 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d470e92-7826-4314-9ecb-7b37cd11b8e2-combined-ca-bundle\") pod \"6d470e92-7826-4314-9ecb-7b37cd11b8e2\" (UID: \"6d470e92-7826-4314-9ecb-7b37cd11b8e2\") " Feb 16 21:38:34.478597 master-0 kubenswrapper[38936]: I0216 21:38:34.475334 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mswds\" (UniqueName: \"kubernetes.io/projected/aab575a9-488c-44b1-a7e0-3025fa81207e-kube-api-access-mswds\") pod \"aab575a9-488c-44b1-a7e0-3025fa81207e\" (UID: \"aab575a9-488c-44b1-a7e0-3025fa81207e\") " Feb 16 21:38:34.478597 master-0 kubenswrapper[38936]: I0216 21:38:34.475405 
38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d470e92-7826-4314-9ecb-7b37cd11b8e2-ovndb-tls-certs\") pod \"6d470e92-7826-4314-9ecb-7b37cd11b8e2\" (UID: \"6d470e92-7826-4314-9ecb-7b37cd11b8e2\") " Feb 16 21:38:34.478597 master-0 kubenswrapper[38936]: I0216 21:38:34.475533 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a-operator-scripts\") pod \"6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a\" (UID: \"6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a\") " Feb 16 21:38:34.478597 master-0 kubenswrapper[38936]: I0216 21:38:34.475576 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0db6a508-ec90-49da-867e-ada0192b7b35-operator-scripts\") pod \"0db6a508-ec90-49da-867e-ada0192b7b35\" (UID: \"0db6a508-ec90-49da-867e-ada0192b7b35\") " Feb 16 21:38:34.478597 master-0 kubenswrapper[38936]: I0216 21:38:34.475635 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aab575a9-488c-44b1-a7e0-3025fa81207e-operator-scripts\") pod \"aab575a9-488c-44b1-a7e0-3025fa81207e\" (UID: \"aab575a9-488c-44b1-a7e0-3025fa81207e\") " Feb 16 21:38:34.478597 master-0 kubenswrapper[38936]: I0216 21:38:34.475696 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjxps\" (UniqueName: \"kubernetes.io/projected/6d470e92-7826-4314-9ecb-7b37cd11b8e2-kube-api-access-sjxps\") pod \"6d470e92-7826-4314-9ecb-7b37cd11b8e2\" (UID: \"6d470e92-7826-4314-9ecb-7b37cd11b8e2\") " Feb 16 21:38:34.478597 master-0 kubenswrapper[38936]: I0216 21:38:34.475870 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/6d470e92-7826-4314-9ecb-7b37cd11b8e2-httpd-config\") pod \"6d470e92-7826-4314-9ecb-7b37cd11b8e2\" (UID: \"6d470e92-7826-4314-9ecb-7b37cd11b8e2\") " Feb 16 21:38:34.478597 master-0 kubenswrapper[38936]: I0216 21:38:34.476684 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpb5x\" (UniqueName: \"kubernetes.io/projected/14ba8a70-283c-4aee-94ee-ea2a1f2c1781-kube-api-access-lpb5x\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"14ba8a70-283c-4aee-94ee-ea2a1f2c1781\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:34.478597 master-0 kubenswrapper[38936]: I0216 21:38:34.476729 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14ba8a70-283c-4aee-94ee-ea2a1f2c1781-httpd-run\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"14ba8a70-283c-4aee-94ee-ea2a1f2c1781\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:34.478597 master-0 kubenswrapper[38936]: I0216 21:38:34.476759 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ba8a70-283c-4aee-94ee-ea2a1f2c1781-internal-tls-certs\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"14ba8a70-283c-4aee-94ee-ea2a1f2c1781\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:34.478597 master-0 kubenswrapper[38936]: I0216 21:38:34.476781 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14ba8a70-283c-4aee-94ee-ea2a1f2c1781-scripts\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"14ba8a70-283c-4aee-94ee-ea2a1f2c1781\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:34.478597 master-0 kubenswrapper[38936]: I0216 21:38:34.476806 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/14ba8a70-283c-4aee-94ee-ea2a1f2c1781-config-data\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"14ba8a70-283c-4aee-94ee-ea2a1f2c1781\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:34.478597 master-0 kubenswrapper[38936]: I0216 21:38:34.476904 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14ba8a70-283c-4aee-94ee-ea2a1f2c1781-logs\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"14ba8a70-283c-4aee-94ee-ea2a1f2c1781\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:34.478597 master-0 kubenswrapper[38936]: I0216 21:38:34.478571 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ba8a70-283c-4aee-94ee-ea2a1f2c1781-combined-ca-bundle\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"14ba8a70-283c-4aee-94ee-ea2a1f2c1781\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:34.479163 master-0 kubenswrapper[38936]: I0216 21:38:34.478827 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dc1039ee-37f6-4d30-bd0b-a1a70b8748c3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^229cca5d-3ad9-49c9-bdae-2f4292c3d0f6\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"14ba8a70-283c-4aee-94ee-ea2a1f2c1781\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:34.483013 master-0 kubenswrapper[38936]: I0216 21:38:34.480831 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq2bg\" (UniqueName: \"kubernetes.io/projected/a7f3ca2c-2ba6-4148-a4e8-843943926a5c-kube-api-access-nq2bg\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:34.483013 master-0 kubenswrapper[38936]: I0216 21:38:34.481374 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0db6a508-ec90-49da-867e-ada0192b7b35-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0db6a508-ec90-49da-867e-ada0192b7b35" (UID: "0db6a508-ec90-49da-867e-ada0192b7b35"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:34.483013 master-0 kubenswrapper[38936]: I0216 21:38:34.481754 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a" (UID: "6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:34.483013 master-0 kubenswrapper[38936]: I0216 21:38:34.481860 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aab575a9-488c-44b1-a7e0-3025fa81207e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aab575a9-488c-44b1-a7e0-3025fa81207e" (UID: "aab575a9-488c-44b1-a7e0-3025fa81207e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:34.485177 master-0 kubenswrapper[38936]: I0216 21:38:34.484635 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0db6a508-ec90-49da-867e-ada0192b7b35-kube-api-access-fpcc6" (OuterVolumeSpecName: "kube-api-access-fpcc6") pod "0db6a508-ec90-49da-867e-ada0192b7b35" (UID: "0db6a508-ec90-49da-867e-ada0192b7b35"). InnerVolumeSpecName "kube-api-access-fpcc6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:34.491697 master-0 kubenswrapper[38936]: I0216 21:38:34.490992 38936 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7f3ca2c-2ba6-4148-a4e8-843943926a5c-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:34.491697 master-0 kubenswrapper[38936]: I0216 21:38:34.491206 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14ba8a70-283c-4aee-94ee-ea2a1f2c1781-logs\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"14ba8a70-283c-4aee-94ee-ea2a1f2c1781\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:34.492584 master-0 kubenswrapper[38936]: I0216 21:38:34.492519 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ba8a70-283c-4aee-94ee-ea2a1f2c1781-internal-tls-certs\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"14ba8a70-283c-4aee-94ee-ea2a1f2c1781\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:34.493141 master-0 kubenswrapper[38936]: I0216 21:38:34.493089 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/14ba8a70-283c-4aee-94ee-ea2a1f2c1781-httpd-run\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"14ba8a70-283c-4aee-94ee-ea2a1f2c1781\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:34.499140 master-0 kubenswrapper[38936]: I0216 21:38:34.497975 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14ba8a70-283c-4aee-94ee-ea2a1f2c1781-scripts\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"14ba8a70-283c-4aee-94ee-ea2a1f2c1781\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:34.500534 master-0 kubenswrapper[38936]: 
I0216 21:38:34.500498 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ba8a70-283c-4aee-94ee-ea2a1f2c1781-config-data\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"14ba8a70-283c-4aee-94ee-ea2a1f2c1781\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:34.503878 master-0 kubenswrapper[38936]: I0216 21:38:34.503536 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ba8a70-283c-4aee-94ee-ea2a1f2c1781-combined-ca-bundle\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"14ba8a70-283c-4aee-94ee-ea2a1f2c1781\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:34.504681 master-0 kubenswrapper[38936]: I0216 21:38:34.504529 38936 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 21:38:34.504681 master-0 kubenswrapper[38936]: I0216 21:38:34.504556 38936 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dc1039ee-37f6-4d30-bd0b-a1a70b8748c3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^229cca5d-3ad9-49c9-bdae-2f4292c3d0f6\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"14ba8a70-283c-4aee-94ee-ea2a1f2c1781\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/1b887ae194b0900377497cd58f52b4420ec6f7ec05c5eb1852be55020074fcad/globalmount\"" pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:34.508604 master-0 kubenswrapper[38936]: I0216 21:38:34.504960 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a-kube-api-access-64fgj" (OuterVolumeSpecName: "kube-api-access-64fgj") pod "6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a" (UID: "6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a"). InnerVolumeSpecName "kube-api-access-64fgj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:34.508604 master-0 kubenswrapper[38936]: I0216 21:38:34.505206 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d470e92-7826-4314-9ecb-7b37cd11b8e2-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "6d470e92-7826-4314-9ecb-7b37cd11b8e2" (UID: "6d470e92-7826-4314-9ecb-7b37cd11b8e2"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:34.508604 master-0 kubenswrapper[38936]: I0216 21:38:34.506260 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aab575a9-488c-44b1-a7e0-3025fa81207e-kube-api-access-mswds" (OuterVolumeSpecName: "kube-api-access-mswds") pod "aab575a9-488c-44b1-a7e0-3025fa81207e" (UID: "aab575a9-488c-44b1-a7e0-3025fa81207e"). InnerVolumeSpecName "kube-api-access-mswds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:34.511746 master-0 kubenswrapper[38936]: I0216 21:38:34.511699 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpb5x\" (UniqueName: \"kubernetes.io/projected/14ba8a70-283c-4aee-94ee-ea2a1f2c1781-kube-api-access-lpb5x\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"14ba8a70-283c-4aee-94ee-ea2a1f2c1781\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:34.539031 master-0 kubenswrapper[38936]: I0216 21:38:34.538965 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d470e92-7826-4314-9ecb-7b37cd11b8e2-kube-api-access-sjxps" (OuterVolumeSpecName: "kube-api-access-sjxps") pod "6d470e92-7826-4314-9ecb-7b37cd11b8e2" (UID: "6d470e92-7826-4314-9ecb-7b37cd11b8e2"). InnerVolumeSpecName "kube-api-access-sjxps". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:34.599089 master-0 kubenswrapper[38936]: I0216 21:38:34.599035 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mswds\" (UniqueName: \"kubernetes.io/projected/aab575a9-488c-44b1-a7e0-3025fa81207e-kube-api-access-mswds\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:34.599089 master-0 kubenswrapper[38936]: I0216 21:38:34.599089 38936 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:34.599330 master-0 kubenswrapper[38936]: I0216 21:38:34.599100 38936 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0db6a508-ec90-49da-867e-ada0192b7b35-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:34.599330 master-0 kubenswrapper[38936]: I0216 21:38:34.599251 38936 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aab575a9-488c-44b1-a7e0-3025fa81207e-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:34.599330 master-0 kubenswrapper[38936]: I0216 21:38:34.599268 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjxps\" (UniqueName: \"kubernetes.io/projected/6d470e92-7826-4314-9ecb-7b37cd11b8e2-kube-api-access-sjxps\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:34.599330 master-0 kubenswrapper[38936]: I0216 21:38:34.599278 38936 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6d470e92-7826-4314-9ecb-7b37cd11b8e2-httpd-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:34.599330 master-0 kubenswrapper[38936]: I0216 21:38:34.599289 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpcc6\" (UniqueName: 
\"kubernetes.io/projected/0db6a508-ec90-49da-867e-ada0192b7b35-kube-api-access-fpcc6\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:34.599330 master-0 kubenswrapper[38936]: I0216 21:38:34.599299 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64fgj\" (UniqueName: \"kubernetes.io/projected/6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a-kube-api-access-64fgj\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:34.620167 master-0 kubenswrapper[38936]: I0216 21:38:34.619935 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d470e92-7826-4314-9ecb-7b37cd11b8e2-config" (OuterVolumeSpecName: "config") pod "6d470e92-7826-4314-9ecb-7b37cd11b8e2" (UID: "6d470e92-7826-4314-9ecb-7b37cd11b8e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:34.650185 master-0 kubenswrapper[38936]: I0216 21:38:34.650073 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d470e92-7826-4314-9ecb-7b37cd11b8e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d470e92-7826-4314-9ecb-7b37cd11b8e2" (UID: "6d470e92-7826-4314-9ecb-7b37cd11b8e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:34.686681 master-0 kubenswrapper[38936]: I0216 21:38:34.686167 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d470e92-7826-4314-9ecb-7b37cd11b8e2-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "6d470e92-7826-4314-9ecb-7b37cd11b8e2" (UID: "6d470e92-7826-4314-9ecb-7b37cd11b8e2"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:34.706368 master-0 kubenswrapper[38936]: I0216 21:38:34.703331 38936 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6d470e92-7826-4314-9ecb-7b37cd11b8e2-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:34.706368 master-0 kubenswrapper[38936]: I0216 21:38:34.703377 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d470e92-7826-4314-9ecb-7b37cd11b8e2-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:34.706368 master-0 kubenswrapper[38936]: I0216 21:38:34.703410 38936 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d470e92-7826-4314-9ecb-7b37cd11b8e2-ovndb-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:34.911642 master-0 kubenswrapper[38936]: I0216 21:38:34.911569 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1d7ec-default-external-api-0"] Feb 16 21:38:35.135707 master-0 kubenswrapper[38936]: I0216 21:38:35.135613 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-87hwd" event={"ID":"650c4ac6-fc3c-4a97-871d-65c399538b17","Type":"ContainerStarted","Data":"8dada93288a6f6e4e101eea3a25ec2ea4fbe001e2f7d8af4b80dd2c92da4815e"} Feb 16 21:38:35.142076 master-0 kubenswrapper[38936]: I0216 21:38:35.142024 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e2a2-account-create-update-t5ggp" event={"ID":"6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a","Type":"ContainerDied","Data":"be1536b5918f43f8783c5cb159d4ab51297c0d0ce3c31f2783b8d464d4b8f360"} Feb 16 21:38:35.142076 master-0 kubenswrapper[38936]: I0216 21:38:35.142071 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be1536b5918f43f8783c5cb159d4ab51297c0d0ce3c31f2783b8d464d4b8f360" Feb 16 
21:38:35.142248 master-0 kubenswrapper[38936]: I0216 21:38:35.142119 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e2a2-account-create-update-t5ggp" Feb 16 21:38:35.148245 master-0 kubenswrapper[38936]: I0216 21:38:35.146162 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ded7-account-create-update-dv4vx" Feb 16 21:38:35.148245 master-0 kubenswrapper[38936]: I0216 21:38:35.146715 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ded7-account-create-update-dv4vx" event={"ID":"0db6a508-ec90-49da-867e-ada0192b7b35","Type":"ContainerDied","Data":"23ddbbd4c566c822928566c3c1a46febc06872e79a2d5ce71c1032b0a8753fa2"} Feb 16 21:38:35.148245 master-0 kubenswrapper[38936]: I0216 21:38:35.146814 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23ddbbd4c566c822928566c3c1a46febc06872e79a2d5ce71c1032b0a8753fa2" Feb 16 21:38:35.175972 master-0 kubenswrapper[38936]: I0216 21:38:35.175896 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9c692-api-0" event={"ID":"4d4edaa3-5df3-4bb0-8d48-3493a6de7e6c","Type":"ContainerStarted","Data":"719ce46628291b2d2b83e137a8c4f6a6c8c725e0d4fd92576cddd715523360aa"} Feb 16 21:38:35.177682 master-0 kubenswrapper[38936]: I0216 21:38:35.176280 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-9c692-api-0" Feb 16 21:38:35.189528 master-0 kubenswrapper[38936]: I0216 21:38:35.186483 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-sync-87hwd" podStartSLOduration=13.846769379 podStartE2EDuration="17.186452332s" podCreationTimestamp="2026-02-16 21:38:18 +0000 UTC" firstStartedPulling="2026-02-16 21:38:31.047112781 +0000 UTC m=+941.399116143" lastFinishedPulling="2026-02-16 21:38:34.386795734 +0000 UTC m=+944.738799096" 
observedRunningTime="2026-02-16 21:38:35.169599832 +0000 UTC m=+945.521603194" watchObservedRunningTime="2026-02-16 21:38:35.186452332 +0000 UTC m=+945.538455694" Feb 16 21:38:35.191329 master-0 kubenswrapper[38936]: I0216 21:38:35.191266 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jb9gg" Feb 16 21:38:35.191507 master-0 kubenswrapper[38936]: I0216 21:38:35.191438 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fntqx" event={"ID":"aab575a9-488c-44b1-a7e0-3025fa81207e","Type":"ContainerDied","Data":"73e0dee932269ba122c81708626642519e0aec420a4ec435e5b178795aaa3690"} Feb 16 21:38:35.191562 master-0 kubenswrapper[38936]: I0216 21:38:35.191505 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73e0dee932269ba122c81708626642519e0aec420a4ec435e5b178795aaa3690" Feb 16 21:38:35.192995 master-0 kubenswrapper[38936]: I0216 21:38:35.192909 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fntqx" Feb 16 21:38:35.193115 master-0 kubenswrapper[38936]: I0216 21:38:35.193092 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64949f9d84-p7hqz" Feb 16 21:38:35.228363 master-0 kubenswrapper[38936]: I0216 21:38:35.227423 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-9c692-api-0" podStartSLOduration=5.227382516 podStartE2EDuration="5.227382516s" podCreationTimestamp="2026-02-16 21:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:38:35.212373725 +0000 UTC m=+945.564377097" watchObservedRunningTime="2026-02-16 21:38:35.227382516 +0000 UTC m=+945.579385878" Feb 16 21:38:35.293076 master-0 kubenswrapper[38936]: I0216 21:38:35.292916 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64949f9d84-p7hqz"] Feb 16 21:38:35.313133 master-0 kubenswrapper[38936]: I0216 21:38:35.312881 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-64949f9d84-p7hqz"] Feb 16 21:38:35.408016 master-0 kubenswrapper[38936]: I0216 21:38:35.407952 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dc1039ee-37f6-4d30-bd0b-a1a70b8748c3\" (UniqueName: \"kubernetes.io/csi/topolvm.io^229cca5d-3ad9-49c9-bdae-2f4292c3d0f6\") pod \"glance-1d7ec-default-internal-api-0\" (UID: \"14ba8a70-283c-4aee-94ee-ea2a1f2c1781\") " pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:35.514470 master-0 kubenswrapper[38936]: I0216 21:38:35.514403 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1d7ec-default-internal-api-0" Feb 16 21:38:35.875370 master-0 kubenswrapper[38936]: I0216 21:38:35.875319 38936 scope.go:117] "RemoveContainer" containerID="fb05b673f156b593e3b1b5a2aae5b398fe33f469a0be1c0338b6e9e0eaa7f21f" Feb 16 21:38:35.895165 master-0 kubenswrapper[38936]: I0216 21:38:35.895089 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d470e92-7826-4314-9ecb-7b37cd11b8e2" path="/var/lib/kubelet/pods/6d470e92-7826-4314-9ecb-7b37cd11b8e2/volumes" Feb 16 21:38:37.218903 master-0 kubenswrapper[38936]: I0216 21:38:37.218764 38936 generic.go:334] "Generic (PLEG): container finished" podID="650c4ac6-fc3c-4a97-871d-65c399538b17" containerID="8dada93288a6f6e4e101eea3a25ec2ea4fbe001e2f7d8af4b80dd2c92da4815e" exitCode=0 Feb 16 21:38:37.219444 master-0 kubenswrapper[38936]: I0216 21:38:37.218954 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-87hwd" event={"ID":"650c4ac6-fc3c-4a97-871d-65c399538b17","Type":"ContainerDied","Data":"8dada93288a6f6e4e101eea3a25ec2ea4fbe001e2f7d8af4b80dd2c92da4815e"} Feb 16 21:38:38.239851 master-0 kubenswrapper[38936]: I0216 21:38:38.239552 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b871-account-create-update-96b65" event={"ID":"70c58d2c-4204-4d3b-9d2a-fdbf35ad8029","Type":"ContainerDied","Data":"3a2fc8e4f70642dca00a5f04194593b8697e1fcaad6b330abe76095d35f17f34"} Feb 16 21:38:38.239851 master-0 kubenswrapper[38936]: I0216 21:38:38.239602 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a2fc8e4f70642dca00a5f04194593b8697e1fcaad6b330abe76095d35f17f34" Feb 16 21:38:38.240812 master-0 kubenswrapper[38936]: I0216 21:38:38.240634 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d7ec-default-external-api-0" 
event={"ID":"8f970815-5d27-4567-bd44-9d6f9cf10774","Type":"ContainerStarted","Data":"a0a783623b812bcde656470fe28fc36cf5164abf7b14f760431c4adbf7b05a7b"} Feb 16 21:38:38.243883 master-0 kubenswrapper[38936]: I0216 21:38:38.243823 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z4z2j" event={"ID":"09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f","Type":"ContainerDied","Data":"1dcbb750fb4d6c61c6558f56f8b6f8fd7f2512ae5bd77d4476574a037f983813"} Feb 16 21:38:38.243883 master-0 kubenswrapper[38936]: I0216 21:38:38.243880 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dcbb750fb4d6c61c6558f56f8b6f8fd7f2512ae5bd77d4476574a037f983813" Feb 16 21:38:38.317110 master-0 kubenswrapper[38936]: I0216 21:38:38.317005 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b871-account-create-update-96b65" Feb 16 21:38:38.327738 master-0 kubenswrapper[38936]: I0216 21:38:38.326889 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-z4z2j" Feb 16 21:38:38.421401 master-0 kubenswrapper[38936]: I0216 21:38:38.421323 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f-operator-scripts\") pod \"09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f\" (UID: \"09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f\") " Feb 16 21:38:38.421611 master-0 kubenswrapper[38936]: I0216 21:38:38.421454 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctmgd\" (UniqueName: \"kubernetes.io/projected/70c58d2c-4204-4d3b-9d2a-fdbf35ad8029-kube-api-access-ctmgd\") pod \"70c58d2c-4204-4d3b-9d2a-fdbf35ad8029\" (UID: \"70c58d2c-4204-4d3b-9d2a-fdbf35ad8029\") " Feb 16 21:38:38.421743 master-0 kubenswrapper[38936]: I0216 21:38:38.421712 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtj56\" (UniqueName: \"kubernetes.io/projected/09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f-kube-api-access-mtj56\") pod \"09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f\" (UID: \"09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f\") " Feb 16 21:38:38.421833 master-0 kubenswrapper[38936]: I0216 21:38:38.421768 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70c58d2c-4204-4d3b-9d2a-fdbf35ad8029-operator-scripts\") pod \"70c58d2c-4204-4d3b-9d2a-fdbf35ad8029\" (UID: \"70c58d2c-4204-4d3b-9d2a-fdbf35ad8029\") " Feb 16 21:38:38.424140 master-0 kubenswrapper[38936]: I0216 21:38:38.424099 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f" (UID: "09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:38.426681 master-0 kubenswrapper[38936]: I0216 21:38:38.426611 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c58d2c-4204-4d3b-9d2a-fdbf35ad8029-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "70c58d2c-4204-4d3b-9d2a-fdbf35ad8029" (UID: "70c58d2c-4204-4d3b-9d2a-fdbf35ad8029"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:38:38.430202 master-0 kubenswrapper[38936]: I0216 21:38:38.430012 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f-kube-api-access-mtj56" (OuterVolumeSpecName: "kube-api-access-mtj56") pod "09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f" (UID: "09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f"). InnerVolumeSpecName "kube-api-access-mtj56". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:38.431120 master-0 kubenswrapper[38936]: I0216 21:38:38.431061 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c58d2c-4204-4d3b-9d2a-fdbf35ad8029-kube-api-access-ctmgd" (OuterVolumeSpecName: "kube-api-access-ctmgd") pod "70c58d2c-4204-4d3b-9d2a-fdbf35ad8029" (UID: "70c58d2c-4204-4d3b-9d2a-fdbf35ad8029"). InnerVolumeSpecName "kube-api-access-ctmgd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:38.532290 master-0 kubenswrapper[38936]: I0216 21:38:38.531430 38936 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:38.532290 master-0 kubenswrapper[38936]: I0216 21:38:38.531482 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctmgd\" (UniqueName: \"kubernetes.io/projected/70c58d2c-4204-4d3b-9d2a-fdbf35ad8029-kube-api-access-ctmgd\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:38.532290 master-0 kubenswrapper[38936]: I0216 21:38:38.531495 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtj56\" (UniqueName: \"kubernetes.io/projected/09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f-kube-api-access-mtj56\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:38.532290 master-0 kubenswrapper[38936]: I0216 21:38:38.531505 38936 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70c58d2c-4204-4d3b-9d2a-fdbf35ad8029-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:38.886871 master-0 kubenswrapper[38936]: I0216 21:38:38.886487 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-sync-87hwd" Feb 16 21:38:38.922761 master-0 kubenswrapper[38936]: I0216 21:38:38.922690 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1d7ec-default-internal-api-0"] Feb 16 21:38:38.971753 master-0 kubenswrapper[38936]: I0216 21:38:38.971695 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650c4ac6-fc3c-4a97-871d-65c399538b17-scripts\") pod \"650c4ac6-fc3c-4a97-871d-65c399538b17\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " Feb 16 21:38:38.971882 master-0 kubenswrapper[38936]: I0216 21:38:38.971812 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jl22\" (UniqueName: \"kubernetes.io/projected/650c4ac6-fc3c-4a97-871d-65c399538b17-kube-api-access-8jl22\") pod \"650c4ac6-fc3c-4a97-871d-65c399538b17\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " Feb 16 21:38:38.971882 master-0 kubenswrapper[38936]: I0216 21:38:38.971868 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/650c4ac6-fc3c-4a97-871d-65c399538b17-config\") pod \"650c4ac6-fc3c-4a97-871d-65c399538b17\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " Feb 16 21:38:38.972036 master-0 kubenswrapper[38936]: I0216 21:38:38.971956 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650c4ac6-fc3c-4a97-871d-65c399538b17-combined-ca-bundle\") pod \"650c4ac6-fc3c-4a97-871d-65c399538b17\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " Feb 16 21:38:38.972036 master-0 kubenswrapper[38936]: I0216 21:38:38.971993 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/650c4ac6-fc3c-4a97-871d-65c399538b17-etc-podinfo\") pod 
\"650c4ac6-fc3c-4a97-871d-65c399538b17\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " Feb 16 21:38:38.972141 master-0 kubenswrapper[38936]: I0216 21:38:38.972107 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/650c4ac6-fc3c-4a97-871d-65c399538b17-var-lib-ironic\") pod \"650c4ac6-fc3c-4a97-871d-65c399538b17\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " Feb 16 21:38:38.972287 master-0 kubenswrapper[38936]: I0216 21:38:38.972249 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/650c4ac6-fc3c-4a97-871d-65c399538b17-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"650c4ac6-fc3c-4a97-871d-65c399538b17\" (UID: \"650c4ac6-fc3c-4a97-871d-65c399538b17\") " Feb 16 21:38:38.975376 master-0 kubenswrapper[38936]: I0216 21:38:38.975331 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/650c4ac6-fc3c-4a97-871d-65c399538b17-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "650c4ac6-fc3c-4a97-871d-65c399538b17" (UID: "650c4ac6-fc3c-4a97-871d-65c399538b17"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:38:38.975552 master-0 kubenswrapper[38936]: I0216 21:38:38.975493 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/650c4ac6-fc3c-4a97-871d-65c399538b17-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "650c4ac6-fc3c-4a97-871d-65c399538b17" (UID: "650c4ac6-fc3c-4a97-871d-65c399538b17"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:38:38.977353 master-0 kubenswrapper[38936]: I0216 21:38:38.977320 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650c4ac6-fc3c-4a97-871d-65c399538b17-scripts" (OuterVolumeSpecName: "scripts") pod "650c4ac6-fc3c-4a97-871d-65c399538b17" (UID: "650c4ac6-fc3c-4a97-871d-65c399538b17"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:38.979259 master-0 kubenswrapper[38936]: I0216 21:38:38.979203 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/650c4ac6-fc3c-4a97-871d-65c399538b17-kube-api-access-8jl22" (OuterVolumeSpecName: "kube-api-access-8jl22") pod "650c4ac6-fc3c-4a97-871d-65c399538b17" (UID: "650c4ac6-fc3c-4a97-871d-65c399538b17"). InnerVolumeSpecName "kube-api-access-8jl22". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:38.979901 master-0 kubenswrapper[38936]: I0216 21:38:38.979817 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/650c4ac6-fc3c-4a97-871d-65c399538b17-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "650c4ac6-fc3c-4a97-871d-65c399538b17" (UID: "650c4ac6-fc3c-4a97-871d-65c399538b17"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 21:38:39.019878 master-0 kubenswrapper[38936]: I0216 21:38:39.019785 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650c4ac6-fc3c-4a97-871d-65c399538b17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "650c4ac6-fc3c-4a97-871d-65c399538b17" (UID: "650c4ac6-fc3c-4a97-871d-65c399538b17"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:39.037975 master-0 kubenswrapper[38936]: I0216 21:38:39.037872 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650c4ac6-fc3c-4a97-871d-65c399538b17-config" (OuterVolumeSpecName: "config") pod "650c4ac6-fc3c-4a97-871d-65c399538b17" (UID: "650c4ac6-fc3c-4a97-871d-65c399538b17"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:39.075617 master-0 kubenswrapper[38936]: I0216 21:38:39.075542 38936 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650c4ac6-fc3c-4a97-871d-65c399538b17-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:39.075617 master-0 kubenswrapper[38936]: I0216 21:38:39.075605 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jl22\" (UniqueName: \"kubernetes.io/projected/650c4ac6-fc3c-4a97-871d-65c399538b17-kube-api-access-8jl22\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:39.075617 master-0 kubenswrapper[38936]: I0216 21:38:39.075616 38936 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/650c4ac6-fc3c-4a97-871d-65c399538b17-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:39.075617 master-0 kubenswrapper[38936]: I0216 21:38:39.075626 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650c4ac6-fc3c-4a97-871d-65c399538b17-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:39.075956 master-0 kubenswrapper[38936]: I0216 21:38:39.075635 38936 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/650c4ac6-fc3c-4a97-871d-65c399538b17-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:39.075956 master-0 kubenswrapper[38936]: I0216 21:38:39.075648 38936 
reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/650c4ac6-fc3c-4a97-871d-65c399538b17-var-lib-ironic\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:39.075956 master-0 kubenswrapper[38936]: I0216 21:38:39.075682 38936 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/650c4ac6-fc3c-4a97-871d-65c399538b17-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:39.264470 master-0 kubenswrapper[38936]: I0216 21:38:39.264264 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d7ec-default-internal-api-0" event={"ID":"14ba8a70-283c-4aee-94ee-ea2a1f2c1781","Type":"ContainerStarted","Data":"5de01ba1f79c2736aa34e5b174d70c8e8a147677b9ee38ea6d22e7662bd36166"} Feb 16 21:38:39.274292 master-0 kubenswrapper[38936]: I0216 21:38:39.274224 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"37c815ef-1c3d-4b2a-b748-de04b8c4412c","Type":"ContainerStarted","Data":"f2e0f9f3dee6f98e2d24dc02586ea4933301194f50b8d5f1d6ff8a48f8c95d4c"} Feb 16 21:38:39.282776 master-0 kubenswrapper[38936]: I0216 21:38:39.281563 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d7ec-default-external-api-0" event={"ID":"8f970815-5d27-4567-bd44-9d6f9cf10774","Type":"ContainerStarted","Data":"3abc5191eb27be318a6d009b0d94bfec3d2d2d0364d61cd406917e3090dceb5c"} Feb 16 21:38:39.287448 master-0 kubenswrapper[38936]: I0216 21:38:39.286940 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-sync-87hwd" Feb 16 21:38:39.287448 master-0 kubenswrapper[38936]: I0216 21:38:39.287024 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-87hwd" event={"ID":"650c4ac6-fc3c-4a97-871d-65c399538b17","Type":"ContainerDied","Data":"5e99ff0549d7445c0f207252f2959a456b468f3c8a96b59cb1e7c8fbb62f775e"} Feb 16 21:38:39.287448 master-0 kubenswrapper[38936]: I0216 21:38:39.287110 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e99ff0549d7445c0f207252f2959a456b468f3c8a96b59cb1e7c8fbb62f775e" Feb 16 21:38:39.290420 master-0 kubenswrapper[38936]: I0216 21:38:39.290087 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-57f476567b-fwqws" event={"ID":"cfcdcd18-dd01-45c8-afd4-ec72a986d582","Type":"ContainerStarted","Data":"68c8e47c3bece98dd28cce8f37a6fdf334b79e3e9be6fe52b5a4b7f0102bce58"} Feb 16 21:38:39.290420 master-0 kubenswrapper[38936]: I0216 21:38:39.290137 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b871-account-create-update-96b65" Feb 16 21:38:39.290420 master-0 kubenswrapper[38936]: I0216 21:38:39.290144 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-z4z2j" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: I0216 21:38:39.708472 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-765cf7b859-fnh5l"] Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: E0216 21:38:39.709011 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c58d2c-4204-4d3b-9d2a-fdbf35ad8029" containerName="mariadb-account-create-update" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: I0216 21:38:39.709027 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c58d2c-4204-4d3b-9d2a-fdbf35ad8029" containerName="mariadb-account-create-update" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: E0216 21:38:39.709041 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f" containerName="mariadb-database-create" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: I0216 21:38:39.709048 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f" containerName="mariadb-database-create" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: E0216 21:38:39.709068 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650c4ac6-fc3c-4a97-871d-65c399538b17" containerName="ironic-inspector-db-sync" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: I0216 21:38:39.709074 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="650c4ac6-fc3c-4a97-871d-65c399538b17" containerName="ironic-inspector-db-sync" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: E0216 21:38:39.709086 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0db6a508-ec90-49da-867e-ada0192b7b35" containerName="mariadb-account-create-update" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: I0216 21:38:39.709092 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="0db6a508-ec90-49da-867e-ada0192b7b35" 
containerName="mariadb-account-create-update" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: E0216 21:38:39.709113 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a" containerName="mariadb-account-create-update" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: I0216 21:38:39.709121 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a" containerName="mariadb-account-create-update" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: E0216 21:38:39.709130 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d470e92-7826-4314-9ecb-7b37cd11b8e2" containerName="neutron-httpd" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: I0216 21:38:39.709136 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d470e92-7826-4314-9ecb-7b37cd11b8e2" containerName="neutron-httpd" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: E0216 21:38:39.709147 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aab575a9-488c-44b1-a7e0-3025fa81207e" containerName="mariadb-database-create" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: I0216 21:38:39.709154 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="aab575a9-488c-44b1-a7e0-3025fa81207e" containerName="mariadb-database-create" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: E0216 21:38:39.709177 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d470e92-7826-4314-9ecb-7b37cd11b8e2" containerName="neutron-api" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: I0216 21:38:39.709183 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d470e92-7826-4314-9ecb-7b37cd11b8e2" containerName="neutron-api" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: E0216 21:38:39.709196 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f3ca2c-2ba6-4148-a4e8-843943926a5c" 
containerName="mariadb-database-create" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: I0216 21:38:39.709202 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f3ca2c-2ba6-4148-a4e8-843943926a5c" containerName="mariadb-database-create" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: I0216 21:38:39.709427 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f" containerName="mariadb-database-create" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: I0216 21:38:39.709453 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c58d2c-4204-4d3b-9d2a-fdbf35ad8029" containerName="mariadb-account-create-update" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: I0216 21:38:39.709465 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d470e92-7826-4314-9ecb-7b37cd11b8e2" containerName="neutron-api" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: I0216 21:38:39.709483 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="aab575a9-488c-44b1-a7e0-3025fa81207e" containerName="mariadb-database-create" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: I0216 21:38:39.709504 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="650c4ac6-fc3c-4a97-871d-65c399538b17" containerName="ironic-inspector-db-sync" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: I0216 21:38:39.709515 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7f3ca2c-2ba6-4148-a4e8-843943926a5c" containerName="mariadb-database-create" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: I0216 21:38:39.709525 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a" containerName="mariadb-account-create-update" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: I0216 21:38:39.709537 38936 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0db6a508-ec90-49da-867e-ada0192b7b35" containerName="mariadb-account-create-update" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: I0216 21:38:39.709551 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d470e92-7826-4314-9ecb-7b37cd11b8e2" containerName="neutron-httpd" Feb 16 21:38:39.718050 master-0 kubenswrapper[38936]: I0216 21:38:39.711882 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" Feb 16 21:38:39.756235 master-0 kubenswrapper[38936]: I0216 21:38:39.756096 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-765cf7b859-fnh5l"] Feb 16 21:38:39.821044 master-0 kubenswrapper[38936]: I0216 21:38:39.820956 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-dns-svc\") pod \"dnsmasq-dns-765cf7b859-fnh5l\" (UID: \"e9c9ee25-a478-4932-ae3f-39162f313e62\") " pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" Feb 16 21:38:39.821166 master-0 kubenswrapper[38936]: I0216 21:38:39.821138 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-config\") pod \"dnsmasq-dns-765cf7b859-fnh5l\" (UID: \"e9c9ee25-a478-4932-ae3f-39162f313e62\") " pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" Feb 16 21:38:39.821461 master-0 kubenswrapper[38936]: I0216 21:38:39.821403 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29pc4\" (UniqueName: \"kubernetes.io/projected/e9c9ee25-a478-4932-ae3f-39162f313e62-kube-api-access-29pc4\") pod \"dnsmasq-dns-765cf7b859-fnh5l\" (UID: \"e9c9ee25-a478-4932-ae3f-39162f313e62\") " pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" Feb 16 21:38:39.821524 master-0 kubenswrapper[38936]: I0216 
21:38:39.821489 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-ovsdbserver-sb\") pod \"dnsmasq-dns-765cf7b859-fnh5l\" (UID: \"e9c9ee25-a478-4932-ae3f-39162f313e62\") " pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" Feb 16 21:38:39.821789 master-0 kubenswrapper[38936]: I0216 21:38:39.821750 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-dns-swift-storage-0\") pod \"dnsmasq-dns-765cf7b859-fnh5l\" (UID: \"e9c9ee25-a478-4932-ae3f-39162f313e62\") " pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" Feb 16 21:38:39.822265 master-0 kubenswrapper[38936]: I0216 21:38:39.822222 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-ovsdbserver-nb\") pod \"dnsmasq-dns-765cf7b859-fnh5l\" (UID: \"e9c9ee25-a478-4932-ae3f-39162f313e62\") " pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" Feb 16 21:38:39.929390 master-0 kubenswrapper[38936]: I0216 21:38:39.924659 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-dns-svc\") pod \"dnsmasq-dns-765cf7b859-fnh5l\" (UID: \"e9c9ee25-a478-4932-ae3f-39162f313e62\") " pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" Feb 16 21:38:39.929390 master-0 kubenswrapper[38936]: I0216 21:38:39.924748 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-config\") pod \"dnsmasq-dns-765cf7b859-fnh5l\" (UID: \"e9c9ee25-a478-4932-ae3f-39162f313e62\") " pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" Feb 16 
21:38:39.929390 master-0 kubenswrapper[38936]: I0216 21:38:39.924798 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29pc4\" (UniqueName: \"kubernetes.io/projected/e9c9ee25-a478-4932-ae3f-39162f313e62-kube-api-access-29pc4\") pod \"dnsmasq-dns-765cf7b859-fnh5l\" (UID: \"e9c9ee25-a478-4932-ae3f-39162f313e62\") " pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" Feb 16 21:38:39.929390 master-0 kubenswrapper[38936]: I0216 21:38:39.924827 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-ovsdbserver-sb\") pod \"dnsmasq-dns-765cf7b859-fnh5l\" (UID: \"e9c9ee25-a478-4932-ae3f-39162f313e62\") " pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" Feb 16 21:38:39.929390 master-0 kubenswrapper[38936]: I0216 21:38:39.925893 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-dns-swift-storage-0\") pod \"dnsmasq-dns-765cf7b859-fnh5l\" (UID: \"e9c9ee25-a478-4932-ae3f-39162f313e62\") " pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" Feb 16 21:38:39.929390 master-0 kubenswrapper[38936]: I0216 21:38:39.926043 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-ovsdbserver-nb\") pod \"dnsmasq-dns-765cf7b859-fnh5l\" (UID: \"e9c9ee25-a478-4932-ae3f-39162f313e62\") " pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" Feb 16 21:38:39.929390 master-0 kubenswrapper[38936]: I0216 21:38:39.927080 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-ovsdbserver-nb\") pod \"dnsmasq-dns-765cf7b859-fnh5l\" (UID: \"e9c9ee25-a478-4932-ae3f-39162f313e62\") " 
pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" Feb 16 21:38:39.929390 master-0 kubenswrapper[38936]: I0216 21:38:39.927634 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-dns-svc\") pod \"dnsmasq-dns-765cf7b859-fnh5l\" (UID: \"e9c9ee25-a478-4932-ae3f-39162f313e62\") " pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" Feb 16 21:38:39.929390 master-0 kubenswrapper[38936]: I0216 21:38:39.928265 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-config\") pod \"dnsmasq-dns-765cf7b859-fnh5l\" (UID: \"e9c9ee25-a478-4932-ae3f-39162f313e62\") " pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" Feb 16 21:38:39.930838 master-0 kubenswrapper[38936]: I0216 21:38:39.930801 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-ovsdbserver-sb\") pod \"dnsmasq-dns-765cf7b859-fnh5l\" (UID: \"e9c9ee25-a478-4932-ae3f-39162f313e62\") " pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" Feb 16 21:38:39.931669 master-0 kubenswrapper[38936]: I0216 21:38:39.931617 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-dns-swift-storage-0\") pod \"dnsmasq-dns-765cf7b859-fnh5l\" (UID: \"e9c9ee25-a478-4932-ae3f-39162f313e62\") " pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" Feb 16 21:38:39.953688 master-0 kubenswrapper[38936]: I0216 21:38:39.948737 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Feb 16 21:38:39.966757 master-0 kubenswrapper[38936]: I0216 21:38:39.966705 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Feb 16 21:38:39.997541 master-0 kubenswrapper[38936]: I0216 21:38:39.991625 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Feb 16 21:38:39.997541 master-0 kubenswrapper[38936]: I0216 21:38:39.991873 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Feb 16 21:38:39.997541 master-0 kubenswrapper[38936]: I0216 21:38:39.992006 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Feb 16 21:38:40.001515 master-0 kubenswrapper[38936]: I0216 21:38:40.001459 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29pc4\" (UniqueName: \"kubernetes.io/projected/e9c9ee25-a478-4932-ae3f-39162f313e62-kube-api-access-29pc4\") pod \"dnsmasq-dns-765cf7b859-fnh5l\" (UID: \"e9c9ee25-a478-4932-ae3f-39162f313e62\") " pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" Feb 16 21:38:40.039818 master-0 kubenswrapper[38936]: I0216 21:38:40.039742 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 16 21:38:40.085869 master-0 kubenswrapper[38936]: I0216 21:38:40.085790 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" Feb 16 21:38:40.140681 master-0 kubenswrapper[38936]: I0216 21:38:40.133556 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:40.140681 master-0 kubenswrapper[38936]: I0216 21:38:40.133682 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:40.140681 master-0 kubenswrapper[38936]: I0216 21:38:40.133767 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pz6m\" (UniqueName: \"kubernetes.io/projected/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-kube-api-access-2pz6m\") pod \"ironic-inspector-0\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:40.140681 master-0 kubenswrapper[38936]: I0216 21:38:40.133794 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-scripts\") pod \"ironic-inspector-0\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:40.140681 master-0 kubenswrapper[38936]: I0216 21:38:40.133815 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:40.140681 master-0 kubenswrapper[38936]: I0216 21:38:40.133849 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:40.140681 master-0 kubenswrapper[38936]: I0216 21:38:40.133883 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-config\") pod \"ironic-inspector-0\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:40.236748 master-0 kubenswrapper[38936]: I0216 21:38:40.236686 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:40.236962 master-0 kubenswrapper[38936]: I0216 21:38:40.236795 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:40.236962 master-0 kubenswrapper[38936]: I0216 21:38:40.236926 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pz6m\" (UniqueName: 
\"kubernetes.io/projected/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-kube-api-access-2pz6m\") pod \"ironic-inspector-0\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:40.236962 master-0 kubenswrapper[38936]: I0216 21:38:40.236953 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-scripts\") pod \"ironic-inspector-0\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:40.237098 master-0 kubenswrapper[38936]: I0216 21:38:40.236983 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:40.237098 master-0 kubenswrapper[38936]: I0216 21:38:40.237029 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:40.237098 master-0 kubenswrapper[38936]: I0216 21:38:40.237076 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-config\") pod \"ironic-inspector-0\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:40.239687 master-0 kubenswrapper[38936]: I0216 21:38:40.239611 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: 
\"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:40.240044 master-0 kubenswrapper[38936]: I0216 21:38:40.240005 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:40.246112 master-0 kubenswrapper[38936]: I0216 21:38:40.244977 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-config\") pod \"ironic-inspector-0\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:40.246112 master-0 kubenswrapper[38936]: I0216 21:38:40.245819 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-scripts\") pod \"ironic-inspector-0\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:40.247587 master-0 kubenswrapper[38936]: I0216 21:38:40.247542 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:40.249689 master-0 kubenswrapper[38936]: I0216 21:38:40.249617 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:40.260215 master-0 
kubenswrapper[38936]: I0216 21:38:40.258634 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pz6m\" (UniqueName: \"kubernetes.io/projected/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-kube-api-access-2pz6m\") pod \"ironic-inspector-0\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:40.350700 master-0 kubenswrapper[38936]: I0216 21:38:40.348407 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d7ec-default-external-api-0" event={"ID":"8f970815-5d27-4567-bd44-9d6f9cf10774","Type":"ContainerStarted","Data":"dae6fd36bb18d36baa65924466b2d4834ab208c16c30b2fe9c170da14ca2c76d"} Feb 16 21:38:40.364892 master-0 kubenswrapper[38936]: I0216 21:38:40.363568 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d7ec-default-internal-api-0" event={"ID":"14ba8a70-283c-4aee-94ee-ea2a1f2c1781","Type":"ContainerStarted","Data":"de60a765b7d53cbe1e2ce137a4552df019d62f8d90bec84ea556d3f3e247242e"} Feb 16 21:38:40.388914 master-0 kubenswrapper[38936]: I0216 21:38:40.388116 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-1d7ec-default-external-api-0" podStartSLOduration=9.388097258 podStartE2EDuration="9.388097258s" podCreationTimestamp="2026-02-16 21:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:38:40.386788223 +0000 UTC m=+950.738791585" watchObservedRunningTime="2026-02-16 21:38:40.388097258 +0000 UTC m=+950.740100620" Feb 16 21:38:40.407813 master-0 kubenswrapper[38936]: I0216 21:38:40.407470 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0"
Feb 16 21:38:40.639904 master-0 kubenswrapper[38936]: I0216 21:38:40.638755 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-765cf7b859-fnh5l"]
Feb 16 21:38:41.115289 master-0 kubenswrapper[38936]: I0216 21:38:41.115212 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"]
Feb 16 21:38:41.465463 master-0 kubenswrapper[38936]: I0216 21:38:41.461956 38936 generic.go:334] "Generic (PLEG): container finished" podID="e9c9ee25-a478-4932-ae3f-39162f313e62" containerID="ecad561e587b0e24d4e18e6cb83d0c2c69999e3669008b1c804b7a261f1ff885" exitCode=0
Feb 16 21:38:41.465463 master-0 kubenswrapper[38936]: I0216 21:38:41.462063 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" event={"ID":"e9c9ee25-a478-4932-ae3f-39162f313e62","Type":"ContainerDied","Data":"ecad561e587b0e24d4e18e6cb83d0c2c69999e3669008b1c804b7a261f1ff885"}
Feb 16 21:38:41.465463 master-0 kubenswrapper[38936]: I0216 21:38:41.462092 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" event={"ID":"e9c9ee25-a478-4932-ae3f-39162f313e62","Type":"ContainerStarted","Data":"72693dec909c5055c5ead3de470a7a67ee584a3e58ddbdbe95598448fe1658f7"}
Feb 16 21:38:41.503781 master-0 kubenswrapper[38936]: I0216 21:38:41.497098 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061","Type":"ContainerStarted","Data":"16a5430bbf253d4e5728df557b17edc3381fcb3befcc1e29c950ccbfc001c8f9"}
Feb 16 21:38:41.534282 master-0 kubenswrapper[38936]: I0216 21:38:41.534206 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1d7ec-default-internal-api-0" event={"ID":"14ba8a70-283c-4aee-94ee-ea2a1f2c1781","Type":"ContainerStarted","Data":"981419f080cd65704f12a0c64e8e293ef7e8ac8d384138e85cf08853a5f3536f"}
Feb 16 21:38:41.595391 master-0 kubenswrapper[38936]: I0216 21:38:41.595262 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-1d7ec-default-internal-api-0" podStartSLOduration=8.595235721 podStartE2EDuration="8.595235721s" podCreationTimestamp="2026-02-16 21:38:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:38:41.588035558 +0000 UTC m=+951.940038920" watchObservedRunningTime="2026-02-16 21:38:41.595235721 +0000 UTC m=+951.947239083"
Feb 16 21:38:41.638678 master-0 kubenswrapper[38936]: I0216 21:38:41.637368 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jjlmc"]
Feb 16 21:38:41.639418 master-0 kubenswrapper[38936]: I0216 21:38:41.639381 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jjlmc"
Feb 16 21:38:41.653869 master-0 kubenswrapper[38936]: I0216 21:38:41.653811 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Feb 16 21:38:41.654507 master-0 kubenswrapper[38936]: I0216 21:38:41.654491 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 16 21:38:41.818489 master-0 kubenswrapper[38936]: I0216 21:38:41.799296 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e0cbb0a-133a-421f-9c54-a473c5446028-scripts\") pod \"nova-cell0-conductor-db-sync-jjlmc\" (UID: \"8e0cbb0a-133a-421f-9c54-a473c5446028\") " pod="openstack/nova-cell0-conductor-db-sync-jjlmc"
Feb 16 21:38:41.818489 master-0 kubenswrapper[38936]: I0216 21:38:41.799423 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e0cbb0a-133a-421f-9c54-a473c5446028-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jjlmc\" (UID: \"8e0cbb0a-133a-421f-9c54-a473c5446028\") " pod="openstack/nova-cell0-conductor-db-sync-jjlmc"
Feb 16 21:38:41.818489 master-0 kubenswrapper[38936]: I0216 21:38:41.799523 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e0cbb0a-133a-421f-9c54-a473c5446028-config-data\") pod \"nova-cell0-conductor-db-sync-jjlmc\" (UID: \"8e0cbb0a-133a-421f-9c54-a473c5446028\") " pod="openstack/nova-cell0-conductor-db-sync-jjlmc"
Feb 16 21:38:41.820895 master-0 kubenswrapper[38936]: I0216 21:38:41.820824 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsg8c\" (UniqueName: \"kubernetes.io/projected/8e0cbb0a-133a-421f-9c54-a473c5446028-kube-api-access-hsg8c\") pod \"nova-cell0-conductor-db-sync-jjlmc\" (UID: \"8e0cbb0a-133a-421f-9c54-a473c5446028\") " pod="openstack/nova-cell0-conductor-db-sync-jjlmc"
Feb 16 21:38:41.865199 master-0 kubenswrapper[38936]: I0216 21:38:41.855750 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jjlmc"]
Feb 16 21:38:41.938227 master-0 kubenswrapper[38936]: I0216 21:38:41.938112 38936 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podf3fc7857-f230-4a40-8fb6-9b01dd29c502"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podf3fc7857-f230-4a40-8fb6-9b01dd29c502] : Timed out while waiting for systemd to remove kubepods-besteffort-podf3fc7857_f230_4a40_8fb6_9b01dd29c502.slice"
Feb 16 21:38:41.938227 master-0 kubenswrapper[38936]: E0216 21:38:41.938213 38936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podf3fc7857-f230-4a40-8fb6-9b01dd29c502] : unable to destroy cgroup paths for cgroup [kubepods besteffort podf3fc7857-f230-4a40-8fb6-9b01dd29c502] : Timed out while waiting for systemd to remove kubepods-besteffort-podf3fc7857_f230_4a40_8fb6_9b01dd29c502.slice" pod="openstack/ironic-inspector-1991-account-create-update-vb2d9" podUID="f3fc7857-f230-4a40-8fb6-9b01dd29c502"
Feb 16 21:38:41.938695 master-0 kubenswrapper[38936]: I0216 21:38:41.938310 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsg8c\" (UniqueName: \"kubernetes.io/projected/8e0cbb0a-133a-421f-9c54-a473c5446028-kube-api-access-hsg8c\") pod \"nova-cell0-conductor-db-sync-jjlmc\" (UID: \"8e0cbb0a-133a-421f-9c54-a473c5446028\") " pod="openstack/nova-cell0-conductor-db-sync-jjlmc"
Feb 16 21:38:41.938695 master-0 kubenswrapper[38936]: I0216 21:38:41.938419 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e0cbb0a-133a-421f-9c54-a473c5446028-scripts\") pod \"nova-cell0-conductor-db-sync-jjlmc\" (UID: \"8e0cbb0a-133a-421f-9c54-a473c5446028\") " pod="openstack/nova-cell0-conductor-db-sync-jjlmc"
Feb 16 21:38:41.938695 master-0 kubenswrapper[38936]: I0216 21:38:41.938458 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e0cbb0a-133a-421f-9c54-a473c5446028-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jjlmc\" (UID: \"8e0cbb0a-133a-421f-9c54-a473c5446028\") " pod="openstack/nova-cell0-conductor-db-sync-jjlmc"
Feb 16 21:38:41.938695 master-0 kubenswrapper[38936]: I0216 21:38:41.938500 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e0cbb0a-133a-421f-9c54-a473c5446028-config-data\") pod \"nova-cell0-conductor-db-sync-jjlmc\" (UID: \"8e0cbb0a-133a-421f-9c54-a473c5446028\") " pod="openstack/nova-cell0-conductor-db-sync-jjlmc"
Feb 16 21:38:41.944996 master-0 kubenswrapper[38936]: I0216 21:38:41.944895 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e0cbb0a-133a-421f-9c54-a473c5446028-config-data\") pod \"nova-cell0-conductor-db-sync-jjlmc\" (UID: \"8e0cbb0a-133a-421f-9c54-a473c5446028\") " pod="openstack/nova-cell0-conductor-db-sync-jjlmc"
Feb 16 21:38:41.945426 master-0 kubenswrapper[38936]: I0216 21:38:41.945391 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e0cbb0a-133a-421f-9c54-a473c5446028-scripts\") pod \"nova-cell0-conductor-db-sync-jjlmc\" (UID: \"8e0cbb0a-133a-421f-9c54-a473c5446028\") " pod="openstack/nova-cell0-conductor-db-sync-jjlmc"
Feb 16 21:38:41.946267 master-0 kubenswrapper[38936]: I0216 21:38:41.946227 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e0cbb0a-133a-421f-9c54-a473c5446028-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jjlmc\" (UID: \"8e0cbb0a-133a-421f-9c54-a473c5446028\") " pod="openstack/nova-cell0-conductor-db-sync-jjlmc"
Feb 16 21:38:42.030228 master-0 kubenswrapper[38936]: I0216 21:38:42.030158 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsg8c\" (UniqueName: \"kubernetes.io/projected/8e0cbb0a-133a-421f-9c54-a473c5446028-kube-api-access-hsg8c\") pod \"nova-cell0-conductor-db-sync-jjlmc\" (UID: \"8e0cbb0a-133a-421f-9c54-a473c5446028\") " pod="openstack/nova-cell0-conductor-db-sync-jjlmc"
Feb 16 21:38:42.120117 master-0 kubenswrapper[38936]: I0216 21:38:42.119949 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jjlmc"
Feb 16 21:38:42.579743 master-0 kubenswrapper[38936]: I0216 21:38:42.579560 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" event={"ID":"e9c9ee25-a478-4932-ae3f-39162f313e62","Type":"ContainerStarted","Data":"ed0fa4d5633f0dc5f43cff371a242595017805baf64082d7c4e5351c4abd058a"}
Feb 16 21:38:42.580741 master-0 kubenswrapper[38936]: I0216 21:38:42.580316 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-765cf7b859-fnh5l"
Feb 16 21:38:42.583680 master-0 kubenswrapper[38936]: I0216 21:38:42.583514 38936 generic.go:334] "Generic (PLEG): container finished" podID="f69d2dc9-965e-4fdf-a2dc-d082e1e5a061" containerID="d7b2b4eda9d86c039991dc874f39e15866d16f444169027fbcfb82b6d07138d0" exitCode=0
Feb 16 21:38:42.583680 master-0 kubenswrapper[38936]: I0216 21:38:42.583640 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-1991-account-create-update-vb2d9"
Feb 16 21:38:42.591181 master-0 kubenswrapper[38936]: I0216 21:38:42.585059 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061","Type":"ContainerDied","Data":"d7b2b4eda9d86c039991dc874f39e15866d16f444169027fbcfb82b6d07138d0"}
Feb 16 21:38:42.823548 master-0 kubenswrapper[38936]: I0216 21:38:42.823464 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-1d7ec-default-external-api-0"
Feb 16 21:38:42.823838 master-0 kubenswrapper[38936]: I0216 21:38:42.823621 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-1d7ec-default-external-api-0"
Feb 16 21:38:42.833742 master-0 kubenswrapper[38936]: I0216 21:38:42.833595 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5675994476-8qnnd"
Feb 16 21:38:42.864816 master-0 kubenswrapper[38936]: I0216 21:38:42.864752 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-1d7ec-default-external-api-0"
Feb 16 21:38:42.872077 master-0 kubenswrapper[38936]: I0216 21:38:42.872003 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-1d7ec-default-external-api-0"
Feb 16 21:38:42.909837 master-0 kubenswrapper[38936]: I0216 21:38:42.909733 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5675994476-8qnnd"
Feb 16 21:38:42.972849 master-0 kubenswrapper[38936]: I0216 21:38:42.971306 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" podStartSLOduration=3.971271275 podStartE2EDuration="3.971271275s" podCreationTimestamp="2026-02-16 21:38:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:38:42.94339642 +0000 UTC m=+953.295399792" watchObservedRunningTime="2026-02-16 21:38:42.971271275 +0000 UTC m=+953.323274637"
Feb 16 21:38:43.268731 master-0 kubenswrapper[38936]: I0216 21:38:43.266142 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jjlmc"]
Feb 16 21:38:43.452268 master-0 kubenswrapper[38936]: I0216 21:38:43.452201 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7768cbd466-2k4r9"]
Feb 16 21:38:43.452609 master-0 kubenswrapper[38936]: I0216 21:38:43.452485 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7768cbd466-2k4r9" podUID="96c52859-2457-4148-b87b-c6d552a3be73" containerName="placement-log" containerID="cri-o://7bc450cd524acaa39955df9cfd366067ef7a1e147fc6658d37a1be50b8972e62" gracePeriod=30
Feb 16 21:38:43.466924 master-0 kubenswrapper[38936]: I0216 21:38:43.452619 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7768cbd466-2k4r9" podUID="96c52859-2457-4148-b87b-c6d552a3be73" containerName="placement-api" containerID="cri-o://9070f314e0da4d022890cb82f8d1df443922d28ef16530da5e5169cf658dd733" gracePeriod=30
Feb 16 21:38:43.614065 master-0 kubenswrapper[38936]: I0216 21:38:43.614016 38936 generic.go:334] "Generic (PLEG): container finished" podID="37c815ef-1c3d-4b2a-b748-de04b8c4412c" containerID="f2e0f9f3dee6f98e2d24dc02586ea4933301194f50b8d5f1d6ff8a48f8c95d4c" exitCode=0
Feb 16 21:38:43.614546 master-0 kubenswrapper[38936]: I0216 21:38:43.614130 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"37c815ef-1c3d-4b2a-b748-de04b8c4412c","Type":"ContainerDied","Data":"f2e0f9f3dee6f98e2d24dc02586ea4933301194f50b8d5f1d6ff8a48f8c95d4c"}
Feb 16 21:38:43.619042 master-0 kubenswrapper[38936]: I0216 21:38:43.618990 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jjlmc" event={"ID":"8e0cbb0a-133a-421f-9c54-a473c5446028","Type":"ContainerStarted","Data":"f246f2b14e075b6e188b3053a7350db8c7a55510179efd0d610c669613d8e0c4"}
Feb 16 21:38:43.654974 master-0 kubenswrapper[38936]: I0216 21:38:43.654775 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-1d7ec-default-external-api-0"
Feb 16 21:38:43.655792 master-0 kubenswrapper[38936]: I0216 21:38:43.655083 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-1d7ec-default-external-api-0"
Feb 16 21:38:44.064077 master-0 kubenswrapper[38936]: I0216 21:38:44.060795 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-9c692-api-0"
Feb 16 21:38:44.165169 master-0 kubenswrapper[38936]: I0216 21:38:44.156185 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-57f476567b-fwqws"
Feb 16 21:38:44.205851 master-0 kubenswrapper[38936]: I0216 21:38:44.205784 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-57f476567b-fwqws"
Feb 16 21:38:45.501769 master-0 kubenswrapper[38936]: I0216 21:38:45.501593 38936 generic.go:334] "Generic (PLEG): container finished" podID="96c52859-2457-4148-b87b-c6d552a3be73" containerID="7bc450cd524acaa39955df9cfd366067ef7a1e147fc6658d37a1be50b8972e62" exitCode=143
Feb 16 21:38:45.503332 master-0 kubenswrapper[38936]: I0216 21:38:45.502049 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7768cbd466-2k4r9" event={"ID":"96c52859-2457-4148-b87b-c6d552a3be73","Type":"ContainerDied","Data":"7bc450cd524acaa39955df9cfd366067ef7a1e147fc6658d37a1be50b8972e62"}
Feb 16 21:38:45.516744 master-0 kubenswrapper[38936]: I0216 21:38:45.515459 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-1d7ec-default-internal-api-0"
Feb 16 21:38:45.516744 master-0 kubenswrapper[38936]: I0216 21:38:45.515513 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-1d7ec-default-internal-api-0"
Feb 16 21:38:45.589383 master-0 kubenswrapper[38936]: I0216 21:38:45.589302 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-1d7ec-default-internal-api-0"
Feb 16 21:38:45.589716 master-0 kubenswrapper[38936]: I0216 21:38:45.589454 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-1d7ec-default-internal-api-0"
Feb 16 21:38:46.058401 master-0 kubenswrapper[38936]: I0216 21:38:46.058104 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"]
Feb 16 21:38:46.527987 master-0 kubenswrapper[38936]: I0216 21:38:46.527926 38936 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 21:38:46.528639 master-0 kubenswrapper[38936]: I0216 21:38:46.528565 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-1d7ec-default-internal-api-0"
Feb 16 21:38:46.530114 master-0 kubenswrapper[38936]: I0216 21:38:46.528707 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-1d7ec-default-internal-api-0"
Feb 16 21:38:47.095327 master-0 kubenswrapper[38936]: I0216 21:38:47.093447 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-1d7ec-default-external-api-0"
Feb 16 21:38:47.330068 master-0 kubenswrapper[38936]: I0216 21:38:47.330013 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7768cbd466-2k4r9"
Feb 16 21:38:47.385744 master-0 kubenswrapper[38936]: I0216 21:38:47.385059 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6dvb\" (UniqueName: \"kubernetes.io/projected/96c52859-2457-4148-b87b-c6d552a3be73-kube-api-access-x6dvb\") pod \"96c52859-2457-4148-b87b-c6d552a3be73\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") "
Feb 16 21:38:47.385744 master-0 kubenswrapper[38936]: I0216 21:38:47.385153 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-scripts\") pod \"96c52859-2457-4148-b87b-c6d552a3be73\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") "
Feb 16 21:38:47.385744 master-0 kubenswrapper[38936]: I0216 21:38:47.385385 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-internal-tls-certs\") pod \"96c52859-2457-4148-b87b-c6d552a3be73\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") "
Feb 16 21:38:47.385744 master-0 kubenswrapper[38936]: I0216 21:38:47.385409 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-combined-ca-bundle\") pod \"96c52859-2457-4148-b87b-c6d552a3be73\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") "
Feb 16 21:38:47.385744 master-0 kubenswrapper[38936]: I0216 21:38:47.385479 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-public-tls-certs\") pod \"96c52859-2457-4148-b87b-c6d552a3be73\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") "
Feb 16 21:38:47.385744 master-0 kubenswrapper[38936]: I0216 21:38:47.385545 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96c52859-2457-4148-b87b-c6d552a3be73-logs\") pod \"96c52859-2457-4148-b87b-c6d552a3be73\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") "
Feb 16 21:38:47.385744 master-0 kubenswrapper[38936]: I0216 21:38:47.385595 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-config-data\") pod \"96c52859-2457-4148-b87b-c6d552a3be73\" (UID: \"96c52859-2457-4148-b87b-c6d552a3be73\") "
Feb 16 21:38:47.388661 master-0 kubenswrapper[38936]: I0216 21:38:47.388532 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96c52859-2457-4148-b87b-c6d552a3be73-kube-api-access-x6dvb" (OuterVolumeSpecName: "kube-api-access-x6dvb") pod "96c52859-2457-4148-b87b-c6d552a3be73" (UID: "96c52859-2457-4148-b87b-c6d552a3be73"). InnerVolumeSpecName "kube-api-access-x6dvb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:38:47.389604 master-0 kubenswrapper[38936]: I0216 21:38:47.389069 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96c52859-2457-4148-b87b-c6d552a3be73-logs" (OuterVolumeSpecName: "logs") pod "96c52859-2457-4148-b87b-c6d552a3be73" (UID: "96c52859-2457-4148-b87b-c6d552a3be73"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:38:47.392311 master-0 kubenswrapper[38936]: I0216 21:38:47.392270 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-scripts" (OuterVolumeSpecName: "scripts") pod "96c52859-2457-4148-b87b-c6d552a3be73" (UID: "96c52859-2457-4148-b87b-c6d552a3be73"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:38:47.489063 master-0 kubenswrapper[38936]: I0216 21:38:47.488933 38936 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96c52859-2457-4148-b87b-c6d552a3be73-logs\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:47.489063 master-0 kubenswrapper[38936]: I0216 21:38:47.488987 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6dvb\" (UniqueName: \"kubernetes.io/projected/96c52859-2457-4148-b87b-c6d552a3be73-kube-api-access-x6dvb\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:47.489063 master-0 kubenswrapper[38936]: I0216 21:38:47.488998 38936 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:47.495691 master-0 kubenswrapper[38936]: I0216 21:38:47.495601 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96c52859-2457-4148-b87b-c6d552a3be73" (UID: "96c52859-2457-4148-b87b-c6d552a3be73"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:38:47.495927 master-0 kubenswrapper[38936]: I0216 21:38:47.495810 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-config-data" (OuterVolumeSpecName: "config-data") pod "96c52859-2457-4148-b87b-c6d552a3be73" (UID: "96c52859-2457-4148-b87b-c6d552a3be73"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:38:47.557196 master-0 kubenswrapper[38936]: I0216 21:38:47.557056 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "96c52859-2457-4148-b87b-c6d552a3be73" (UID: "96c52859-2457-4148-b87b-c6d552a3be73"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:38:47.561137 master-0 kubenswrapper[38936]: I0216 21:38:47.561089 38936 generic.go:334] "Generic (PLEG): container finished" podID="96c52859-2457-4148-b87b-c6d552a3be73" containerID="9070f314e0da4d022890cb82f8d1df443922d28ef16530da5e5169cf658dd733" exitCode=0
Feb 16 21:38:47.562985 master-0 kubenswrapper[38936]: I0216 21:38:47.562950 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7768cbd466-2k4r9"
Feb 16 21:38:47.563626 master-0 kubenswrapper[38936]: I0216 21:38:47.563598 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7768cbd466-2k4r9" event={"ID":"96c52859-2457-4148-b87b-c6d552a3be73","Type":"ContainerDied","Data":"9070f314e0da4d022890cb82f8d1df443922d28ef16530da5e5169cf658dd733"}
Feb 16 21:38:47.563729 master-0 kubenswrapper[38936]: I0216 21:38:47.563632 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7768cbd466-2k4r9" event={"ID":"96c52859-2457-4148-b87b-c6d552a3be73","Type":"ContainerDied","Data":"a7a85301fa27089ba2f03e75f4bf929300af3a5d721fb07a4515c49970bc71f4"}
Feb 16 21:38:47.563729 master-0 kubenswrapper[38936]: I0216 21:38:47.563669 38936 scope.go:117] "RemoveContainer" containerID="9070f314e0da4d022890cb82f8d1df443922d28ef16530da5e5169cf658dd733"
Feb 16 21:38:47.563844 master-0 kubenswrapper[38936]: I0216 21:38:47.563768 38936 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 21:38:47.592162 master-0 kubenswrapper[38936]: I0216 21:38:47.592073 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-config-data\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:47.592162 master-0 kubenswrapper[38936]: I0216 21:38:47.592129 38936 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-internal-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:47.592162 master-0 kubenswrapper[38936]: I0216 21:38:47.592142 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:47.597403 master-0 kubenswrapper[38936]: I0216 21:38:47.597292 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "96c52859-2457-4148-b87b-c6d552a3be73" (UID: "96c52859-2457-4148-b87b-c6d552a3be73"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:38:47.615955 master-0 kubenswrapper[38936]: I0216 21:38:47.615918 38936 scope.go:117] "RemoveContainer" containerID="7bc450cd524acaa39955df9cfd366067ef7a1e147fc6658d37a1be50b8972e62"
Feb 16 21:38:47.695858 master-0 kubenswrapper[38936]: I0216 21:38:47.695747 38936 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/96c52859-2457-4148-b87b-c6d552a3be73-public-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:47.699023 master-0 kubenswrapper[38936]: I0216 21:38:47.698715 38936 scope.go:117] "RemoveContainer" containerID="9070f314e0da4d022890cb82f8d1df443922d28ef16530da5e5169cf658dd733"
Feb 16 21:38:47.701443 master-0 kubenswrapper[38936]: E0216 21:38:47.700926 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9070f314e0da4d022890cb82f8d1df443922d28ef16530da5e5169cf658dd733\": container with ID starting with 9070f314e0da4d022890cb82f8d1df443922d28ef16530da5e5169cf658dd733 not found: ID does not exist" containerID="9070f314e0da4d022890cb82f8d1df443922d28ef16530da5e5169cf658dd733"
Feb 16 21:38:47.701443 master-0 kubenswrapper[38936]: I0216 21:38:47.700964 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9070f314e0da4d022890cb82f8d1df443922d28ef16530da5e5169cf658dd733"} err="failed to get container status \"9070f314e0da4d022890cb82f8d1df443922d28ef16530da5e5169cf658dd733\": rpc error: code = NotFound desc = could not find container \"9070f314e0da4d022890cb82f8d1df443922d28ef16530da5e5169cf658dd733\": container with ID starting with 9070f314e0da4d022890cb82f8d1df443922d28ef16530da5e5169cf658dd733 not found: ID does not exist"
Feb 16 21:38:47.701443 master-0 kubenswrapper[38936]: I0216 21:38:47.700986 38936 scope.go:117] "RemoveContainer" containerID="7bc450cd524acaa39955df9cfd366067ef7a1e147fc6658d37a1be50b8972e62"
Feb 16 21:38:47.702733 master-0 kubenswrapper[38936]: E0216 21:38:47.702516 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bc450cd524acaa39955df9cfd366067ef7a1e147fc6658d37a1be50b8972e62\": container with ID starting with 7bc450cd524acaa39955df9cfd366067ef7a1e147fc6658d37a1be50b8972e62 not found: ID does not exist" containerID="7bc450cd524acaa39955df9cfd366067ef7a1e147fc6658d37a1be50b8972e62"
Feb 16 21:38:47.702733 master-0 kubenswrapper[38936]: I0216 21:38:47.702544 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bc450cd524acaa39955df9cfd366067ef7a1e147fc6658d37a1be50b8972e62"} err="failed to get container status \"7bc450cd524acaa39955df9cfd366067ef7a1e147fc6658d37a1be50b8972e62\": rpc error: code = NotFound desc = could not find container \"7bc450cd524acaa39955df9cfd366067ef7a1e147fc6658d37a1be50b8972e62\": container with ID starting with 7bc450cd524acaa39955df9cfd366067ef7a1e147fc6658d37a1be50b8972e62 not found: ID does not exist"
Feb 16 21:38:48.082770 master-0 kubenswrapper[38936]: I0216 21:38:48.082701 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-1d7ec-default-external-api-0"
Feb 16 21:38:48.110846 master-0 kubenswrapper[38936]: I0216 21:38:48.110590 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7768cbd466-2k4r9"]
Feb 16 21:38:48.466531 master-0 kubenswrapper[38936]: I0216 21:38:48.466314 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7768cbd466-2k4r9"]
Feb 16 21:38:48.902711 master-0 kubenswrapper[38936]: I0216 21:38:48.899167 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-1d7ec-default-internal-api-0"
Feb 16 21:38:48.902711 master-0 kubenswrapper[38936]: I0216 21:38:48.899374 38936 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 21:38:48.924587 master-0 kubenswrapper[38936]: I0216 21:38:48.924499 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-1d7ec-default-internal-api-0"
Feb 16 21:38:49.901577 master-0 kubenswrapper[38936]: I0216 21:38:49.901503 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96c52859-2457-4148-b87b-c6d552a3be73" path="/var/lib/kubelet/pods/96c52859-2457-4148-b87b-c6d552a3be73/volumes"
Feb 16 21:38:50.088917 master-0 kubenswrapper[38936]: I0216 21:38:50.088851 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-765cf7b859-fnh5l"
Feb 16 21:38:50.797234 master-0 kubenswrapper[38936]: I0216 21:38:50.796806 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-596cdf67df-snjb9"]
Feb 16 21:38:50.797234 master-0 kubenswrapper[38936]: I0216 21:38:50.797049 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-596cdf67df-snjb9" podUID="3182998b-e6c3-4733-a374-23e11d68c55a" containerName="dnsmasq-dns" containerID="cri-o://7170903cc1ded40ad20c722094c49391b2588dcbb4e36a259a46a5ad4dd802de" gracePeriod=10
Feb 16 21:38:51.627074 master-0 kubenswrapper[38936]: I0216 21:38:51.627019 38936 generic.go:334] "Generic (PLEG): container finished" podID="3182998b-e6c3-4733-a374-23e11d68c55a" containerID="7170903cc1ded40ad20c722094c49391b2588dcbb4e36a259a46a5ad4dd802de" exitCode=0
Feb 16 21:38:51.628174 master-0 kubenswrapper[38936]: I0216 21:38:51.627082 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-596cdf67df-snjb9" event={"ID":"3182998b-e6c3-4733-a374-23e11d68c55a","Type":"ContainerDied","Data":"7170903cc1ded40ad20c722094c49391b2588dcbb4e36a259a46a5ad4dd802de"}
Feb 16 21:38:54.850485 master-0 kubenswrapper[38936]: I0216 21:38:54.850413 38936 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-596cdf67df-snjb9" podUID="3182998b-e6c3-4733-a374-23e11d68c55a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.232:5353: connect: connection refused"
Feb 16 21:38:56.051840 master-0 kubenswrapper[38936]: I0216 21:38:56.051786 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport"
Feb 16 21:38:56.233558 master-0 kubenswrapper[38936]: I0216 21:38:56.233513 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-596cdf67df-snjb9"
Feb 16 21:38:56.277165 master-0 kubenswrapper[38936]: I0216 21:38:56.277104 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk65f\" (UniqueName: \"kubernetes.io/projected/3182998b-e6c3-4733-a374-23e11d68c55a-kube-api-access-xk65f\") pod \"3182998b-e6c3-4733-a374-23e11d68c55a\" (UID: \"3182998b-e6c3-4733-a374-23e11d68c55a\") "
Feb 16 21:38:56.277165 master-0 kubenswrapper[38936]: I0216 21:38:56.277169 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-config\") pod \"3182998b-e6c3-4733-a374-23e11d68c55a\" (UID: \"3182998b-e6c3-4733-a374-23e11d68c55a\") "
Feb 16 21:38:56.277423 master-0 kubenswrapper[38936]: I0216 21:38:56.277353 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-dns-swift-storage-0\") pod \"3182998b-e6c3-4733-a374-23e11d68c55a\" (UID: \"3182998b-e6c3-4733-a374-23e11d68c55a\") "
Feb 16 21:38:56.277423 master-0 kubenswrapper[38936]: I0216 21:38:56.277404 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-dns-svc\") pod \"3182998b-e6c3-4733-a374-23e11d68c55a\" (UID: \"3182998b-e6c3-4733-a374-23e11d68c55a\") "
Feb 16 21:38:56.281130 master-0 kubenswrapper[38936]: I0216 21:38:56.281033 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-ovsdbserver-nb\") pod \"3182998b-e6c3-4733-a374-23e11d68c55a\" (UID: \"3182998b-e6c3-4733-a374-23e11d68c55a\") "
Feb 16 21:38:56.281238 master-0 kubenswrapper[38936]: I0216 21:38:56.281203 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-ovsdbserver-sb\") pod \"3182998b-e6c3-4733-a374-23e11d68c55a\" (UID: \"3182998b-e6c3-4733-a374-23e11d68c55a\") "
Feb 16 21:38:56.289248 master-0 kubenswrapper[38936]: I0216 21:38:56.288639 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3182998b-e6c3-4733-a374-23e11d68c55a-kube-api-access-xk65f" (OuterVolumeSpecName: "kube-api-access-xk65f") pod "3182998b-e6c3-4733-a374-23e11d68c55a" (UID: "3182998b-e6c3-4733-a374-23e11d68c55a"). InnerVolumeSpecName "kube-api-access-xk65f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:38:56.382804 master-0 kubenswrapper[38936]: I0216 21:38:56.382696 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3182998b-e6c3-4733-a374-23e11d68c55a" (UID: "3182998b-e6c3-4733-a374-23e11d68c55a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:38:56.385710 master-0 kubenswrapper[38936]: I0216 21:38:56.385672 38936 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:56.385797 master-0 kubenswrapper[38936]: I0216 21:38:56.385719 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk65f\" (UniqueName: \"kubernetes.io/projected/3182998b-e6c3-4733-a374-23e11d68c55a-kube-api-access-xk65f\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:56.394587 master-0 kubenswrapper[38936]: I0216 21:38:56.394506 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3182998b-e6c3-4733-a374-23e11d68c55a" (UID: "3182998b-e6c3-4733-a374-23e11d68c55a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:38:56.398413 master-0 kubenswrapper[38936]: I0216 21:38:56.398360 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-config" (OuterVolumeSpecName: "config") pod "3182998b-e6c3-4733-a374-23e11d68c55a" (UID: "3182998b-e6c3-4733-a374-23e11d68c55a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:38:56.401162 master-0 kubenswrapper[38936]: I0216 21:38:56.401123 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3182998b-e6c3-4733-a374-23e11d68c55a" (UID: "3182998b-e6c3-4733-a374-23e11d68c55a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:38:56.441382 master-0 kubenswrapper[38936]: I0216 21:38:56.441310 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3182998b-e6c3-4733-a374-23e11d68c55a" (UID: "3182998b-e6c3-4733-a374-23e11d68c55a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 21:38:56.488205 master-0 kubenswrapper[38936]: I0216 21:38:56.488168 38936 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:56.488314 master-0 kubenswrapper[38936]: I0216 21:38:56.488303 38936 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:56.488386 master-0 kubenswrapper[38936]: I0216 21:38:56.488376 38936 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 16 21:38:56.488451 master-0 kubenswrapper[38936]: I0216 21:38:56.488441 38936 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName:
\"kubernetes.io/configmap/3182998b-e6c3-4733-a374-23e11d68c55a-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:56.717812 master-0 kubenswrapper[38936]: I0216 21:38:56.717707 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061","Type":"ContainerStarted","Data":"95d99901df9f8a5ae6ffb65419e31978f13be32f70c5836afb8ad57d501efa33"} Feb 16 21:38:56.718086 master-0 kubenswrapper[38936]: I0216 21:38:56.718011 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-inspector-0" podUID="f69d2dc9-965e-4fdf-a2dc-d082e1e5a061" containerName="inspector-pxe-init" containerID="cri-o://95d99901df9f8a5ae6ffb65419e31978f13be32f70c5836afb8ad57d501efa33" gracePeriod=60 Feb 16 21:38:56.724586 master-0 kubenswrapper[38936]: I0216 21:38:56.724501 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"37c815ef-1c3d-4b2a-b748-de04b8c4412c","Type":"ContainerStarted","Data":"50a94c27c885aa35c9bd973e857a3c4de6c450ddd454bcb361b38f85d44c5553"} Feb 16 21:38:56.728545 master-0 kubenswrapper[38936]: I0216 21:38:56.728491 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-596cdf67df-snjb9" event={"ID":"3182998b-e6c3-4733-a374-23e11d68c55a","Type":"ContainerDied","Data":"70ffcc4a920e4f645cbe653881a0ac7da4a574d5649d6697114d5373a8762102"} Feb 16 21:38:56.728699 master-0 kubenswrapper[38936]: I0216 21:38:56.728574 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-596cdf67df-snjb9" Feb 16 21:38:56.728699 master-0 kubenswrapper[38936]: I0216 21:38:56.728676 38936 scope.go:117] "RemoveContainer" containerID="7170903cc1ded40ad20c722094c49391b2588dcbb4e36a259a46a5ad4dd802de" Feb 16 21:38:56.731504 master-0 kubenswrapper[38936]: I0216 21:38:56.731452 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jjlmc" event={"ID":"8e0cbb0a-133a-421f-9c54-a473c5446028","Type":"ContainerStarted","Data":"bdde91efde1aac6af16fd60c7779ae5510c955ab1d3e2b9db89dbd4851607e51"} Feb 16 21:38:56.765452 master-0 kubenswrapper[38936]: I0216 21:38:56.765411 38936 scope.go:117] "RemoveContainer" containerID="807885708c9d21aa88b6175c7663291a4b386500b4d34f938b664a8823312a2f" Feb 16 21:38:56.875756 master-0 kubenswrapper[38936]: I0216 21:38:56.875632 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-596cdf67df-snjb9"] Feb 16 21:38:56.891581 master-0 kubenswrapper[38936]: I0216 21:38:56.891511 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-596cdf67df-snjb9"] Feb 16 21:38:56.896764 master-0 kubenswrapper[38936]: I0216 21:38:56.896676 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-jjlmc" podStartSLOduration=3.099151431 podStartE2EDuration="15.896635271s" podCreationTimestamp="2026-02-16 21:38:41 +0000 UTC" firstStartedPulling="2026-02-16 21:38:43.274515025 +0000 UTC m=+953.626518377" lastFinishedPulling="2026-02-16 21:38:56.071998855 +0000 UTC m=+966.424002217" observedRunningTime="2026-02-16 21:38:56.865919521 +0000 UTC m=+967.217922883" watchObservedRunningTime="2026-02-16 21:38:56.896635271 +0000 UTC m=+967.248638633" Feb 16 21:38:57.642749 master-0 kubenswrapper[38936]: I0216 21:38:57.642002 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Feb 16 21:38:57.723607 master-0 kubenswrapper[38936]: I0216 21:38:57.723535 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-etc-podinfo\") pod \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " Feb 16 21:38:57.723878 master-0 kubenswrapper[38936]: I0216 21:38:57.723767 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pz6m\" (UniqueName: \"kubernetes.io/projected/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-kube-api-access-2pz6m\") pod \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " Feb 16 21:38:57.723878 master-0 kubenswrapper[38936]: I0216 21:38:57.723827 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " Feb 16 21:38:57.723878 master-0 kubenswrapper[38936]: I0216 21:38:57.723861 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-config\") pod \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " Feb 16 21:38:57.724169 master-0 kubenswrapper[38936]: I0216 21:38:57.724114 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-combined-ca-bundle\") pod \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " Feb 16 21:38:57.724247 master-0 kubenswrapper[38936]: 
I0216 21:38:57.724222 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-scripts\") pod \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " Feb 16 21:38:57.724313 master-0 kubenswrapper[38936]: I0216 21:38:57.724269 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-var-lib-ironic\") pod \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\" (UID: \"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061\") " Feb 16 21:38:57.730671 master-0 kubenswrapper[38936]: I0216 21:38:57.728615 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "f69d2dc9-965e-4fdf-a2dc-d082e1e5a061" (UID: "f69d2dc9-965e-4fdf-a2dc-d082e1e5a061"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:38:57.750378 master-0 kubenswrapper[38936]: I0216 21:38:57.733850 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-scripts" (OuterVolumeSpecName: "scripts") pod "f69d2dc9-965e-4fdf-a2dc-d082e1e5a061" (UID: "f69d2dc9-965e-4fdf-a2dc-d082e1e5a061"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:57.750378 master-0 kubenswrapper[38936]: I0216 21:38:57.733833 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-config" (OuterVolumeSpecName: "config") pod "f69d2dc9-965e-4fdf-a2dc-d082e1e5a061" (UID: "f69d2dc9-965e-4fdf-a2dc-d082e1e5a061"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:57.750378 master-0 kubenswrapper[38936]: I0216 21:38:57.734007 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-kube-api-access-2pz6m" (OuterVolumeSpecName: "kube-api-access-2pz6m") pod "f69d2dc9-965e-4fdf-a2dc-d082e1e5a061" (UID: "f69d2dc9-965e-4fdf-a2dc-d082e1e5a061"). InnerVolumeSpecName "kube-api-access-2pz6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:38:57.750378 master-0 kubenswrapper[38936]: I0216 21:38:57.734032 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "f69d2dc9-965e-4fdf-a2dc-d082e1e5a061" (UID: "f69d2dc9-965e-4fdf-a2dc-d082e1e5a061"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 21:38:57.750378 master-0 kubenswrapper[38936]: I0216 21:38:57.738116 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "f69d2dc9-965e-4fdf-a2dc-d082e1e5a061" (UID: "f69d2dc9-965e-4fdf-a2dc-d082e1e5a061"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:38:57.750378 master-0 kubenswrapper[38936]: I0216 21:38:57.750051 38936 generic.go:334] "Generic (PLEG): container finished" podID="f69d2dc9-965e-4fdf-a2dc-d082e1e5a061" containerID="95d99901df9f8a5ae6ffb65419e31978f13be32f70c5836afb8ad57d501efa33" exitCode=0 Feb 16 21:38:57.750378 master-0 kubenswrapper[38936]: I0216 21:38:57.750243 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Feb 16 21:38:57.751543 master-0 kubenswrapper[38936]: I0216 21:38:57.751493 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061","Type":"ContainerDied","Data":"95d99901df9f8a5ae6ffb65419e31978f13be32f70c5836afb8ad57d501efa33"} Feb 16 21:38:57.751594 master-0 kubenswrapper[38936]: I0216 21:38:57.751550 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f69d2dc9-965e-4fdf-a2dc-d082e1e5a061","Type":"ContainerDied","Data":"16a5430bbf253d4e5728df557b17edc3381fcb3befcc1e29c950ccbfc001c8f9"} Feb 16 21:38:57.751594 master-0 kubenswrapper[38936]: I0216 21:38:57.751570 38936 scope.go:117] "RemoveContainer" containerID="95d99901df9f8a5ae6ffb65419e31978f13be32f70c5836afb8ad57d501efa33" Feb 16 21:38:57.802217 master-0 kubenswrapper[38936]: I0216 21:38:57.802135 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f69d2dc9-965e-4fdf-a2dc-d082e1e5a061" (UID: "f69d2dc9-965e-4fdf-a2dc-d082e1e5a061"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:38:57.860765 master-0 kubenswrapper[38936]: I0216 21:38:57.860702 38936 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:57.860765 master-0 kubenswrapper[38936]: I0216 21:38:57.860756 38936 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-var-lib-ironic\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:57.860765 master-0 kubenswrapper[38936]: I0216 21:38:57.860771 38936 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:57.861048 master-0 kubenswrapper[38936]: I0216 21:38:57.860785 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pz6m\" (UniqueName: \"kubernetes.io/projected/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-kube-api-access-2pz6m\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:57.861048 master-0 kubenswrapper[38936]: I0216 21:38:57.860797 38936 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:57.861048 master-0 kubenswrapper[38936]: I0216 21:38:57.860809 38936 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:57.861048 master-0 kubenswrapper[38936]: I0216 21:38:57.860820 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:38:57.891131 master-0 kubenswrapper[38936]: I0216 21:38:57.891089 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3182998b-e6c3-4733-a374-23e11d68c55a" path="/var/lib/kubelet/pods/3182998b-e6c3-4733-a374-23e11d68c55a/volumes" Feb 16 21:38:57.931330 master-0 kubenswrapper[38936]: I0216 21:38:57.931273 38936 scope.go:117] "RemoveContainer" containerID="d7b2b4eda9d86c039991dc874f39e15866d16f444169027fbcfb82b6d07138d0" Feb 16 21:38:57.984809 master-0 kubenswrapper[38936]: I0216 21:38:57.984757 38936 scope.go:117] "RemoveContainer" containerID="95d99901df9f8a5ae6ffb65419e31978f13be32f70c5836afb8ad57d501efa33" Feb 16 21:38:57.985801 master-0 kubenswrapper[38936]: E0216 21:38:57.985750 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95d99901df9f8a5ae6ffb65419e31978f13be32f70c5836afb8ad57d501efa33\": container with ID starting with 95d99901df9f8a5ae6ffb65419e31978f13be32f70c5836afb8ad57d501efa33 not found: ID does not exist" containerID="95d99901df9f8a5ae6ffb65419e31978f13be32f70c5836afb8ad57d501efa33" Feb 16 21:38:57.985984 master-0 kubenswrapper[38936]: I0216 21:38:57.985942 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95d99901df9f8a5ae6ffb65419e31978f13be32f70c5836afb8ad57d501efa33"} err="failed to get container status \"95d99901df9f8a5ae6ffb65419e31978f13be32f70c5836afb8ad57d501efa33\": rpc error: code = NotFound desc = could not find container \"95d99901df9f8a5ae6ffb65419e31978f13be32f70c5836afb8ad57d501efa33\": container with ID starting with 95d99901df9f8a5ae6ffb65419e31978f13be32f70c5836afb8ad57d501efa33 not found: ID does not exist" Feb 16 21:38:57.986100 master-0 kubenswrapper[38936]: I0216 21:38:57.986087 38936 scope.go:117] "RemoveContainer" 
containerID="d7b2b4eda9d86c039991dc874f39e15866d16f444169027fbcfb82b6d07138d0" Feb 16 21:38:57.987781 master-0 kubenswrapper[38936]: E0216 21:38:57.987752 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7b2b4eda9d86c039991dc874f39e15866d16f444169027fbcfb82b6d07138d0\": container with ID starting with d7b2b4eda9d86c039991dc874f39e15866d16f444169027fbcfb82b6d07138d0 not found: ID does not exist" containerID="d7b2b4eda9d86c039991dc874f39e15866d16f444169027fbcfb82b6d07138d0" Feb 16 21:38:57.987864 master-0 kubenswrapper[38936]: I0216 21:38:57.987792 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7b2b4eda9d86c039991dc874f39e15866d16f444169027fbcfb82b6d07138d0"} err="failed to get container status \"d7b2b4eda9d86c039991dc874f39e15866d16f444169027fbcfb82b6d07138d0\": rpc error: code = NotFound desc = could not find container \"d7b2b4eda9d86c039991dc874f39e15866d16f444169027fbcfb82b6d07138d0\": container with ID starting with d7b2b4eda9d86c039991dc874f39e15866d16f444169027fbcfb82b6d07138d0 not found: ID does not exist" Feb 16 21:38:58.148091 master-0 kubenswrapper[38936]: I0216 21:38:58.147941 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Feb 16 21:38:58.167896 master-0 kubenswrapper[38936]: I0216 21:38:58.167822 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-0"] Feb 16 21:38:58.202315 master-0 kubenswrapper[38936]: I0216 21:38:58.202237 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Feb 16 21:38:58.202888 master-0 kubenswrapper[38936]: E0216 21:38:58.202863 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69d2dc9-965e-4fdf-a2dc-d082e1e5a061" containerName="ironic-python-agent-init" Feb 16 21:38:58.202978 master-0 kubenswrapper[38936]: I0216 21:38:58.202890 38936 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="f69d2dc9-965e-4fdf-a2dc-d082e1e5a061" containerName="ironic-python-agent-init" Feb 16 21:38:58.202978 master-0 kubenswrapper[38936]: E0216 21:38:58.202933 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3182998b-e6c3-4733-a374-23e11d68c55a" containerName="dnsmasq-dns" Feb 16 21:38:58.202978 master-0 kubenswrapper[38936]: I0216 21:38:58.202945 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="3182998b-e6c3-4733-a374-23e11d68c55a" containerName="dnsmasq-dns" Feb 16 21:38:58.202978 master-0 kubenswrapper[38936]: E0216 21:38:58.202972 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69d2dc9-965e-4fdf-a2dc-d082e1e5a061" containerName="inspector-pxe-init" Feb 16 21:38:58.202978 master-0 kubenswrapper[38936]: I0216 21:38:58.202980 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="f69d2dc9-965e-4fdf-a2dc-d082e1e5a061" containerName="inspector-pxe-init" Feb 16 21:38:58.203227 master-0 kubenswrapper[38936]: E0216 21:38:58.202999 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c52859-2457-4148-b87b-c6d552a3be73" containerName="placement-log" Feb 16 21:38:58.203227 master-0 kubenswrapper[38936]: I0216 21:38:58.203008 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c52859-2457-4148-b87b-c6d552a3be73" containerName="placement-log" Feb 16 21:38:58.203227 master-0 kubenswrapper[38936]: E0216 21:38:58.203035 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c52859-2457-4148-b87b-c6d552a3be73" containerName="placement-api" Feb 16 21:38:58.203227 master-0 kubenswrapper[38936]: I0216 21:38:58.203043 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c52859-2457-4148-b87b-c6d552a3be73" containerName="placement-api" Feb 16 21:38:58.203227 master-0 kubenswrapper[38936]: E0216 21:38:58.203065 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3182998b-e6c3-4733-a374-23e11d68c55a" 
containerName="init" Feb 16 21:38:58.203227 master-0 kubenswrapper[38936]: I0216 21:38:58.203074 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="3182998b-e6c3-4733-a374-23e11d68c55a" containerName="init" Feb 16 21:38:58.203466 master-0 kubenswrapper[38936]: I0216 21:38:58.203443 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="f69d2dc9-965e-4fdf-a2dc-d082e1e5a061" containerName="inspector-pxe-init" Feb 16 21:38:58.203512 master-0 kubenswrapper[38936]: I0216 21:38:58.203489 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c52859-2457-4148-b87b-c6d552a3be73" containerName="placement-log" Feb 16 21:38:58.203512 master-0 kubenswrapper[38936]: I0216 21:38:58.203505 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="3182998b-e6c3-4733-a374-23e11d68c55a" containerName="dnsmasq-dns" Feb 16 21:38:58.203592 master-0 kubenswrapper[38936]: I0216 21:38:58.203517 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c52859-2457-4148-b87b-c6d552a3be73" containerName="placement-api" Feb 16 21:38:58.209741 master-0 kubenswrapper[38936]: I0216 21:38:58.207936 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Feb 16 21:38:58.214896 master-0 kubenswrapper[38936]: I0216 21:38:58.214642 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Feb 16 21:38:58.215095 master-0 kubenswrapper[38936]: I0216 21:38:58.215008 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-public-svc" Feb 16 21:38:58.215211 master-0 kubenswrapper[38936]: I0216 21:38:58.215188 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Feb 16 21:38:58.215373 master-0 kubenswrapper[38936]: I0216 21:38:58.215300 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-internal-svc" Feb 16 21:38:58.215429 master-0 kubenswrapper[38936]: I0216 21:38:58.215411 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Feb 16 21:38:58.217493 master-0 kubenswrapper[38936]: I0216 21:38:58.217422 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 16 21:38:58.277679 master-0 kubenswrapper[38936]: I0216 21:38:58.272306 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a06f554-76df-4a75-acbb-455f96f20dd5-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:58.277679 master-0 kubenswrapper[38936]: I0216 21:38:58.272378 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a06f554-76df-4a75-acbb-455f96f20dd5-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " 
pod="openstack/ironic-inspector-0" Feb 16 21:38:58.277679 master-0 kubenswrapper[38936]: I0216 21:38:58.272839 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4dlh\" (UniqueName: \"kubernetes.io/projected/1a06f554-76df-4a75-acbb-455f96f20dd5-kube-api-access-p4dlh\") pod \"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:58.277679 master-0 kubenswrapper[38936]: I0216 21:38:58.272946 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1a06f554-76df-4a75-acbb-455f96f20dd5-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:58.277679 master-0 kubenswrapper[38936]: I0216 21:38:58.273022 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/1a06f554-76df-4a75-acbb-455f96f20dd5-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:58.277679 master-0 kubenswrapper[38936]: I0216 21:38:58.273243 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a06f554-76df-4a75-acbb-455f96f20dd5-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:58.277679 master-0 kubenswrapper[38936]: I0216 21:38:58.273307 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a06f554-76df-4a75-acbb-455f96f20dd5-config\") pod 
\"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:58.277679 master-0 kubenswrapper[38936]: I0216 21:38:58.273352 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/1a06f554-76df-4a75-acbb-455f96f20dd5-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:58.277679 master-0 kubenswrapper[38936]: I0216 21:38:58.273492 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a06f554-76df-4a75-acbb-455f96f20dd5-scripts\") pod \"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:58.375551 master-0 kubenswrapper[38936]: I0216 21:38:58.375485 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4dlh\" (UniqueName: \"kubernetes.io/projected/1a06f554-76df-4a75-acbb-455f96f20dd5-kube-api-access-p4dlh\") pod \"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:58.375809 master-0 kubenswrapper[38936]: I0216 21:38:58.375569 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1a06f554-76df-4a75-acbb-455f96f20dd5-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:58.375809 master-0 kubenswrapper[38936]: I0216 21:38:58.375623 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/1a06f554-76df-4a75-acbb-455f96f20dd5-var-lib-ironic-inspector-dhcp-hostsdir\") pod 
\"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:58.375809 master-0 kubenswrapper[38936]: I0216 21:38:58.375678 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a06f554-76df-4a75-acbb-455f96f20dd5-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:58.375809 master-0 kubenswrapper[38936]: I0216 21:38:58.375719 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a06f554-76df-4a75-acbb-455f96f20dd5-config\") pod \"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:58.375809 master-0 kubenswrapper[38936]: I0216 21:38:58.375753 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/1a06f554-76df-4a75-acbb-455f96f20dd5-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:58.375809 master-0 kubenswrapper[38936]: I0216 21:38:58.375805 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a06f554-76df-4a75-acbb-455f96f20dd5-scripts\") pod \"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0" Feb 16 21:38:58.376081 master-0 kubenswrapper[38936]: I0216 21:38:58.375837 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a06f554-76df-4a75-acbb-455f96f20dd5-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0" Feb 16 
21:38:58.376081 master-0 kubenswrapper[38936]: I0216 21:38:58.375853 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a06f554-76df-4a75-acbb-455f96f20dd5-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0"
Feb 16 21:38:58.376184 master-0 kubenswrapper[38936]: I0216 21:38:58.376145 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/1a06f554-76df-4a75-acbb-455f96f20dd5-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0"
Feb 16 21:38:58.376799 master-0 kubenswrapper[38936]: I0216 21:38:58.376756 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/1a06f554-76df-4a75-acbb-455f96f20dd5-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0"
Feb 16 21:38:58.379562 master-0 kubenswrapper[38936]: I0216 21:38:58.379514 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a06f554-76df-4a75-acbb-455f96f20dd5-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0"
Feb 16 21:38:58.379823 master-0 kubenswrapper[38936]: I0216 21:38:58.379782 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a06f554-76df-4a75-acbb-455f96f20dd5-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0"
Feb 16 21:38:58.379823 master-0 kubenswrapper[38936]: I0216 21:38:58.379805 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a06f554-76df-4a75-acbb-455f96f20dd5-scripts\") pod \"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0"
Feb 16 21:38:58.379926 master-0 kubenswrapper[38936]: I0216 21:38:58.379805 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/1a06f554-76df-4a75-acbb-455f96f20dd5-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0"
Feb 16 21:38:58.380241 master-0 kubenswrapper[38936]: I0216 21:38:58.380207 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a06f554-76df-4a75-acbb-455f96f20dd5-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0"
Feb 16 21:38:58.383337 master-0 kubenswrapper[38936]: I0216 21:38:58.383302 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a06f554-76df-4a75-acbb-455f96f20dd5-config\") pod \"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0"
Feb 16 21:38:58.394073 master-0 kubenswrapper[38936]: I0216 21:38:58.394029 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4dlh\" (UniqueName: \"kubernetes.io/projected/1a06f554-76df-4a75-acbb-455f96f20dd5-kube-api-access-p4dlh\") pod \"ironic-inspector-0\" (UID: \"1a06f554-76df-4a75-acbb-455f96f20dd5\") " pod="openstack/ironic-inspector-0"
Feb 16 21:38:58.544857 master-0 kubenswrapper[38936]: I0216 21:38:58.544773 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Feb 16 21:38:59.202701 master-0 kubenswrapper[38936]: I0216 21:38:59.202619 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"]
Feb 16 21:38:59.793916 master-0 kubenswrapper[38936]: I0216 21:38:59.793855 38936 generic.go:334] "Generic (PLEG): container finished" podID="1a06f554-76df-4a75-acbb-455f96f20dd5" containerID="9417ea2008abaa8052f28f2a0becc69ebcba8b7706edfafc21eed910ad57c2f7" exitCode=0
Feb 16 21:38:59.794137 master-0 kubenswrapper[38936]: I0216 21:38:59.793934 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"1a06f554-76df-4a75-acbb-455f96f20dd5","Type":"ContainerDied","Data":"9417ea2008abaa8052f28f2a0becc69ebcba8b7706edfafc21eed910ad57c2f7"}
Feb 16 21:38:59.794137 master-0 kubenswrapper[38936]: I0216 21:38:59.793966 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"1a06f554-76df-4a75-acbb-455f96f20dd5","Type":"ContainerStarted","Data":"e3da131fcd51a86d0f4892848f3d3182fc04b8ce1564332e06a41ef3d2cd58dc"}
Feb 16 21:38:59.921859 master-0 kubenswrapper[38936]: I0216 21:38:59.921796 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f69d2dc9-965e-4fdf-a2dc-d082e1e5a061" path="/var/lib/kubelet/pods/f69d2dc9-965e-4fdf-a2dc-d082e1e5a061/volumes"
Feb 16 21:39:00.072094 master-0 kubenswrapper[38936]: I0216 21:39:00.071940 38936 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod185cbfbd-402e-4012-9c97-0a8f3a579e74"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod185cbfbd-402e-4012-9c97-0a8f3a579e74] : Timed out while waiting for systemd to remove kubepods-besteffort-pod185cbfbd_402e_4012_9c97_0a8f3a579e74.slice"
Feb 16 21:39:00.806554 master-0 kubenswrapper[38936]: I0216 21:39:00.806487 38936 generic.go:334] "Generic (PLEG): container finished" podID="1a06f554-76df-4a75-acbb-455f96f20dd5" containerID="75e41a45671aedd2a3fda9fd1be58da1954bfcf65e21540c54bb37bc0fab6889" exitCode=0
Feb 16 21:39:00.806554 master-0 kubenswrapper[38936]: I0216 21:39:00.806554 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"1a06f554-76df-4a75-acbb-455f96f20dd5","Type":"ContainerDied","Data":"75e41a45671aedd2a3fda9fd1be58da1954bfcf65e21540c54bb37bc0fab6889"}
Feb 16 21:39:01.820552 master-0 kubenswrapper[38936]: I0216 21:39:01.820259 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"1a06f554-76df-4a75-acbb-455f96f20dd5","Type":"ContainerStarted","Data":"31624668985a0779cd0b5735224d44138fb630b621ccb28948cd1be6e4c94388"}
Feb 16 21:39:02.851702 master-0 kubenswrapper[38936]: I0216 21:39:02.850257 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"1a06f554-76df-4a75-acbb-455f96f20dd5","Type":"ContainerStarted","Data":"04b7b1d2ad877e4dd1ef85e24dfabe7656980496a700f61f55be3e27dc94d80e"}
Feb 16 21:39:02.851702 master-0 kubenswrapper[38936]: I0216 21:39:02.850318 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"1a06f554-76df-4a75-acbb-455f96f20dd5","Type":"ContainerStarted","Data":"b87b5d6bba60730766250a71bc7dabc105d5cb4913e3f2937ea8073b618fa1d1"}
Feb 16 21:39:03.868161 master-0 kubenswrapper[38936]: I0216 21:39:03.868027 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"1a06f554-76df-4a75-acbb-455f96f20dd5","Type":"ContainerStarted","Data":"9ec150133f99c18f98e136f934ab1f2bc0afd35370b993692a43f6553339dbce"}
Feb 16 21:39:03.868161 master-0 kubenswrapper[38936]: I0216 21:39:03.868080 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"1a06f554-76df-4a75-acbb-455f96f20dd5","Type":"ContainerStarted","Data":"932f5ccc8ddd14747bcc5cc19f418306ec15fe436fcb82e570c17f83d0506dd6"}
Feb 16 21:39:04.885535 master-0 kubenswrapper[38936]: I0216 21:39:04.884413 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Feb 16 21:39:05.211281 master-0 kubenswrapper[38936]: I0216 21:39:05.211102 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-0" podStartSLOduration=7.21108539 podStartE2EDuration="7.21108539s" podCreationTimestamp="2026-02-16 21:38:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:39:05.20882577 +0000 UTC m=+975.560829152" watchObservedRunningTime="2026-02-16 21:39:05.21108539 +0000 UTC m=+975.563088752"
Feb 16 21:39:05.893774 master-0 kubenswrapper[38936]: I0216 21:39:05.893728 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Feb 16 21:39:06.962081 master-0 kubenswrapper[38936]: I0216 21:39:06.962007 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Feb 16 21:39:07.929985 master-0 kubenswrapper[38936]: I0216 21:39:07.929881 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Feb 16 21:39:08.545872 master-0 kubenswrapper[38936]: I0216 21:39:08.545792 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0"
Feb 16 21:39:08.545872 master-0 kubenswrapper[38936]: I0216 21:39:08.545870 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Feb 16 21:39:08.545872 master-0 kubenswrapper[38936]: I0216 21:39:08.545880 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0"
Feb 16 21:39:08.545872 master-0 kubenswrapper[38936]: I0216 21:39:08.545889 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Feb 16 21:39:08.572364 master-0 kubenswrapper[38936]: I0216 21:39:08.572287 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0"
Feb 16 21:39:08.578510 master-0 kubenswrapper[38936]: I0216 21:39:08.578448 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0"
Feb 16 21:39:08.942369 master-0 kubenswrapper[38936]: I0216 21:39:08.942200 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Feb 16 21:39:08.947543 master-0 kubenswrapper[38936]: I0216 21:39:08.947462 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Feb 16 21:39:13.005432 master-0 kubenswrapper[38936]: I0216 21:39:13.005346 38936 generic.go:334] "Generic (PLEG): container finished" podID="8e0cbb0a-133a-421f-9c54-a473c5446028" containerID="bdde91efde1aac6af16fd60c7779ae5510c955ab1d3e2b9db89dbd4851607e51" exitCode=0
Feb 16 21:39:13.006145 master-0 kubenswrapper[38936]: I0216 21:39:13.005443 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jjlmc" event={"ID":"8e0cbb0a-133a-421f-9c54-a473c5446028","Type":"ContainerDied","Data":"bdde91efde1aac6af16fd60c7779ae5510c955ab1d3e2b9db89dbd4851607e51"}
Feb 16 21:39:14.484872 master-0 kubenswrapper[38936]: I0216 21:39:14.484813 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jjlmc"
Feb 16 21:39:14.541636 master-0 kubenswrapper[38936]: I0216 21:39:14.541567 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e0cbb0a-133a-421f-9c54-a473c5446028-combined-ca-bundle\") pod \"8e0cbb0a-133a-421f-9c54-a473c5446028\" (UID: \"8e0cbb0a-133a-421f-9c54-a473c5446028\") "
Feb 16 21:39:14.541892 master-0 kubenswrapper[38936]: I0216 21:39:14.541671 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e0cbb0a-133a-421f-9c54-a473c5446028-config-data\") pod \"8e0cbb0a-133a-421f-9c54-a473c5446028\" (UID: \"8e0cbb0a-133a-421f-9c54-a473c5446028\") "
Feb 16 21:39:14.541966 master-0 kubenswrapper[38936]: I0216 21:39:14.541889 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e0cbb0a-133a-421f-9c54-a473c5446028-scripts\") pod \"8e0cbb0a-133a-421f-9c54-a473c5446028\" (UID: \"8e0cbb0a-133a-421f-9c54-a473c5446028\") "
Feb 16 21:39:14.541966 master-0 kubenswrapper[38936]: I0216 21:39:14.541946 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsg8c\" (UniqueName: \"kubernetes.io/projected/8e0cbb0a-133a-421f-9c54-a473c5446028-kube-api-access-hsg8c\") pod \"8e0cbb0a-133a-421f-9c54-a473c5446028\" (UID: \"8e0cbb0a-133a-421f-9c54-a473c5446028\") "
Feb 16 21:39:14.550739 master-0 kubenswrapper[38936]: I0216 21:39:14.546264 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e0cbb0a-133a-421f-9c54-a473c5446028-scripts" (OuterVolumeSpecName: "scripts") pod "8e0cbb0a-133a-421f-9c54-a473c5446028" (UID: "8e0cbb0a-133a-421f-9c54-a473c5446028"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:39:14.550739 master-0 kubenswrapper[38936]: I0216 21:39:14.549084 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e0cbb0a-133a-421f-9c54-a473c5446028-kube-api-access-hsg8c" (OuterVolumeSpecName: "kube-api-access-hsg8c") pod "8e0cbb0a-133a-421f-9c54-a473c5446028" (UID: "8e0cbb0a-133a-421f-9c54-a473c5446028"). InnerVolumeSpecName "kube-api-access-hsg8c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:39:14.581392 master-0 kubenswrapper[38936]: I0216 21:39:14.581322 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e0cbb0a-133a-421f-9c54-a473c5446028-config-data" (OuterVolumeSpecName: "config-data") pod "8e0cbb0a-133a-421f-9c54-a473c5446028" (UID: "8e0cbb0a-133a-421f-9c54-a473c5446028"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:39:14.588834 master-0 kubenswrapper[38936]: I0216 21:39:14.588739 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e0cbb0a-133a-421f-9c54-a473c5446028-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e0cbb0a-133a-421f-9c54-a473c5446028" (UID: "8e0cbb0a-133a-421f-9c54-a473c5446028"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:39:14.645475 master-0 kubenswrapper[38936]: I0216 21:39:14.645410 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsg8c\" (UniqueName: \"kubernetes.io/projected/8e0cbb0a-133a-421f-9c54-a473c5446028-kube-api-access-hsg8c\") on node \"master-0\" DevicePath \"\""
Feb 16 21:39:14.645600 master-0 kubenswrapper[38936]: I0216 21:39:14.645485 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e0cbb0a-133a-421f-9c54-a473c5446028-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 21:39:14.645600 master-0 kubenswrapper[38936]: I0216 21:39:14.645504 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e0cbb0a-133a-421f-9c54-a473c5446028-config-data\") on node \"master-0\" DevicePath \"\""
Feb 16 21:39:14.645600 master-0 kubenswrapper[38936]: I0216 21:39:14.645516 38936 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e0cbb0a-133a-421f-9c54-a473c5446028-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 21:39:15.052487 master-0 kubenswrapper[38936]: I0216 21:39:15.052089 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jjlmc" event={"ID":"8e0cbb0a-133a-421f-9c54-a473c5446028","Type":"ContainerDied","Data":"f246f2b14e075b6e188b3053a7350db8c7a55510179efd0d610c669613d8e0c4"}
Feb 16 21:39:15.052487 master-0 kubenswrapper[38936]: I0216 21:39:15.052150 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f246f2b14e075b6e188b3053a7350db8c7a55510179efd0d610c669613d8e0c4"
Feb 16 21:39:15.052487 master-0 kubenswrapper[38936]: I0216 21:39:15.052220 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jjlmc"
Feb 16 21:39:15.214634 master-0 kubenswrapper[38936]: I0216 21:39:15.213916 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 16 21:39:15.214634 master-0 kubenswrapper[38936]: E0216 21:39:15.214598 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e0cbb0a-133a-421f-9c54-a473c5446028" containerName="nova-cell0-conductor-db-sync"
Feb 16 21:39:15.214634 master-0 kubenswrapper[38936]: I0216 21:39:15.214615 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e0cbb0a-133a-421f-9c54-a473c5446028" containerName="nova-cell0-conductor-db-sync"
Feb 16 21:39:15.215025 master-0 kubenswrapper[38936]: I0216 21:39:15.214934 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e0cbb0a-133a-421f-9c54-a473c5446028" containerName="nova-cell0-conductor-db-sync"
Feb 16 21:39:15.216297 master-0 kubenswrapper[38936]: I0216 21:39:15.216203 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 16 21:39:15.219010 master-0 kubenswrapper[38936]: I0216 21:39:15.218961 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 16 21:39:15.229148 master-0 kubenswrapper[38936]: I0216 21:39:15.229084 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 16 21:39:15.272312 master-0 kubenswrapper[38936]: I0216 21:39:15.272250 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a550e69d-2bfc-4036-aae4-0139d6edfa94-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a550e69d-2bfc-4036-aae4-0139d6edfa94\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 21:39:15.272622 master-0 kubenswrapper[38936]: I0216 21:39:15.272591 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a550e69d-2bfc-4036-aae4-0139d6edfa94-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a550e69d-2bfc-4036-aae4-0139d6edfa94\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 21:39:15.273069 master-0 kubenswrapper[38936]: I0216 21:39:15.273037 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzdng\" (UniqueName: \"kubernetes.io/projected/a550e69d-2bfc-4036-aae4-0139d6edfa94-kube-api-access-mzdng\") pod \"nova-cell0-conductor-0\" (UID: \"a550e69d-2bfc-4036-aae4-0139d6edfa94\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 21:39:15.376148 master-0 kubenswrapper[38936]: I0216 21:39:15.375966 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzdng\" (UniqueName: \"kubernetes.io/projected/a550e69d-2bfc-4036-aae4-0139d6edfa94-kube-api-access-mzdng\") pod \"nova-cell0-conductor-0\" (UID: \"a550e69d-2bfc-4036-aae4-0139d6edfa94\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 21:39:15.376372 master-0 kubenswrapper[38936]: I0216 21:39:15.376348 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a550e69d-2bfc-4036-aae4-0139d6edfa94-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a550e69d-2bfc-4036-aae4-0139d6edfa94\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 21:39:15.376899 master-0 kubenswrapper[38936]: I0216 21:39:15.376699 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a550e69d-2bfc-4036-aae4-0139d6edfa94-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a550e69d-2bfc-4036-aae4-0139d6edfa94\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 21:39:15.381785 master-0 kubenswrapper[38936]: I0216 21:39:15.381697 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a550e69d-2bfc-4036-aae4-0139d6edfa94-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a550e69d-2bfc-4036-aae4-0139d6edfa94\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 21:39:15.383459 master-0 kubenswrapper[38936]: I0216 21:39:15.383420 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a550e69d-2bfc-4036-aae4-0139d6edfa94-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a550e69d-2bfc-4036-aae4-0139d6edfa94\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 21:39:15.397140 master-0 kubenswrapper[38936]: I0216 21:39:15.397076 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzdng\" (UniqueName: \"kubernetes.io/projected/a550e69d-2bfc-4036-aae4-0139d6edfa94-kube-api-access-mzdng\") pod \"nova-cell0-conductor-0\" (UID: \"a550e69d-2bfc-4036-aae4-0139d6edfa94\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 21:39:15.548591 master-0 kubenswrapper[38936]: I0216 21:39:15.548535 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 16 21:39:15.984209 master-0 kubenswrapper[38936]: I0216 21:39:15.984043 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 16 21:39:16.000782 master-0 kubenswrapper[38936]: W0216 21:39:16.000718 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda550e69d_2bfc_4036_aae4_0139d6edfa94.slice/crio-d8bc126e9ca2fc8c513785e095d593f9f378e098f6046721535a3e6b52c1757e WatchSource:0}: Error finding container d8bc126e9ca2fc8c513785e095d593f9f378e098f6046721535a3e6b52c1757e: Status 404 returned error can't find the container with id d8bc126e9ca2fc8c513785e095d593f9f378e098f6046721535a3e6b52c1757e
Feb 16 21:39:16.067360 master-0 kubenswrapper[38936]: I0216 21:39:16.067271 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"a550e69d-2bfc-4036-aae4-0139d6edfa94","Type":"ContainerStarted","Data":"d8bc126e9ca2fc8c513785e095d593f9f378e098f6046721535a3e6b52c1757e"}
Feb 16 21:39:17.080971 master-0 kubenswrapper[38936]: I0216 21:39:17.080898 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"a550e69d-2bfc-4036-aae4-0139d6edfa94","Type":"ContainerStarted","Data":"df6479d1e3835ae04ba1a0ac275d1bcf633bba1adb5a8bf26b403b844398ac63"}
Feb 16 21:39:17.081679 master-0 kubenswrapper[38936]: I0216 21:39:17.081031 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Feb 16 21:39:17.110504 master-0 kubenswrapper[38936]: I0216 21:39:17.110372 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.11034965 podStartE2EDuration="2.11034965s" podCreationTimestamp="2026-02-16 21:39:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:39:17.104136983 +0000 UTC m=+987.456140345" watchObservedRunningTime="2026-02-16 21:39:17.11034965 +0000 UTC m=+987.462353022"
Feb 16 21:39:25.579282 master-0 kubenswrapper[38936]: I0216 21:39:25.579219 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Feb 16 21:39:26.108127 master-0 kubenswrapper[38936]: I0216 21:39:26.108055 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-d25bz"]
Feb 16 21:39:26.109996 master-0 kubenswrapper[38936]: I0216 21:39:26.109966 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-d25bz"
Feb 16 21:39:26.113753 master-0 kubenswrapper[38936]: I0216 21:39:26.113713 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Feb 16 21:39:26.113950 master-0 kubenswrapper[38936]: I0216 21:39:26.113913 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Feb 16 21:39:26.125995 master-0 kubenswrapper[38936]: I0216 21:39:26.125923 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-d25bz"]
Feb 16 21:39:26.184835 master-0 kubenswrapper[38936]: I0216 21:39:26.184721 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26172c1-371c-4d1d-b026-80e4ebe31568-config-data\") pod \"nova-cell0-cell-mapping-d25bz\" (UID: \"f26172c1-371c-4d1d-b026-80e4ebe31568\") " pod="openstack/nova-cell0-cell-mapping-d25bz"
Feb 16 21:39:26.186190 master-0 kubenswrapper[38936]: I0216 21:39:26.186123 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89jkh\" (UniqueName: \"kubernetes.io/projected/f26172c1-371c-4d1d-b026-80e4ebe31568-kube-api-access-89jkh\") pod \"nova-cell0-cell-mapping-d25bz\" (UID: \"f26172c1-371c-4d1d-b026-80e4ebe31568\") " pod="openstack/nova-cell0-cell-mapping-d25bz"
Feb 16 21:39:26.187251 master-0 kubenswrapper[38936]: I0216 21:39:26.186610 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26172c1-371c-4d1d-b026-80e4ebe31568-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-d25bz\" (UID: \"f26172c1-371c-4d1d-b026-80e4ebe31568\") " pod="openstack/nova-cell0-cell-mapping-d25bz"
Feb 16 21:39:26.187251 master-0 kubenswrapper[38936]: I0216 21:39:26.186872 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f26172c1-371c-4d1d-b026-80e4ebe31568-scripts\") pod \"nova-cell0-cell-mapping-d25bz\" (UID: \"f26172c1-371c-4d1d-b026-80e4ebe31568\") " pod="openstack/nova-cell0-cell-mapping-d25bz"
Feb 16 21:39:26.218138 master-0 kubenswrapper[38936]: I0216 21:39:26.218085 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"]
Feb 16 21:39:26.220385 master-0 kubenswrapper[38936]: I0216 21:39:26.220357 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 16 21:39:26.227075 master-0 kubenswrapper[38936]: I0216 21:39:26.227017 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-ironic-compute-config-data"
Feb 16 21:39:26.250534 master-0 kubenswrapper[38936]: I0216 21:39:26.249420 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"]
Feb 16 21:39:26.290901 master-0 kubenswrapper[38936]: I0216 21:39:26.290827 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x6gj\" (UniqueName: \"kubernetes.io/projected/c1759520-fd09-489c-bea6-209d8c1144d2-kube-api-access-8x6gj\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"c1759520-fd09-489c-bea6-209d8c1144d2\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 16 21:39:26.291194 master-0 kubenswrapper[38936]: I0216 21:39:26.290936 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89jkh\" (UniqueName: \"kubernetes.io/projected/f26172c1-371c-4d1d-b026-80e4ebe31568-kube-api-access-89jkh\") pod \"nova-cell0-cell-mapping-d25bz\" (UID: \"f26172c1-371c-4d1d-b026-80e4ebe31568\") " pod="openstack/nova-cell0-cell-mapping-d25bz"
Feb 16 21:39:26.291194 master-0 kubenswrapper[38936]: I0216 21:39:26.291149 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1759520-fd09-489c-bea6-209d8c1144d2-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"c1759520-fd09-489c-bea6-209d8c1144d2\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 16 21:39:26.291827 master-0 kubenswrapper[38936]: I0216 21:39:26.291774 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26172c1-371c-4d1d-b026-80e4ebe31568-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-d25bz\" (UID: \"f26172c1-371c-4d1d-b026-80e4ebe31568\") " pod="openstack/nova-cell0-cell-mapping-d25bz"
Feb 16 21:39:26.291960 master-0 kubenswrapper[38936]: I0216 21:39:26.291935 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f26172c1-371c-4d1d-b026-80e4ebe31568-scripts\") pod \"nova-cell0-cell-mapping-d25bz\" (UID: \"f26172c1-371c-4d1d-b026-80e4ebe31568\") " pod="openstack/nova-cell0-cell-mapping-d25bz"
Feb 16 21:39:26.292921 master-0 kubenswrapper[38936]: I0216 21:39:26.292071 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1759520-fd09-489c-bea6-209d8c1144d2-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"c1759520-fd09-489c-bea6-209d8c1144d2\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 16 21:39:26.292921 master-0 kubenswrapper[38936]: I0216 21:39:26.292149 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26172c1-371c-4d1d-b026-80e4ebe31568-config-data\") pod \"nova-cell0-cell-mapping-d25bz\" (UID: \"f26172c1-371c-4d1d-b026-80e4ebe31568\") " pod="openstack/nova-cell0-cell-mapping-d25bz"
Feb 16 21:39:26.298616 master-0 kubenswrapper[38936]: I0216 21:39:26.298576 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26172c1-371c-4d1d-b026-80e4ebe31568-config-data\") pod \"nova-cell0-cell-mapping-d25bz\" (UID: \"f26172c1-371c-4d1d-b026-80e4ebe31568\") " pod="openstack/nova-cell0-cell-mapping-d25bz"
Feb 16 21:39:26.299527 master-0 kubenswrapper[38936]: I0216 21:39:26.299490 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f26172c1-371c-4d1d-b026-80e4ebe31568-scripts\") pod \"nova-cell0-cell-mapping-d25bz\" (UID: \"f26172c1-371c-4d1d-b026-80e4ebe31568\") " pod="openstack/nova-cell0-cell-mapping-d25bz"
Feb 16 21:39:26.307239 master-0 kubenswrapper[38936]: I0216 21:39:26.303464 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26172c1-371c-4d1d-b026-80e4ebe31568-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-d25bz\" (UID: \"f26172c1-371c-4d1d-b026-80e4ebe31568\") " pod="openstack/nova-cell0-cell-mapping-d25bz"
Feb 16 21:39:26.328127 master-0 kubenswrapper[38936]: I0216 21:39:26.322348 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89jkh\" (UniqueName: \"kubernetes.io/projected/f26172c1-371c-4d1d-b026-80e4ebe31568-kube-api-access-89jkh\") pod \"nova-cell0-cell-mapping-d25bz\" (UID: \"f26172c1-371c-4d1d-b026-80e4ebe31568\") " pod="openstack/nova-cell0-cell-mapping-d25bz"
Feb 16 21:39:26.419994 master-0 kubenswrapper[38936]: I0216 21:39:26.419909 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1759520-fd09-489c-bea6-209d8c1144d2-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"c1759520-fd09-489c-bea6-209d8c1144d2\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 16 21:39:26.425952 master-0 kubenswrapper[38936]: I0216 21:39:26.420031 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x6gj\" (UniqueName: \"kubernetes.io/projected/c1759520-fd09-489c-bea6-209d8c1144d2-kube-api-access-8x6gj\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"c1759520-fd09-489c-bea6-209d8c1144d2\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 16 21:39:26.425952 master-0 kubenswrapper[38936]: I0216 21:39:26.420078 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1759520-fd09-489c-bea6-209d8c1144d2-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"c1759520-fd09-489c-bea6-209d8c1144d2\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 16 21:39:26.450948 master-0 kubenswrapper[38936]: I0216 21:39:26.443732 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 16 21:39:26.450948 master-0 kubenswrapper[38936]: I0216 21:39:26.444667 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1759520-fd09-489c-bea6-209d8c1144d2-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"c1759520-fd09-489c-bea6-209d8c1144d2\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 16 21:39:26.450948 master-0 kubenswrapper[38936]: I0216 21:39:26.447587 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 21:39:26.453754 master-0 kubenswrapper[38936]: I0216 21:39:26.453633 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-d25bz"
Feb 16 21:39:26.458435 master-0 kubenswrapper[38936]: I0216 21:39:26.458371 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 16 21:39:26.461253 master-0 kubenswrapper[38936]: I0216 21:39:26.461195 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 16 21:39:26.491675 master-0 kubenswrapper[38936]: I0216 21:39:26.489465 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1759520-fd09-489c-bea6-209d8c1144d2-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"c1759520-fd09-489c-bea6-209d8c1144d2\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 16 21:39:26.502778 master-0 kubenswrapper[38936]: I0216 21:39:26.496373 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x6gj\" (UniqueName: \"kubernetes.io/projected/c1759520-fd09-489c-bea6-209d8c1144d2-kube-api-access-8x6gj\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"c1759520-fd09-489c-bea6-209d8c1144d2\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 16 21:39:26.547117 master-0 kubenswrapper[38936]: I0216 21:39:26.543360 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7fxq\" (UniqueName: \"kubernetes.io/projected/5d890f5a-b817-4e9e-bc23-18e1bf326d3b-kube-api-access-x7fxq\") pod \"nova-api-0\" (UID: \"5d890f5a-b817-4e9e-bc23-18e1bf326d3b\") " pod="openstack/nova-api-0"
Feb 16 21:39:26.550158 master-0 kubenswrapper[38936]: I0216 21:39:26.550124 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d890f5a-b817-4e9e-bc23-18e1bf326d3b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5d890f5a-b817-4e9e-bc23-18e1bf326d3b\") " pod="openstack/nova-api-0"
Feb 16 21:39:26.550355 master-0 kubenswrapper[38936]: I0216 21:39:26.550325 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d890f5a-b817-4e9e-bc23-18e1bf326d3b-config-data\") pod \"nova-api-0\" (UID: \"5d890f5a-b817-4e9e-bc23-18e1bf326d3b\") " pod="openstack/nova-api-0"
Feb 16 21:39:26.550467 master-0 kubenswrapper[38936]: I0216 21:39:26.550453 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d890f5a-b817-4e9e-bc23-18e1bf326d3b-logs\") pod \"nova-api-0\" (UID: \"5d890f5a-b817-4e9e-bc23-18e1bf326d3b\") " pod="openstack/nova-api-0"
Feb 16 21:39:26.566247 master-0 kubenswrapper[38936]: I0216 21:39:26.566178 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0"
Feb 16 21:39:26.616261 master-0 kubenswrapper[38936]: I0216 21:39:26.615778 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 21:39:26.618192 master-0 kubenswrapper[38936]: I0216 21:39:26.617996 38936 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:39:26.640769 master-0 kubenswrapper[38936]: I0216 21:39:26.632110 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 21:39:26.657944 master-0 kubenswrapper[38936]: I0216 21:39:26.657876 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 21:39:26.662154 master-0 kubenswrapper[38936]: I0216 21:39:26.659970 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d890f5a-b817-4e9e-bc23-18e1bf326d3b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5d890f5a-b817-4e9e-bc23-18e1bf326d3b\") " pod="openstack/nova-api-0" Feb 16 21:39:26.662154 master-0 kubenswrapper[38936]: I0216 21:39:26.660557 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d890f5a-b817-4e9e-bc23-18e1bf326d3b-config-data\") pod \"nova-api-0\" (UID: \"5d890f5a-b817-4e9e-bc23-18e1bf326d3b\") " pod="openstack/nova-api-0" Feb 16 21:39:26.662154 master-0 kubenswrapper[38936]: I0216 21:39:26.660603 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d890f5a-b817-4e9e-bc23-18e1bf326d3b-logs\") pod \"nova-api-0\" (UID: \"5d890f5a-b817-4e9e-bc23-18e1bf326d3b\") " pod="openstack/nova-api-0" Feb 16 21:39:26.662154 master-0 kubenswrapper[38936]: I0216 21:39:26.660798 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7fxq\" (UniqueName: \"kubernetes.io/projected/5d890f5a-b817-4e9e-bc23-18e1bf326d3b-kube-api-access-x7fxq\") pod \"nova-api-0\" (UID: \"5d890f5a-b817-4e9e-bc23-18e1bf326d3b\") " pod="openstack/nova-api-0" Feb 16 21:39:26.662472 master-0 kubenswrapper[38936]: I0216 21:39:26.662200 38936 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d890f5a-b817-4e9e-bc23-18e1bf326d3b-logs\") pod \"nova-api-0\" (UID: \"5d890f5a-b817-4e9e-bc23-18e1bf326d3b\") " pod="openstack/nova-api-0" Feb 16 21:39:26.669145 master-0 kubenswrapper[38936]: I0216 21:39:26.669087 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d890f5a-b817-4e9e-bc23-18e1bf326d3b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5d890f5a-b817-4e9e-bc23-18e1bf326d3b\") " pod="openstack/nova-api-0" Feb 16 21:39:26.670990 master-0 kubenswrapper[38936]: I0216 21:39:26.670925 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d890f5a-b817-4e9e-bc23-18e1bf326d3b-config-data\") pod \"nova-api-0\" (UID: \"5d890f5a-b817-4e9e-bc23-18e1bf326d3b\") " pod="openstack/nova-api-0" Feb 16 21:39:26.714672 master-0 kubenswrapper[38936]: I0216 21:39:26.711025 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7fxq\" (UniqueName: \"kubernetes.io/projected/5d890f5a-b817-4e9e-bc23-18e1bf326d3b-kube-api-access-x7fxq\") pod \"nova-api-0\" (UID: \"5d890f5a-b817-4e9e-bc23-18e1bf326d3b\") " pod="openstack/nova-api-0" Feb 16 21:39:26.808279 master-0 kubenswrapper[38936]: I0216 21:39:26.783026 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnsbz\" (UniqueName: \"kubernetes.io/projected/6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99-kube-api-access-tnsbz\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:39:26.808279 master-0 kubenswrapper[38936]: I0216 21:39:26.783147 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:39:26.808279 master-0 kubenswrapper[38936]: I0216 21:39:26.783282 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:39:26.897843 master-0 kubenswrapper[38936]: I0216 21:39:26.887484 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnsbz\" (UniqueName: \"kubernetes.io/projected/6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99-kube-api-access-tnsbz\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:39:26.897843 master-0 kubenswrapper[38936]: I0216 21:39:26.887583 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:39:26.897843 master-0 kubenswrapper[38936]: I0216 21:39:26.887646 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:39:26.897843 master-0 kubenswrapper[38936]: I0216 21:39:26.895821 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:39:26.903195 master-0 kubenswrapper[38936]: I0216 21:39:26.901729 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:39:26.903195 master-0 kubenswrapper[38936]: I0216 21:39:26.902562 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:39:26.911315 master-0 kubenswrapper[38936]: I0216 21:39:26.911241 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:39:26.913848 master-0 kubenswrapper[38936]: I0216 21:39:26.913683 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:39:26.916415 master-0 kubenswrapper[38936]: I0216 21:39:26.916322 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnsbz\" (UniqueName: \"kubernetes.io/projected/6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99-kube-api-access-tnsbz\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:39:26.918083 master-0 kubenswrapper[38936]: I0216 21:39:26.918048 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 21:39:26.919811 master-0 kubenswrapper[38936]: I0216 21:39:26.919759 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:39:26.928201 master-0 kubenswrapper[38936]: I0216 21:39:26.928121 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:39:26.933800 master-0 kubenswrapper[38936]: I0216 21:39:26.933680 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 21:39:26.934679 master-0 kubenswrapper[38936]: I0216 21:39:26.934585 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:39:26.980565 master-0 kubenswrapper[38936]: I0216 21:39:26.980497 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:39:27.020392 master-0 kubenswrapper[38936]: I0216 21:39:27.020131 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:39:27.036753 master-0 kubenswrapper[38936]: I0216 21:39:27.036355 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-846fc68895-n6hmv"] Feb 16 21:39:27.038854 master-0 kubenswrapper[38936]: I0216 21:39:27.038610 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-846fc68895-n6hmv" Feb 16 21:39:27.079223 master-0 kubenswrapper[38936]: I0216 21:39:27.079074 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-846fc68895-n6hmv"] Feb 16 21:39:27.110800 master-0 kubenswrapper[38936]: I0216 21:39:27.107521 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85424a8b-db5b-47c1-8b31-86ebb9f6484f-config-data\") pod \"nova-scheduler-0\" (UID: \"85424a8b-db5b-47c1-8b31-86ebb9f6484f\") " pod="openstack/nova-scheduler-0" Feb 16 21:39:27.110800 master-0 kubenswrapper[38936]: I0216 21:39:27.107683 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7c9cc07-f5c6-45ae-96e1-9645de742311-config-data\") pod \"nova-metadata-0\" (UID: \"c7c9cc07-f5c6-45ae-96e1-9645de742311\") " pod="openstack/nova-metadata-0" Feb 16 21:39:27.110800 master-0 kubenswrapper[38936]: I0216 21:39:27.107709 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85424a8b-db5b-47c1-8b31-86ebb9f6484f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"85424a8b-db5b-47c1-8b31-86ebb9f6484f\") " pod="openstack/nova-scheduler-0" Feb 16 21:39:27.110800 master-0 kubenswrapper[38936]: I0216 21:39:27.107733 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsddh\" (UniqueName: \"kubernetes.io/projected/c7c9cc07-f5c6-45ae-96e1-9645de742311-kube-api-access-lsddh\") pod \"nova-metadata-0\" (UID: \"c7c9cc07-f5c6-45ae-96e1-9645de742311\") " pod="openstack/nova-metadata-0" Feb 16 21:39:27.115308 master-0 kubenswrapper[38936]: I0216 21:39:27.114378 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-r8jb9\" (UniqueName: \"kubernetes.io/projected/85424a8b-db5b-47c1-8b31-86ebb9f6484f-kube-api-access-r8jb9\") pod \"nova-scheduler-0\" (UID: \"85424a8b-db5b-47c1-8b31-86ebb9f6484f\") " pod="openstack/nova-scheduler-0" Feb 16 21:39:27.115308 master-0 kubenswrapper[38936]: I0216 21:39:27.114603 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7c9cc07-f5c6-45ae-96e1-9645de742311-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c7c9cc07-f5c6-45ae-96e1-9645de742311\") " pod="openstack/nova-metadata-0" Feb 16 21:39:27.115308 master-0 kubenswrapper[38936]: I0216 21:39:27.114687 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7c9cc07-f5c6-45ae-96e1-9645de742311-logs\") pod \"nova-metadata-0\" (UID: \"c7c9cc07-f5c6-45ae-96e1-9645de742311\") " pod="openstack/nova-metadata-0" Feb 16 21:39:27.218875 master-0 kubenswrapper[38936]: I0216 21:39:27.218780 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsddh\" (UniqueName: \"kubernetes.io/projected/c7c9cc07-f5c6-45ae-96e1-9645de742311-kube-api-access-lsddh\") pod \"nova-metadata-0\" (UID: \"c7c9cc07-f5c6-45ae-96e1-9645de742311\") " pod="openstack/nova-metadata-0" Feb 16 21:39:27.218993 master-0 kubenswrapper[38936]: I0216 21:39:27.218886 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-ovsdbserver-nb\") pod \"dnsmasq-dns-846fc68895-n6hmv\" (UID: \"b96444d1-ae55-4560-a3a6-b75072a1271f\") " pod="openstack/dnsmasq-dns-846fc68895-n6hmv" Feb 16 21:39:27.219053 master-0 kubenswrapper[38936]: I0216 21:39:27.219025 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-r8jb9\" (UniqueName: \"kubernetes.io/projected/85424a8b-db5b-47c1-8b31-86ebb9f6484f-kube-api-access-r8jb9\") pod \"nova-scheduler-0\" (UID: \"85424a8b-db5b-47c1-8b31-86ebb9f6484f\") " pod="openstack/nova-scheduler-0" Feb 16 21:39:27.219295 master-0 kubenswrapper[38936]: I0216 21:39:27.219202 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9ppd\" (UniqueName: \"kubernetes.io/projected/b96444d1-ae55-4560-a3a6-b75072a1271f-kube-api-access-f9ppd\") pod \"dnsmasq-dns-846fc68895-n6hmv\" (UID: \"b96444d1-ae55-4560-a3a6-b75072a1271f\") " pod="openstack/dnsmasq-dns-846fc68895-n6hmv" Feb 16 21:39:27.221403 master-0 kubenswrapper[38936]: I0216 21:39:27.220720 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7c9cc07-f5c6-45ae-96e1-9645de742311-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c7c9cc07-f5c6-45ae-96e1-9645de742311\") " pod="openstack/nova-metadata-0" Feb 16 21:39:27.221403 master-0 kubenswrapper[38936]: I0216 21:39:27.220768 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-ovsdbserver-sb\") pod \"dnsmasq-dns-846fc68895-n6hmv\" (UID: \"b96444d1-ae55-4560-a3a6-b75072a1271f\") " pod="openstack/dnsmasq-dns-846fc68895-n6hmv" Feb 16 21:39:27.221403 master-0 kubenswrapper[38936]: I0216 21:39:27.220815 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7c9cc07-f5c6-45ae-96e1-9645de742311-logs\") pod \"nova-metadata-0\" (UID: \"c7c9cc07-f5c6-45ae-96e1-9645de742311\") " pod="openstack/nova-metadata-0" Feb 16 21:39:27.221403 master-0 kubenswrapper[38936]: I0216 21:39:27.220842 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-dns-swift-storage-0\") pod \"dnsmasq-dns-846fc68895-n6hmv\" (UID: \"b96444d1-ae55-4560-a3a6-b75072a1271f\") " pod="openstack/dnsmasq-dns-846fc68895-n6hmv" Feb 16 21:39:27.221403 master-0 kubenswrapper[38936]: I0216 21:39:27.221069 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85424a8b-db5b-47c1-8b31-86ebb9f6484f-config-data\") pod \"nova-scheduler-0\" (UID: \"85424a8b-db5b-47c1-8b31-86ebb9f6484f\") " pod="openstack/nova-scheduler-0" Feb 16 21:39:27.221403 master-0 kubenswrapper[38936]: I0216 21:39:27.221101 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-dns-svc\") pod \"dnsmasq-dns-846fc68895-n6hmv\" (UID: \"b96444d1-ae55-4560-a3a6-b75072a1271f\") " pod="openstack/dnsmasq-dns-846fc68895-n6hmv" Feb 16 21:39:27.221403 master-0 kubenswrapper[38936]: I0216 21:39:27.221235 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-config\") pod \"dnsmasq-dns-846fc68895-n6hmv\" (UID: \"b96444d1-ae55-4560-a3a6-b75072a1271f\") " pod="openstack/dnsmasq-dns-846fc68895-n6hmv" Feb 16 21:39:27.221403 master-0 kubenswrapper[38936]: I0216 21:39:27.221322 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7c9cc07-f5c6-45ae-96e1-9645de742311-config-data\") pod \"nova-metadata-0\" (UID: \"c7c9cc07-f5c6-45ae-96e1-9645de742311\") " pod="openstack/nova-metadata-0" Feb 16 21:39:27.221403 master-0 kubenswrapper[38936]: I0216 21:39:27.221356 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85424a8b-db5b-47c1-8b31-86ebb9f6484f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"85424a8b-db5b-47c1-8b31-86ebb9f6484f\") " pod="openstack/nova-scheduler-0" Feb 16 21:39:27.230707 master-0 kubenswrapper[38936]: I0216 21:39:27.230620 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7c9cc07-f5c6-45ae-96e1-9645de742311-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c7c9cc07-f5c6-45ae-96e1-9645de742311\") " pod="openstack/nova-metadata-0" Feb 16 21:39:27.231375 master-0 kubenswrapper[38936]: I0216 21:39:27.231348 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85424a8b-db5b-47c1-8b31-86ebb9f6484f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"85424a8b-db5b-47c1-8b31-86ebb9f6484f\") " pod="openstack/nova-scheduler-0" Feb 16 21:39:27.232552 master-0 kubenswrapper[38936]: I0216 21:39:27.232500 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7c9cc07-f5c6-45ae-96e1-9645de742311-logs\") pod \"nova-metadata-0\" (UID: \"c7c9cc07-f5c6-45ae-96e1-9645de742311\") " pod="openstack/nova-metadata-0" Feb 16 21:39:27.240088 master-0 kubenswrapper[38936]: I0216 21:39:27.240051 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8jb9\" (UniqueName: \"kubernetes.io/projected/85424a8b-db5b-47c1-8b31-86ebb9f6484f-kube-api-access-r8jb9\") pod \"nova-scheduler-0\" (UID: \"85424a8b-db5b-47c1-8b31-86ebb9f6484f\") " pod="openstack/nova-scheduler-0" Feb 16 21:39:27.240255 master-0 kubenswrapper[38936]: I0216 21:39:27.240128 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsddh\" (UniqueName: \"kubernetes.io/projected/c7c9cc07-f5c6-45ae-96e1-9645de742311-kube-api-access-lsddh\") pod 
\"nova-metadata-0\" (UID: \"c7c9cc07-f5c6-45ae-96e1-9645de742311\") " pod="openstack/nova-metadata-0" Feb 16 21:39:27.245754 master-0 kubenswrapper[38936]: I0216 21:39:27.245665 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7c9cc07-f5c6-45ae-96e1-9645de742311-config-data\") pod \"nova-metadata-0\" (UID: \"c7c9cc07-f5c6-45ae-96e1-9645de742311\") " pod="openstack/nova-metadata-0" Feb 16 21:39:27.255855 master-0 kubenswrapper[38936]: I0216 21:39:27.255816 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85424a8b-db5b-47c1-8b31-86ebb9f6484f-config-data\") pod \"nova-scheduler-0\" (UID: \"85424a8b-db5b-47c1-8b31-86ebb9f6484f\") " pod="openstack/nova-scheduler-0" Feb 16 21:39:27.281596 master-0 kubenswrapper[38936]: I0216 21:39:27.281500 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:39:27.300126 master-0 kubenswrapper[38936]: I0216 21:39:27.300065 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-d25bz"] Feb 16 21:39:27.311671 master-0 kubenswrapper[38936]: I0216 21:39:27.310710 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:39:27.327801 master-0 kubenswrapper[38936]: I0216 21:39:27.326880 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9ppd\" (UniqueName: \"kubernetes.io/projected/b96444d1-ae55-4560-a3a6-b75072a1271f-kube-api-access-f9ppd\") pod \"dnsmasq-dns-846fc68895-n6hmv\" (UID: \"b96444d1-ae55-4560-a3a6-b75072a1271f\") " pod="openstack/dnsmasq-dns-846fc68895-n6hmv" Feb 16 21:39:27.327801 master-0 kubenswrapper[38936]: I0216 21:39:27.326971 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-ovsdbserver-sb\") pod \"dnsmasq-dns-846fc68895-n6hmv\" (UID: \"b96444d1-ae55-4560-a3a6-b75072a1271f\") " pod="openstack/dnsmasq-dns-846fc68895-n6hmv" Feb 16 21:39:27.327801 master-0 kubenswrapper[38936]: I0216 21:39:27.327512 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-dns-swift-storage-0\") pod \"dnsmasq-dns-846fc68895-n6hmv\" (UID: \"b96444d1-ae55-4560-a3a6-b75072a1271f\") " pod="openstack/dnsmasq-dns-846fc68895-n6hmv" Feb 16 21:39:27.328092 master-0 kubenswrapper[38936]: I0216 21:39:27.327951 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-dns-svc\") pod \"dnsmasq-dns-846fc68895-n6hmv\" (UID: \"b96444d1-ae55-4560-a3a6-b75072a1271f\") " pod="openstack/dnsmasq-dns-846fc68895-n6hmv" Feb 16 21:39:27.328277 master-0 kubenswrapper[38936]: I0216 21:39:27.328141 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-config\") pod \"dnsmasq-dns-846fc68895-n6hmv\" (UID: 
\"b96444d1-ae55-4560-a3a6-b75072a1271f\") " pod="openstack/dnsmasq-dns-846fc68895-n6hmv" Feb 16 21:39:27.328452 master-0 kubenswrapper[38936]: I0216 21:39:27.328393 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-ovsdbserver-nb\") pod \"dnsmasq-dns-846fc68895-n6hmv\" (UID: \"b96444d1-ae55-4560-a3a6-b75072a1271f\") " pod="openstack/dnsmasq-dns-846fc68895-n6hmv" Feb 16 21:39:27.329495 master-0 kubenswrapper[38936]: I0216 21:39:27.329461 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-ovsdbserver-nb\") pod \"dnsmasq-dns-846fc68895-n6hmv\" (UID: \"b96444d1-ae55-4560-a3a6-b75072a1271f\") " pod="openstack/dnsmasq-dns-846fc68895-n6hmv" Feb 16 21:39:27.330302 master-0 kubenswrapper[38936]: I0216 21:39:27.329748 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-dns-svc\") pod \"dnsmasq-dns-846fc68895-n6hmv\" (UID: \"b96444d1-ae55-4560-a3a6-b75072a1271f\") " pod="openstack/dnsmasq-dns-846fc68895-n6hmv" Feb 16 21:39:27.330302 master-0 kubenswrapper[38936]: I0216 21:39:27.329953 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-config\") pod \"dnsmasq-dns-846fc68895-n6hmv\" (UID: \"b96444d1-ae55-4560-a3a6-b75072a1271f\") " pod="openstack/dnsmasq-dns-846fc68895-n6hmv" Feb 16 21:39:27.330302 master-0 kubenswrapper[38936]: I0216 21:39:27.330089 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-ovsdbserver-sb\") pod \"dnsmasq-dns-846fc68895-n6hmv\" (UID: \"b96444d1-ae55-4560-a3a6-b75072a1271f\") " 
pod="openstack/dnsmasq-dns-846fc68895-n6hmv" Feb 16 21:39:27.334668 master-0 kubenswrapper[38936]: I0216 21:39:27.334570 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-dns-swift-storage-0\") pod \"dnsmasq-dns-846fc68895-n6hmv\" (UID: \"b96444d1-ae55-4560-a3a6-b75072a1271f\") " pod="openstack/dnsmasq-dns-846fc68895-n6hmv" Feb 16 21:39:27.345913 master-0 kubenswrapper[38936]: I0216 21:39:27.345787 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9ppd\" (UniqueName: \"kubernetes.io/projected/b96444d1-ae55-4560-a3a6-b75072a1271f-kube-api-access-f9ppd\") pod \"dnsmasq-dns-846fc68895-n6hmv\" (UID: \"b96444d1-ae55-4560-a3a6-b75072a1271f\") " pod="openstack/dnsmasq-dns-846fc68895-n6hmv" Feb 16 21:39:27.379085 master-0 kubenswrapper[38936]: I0216 21:39:27.377183 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-846fc68895-n6hmv" Feb 16 21:39:27.387502 master-0 kubenswrapper[38936]: W0216 21:39:27.386778 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf26172c1_371c_4d1d_b026_80e4ebe31568.slice/crio-8ebac5388e5c783430540cf44f57868472bf8863f9135ba04681ae06ac38b38e WatchSource:0}: Error finding container 8ebac5388e5c783430540cf44f57868472bf8863f9135ba04681ae06ac38b38e: Status 404 returned error can't find the container with id 8ebac5388e5c783430540cf44f57868472bf8863f9135ba04681ae06ac38b38e Feb 16 21:39:27.476009 master-0 kubenswrapper[38936]: I0216 21:39:27.475941 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Feb 16 21:39:27.523936 master-0 kubenswrapper[38936]: I0216 21:39:27.503191 38936 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 21:39:27.614972 
master-0 kubenswrapper[38936]: I0216 21:39:27.611010 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5vr4r"]
Feb 16 21:39:27.614972 master-0 kubenswrapper[38936]: I0216 21:39:27.613548 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5vr4r"
Feb 16 21:39:27.628855 master-0 kubenswrapper[38936]: I0216 21:39:27.628775 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Feb 16 21:39:27.629507 master-0 kubenswrapper[38936]: I0216 21:39:27.629001 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 16 21:39:27.654092 master-0 kubenswrapper[38936]: I0216 21:39:27.651702 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhfks\" (UniqueName: \"kubernetes.io/projected/f308b178-bf97-40ee-8754-fd2a13d6242f-kube-api-access-bhfks\") pod \"nova-cell1-conductor-db-sync-5vr4r\" (UID: \"f308b178-bf97-40ee-8754-fd2a13d6242f\") " pod="openstack/nova-cell1-conductor-db-sync-5vr4r"
Feb 16 21:39:27.654092 master-0 kubenswrapper[38936]: I0216 21:39:27.652024 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f308b178-bf97-40ee-8754-fd2a13d6242f-config-data\") pod \"nova-cell1-conductor-db-sync-5vr4r\" (UID: \"f308b178-bf97-40ee-8754-fd2a13d6242f\") " pod="openstack/nova-cell1-conductor-db-sync-5vr4r"
Feb 16 21:39:27.654092 master-0 kubenswrapper[38936]: I0216 21:39:27.652083 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f308b178-bf97-40ee-8754-fd2a13d6242f-scripts\") pod \"nova-cell1-conductor-db-sync-5vr4r\" (UID: \"f308b178-bf97-40ee-8754-fd2a13d6242f\") " pod="openstack/nova-cell1-conductor-db-sync-5vr4r"
Feb 16 21:39:27.654092 master-0 kubenswrapper[38936]: I0216 21:39:27.652118 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f308b178-bf97-40ee-8754-fd2a13d6242f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5vr4r\" (UID: \"f308b178-bf97-40ee-8754-fd2a13d6242f\") " pod="openstack/nova-cell1-conductor-db-sync-5vr4r"
Feb 16 21:39:27.672803 master-0 kubenswrapper[38936]: I0216 21:39:27.672611 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5vr4r"]
Feb 16 21:39:27.748372 master-0 kubenswrapper[38936]: I0216 21:39:27.748319 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 21:39:27.766522 master-0 kubenswrapper[38936]: I0216 21:39:27.765861 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f308b178-bf97-40ee-8754-fd2a13d6242f-config-data\") pod \"nova-cell1-conductor-db-sync-5vr4r\" (UID: \"f308b178-bf97-40ee-8754-fd2a13d6242f\") " pod="openstack/nova-cell1-conductor-db-sync-5vr4r"
Feb 16 21:39:27.766522 master-0 kubenswrapper[38936]: I0216 21:39:27.765924 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f308b178-bf97-40ee-8754-fd2a13d6242f-scripts\") pod \"nova-cell1-conductor-db-sync-5vr4r\" (UID: \"f308b178-bf97-40ee-8754-fd2a13d6242f\") " pod="openstack/nova-cell1-conductor-db-sync-5vr4r"
Feb 16 21:39:27.766522 master-0 kubenswrapper[38936]: I0216 21:39:27.765953 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f308b178-bf97-40ee-8754-fd2a13d6242f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5vr4r\" (UID: \"f308b178-bf97-40ee-8754-fd2a13d6242f\") " pod="openstack/nova-cell1-conductor-db-sync-5vr4r"
Feb 16 21:39:27.766522 master-0 kubenswrapper[38936]: I0216 21:39:27.766061 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhfks\" (UniqueName: \"kubernetes.io/projected/f308b178-bf97-40ee-8754-fd2a13d6242f-kube-api-access-bhfks\") pod \"nova-cell1-conductor-db-sync-5vr4r\" (UID: \"f308b178-bf97-40ee-8754-fd2a13d6242f\") " pod="openstack/nova-cell1-conductor-db-sync-5vr4r"
Feb 16 21:39:27.772220 master-0 kubenswrapper[38936]: I0216 21:39:27.770793 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f308b178-bf97-40ee-8754-fd2a13d6242f-scripts\") pod \"nova-cell1-conductor-db-sync-5vr4r\" (UID: \"f308b178-bf97-40ee-8754-fd2a13d6242f\") " pod="openstack/nova-cell1-conductor-db-sync-5vr4r"
Feb 16 21:39:27.772932 master-0 kubenswrapper[38936]: I0216 21:39:27.772878 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 16 21:39:27.773428 master-0 kubenswrapper[38936]: I0216 21:39:27.773386 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f308b178-bf97-40ee-8754-fd2a13d6242f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5vr4r\" (UID: \"f308b178-bf97-40ee-8754-fd2a13d6242f\") " pod="openstack/nova-cell1-conductor-db-sync-5vr4r"
Feb 16 21:39:27.787901 master-0 kubenswrapper[38936]: I0216 21:39:27.786488 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhfks\" (UniqueName: \"kubernetes.io/projected/f308b178-bf97-40ee-8754-fd2a13d6242f-kube-api-access-bhfks\") pod \"nova-cell1-conductor-db-sync-5vr4r\" (UID: \"f308b178-bf97-40ee-8754-fd2a13d6242f\") " pod="openstack/nova-cell1-conductor-db-sync-5vr4r"
Feb 16 21:39:27.792467 master-0 kubenswrapper[38936]: I0216 21:39:27.792424 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f308b178-bf97-40ee-8754-fd2a13d6242f-config-data\") pod \"nova-cell1-conductor-db-sync-5vr4r\" (UID: \"f308b178-bf97-40ee-8754-fd2a13d6242f\") " pod="openstack/nova-cell1-conductor-db-sync-5vr4r"
Feb 16 21:39:27.983602 master-0 kubenswrapper[38936]: I0216 21:39:27.982897 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5vr4r"
Feb 16 21:39:28.020789 master-0 kubenswrapper[38936]: I0216 21:39:28.020720 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 21:39:28.033450 master-0 kubenswrapper[38936]: W0216 21:39:28.030891 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7c9cc07_f5c6_45ae_96e1_9645de742311.slice/crio-b3c420999cd7a06c69d82ef99910e8fef40a1efa6b0f9af556575a377751f591 WatchSource:0}: Error finding container b3c420999cd7a06c69d82ef99910e8fef40a1efa6b0f9af556575a377751f591: Status 404 returned error can't find the container with id b3c420999cd7a06c69d82ef99910e8fef40a1efa6b0f9af556575a377751f591
Feb 16 21:39:28.047486 master-0 kubenswrapper[38936]: I0216 21:39:28.046079 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 21:39:28.229285 master-0 kubenswrapper[38936]: I0216 21:39:28.228766 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-846fc68895-n6hmv"]
Feb 16 21:39:28.275063 master-0 kubenswrapper[38936]: I0216 21:39:28.275005 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"c1759520-fd09-489c-bea6-209d8c1144d2","Type":"ContainerStarted","Data":"c2936a8993bb00f03b781ce386c776d64ec28d6eb7304681c7088c773c02672b"}
Feb 16 21:39:28.277597 master-0 kubenswrapper[38936]: I0216 21:39:28.277526 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99","Type":"ContainerStarted","Data":"18c400c39bf9086a19f7bad0235c52ca7b564f5f0ea426d23314692c0ba45404"}
Feb 16 21:39:28.278977 master-0 kubenswrapper[38936]: I0216 21:39:28.278934 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"85424a8b-db5b-47c1-8b31-86ebb9f6484f","Type":"ContainerStarted","Data":"06031cbf05fd3f13eda85643aee243eba4ef1407f7db5ce8ac7ced964d9eaeaa"}
Feb 16 21:39:28.280261 master-0 kubenswrapper[38936]: I0216 21:39:28.280213 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c7c9cc07-f5c6-45ae-96e1-9645de742311","Type":"ContainerStarted","Data":"b3c420999cd7a06c69d82ef99910e8fef40a1efa6b0f9af556575a377751f591"}
Feb 16 21:39:28.288105 master-0 kubenswrapper[38936]: I0216 21:39:28.287053 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-d25bz" event={"ID":"f26172c1-371c-4d1d-b026-80e4ebe31568","Type":"ContainerStarted","Data":"704805fe4527ea953cd8c5a9a4770e2135573ed97401e05ab821317c983f4869"}
Feb 16 21:39:28.288105 master-0 kubenswrapper[38936]: I0216 21:39:28.287098 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-d25bz" event={"ID":"f26172c1-371c-4d1d-b026-80e4ebe31568","Type":"ContainerStarted","Data":"8ebac5388e5c783430540cf44f57868472bf8863f9135ba04681ae06ac38b38e"}
Feb 16 21:39:28.292303 master-0 kubenswrapper[38936]: I0216 21:39:28.292265 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-846fc68895-n6hmv" event={"ID":"b96444d1-ae55-4560-a3a6-b75072a1271f","Type":"ContainerStarted","Data":"afc7db148ac623c2f22708c190b1060eec6a127b0446b9f5504c5861a7677ffa"}
Feb 16 21:39:28.295233 master-0 kubenswrapper[38936]: I0216 21:39:28.295095 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5d890f5a-b817-4e9e-bc23-18e1bf326d3b","Type":"ContainerStarted","Data":"f763ca60a7e82000050702b3da71059329d7528fe9eeb603010b9410d854b0bc"}
Feb 16 21:39:28.325757 master-0 kubenswrapper[38936]: I0216 21:39:28.325636 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-d25bz" podStartSLOduration=2.325609299 podStartE2EDuration="2.325609299s" podCreationTimestamp="2026-02-16 21:39:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:39:28.311323008 +0000 UTC m=+998.663326370" watchObservedRunningTime="2026-02-16 21:39:28.325609299 +0000 UTC m=+998.677612671"
Feb 16 21:39:29.000048 master-0 kubenswrapper[38936]: I0216 21:39:28.997825 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5vr4r"]
Feb 16 21:39:29.318961 master-0 kubenswrapper[38936]: I0216 21:39:29.318828 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5vr4r" event={"ID":"f308b178-bf97-40ee-8754-fd2a13d6242f","Type":"ContainerStarted","Data":"b9325dc3de236981a821d3aebfca4f200f5c4b5e8eaae8b6951c3fa23af608dc"}
Feb 16 21:39:29.322905 master-0 kubenswrapper[38936]: I0216 21:39:29.322414 38936 generic.go:334] "Generic (PLEG): container finished" podID="b96444d1-ae55-4560-a3a6-b75072a1271f" containerID="e70ce9c3dfc993acf95be182d1a9ec783e77d96e6f7a06bb96f10f9f63ca467f" exitCode=0
Feb 16 21:39:29.324045 master-0 kubenswrapper[38936]: I0216 21:39:29.323991 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-846fc68895-n6hmv" event={"ID":"b96444d1-ae55-4560-a3a6-b75072a1271f","Type":"ContainerDied","Data":"e70ce9c3dfc993acf95be182d1a9ec783e77d96e6f7a06bb96f10f9f63ca467f"}
Feb 16 21:39:31.356928 master-0 kubenswrapper[38936]: I0216 21:39:31.356872 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-846fc68895-n6hmv" event={"ID":"b96444d1-ae55-4560-a3a6-b75072a1271f","Type":"ContainerStarted","Data":"3a4c82f46061e6f1a49f165d479ce2f703cdace112fa05a1d2da4c2b2d5611b9"}
Feb 16 21:39:32.414302 master-0 kubenswrapper[38936]: I0216 21:39:32.414192 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5vr4r" event={"ID":"f308b178-bf97-40ee-8754-fd2a13d6242f","Type":"ContainerStarted","Data":"134d4e20b6a1df2ce0594b11035592abf9bc606635db583ad369f02d00637e65"}
Feb 16 21:39:32.414302 master-0 kubenswrapper[38936]: I0216 21:39:32.414268 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-846fc68895-n6hmv"
Feb 16 21:39:32.488772 master-0 kubenswrapper[38936]: I0216 21:39:32.488627 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-846fc68895-n6hmv" podStartSLOduration=6.488601952 podStartE2EDuration="6.488601952s" podCreationTimestamp="2026-02-16 21:39:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:39:32.452024355 +0000 UTC m=+1002.804027717" watchObservedRunningTime="2026-02-16 21:39:32.488601952 +0000 UTC m=+1002.840605334"
Feb 16 21:39:32.505165 master-0 kubenswrapper[38936]: I0216 21:39:32.505026 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-5vr4r" podStartSLOduration=5.504999671 podStartE2EDuration="5.504999671s" podCreationTimestamp="2026-02-16 21:39:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:39:32.496214876 +0000 UTC m=+1002.848218238" watchObservedRunningTime="2026-02-16 21:39:32.504999671 +0000 UTC m=+1002.857003033"
Feb 16 21:39:32.876081 master-0 kubenswrapper[38936]: I0216 21:39:32.876013 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 21:39:32.894865 master-0 kubenswrapper[38936]: I0216 21:39:32.893384 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 21:39:33.440808 master-0 kubenswrapper[38936]: I0216 21:39:33.440575 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"85424a8b-db5b-47c1-8b31-86ebb9f6484f","Type":"ContainerStarted","Data":"8531eca939a4f510c4addd3ce37801629199fca68669cc70e148d2d9ddc39708"}
Feb 16 21:39:33.447148 master-0 kubenswrapper[38936]: I0216 21:39:33.447075 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c7c9cc07-f5c6-45ae-96e1-9645de742311","Type":"ContainerStarted","Data":"eeec254b0379d43597b407007ab37c7a023f5baf0de9ae47b558dadd37241c75"}
Feb 16 21:39:33.447148 master-0 kubenswrapper[38936]: I0216 21:39:33.447150 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c7c9cc07-f5c6-45ae-96e1-9645de742311","Type":"ContainerStarted","Data":"470d23741df96a01287cde08c9a2859ac687ae00865a4c757a06c718e667e150"}
Feb 16 21:39:33.447344 master-0 kubenswrapper[38936]: I0216 21:39:33.447313 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c7c9cc07-f5c6-45ae-96e1-9645de742311" containerName="nova-metadata-log" containerID="cri-o://470d23741df96a01287cde08c9a2859ac687ae00865a4c757a06c718e667e150" gracePeriod=30
Feb 16 21:39:33.447826 master-0 kubenswrapper[38936]: I0216 21:39:33.447593 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c7c9cc07-f5c6-45ae-96e1-9645de742311" containerName="nova-metadata-metadata" containerID="cri-o://eeec254b0379d43597b407007ab37c7a023f5baf0de9ae47b558dadd37241c75" gracePeriod=30
Feb 16 21:39:33.458355 master-0 kubenswrapper[38936]: I0216 21:39:33.452767 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5d890f5a-b817-4e9e-bc23-18e1bf326d3b","Type":"ContainerStarted","Data":"81dd9230174d4ac3996f72a6e4cd78491e42c48e2c684efe07fa98cb386534bc"}
Feb 16 21:39:33.458355 master-0 kubenswrapper[38936]: I0216 21:39:33.452848 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5d890f5a-b817-4e9e-bc23-18e1bf326d3b","Type":"ContainerStarted","Data":"bc1c98fa78af72c0f9710dd5ee4c34b6bac95acb84fc71724b1247e5343f0c6e"}
Feb 16 21:39:33.459171 master-0 kubenswrapper[38936]: I0216 21:39:33.459118 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://5f8573df1aa692d15579b0a181edc014c6a827d40dd85df1b7444f9c65086515" gracePeriod=30
Feb 16 21:39:33.459298 master-0 kubenswrapper[38936]: I0216 21:39:33.459271 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99","Type":"ContainerStarted","Data":"5f8573df1aa692d15579b0a181edc014c6a827d40dd85df1b7444f9c65086515"}
Feb 16 21:39:33.494214 master-0 kubenswrapper[38936]: I0216 21:39:33.494097 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.097875307 podStartE2EDuration="7.489074735s" podCreationTimestamp="2026-02-16 21:39:26 +0000 UTC" firstStartedPulling="2026-02-16 21:39:28.074742709 +0000 UTC m=+998.426746071" lastFinishedPulling="2026-02-16 21:39:32.465942137 +0000 UTC m=+1002.817945499" observedRunningTime="2026-02-16 21:39:33.466445911 +0000 UTC m=+1003.818449263" watchObservedRunningTime="2026-02-16 21:39:33.489074735 +0000 UTC m=+1003.841078097"
Feb 16 21:39:33.519562 master-0 kubenswrapper[38936]: I0216 21:39:33.519429 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.5452914570000003 podStartE2EDuration="7.519405175s" podCreationTimestamp="2026-02-16 21:39:26 +0000 UTC" firstStartedPulling="2026-02-16 21:39:27.772713491 +0000 UTC m=+998.124716853" lastFinishedPulling="2026-02-16 21:39:31.746827209 +0000 UTC m=+1002.098830571" observedRunningTime="2026-02-16 21:39:33.497767437 +0000 UTC m=+1003.849770799" watchObservedRunningTime="2026-02-16 21:39:33.519405175 +0000 UTC m=+1003.871408537"
Feb 16 21:39:33.568083 master-0 kubenswrapper[38936]: I0216 21:39:33.567978 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.5836371099999997 podStartE2EDuration="7.567959962s" podCreationTimestamp="2026-02-16 21:39:26 +0000 UTC" firstStartedPulling="2026-02-16 21:39:27.76968761 +0000 UTC m=+998.121690972" lastFinishedPulling="2026-02-16 21:39:31.754010452 +0000 UTC m=+1002.106013824" observedRunningTime="2026-02-16 21:39:33.520492694 +0000 UTC m=+1003.872496056" watchObservedRunningTime="2026-02-16 21:39:33.567959962 +0000 UTC m=+1003.919963324"
Feb 16 21:39:33.615641 master-0 kubenswrapper[38936]: I0216 21:39:33.615534 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.194289831 podStartE2EDuration="7.615471431s" podCreationTimestamp="2026-02-16 21:39:26 +0000 UTC" firstStartedPulling="2026-02-16 21:39:28.035627214 +0000 UTC m=+998.387630576" lastFinishedPulling="2026-02-16 21:39:32.456808814 +0000 UTC m=+1002.808812176" observedRunningTime="2026-02-16 21:39:33.547075664 +0000 UTC m=+1003.899079026" watchObservedRunningTime="2026-02-16 21:39:33.615471431 +0000 UTC m=+1003.967474793"
Feb 16 21:39:34.476719 master-0 kubenswrapper[38936]: I0216 21:39:34.476636 38936 generic.go:334] "Generic (PLEG): container finished" podID="c7c9cc07-f5c6-45ae-96e1-9645de742311" containerID="eeec254b0379d43597b407007ab37c7a023f5baf0de9ae47b558dadd37241c75" exitCode=0
Feb 16 21:39:34.476719 master-0 kubenswrapper[38936]: I0216 21:39:34.476709 38936 generic.go:334] "Generic (PLEG): container finished" podID="c7c9cc07-f5c6-45ae-96e1-9645de742311" containerID="470d23741df96a01287cde08c9a2859ac687ae00865a4c757a06c718e667e150" exitCode=143
Feb 16 21:39:34.477328 master-0 kubenswrapper[38936]: I0216 21:39:34.477234 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c7c9cc07-f5c6-45ae-96e1-9645de742311","Type":"ContainerDied","Data":"eeec254b0379d43597b407007ab37c7a023f5baf0de9ae47b558dadd37241c75"}
Feb 16 21:39:34.477622 master-0 kubenswrapper[38936]: I0216 21:39:34.477587 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c7c9cc07-f5c6-45ae-96e1-9645de742311","Type":"ContainerDied","Data":"470d23741df96a01287cde08c9a2859ac687ae00865a4c757a06c718e667e150"}
Feb 16 21:39:34.477702 master-0 kubenswrapper[38936]: I0216 21:39:34.477622 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c7c9cc07-f5c6-45ae-96e1-9645de742311","Type":"ContainerDied","Data":"b3c420999cd7a06c69d82ef99910e8fef40a1efa6b0f9af556575a377751f591"}
Feb 16 21:39:34.478322 master-0 kubenswrapper[38936]: I0216 21:39:34.477636 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3c420999cd7a06c69d82ef99910e8fef40a1efa6b0f9af556575a377751f591"
Feb 16 21:39:34.512738 master-0 kubenswrapper[38936]: I0216 21:39:34.512698 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 21:39:34.649215 master-0 kubenswrapper[38936]: I0216 21:39:34.649056 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7c9cc07-f5c6-45ae-96e1-9645de742311-config-data\") pod \"c7c9cc07-f5c6-45ae-96e1-9645de742311\" (UID: \"c7c9cc07-f5c6-45ae-96e1-9645de742311\") "
Feb 16 21:39:34.649812 master-0 kubenswrapper[38936]: I0216 21:39:34.649771 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsddh\" (UniqueName: \"kubernetes.io/projected/c7c9cc07-f5c6-45ae-96e1-9645de742311-kube-api-access-lsddh\") pod \"c7c9cc07-f5c6-45ae-96e1-9645de742311\" (UID: \"c7c9cc07-f5c6-45ae-96e1-9645de742311\") "
Feb 16 21:39:34.649876 master-0 kubenswrapper[38936]: I0216 21:39:34.649834 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7c9cc07-f5c6-45ae-96e1-9645de742311-logs\") pod \"c7c9cc07-f5c6-45ae-96e1-9645de742311\" (UID: \"c7c9cc07-f5c6-45ae-96e1-9645de742311\") "
Feb 16 21:39:34.650119 master-0 kubenswrapper[38936]: I0216 21:39:34.650082 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7c9cc07-f5c6-45ae-96e1-9645de742311-combined-ca-bundle\") pod \"c7c9cc07-f5c6-45ae-96e1-9645de742311\" (UID: \"c7c9cc07-f5c6-45ae-96e1-9645de742311\") "
Feb 16 21:39:34.651636 master-0 kubenswrapper[38936]: I0216 21:39:34.650773 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7c9cc07-f5c6-45ae-96e1-9645de742311-logs" (OuterVolumeSpecName: "logs") pod "c7c9cc07-f5c6-45ae-96e1-9645de742311" (UID: "c7c9cc07-f5c6-45ae-96e1-9645de742311"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 21:39:34.651636 master-0 kubenswrapper[38936]: I0216 21:39:34.651512 38936 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7c9cc07-f5c6-45ae-96e1-9645de742311-logs\") on node \"master-0\" DevicePath \"\""
Feb 16 21:39:34.653938 master-0 kubenswrapper[38936]: I0216 21:39:34.653886 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7c9cc07-f5c6-45ae-96e1-9645de742311-kube-api-access-lsddh" (OuterVolumeSpecName: "kube-api-access-lsddh") pod "c7c9cc07-f5c6-45ae-96e1-9645de742311" (UID: "c7c9cc07-f5c6-45ae-96e1-9645de742311"). InnerVolumeSpecName "kube-api-access-lsddh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 21:39:34.687307 master-0 kubenswrapper[38936]: I0216 21:39:34.687209 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7c9cc07-f5c6-45ae-96e1-9645de742311-config-data" (OuterVolumeSpecName: "config-data") pod "c7c9cc07-f5c6-45ae-96e1-9645de742311" (UID: "c7c9cc07-f5c6-45ae-96e1-9645de742311"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:39:34.690826 master-0 kubenswrapper[38936]: I0216 21:39:34.690750 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7c9cc07-f5c6-45ae-96e1-9645de742311-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7c9cc07-f5c6-45ae-96e1-9645de742311" (UID: "c7c9cc07-f5c6-45ae-96e1-9645de742311"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 21:39:34.753496 master-0 kubenswrapper[38936]: I0216 21:39:34.753432 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsddh\" (UniqueName: \"kubernetes.io/projected/c7c9cc07-f5c6-45ae-96e1-9645de742311-kube-api-access-lsddh\") on node \"master-0\" DevicePath \"\""
Feb 16 21:39:34.753747 master-0 kubenswrapper[38936]: I0216 21:39:34.753735 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7c9cc07-f5c6-45ae-96e1-9645de742311-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 21:39:34.753855 master-0 kubenswrapper[38936]: I0216 21:39:34.753839 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7c9cc07-f5c6-45ae-96e1-9645de742311-config-data\") on node \"master-0\" DevicePath \"\""
Feb 16 21:39:35.489593 master-0 kubenswrapper[38936]: I0216 21:39:35.489391 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 21:39:35.810448 master-0 kubenswrapper[38936]: I0216 21:39:35.810377 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 21:39:35.833863 master-0 kubenswrapper[38936]: I0216 21:39:35.833733 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 21:39:35.890113 master-0 kubenswrapper[38936]: I0216 21:39:35.889615 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7c9cc07-f5c6-45ae-96e1-9645de742311" path="/var/lib/kubelet/pods/c7c9cc07-f5c6-45ae-96e1-9645de742311/volumes"
Feb 16 21:39:35.991978 master-0 kubenswrapper[38936]: I0216 21:39:35.988546 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 21:39:35.991978 master-0 kubenswrapper[38936]: E0216 21:39:35.989373 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7c9cc07-f5c6-45ae-96e1-9645de742311" containerName="nova-metadata-log"
Feb 16 21:39:35.991978 master-0 kubenswrapper[38936]: I0216 21:39:35.989395 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7c9cc07-f5c6-45ae-96e1-9645de742311" containerName="nova-metadata-log"
Feb 16 21:39:35.991978 master-0 kubenswrapper[38936]: E0216 21:39:35.989417 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7c9cc07-f5c6-45ae-96e1-9645de742311" containerName="nova-metadata-metadata"
Feb 16 21:39:35.991978 master-0 kubenswrapper[38936]: I0216 21:39:35.989426 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7c9cc07-f5c6-45ae-96e1-9645de742311" containerName="nova-metadata-metadata"
Feb 16 21:39:35.991978 master-0 kubenswrapper[38936]: I0216 21:39:35.989812 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7c9cc07-f5c6-45ae-96e1-9645de742311" containerName="nova-metadata-log"
Feb 16 21:39:35.991978 master-0 kubenswrapper[38936]: I0216 21:39:35.989894 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7c9cc07-f5c6-45ae-96e1-9645de742311" containerName="nova-metadata-metadata"
Feb 16 21:39:35.993662 master-0 kubenswrapper[38936]: I0216 21:39:35.992696 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 21:39:35.995588 master-0 kubenswrapper[38936]: I0216 21:39:35.994658 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 16 21:39:35.995588 master-0 kubenswrapper[38936]: I0216 21:39:35.995356 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 16 21:39:36.006847 master-0 kubenswrapper[38936]: I0216 21:39:36.006005 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 21:39:36.086402 master-0 kubenswrapper[38936]: I0216 21:39:36.086327 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx6mp\" (UniqueName: \"kubernetes.io/projected/bf3acf12-aba2-443e-9e32-0cc5b867ca51-kube-api-access-kx6mp\") pod \"nova-metadata-0\" (UID: \"bf3acf12-aba2-443e-9e32-0cc5b867ca51\") " pod="openstack/nova-metadata-0"
Feb 16 21:39:36.086663 master-0 kubenswrapper[38936]: I0216 21:39:36.086466 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf3acf12-aba2-443e-9e32-0cc5b867ca51-logs\") pod \"nova-metadata-0\" (UID: \"bf3acf12-aba2-443e-9e32-0cc5b867ca51\") " pod="openstack/nova-metadata-0"
Feb 16 21:39:36.086663 master-0 kubenswrapper[38936]: I0216 21:39:36.086525 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf3acf12-aba2-443e-9e32-0cc5b867ca51-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bf3acf12-aba2-443e-9e32-0cc5b867ca51\") " pod="openstack/nova-metadata-0"
Feb 16 21:39:36.087029 master-0 kubenswrapper[38936]: I0216 21:39:36.086941 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf3acf12-aba2-443e-9e32-0cc5b867ca51-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bf3acf12-aba2-443e-9e32-0cc5b867ca51\") " pod="openstack/nova-metadata-0"
Feb 16 21:39:36.087264 master-0 kubenswrapper[38936]: I0216 21:39:36.087238 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf3acf12-aba2-443e-9e32-0cc5b867ca51-config-data\") pod \"nova-metadata-0\" (UID: \"bf3acf12-aba2-443e-9e32-0cc5b867ca51\") " pod="openstack/nova-metadata-0"
Feb 16 21:39:36.190279 master-0 kubenswrapper[38936]: I0216 21:39:36.190206 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx6mp\" (UniqueName: \"kubernetes.io/projected/bf3acf12-aba2-443e-9e32-0cc5b867ca51-kube-api-access-kx6mp\") pod \"nova-metadata-0\" (UID: \"bf3acf12-aba2-443e-9e32-0cc5b867ca51\") " pod="openstack/nova-metadata-0"
Feb 16 21:39:36.190510 master-0 kubenswrapper[38936]: I0216 21:39:36.190342 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf3acf12-aba2-443e-9e32-0cc5b867ca51-logs\") pod \"nova-metadata-0\" (UID: \"bf3acf12-aba2-443e-9e32-0cc5b867ca51\") " pod="openstack/nova-metadata-0"
Feb 16 21:39:36.190510 master-0 kubenswrapper[38936]: I0216 21:39:36.190417 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf3acf12-aba2-443e-9e32-0cc5b867ca51-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bf3acf12-aba2-443e-9e32-0cc5b867ca51\") " pod="openstack/nova-metadata-0"
Feb 16 21:39:36.190510 master-0 kubenswrapper[38936]: I0216 21:39:36.190497 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf3acf12-aba2-443e-9e32-0cc5b867ca51-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bf3acf12-aba2-443e-9e32-0cc5b867ca51\") " pod="openstack/nova-metadata-0"
Feb 16 21:39:36.190624 master-0 kubenswrapper[38936]: I0216 21:39:36.190541 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf3acf12-aba2-443e-9e32-0cc5b867ca51-config-data\") pod \"nova-metadata-0\" (UID: \"bf3acf12-aba2-443e-9e32-0cc5b867ca51\") " pod="openstack/nova-metadata-0"
Feb 16 21:39:36.191456 master-0 kubenswrapper[38936]: I0216 21:39:36.191412 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf3acf12-aba2-443e-9e32-0cc5b867ca51-logs\") pod \"nova-metadata-0\" (UID: \"bf3acf12-aba2-443e-9e32-0cc5b867ca51\") " pod="openstack/nova-metadata-0"
Feb 16 21:39:36.194594 master-0 kubenswrapper[38936]: I0216 21:39:36.194559 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf3acf12-aba2-443e-9e32-0cc5b867ca51-config-data\") pod \"nova-metadata-0\" (UID: \"bf3acf12-aba2-443e-9e32-0cc5b867ca51\") " pod="openstack/nova-metadata-0"
Feb 16 21:39:36.200355 master-0 kubenswrapper[38936]: I0216 21:39:36.200221 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf3acf12-aba2-443e-9e32-0cc5b867ca51-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bf3acf12-aba2-443e-9e32-0cc5b867ca51\") " pod="openstack/nova-metadata-0"
Feb 16 21:39:36.208497 master-0 kubenswrapper[38936]: I0216 21:39:36.208448 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx6mp\" (UniqueName: \"kubernetes.io/projected/bf3acf12-aba2-443e-9e32-0cc5b867ca51-kube-api-access-kx6mp\") pod \"nova-metadata-0\" (UID: \"bf3acf12-aba2-443e-9e32-0cc5b867ca51\") " pod="openstack/nova-metadata-0"
Feb 16 21:39:36.211069 master-0 kubenswrapper[38936]: I0216 21:39:36.211027 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf3acf12-aba2-443e-9e32-0cc5b867ca51-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bf3acf12-aba2-443e-9e32-0cc5b867ca51\") " pod="openstack/nova-metadata-0"
Feb 16 21:39:36.374149 master-0 kubenswrapper[38936]: I0216 21:39:36.373987 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 21:39:36.504291 master-0 kubenswrapper[38936]: I0216 21:39:36.503970 38936 generic.go:334] "Generic (PLEG): container finished" podID="f26172c1-371c-4d1d-b026-80e4ebe31568" containerID="704805fe4527ea953cd8c5a9a4770e2135573ed97401e05ab821317c983f4869" exitCode=0
Feb 16 21:39:36.504291 master-0 kubenswrapper[38936]: I0216 21:39:36.504078 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-d25bz" event={"ID":"f26172c1-371c-4d1d-b026-80e4ebe31568","Type":"ContainerDied","Data":"704805fe4527ea953cd8c5a9a4770e2135573ed97401e05ab821317c983f4869"}
Feb 16 21:39:36.903595 master-0 kubenswrapper[38936]: I0216 21:39:36.903502 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 16 21:39:36.903595 master-0 kubenswrapper[38936]: I0216 21:39:36.903601 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 16 21:39:36.990804 master-0 kubenswrapper[38936]: I0216 21:39:36.990732 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 21:39:37.282392 master-0 kubenswrapper[38936]: I0216 21:39:37.282329 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 16 21:39:37.282892 master-0 kubenswrapper[38936]: I0216 21:39:37.282871 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 16 21:39:37.330306 master-0 kubenswrapper[38936]: I0216 21:39:37.326100 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 16 21:39:37.383485 master-0 kubenswrapper[38936]: I0216 21:39:37.383423 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-846fc68895-n6hmv"
Feb 16 21:39:37.579679 master-0 kubenswrapper[38936]: I0216 21:39:37.579380 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-765cf7b859-fnh5l"]
Feb 16 21:39:37.580304 master-0 kubenswrapper[38936]: I0216 21:39:37.579707 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" podUID="e9c9ee25-a478-4932-ae3f-39162f313e62" containerName="dnsmasq-dns" containerID="cri-o://ed0fa4d5633f0dc5f43cff371a242595017805baf64082d7c4e5351c4abd058a" gracePeriod=10
Feb 16 21:39:37.593614 master-0 kubenswrapper[38936]: I0216 21:39:37.593160 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 16 21:39:37.986034 master-0 kubenswrapper[38936]: I0216 21:39:37.985892 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5d890f5a-b817-4e9e-bc23-18e1bf326d3b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.4:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 16 21:39:37.986034 master-0 kubenswrapper[38936]: I0216 21:39:37.985937 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5d890f5a-b817-4e9e-bc23-18e1bf326d3b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.4:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 16 21:39:38.553793 master-0 kubenswrapper[38936]: I0216 21:39:38.553720 38936 generic.go:334] "Generic (PLEG): container finished" podID="e9c9ee25-a478-4932-ae3f-39162f313e62" containerID="ed0fa4d5633f0dc5f43cff371a242595017805baf64082d7c4e5351c4abd058a" exitCode=0
Feb 16 21:39:38.554047 master-0 kubenswrapper[38936]: I0216 21:39:38.553799 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" event={"ID":"e9c9ee25-a478-4932-ae3f-39162f313e62","Type":"ContainerDied","Data":"ed0fa4d5633f0dc5f43cff371a242595017805baf64082d7c4e5351c4abd058a"}
Feb 16 21:39:40.087794 master-0 kubenswrapper[38936]: I0216 21:39:40.087702 38936 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" podUID="e9c9ee25-a478-4932-ae3f-39162f313e62" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.253:5353: connect: connection refused"
Feb 16 21:39:41.302718 master-0 kubenswrapper[38936]: I0216 21:39:41.302289 38936 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-d25bz" Feb 16 21:39:41.358989 master-0 kubenswrapper[38936]: I0216 21:39:41.358878 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89jkh\" (UniqueName: \"kubernetes.io/projected/f26172c1-371c-4d1d-b026-80e4ebe31568-kube-api-access-89jkh\") pod \"f26172c1-371c-4d1d-b026-80e4ebe31568\" (UID: \"f26172c1-371c-4d1d-b026-80e4ebe31568\") " Feb 16 21:39:41.364508 master-0 kubenswrapper[38936]: I0216 21:39:41.359230 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26172c1-371c-4d1d-b026-80e4ebe31568-combined-ca-bundle\") pod \"f26172c1-371c-4d1d-b026-80e4ebe31568\" (UID: \"f26172c1-371c-4d1d-b026-80e4ebe31568\") " Feb 16 21:39:41.364508 master-0 kubenswrapper[38936]: I0216 21:39:41.359372 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26172c1-371c-4d1d-b026-80e4ebe31568-config-data\") pod \"f26172c1-371c-4d1d-b026-80e4ebe31568\" (UID: \"f26172c1-371c-4d1d-b026-80e4ebe31568\") " Feb 16 21:39:41.364508 master-0 kubenswrapper[38936]: I0216 21:39:41.359435 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f26172c1-371c-4d1d-b026-80e4ebe31568-scripts\") pod \"f26172c1-371c-4d1d-b026-80e4ebe31568\" (UID: \"f26172c1-371c-4d1d-b026-80e4ebe31568\") " Feb 16 21:39:41.364733 master-0 kubenswrapper[38936]: I0216 21:39:41.364536 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26172c1-371c-4d1d-b026-80e4ebe31568-scripts" (OuterVolumeSpecName: "scripts") pod "f26172c1-371c-4d1d-b026-80e4ebe31568" (UID: "f26172c1-371c-4d1d-b026-80e4ebe31568"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:39:41.368627 master-0 kubenswrapper[38936]: I0216 21:39:41.368526 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f26172c1-371c-4d1d-b026-80e4ebe31568-kube-api-access-89jkh" (OuterVolumeSpecName: "kube-api-access-89jkh") pod "f26172c1-371c-4d1d-b026-80e4ebe31568" (UID: "f26172c1-371c-4d1d-b026-80e4ebe31568"). InnerVolumeSpecName "kube-api-access-89jkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:39:41.378528 master-0 kubenswrapper[38936]: I0216 21:39:41.378479 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" Feb 16 21:39:41.403773 master-0 kubenswrapper[38936]: I0216 21:39:41.403457 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26172c1-371c-4d1d-b026-80e4ebe31568-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f26172c1-371c-4d1d-b026-80e4ebe31568" (UID: "f26172c1-371c-4d1d-b026-80e4ebe31568"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:39:41.422517 master-0 kubenswrapper[38936]: I0216 21:39:41.422048 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f26172c1-371c-4d1d-b026-80e4ebe31568-config-data" (OuterVolumeSpecName: "config-data") pod "f26172c1-371c-4d1d-b026-80e4ebe31568" (UID: "f26172c1-371c-4d1d-b026-80e4ebe31568"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:39:41.462032 master-0 kubenswrapper[38936]: I0216 21:39:41.461590 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-dns-swift-storage-0\") pod \"e9c9ee25-a478-4932-ae3f-39162f313e62\" (UID: \"e9c9ee25-a478-4932-ae3f-39162f313e62\") " Feb 16 21:39:41.462032 master-0 kubenswrapper[38936]: I0216 21:39:41.461780 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29pc4\" (UniqueName: \"kubernetes.io/projected/e9c9ee25-a478-4932-ae3f-39162f313e62-kube-api-access-29pc4\") pod \"e9c9ee25-a478-4932-ae3f-39162f313e62\" (UID: \"e9c9ee25-a478-4932-ae3f-39162f313e62\") " Feb 16 21:39:41.462032 master-0 kubenswrapper[38936]: I0216 21:39:41.461958 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-dns-svc\") pod \"e9c9ee25-a478-4932-ae3f-39162f313e62\" (UID: \"e9c9ee25-a478-4932-ae3f-39162f313e62\") " Feb 16 21:39:41.462202 master-0 kubenswrapper[38936]: I0216 21:39:41.462104 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-config\") pod \"e9c9ee25-a478-4932-ae3f-39162f313e62\" (UID: \"e9c9ee25-a478-4932-ae3f-39162f313e62\") " Feb 16 21:39:41.463895 master-0 kubenswrapper[38936]: I0216 21:39:41.462252 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-ovsdbserver-sb\") pod \"e9c9ee25-a478-4932-ae3f-39162f313e62\" (UID: \"e9c9ee25-a478-4932-ae3f-39162f313e62\") " Feb 16 21:39:41.463895 master-0 kubenswrapper[38936]: I0216 21:39:41.462318 38936 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-ovsdbserver-nb\") pod \"e9c9ee25-a478-4932-ae3f-39162f313e62\" (UID: \"e9c9ee25-a478-4932-ae3f-39162f313e62\") " Feb 16 21:39:41.463895 master-0 kubenswrapper[38936]: I0216 21:39:41.462978 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89jkh\" (UniqueName: \"kubernetes.io/projected/f26172c1-371c-4d1d-b026-80e4ebe31568-kube-api-access-89jkh\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:41.463895 master-0 kubenswrapper[38936]: I0216 21:39:41.462995 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f26172c1-371c-4d1d-b026-80e4ebe31568-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:41.463895 master-0 kubenswrapper[38936]: I0216 21:39:41.463004 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f26172c1-371c-4d1d-b026-80e4ebe31568-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:41.463895 master-0 kubenswrapper[38936]: I0216 21:39:41.463014 38936 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f26172c1-371c-4d1d-b026-80e4ebe31568-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:41.469778 master-0 kubenswrapper[38936]: I0216 21:39:41.469434 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9c9ee25-a478-4932-ae3f-39162f313e62-kube-api-access-29pc4" (OuterVolumeSpecName: "kube-api-access-29pc4") pod "e9c9ee25-a478-4932-ae3f-39162f313e62" (UID: "e9c9ee25-a478-4932-ae3f-39162f313e62"). InnerVolumeSpecName "kube-api-access-29pc4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:39:41.534825 master-0 kubenswrapper[38936]: I0216 21:39:41.534587 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e9c9ee25-a478-4932-ae3f-39162f313e62" (UID: "e9c9ee25-a478-4932-ae3f-39162f313e62"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:39:41.541716 master-0 kubenswrapper[38936]: I0216 21:39:41.541574 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:39:41.543289 master-0 kubenswrapper[38936]: I0216 21:39:41.543233 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e9c9ee25-a478-4932-ae3f-39162f313e62" (UID: "e9c9ee25-a478-4932-ae3f-39162f313e62"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:39:41.547860 master-0 kubenswrapper[38936]: I0216 21:39:41.547795 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e9c9ee25-a478-4932-ae3f-39162f313e62" (UID: "e9c9ee25-a478-4932-ae3f-39162f313e62"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:39:41.553631 master-0 kubenswrapper[38936]: W0216 21:39:41.553576 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf3acf12_aba2_443e_9e32_0cc5b867ca51.slice/crio-7b2f0bebc07f6df938a98aaba4dc256f9795a779df18fb66f99ed91c42820dab WatchSource:0}: Error finding container 7b2f0bebc07f6df938a98aaba4dc256f9795a779df18fb66f99ed91c42820dab: Status 404 returned error can't find the container with id 7b2f0bebc07f6df938a98aaba4dc256f9795a779df18fb66f99ed91c42820dab Feb 16 21:39:41.561113 master-0 kubenswrapper[38936]: I0216 21:39:41.561063 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-config" (OuterVolumeSpecName: "config") pod "e9c9ee25-a478-4932-ae3f-39162f313e62" (UID: "e9c9ee25-a478-4932-ae3f-39162f313e62"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:39:41.569697 master-0 kubenswrapper[38936]: I0216 21:39:41.569627 38936 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:41.569697 master-0 kubenswrapper[38936]: I0216 21:39:41.569700 38936 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:41.569845 master-0 kubenswrapper[38936]: I0216 21:39:41.569714 38936 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:41.569845 master-0 kubenswrapper[38936]: I0216 21:39:41.569730 38936 reconciler_common.go:293] "Volume detached 
for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:41.569845 master-0 kubenswrapper[38936]: I0216 21:39:41.569745 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29pc4\" (UniqueName: \"kubernetes.io/projected/e9c9ee25-a478-4932-ae3f-39162f313e62-kube-api-access-29pc4\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:41.577893 master-0 kubenswrapper[38936]: I0216 21:39:41.577829 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e9c9ee25-a478-4932-ae3f-39162f313e62" (UID: "e9c9ee25-a478-4932-ae3f-39162f313e62"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:39:41.597866 master-0 kubenswrapper[38936]: I0216 21:39:41.597793 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-d25bz" event={"ID":"f26172c1-371c-4d1d-b026-80e4ebe31568","Type":"ContainerDied","Data":"8ebac5388e5c783430540cf44f57868472bf8863f9135ba04681ae06ac38b38e"} Feb 16 21:39:41.597866 master-0 kubenswrapper[38936]: I0216 21:39:41.597867 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ebac5388e5c783430540cf44f57868472bf8863f9135ba04681ae06ac38b38e" Feb 16 21:39:41.598140 master-0 kubenswrapper[38936]: I0216 21:39:41.597890 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-d25bz" Feb 16 21:39:41.600290 master-0 kubenswrapper[38936]: I0216 21:39:41.600244 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf3acf12-aba2-443e-9e32-0cc5b867ca51","Type":"ContainerStarted","Data":"7b2f0bebc07f6df938a98aaba4dc256f9795a779df18fb66f99ed91c42820dab"} Feb 16 21:39:41.602661 master-0 kubenswrapper[38936]: I0216 21:39:41.602594 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" event={"ID":"e9c9ee25-a478-4932-ae3f-39162f313e62","Type":"ContainerDied","Data":"72693dec909c5055c5ead3de470a7a67ee584a3e58ddbdbe95598448fe1658f7"} Feb 16 21:39:41.602735 master-0 kubenswrapper[38936]: I0216 21:39:41.602610 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-765cf7b859-fnh5l" Feb 16 21:39:41.602773 master-0 kubenswrapper[38936]: I0216 21:39:41.602677 38936 scope.go:117] "RemoveContainer" containerID="ed0fa4d5633f0dc5f43cff371a242595017805baf64082d7c4e5351c4abd058a" Feb 16 21:39:41.608377 master-0 kubenswrapper[38936]: I0216 21:39:41.608327 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"c1759520-fd09-489c-bea6-209d8c1144d2","Type":"ContainerStarted","Data":"6c44f50f1249051afa87efcceb0076fc766989f0103b385a59f583c63998f6ed"} Feb 16 21:39:41.608816 master-0 kubenswrapper[38936]: I0216 21:39:41.608771 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 21:39:41.654626 master-0 kubenswrapper[38936]: I0216 21:39:41.654581 38936 scope.go:117] "RemoveContainer" containerID="ecad561e587b0e24d4e18e6cb83d0c2c69999e3669008b1c804b7a261f1ff885" Feb 16 21:39:41.668627 master-0 kubenswrapper[38936]: I0216 21:39:41.668581 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 21:39:41.680089 master-0 kubenswrapper[38936]: I0216 21:39:41.674807 38936 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9c9ee25-a478-4932-ae3f-39162f313e62-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:41.694865 master-0 kubenswrapper[38936]: I0216 21:39:41.694764 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-compute-ironic-compute-0" podStartSLOduration=2.13806741 podStartE2EDuration="15.694741328s" podCreationTimestamp="2026-02-16 21:39:26 +0000 UTC" firstStartedPulling="2026-02-16 21:39:27.503111161 +0000 UTC m=+997.855114513" lastFinishedPulling="2026-02-16 21:39:41.059785069 +0000 UTC m=+1011.411788431" observedRunningTime="2026-02-16 21:39:41.651460132 +0000 UTC m=+1012.003463504" watchObservedRunningTime="2026-02-16 21:39:41.694741328 +0000 UTC m=+1012.046744690" Feb 16 21:39:41.695892 master-0 kubenswrapper[38936]: I0216 21:39:41.695838 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-765cf7b859-fnh5l"] Feb 16 21:39:41.717128 master-0 kubenswrapper[38936]: I0216 21:39:41.711184 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-765cf7b859-fnh5l"] Feb 16 21:39:41.886575 master-0 kubenswrapper[38936]: I0216 21:39:41.886507 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9c9ee25-a478-4932-ae3f-39162f313e62" path="/var/lib/kubelet/pods/e9c9ee25-a478-4932-ae3f-39162f313e62/volumes" Feb 16 21:39:42.566828 master-0 kubenswrapper[38936]: I0216 21:39:42.566752 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:39:42.567470 master-0 kubenswrapper[38936]: I0216 21:39:42.567059 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5d890f5a-b817-4e9e-bc23-18e1bf326d3b" 
containerName="nova-api-log" containerID="cri-o://bc1c98fa78af72c0f9710dd5ee4c34b6bac95acb84fc71724b1247e5343f0c6e" gracePeriod=30 Feb 16 21:39:42.567826 master-0 kubenswrapper[38936]: I0216 21:39:42.567663 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5d890f5a-b817-4e9e-bc23-18e1bf326d3b" containerName="nova-api-api" containerID="cri-o://81dd9230174d4ac3996f72a6e4cd78491e42c48e2c684efe07fa98cb386534bc" gracePeriod=30 Feb 16 21:39:42.587342 master-0 kubenswrapper[38936]: I0216 21:39:42.587257 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:39:42.587990 master-0 kubenswrapper[38936]: I0216 21:39:42.587592 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="85424a8b-db5b-47c1-8b31-86ebb9f6484f" containerName="nova-scheduler-scheduler" containerID="cri-o://8531eca939a4f510c4addd3ce37801629199fca68669cc70e148d2d9ddc39708" gracePeriod=30 Feb 16 21:39:42.631313 master-0 kubenswrapper[38936]: I0216 21:39:42.626353 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf3acf12-aba2-443e-9e32-0cc5b867ca51","Type":"ContainerStarted","Data":"10442ee258e552f0d708adb487807dd4079db062afc8c00b0f21efceb5462537"} Feb 16 21:39:42.631313 master-0 kubenswrapper[38936]: I0216 21:39:42.626423 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf3acf12-aba2-443e-9e32-0cc5b867ca51","Type":"ContainerStarted","Data":"b7a1aaf9b676a05dfcdf881396c1ff98d982226db449698acadf112b9f5473b0"} Feb 16 21:39:42.639231 master-0 kubenswrapper[38936]: I0216 21:39:42.639173 38936 generic.go:334] "Generic (PLEG): container finished" podID="37c815ef-1c3d-4b2a-b748-de04b8c4412c" containerID="50a94c27c885aa35c9bd973e857a3c4de6c450ddd454bcb361b38f85d44c5553" exitCode=0 Feb 16 21:39:42.640433 master-0 kubenswrapper[38936]: I0216 
21:39:42.640394 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"37c815ef-1c3d-4b2a-b748-de04b8c4412c","Type":"ContainerDied","Data":"50a94c27c885aa35c9bd973e857a3c4de6c450ddd454bcb361b38f85d44c5553"} Feb 16 21:39:42.641289 master-0 kubenswrapper[38936]: I0216 21:39:42.641193 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:39:42.680770 master-0 kubenswrapper[38936]: I0216 21:39:42.677331 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=7.677306082 podStartE2EDuration="7.677306082s" podCreationTimestamp="2026-02-16 21:39:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:39:42.660043642 +0000 UTC m=+1013.012047014" watchObservedRunningTime="2026-02-16 21:39:42.677306082 +0000 UTC m=+1013.029309444" Feb 16 21:39:43.662532 master-0 kubenswrapper[38936]: I0216 21:39:43.662460 38936 generic.go:334] "Generic (PLEG): container finished" podID="5d890f5a-b817-4e9e-bc23-18e1bf326d3b" containerID="bc1c98fa78af72c0f9710dd5ee4c34b6bac95acb84fc71724b1247e5343f0c6e" exitCode=143 Feb 16 21:39:43.663002 master-0 kubenswrapper[38936]: I0216 21:39:43.662546 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5d890f5a-b817-4e9e-bc23-18e1bf326d3b","Type":"ContainerDied","Data":"bc1c98fa78af72c0f9710dd5ee4c34b6bac95acb84fc71724b1247e5343f0c6e"} Feb 16 21:39:43.667515 master-0 kubenswrapper[38936]: I0216 21:39:43.667384 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"37c815ef-1c3d-4b2a-b748-de04b8c4412c","Type":"ContainerStarted","Data":"db7cf4bc4986b13aefe700d47a536bf8ac2ab044c1826f6149d5194f8a603a0a"} Feb 16 21:39:43.670178 master-0 kubenswrapper[38936]: I0216 21:39:43.670137 38936 generic.go:334] "Generic 
(PLEG): container finished" podID="f308b178-bf97-40ee-8754-fd2a13d6242f" containerID="134d4e20b6a1df2ce0594b11035592abf9bc606635db583ad369f02d00637e65" exitCode=0 Feb 16 21:39:43.670968 master-0 kubenswrapper[38936]: I0216 21:39:43.670817 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5vr4r" event={"ID":"f308b178-bf97-40ee-8754-fd2a13d6242f","Type":"ContainerDied","Data":"134d4e20b6a1df2ce0594b11035592abf9bc606635db583ad369f02d00637e65"} Feb 16 21:39:44.699189 master-0 kubenswrapper[38936]: I0216 21:39:44.699126 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"37c815ef-1c3d-4b2a-b748-de04b8c4412c","Type":"ContainerStarted","Data":"33974bbdd3bfbba64bc3b3523e0713f6ba8d38cf50714315151b6affb7d0902c"} Feb 16 21:39:44.699189 master-0 kubenswrapper[38936]: I0216 21:39:44.699177 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"37c815ef-1c3d-4b2a-b748-de04b8c4412c","Type":"ContainerStarted","Data":"e32a7ab0f4c8cc7b89e082e195c055f30e0f1d5e6576f32ba9b70558ee5be6d0"} Feb 16 21:39:44.699862 master-0 kubenswrapper[38936]: I0216 21:39:44.699346 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Feb 16 21:39:44.713768 master-0 kubenswrapper[38936]: I0216 21:39:44.711704 38936 generic.go:334] "Generic (PLEG): container finished" podID="85424a8b-db5b-47c1-8b31-86ebb9f6484f" containerID="8531eca939a4f510c4addd3ce37801629199fca68669cc70e148d2d9ddc39708" exitCode=0 Feb 16 21:39:44.713768 master-0 kubenswrapper[38936]: I0216 21:39:44.712061 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bf3acf12-aba2-443e-9e32-0cc5b867ca51" containerName="nova-metadata-log" containerID="cri-o://b7a1aaf9b676a05dfcdf881396c1ff98d982226db449698acadf112b9f5473b0" gracePeriod=30 Feb 16 21:39:44.713768 master-0 
kubenswrapper[38936]: I0216 21:39:44.712162 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"85424a8b-db5b-47c1-8b31-86ebb9f6484f","Type":"ContainerDied","Data":"8531eca939a4f510c4addd3ce37801629199fca68669cc70e148d2d9ddc39708"} Feb 16 21:39:44.713768 master-0 kubenswrapper[38936]: I0216 21:39:44.712294 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bf3acf12-aba2-443e-9e32-0cc5b867ca51" containerName="nova-metadata-metadata" containerID="cri-o://10442ee258e552f0d708adb487807dd4079db062afc8c00b0f21efceb5462537" gracePeriod=30 Feb 16 21:39:44.780392 master-0 kubenswrapper[38936]: I0216 21:39:44.780312 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-conductor-0" podStartSLOduration=60.952167091 podStartE2EDuration="1m41.780285253s" podCreationTimestamp="2026-02-16 21:38:03 +0000 UTC" firstStartedPulling="2026-02-16 21:38:15.177009757 +0000 UTC m=+925.529013119" lastFinishedPulling="2026-02-16 21:38:56.005127919 +0000 UTC m=+966.357131281" observedRunningTime="2026-02-16 21:39:44.755979974 +0000 UTC m=+1015.107983336" watchObservedRunningTime="2026-02-16 21:39:44.780285253 +0000 UTC m=+1015.132288615" Feb 16 21:39:45.132832 master-0 kubenswrapper[38936]: I0216 21:39:45.132613 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:39:45.196379 master-0 kubenswrapper[38936]: I0216 21:39:45.183192 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5vr4r" Feb 16 21:39:45.269870 master-0 kubenswrapper[38936]: I0216 21:39:45.269151 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f308b178-bf97-40ee-8754-fd2a13d6242f-combined-ca-bundle\") pod \"f308b178-bf97-40ee-8754-fd2a13d6242f\" (UID: \"f308b178-bf97-40ee-8754-fd2a13d6242f\") " Feb 16 21:39:45.269870 master-0 kubenswrapper[38936]: I0216 21:39:45.269247 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f308b178-bf97-40ee-8754-fd2a13d6242f-scripts\") pod \"f308b178-bf97-40ee-8754-fd2a13d6242f\" (UID: \"f308b178-bf97-40ee-8754-fd2a13d6242f\") " Feb 16 21:39:45.269870 master-0 kubenswrapper[38936]: I0216 21:39:45.269280 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8jb9\" (UniqueName: \"kubernetes.io/projected/85424a8b-db5b-47c1-8b31-86ebb9f6484f-kube-api-access-r8jb9\") pod \"85424a8b-db5b-47c1-8b31-86ebb9f6484f\" (UID: \"85424a8b-db5b-47c1-8b31-86ebb9f6484f\") " Feb 16 21:39:45.269870 master-0 kubenswrapper[38936]: I0216 21:39:45.269453 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhfks\" (UniqueName: \"kubernetes.io/projected/f308b178-bf97-40ee-8754-fd2a13d6242f-kube-api-access-bhfks\") pod \"f308b178-bf97-40ee-8754-fd2a13d6242f\" (UID: \"f308b178-bf97-40ee-8754-fd2a13d6242f\") " Feb 16 21:39:45.269870 master-0 kubenswrapper[38936]: I0216 21:39:45.269807 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f308b178-bf97-40ee-8754-fd2a13d6242f-config-data\") pod \"f308b178-bf97-40ee-8754-fd2a13d6242f\" (UID: \"f308b178-bf97-40ee-8754-fd2a13d6242f\") " Feb 16 21:39:45.269870 master-0 kubenswrapper[38936]: I0216 21:39:45.269840 
38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85424a8b-db5b-47c1-8b31-86ebb9f6484f-config-data\") pod \"85424a8b-db5b-47c1-8b31-86ebb9f6484f\" (UID: \"85424a8b-db5b-47c1-8b31-86ebb9f6484f\") " Feb 16 21:39:45.269870 master-0 kubenswrapper[38936]: I0216 21:39:45.269858 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85424a8b-db5b-47c1-8b31-86ebb9f6484f-combined-ca-bundle\") pod \"85424a8b-db5b-47c1-8b31-86ebb9f6484f\" (UID: \"85424a8b-db5b-47c1-8b31-86ebb9f6484f\") " Feb 16 21:39:45.274394 master-0 kubenswrapper[38936]: I0216 21:39:45.274272 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85424a8b-db5b-47c1-8b31-86ebb9f6484f-kube-api-access-r8jb9" (OuterVolumeSpecName: "kube-api-access-r8jb9") pod "85424a8b-db5b-47c1-8b31-86ebb9f6484f" (UID: "85424a8b-db5b-47c1-8b31-86ebb9f6484f"). InnerVolumeSpecName "kube-api-access-r8jb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:39:45.275467 master-0 kubenswrapper[38936]: I0216 21:39:45.274744 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f308b178-bf97-40ee-8754-fd2a13d6242f-kube-api-access-bhfks" (OuterVolumeSpecName: "kube-api-access-bhfks") pod "f308b178-bf97-40ee-8754-fd2a13d6242f" (UID: "f308b178-bf97-40ee-8754-fd2a13d6242f"). InnerVolumeSpecName "kube-api-access-bhfks". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:39:45.278771 master-0 kubenswrapper[38936]: I0216 21:39:45.275705 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f308b178-bf97-40ee-8754-fd2a13d6242f-scripts" (OuterVolumeSpecName: "scripts") pod "f308b178-bf97-40ee-8754-fd2a13d6242f" (UID: "f308b178-bf97-40ee-8754-fd2a13d6242f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:39:45.303697 master-0 kubenswrapper[38936]: I0216 21:39:45.303602 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85424a8b-db5b-47c1-8b31-86ebb9f6484f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85424a8b-db5b-47c1-8b31-86ebb9f6484f" (UID: "85424a8b-db5b-47c1-8b31-86ebb9f6484f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:39:45.304779 master-0 kubenswrapper[38936]: I0216 21:39:45.304713 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f308b178-bf97-40ee-8754-fd2a13d6242f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f308b178-bf97-40ee-8754-fd2a13d6242f" (UID: "f308b178-bf97-40ee-8754-fd2a13d6242f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:39:45.306398 master-0 kubenswrapper[38936]: I0216 21:39:45.306312 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85424a8b-db5b-47c1-8b31-86ebb9f6484f-config-data" (OuterVolumeSpecName: "config-data") pod "85424a8b-db5b-47c1-8b31-86ebb9f6484f" (UID: "85424a8b-db5b-47c1-8b31-86ebb9f6484f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:39:45.309391 master-0 kubenswrapper[38936]: I0216 21:39:45.309324 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f308b178-bf97-40ee-8754-fd2a13d6242f-config-data" (OuterVolumeSpecName: "config-data") pod "f308b178-bf97-40ee-8754-fd2a13d6242f" (UID: "f308b178-bf97-40ee-8754-fd2a13d6242f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:39:45.373304 master-0 kubenswrapper[38936]: I0216 21:39:45.373215 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f308b178-bf97-40ee-8754-fd2a13d6242f-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:45.373304 master-0 kubenswrapper[38936]: I0216 21:39:45.373268 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85424a8b-db5b-47c1-8b31-86ebb9f6484f-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:45.373304 master-0 kubenswrapper[38936]: I0216 21:39:45.373284 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85424a8b-db5b-47c1-8b31-86ebb9f6484f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:45.373304 master-0 kubenswrapper[38936]: I0216 21:39:45.373305 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f308b178-bf97-40ee-8754-fd2a13d6242f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:45.373304 master-0 kubenswrapper[38936]: I0216 21:39:45.373313 38936 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f308b178-bf97-40ee-8754-fd2a13d6242f-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:45.373304 master-0 kubenswrapper[38936]: I0216 21:39:45.373335 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8jb9\" (UniqueName: \"kubernetes.io/projected/85424a8b-db5b-47c1-8b31-86ebb9f6484f-kube-api-access-r8jb9\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:45.373861 master-0 kubenswrapper[38936]: I0216 21:39:45.373347 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhfks\" (UniqueName: 
\"kubernetes.io/projected/f308b178-bf97-40ee-8754-fd2a13d6242f-kube-api-access-bhfks\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:45.455716 master-0 kubenswrapper[38936]: I0216 21:39:45.455551 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:39:45.577666 master-0 kubenswrapper[38936]: I0216 21:39:45.577532 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf3acf12-aba2-443e-9e32-0cc5b867ca51-config-data\") pod \"bf3acf12-aba2-443e-9e32-0cc5b867ca51\" (UID: \"bf3acf12-aba2-443e-9e32-0cc5b867ca51\") " Feb 16 21:39:45.578082 master-0 kubenswrapper[38936]: I0216 21:39:45.577737 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf3acf12-aba2-443e-9e32-0cc5b867ca51-nova-metadata-tls-certs\") pod \"bf3acf12-aba2-443e-9e32-0cc5b867ca51\" (UID: \"bf3acf12-aba2-443e-9e32-0cc5b867ca51\") " Feb 16 21:39:45.578082 master-0 kubenswrapper[38936]: I0216 21:39:45.577879 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf3acf12-aba2-443e-9e32-0cc5b867ca51-logs\") pod \"bf3acf12-aba2-443e-9e32-0cc5b867ca51\" (UID: \"bf3acf12-aba2-443e-9e32-0cc5b867ca51\") " Feb 16 21:39:45.578082 master-0 kubenswrapper[38936]: I0216 21:39:45.577980 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf3acf12-aba2-443e-9e32-0cc5b867ca51-combined-ca-bundle\") pod \"bf3acf12-aba2-443e-9e32-0cc5b867ca51\" (UID: \"bf3acf12-aba2-443e-9e32-0cc5b867ca51\") " Feb 16 21:39:45.578082 master-0 kubenswrapper[38936]: I0216 21:39:45.578051 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx6mp\" (UniqueName: 
\"kubernetes.io/projected/bf3acf12-aba2-443e-9e32-0cc5b867ca51-kube-api-access-kx6mp\") pod \"bf3acf12-aba2-443e-9e32-0cc5b867ca51\" (UID: \"bf3acf12-aba2-443e-9e32-0cc5b867ca51\") " Feb 16 21:39:45.582815 master-0 kubenswrapper[38936]: I0216 21:39:45.582777 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf3acf12-aba2-443e-9e32-0cc5b867ca51-kube-api-access-kx6mp" (OuterVolumeSpecName: "kube-api-access-kx6mp") pod "bf3acf12-aba2-443e-9e32-0cc5b867ca51" (UID: "bf3acf12-aba2-443e-9e32-0cc5b867ca51"). InnerVolumeSpecName "kube-api-access-kx6mp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:39:45.583177 master-0 kubenswrapper[38936]: I0216 21:39:45.583150 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf3acf12-aba2-443e-9e32-0cc5b867ca51-logs" (OuterVolumeSpecName: "logs") pod "bf3acf12-aba2-443e-9e32-0cc5b867ca51" (UID: "bf3acf12-aba2-443e-9e32-0cc5b867ca51"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:39:45.618611 master-0 kubenswrapper[38936]: I0216 21:39:45.617776 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf3acf12-aba2-443e-9e32-0cc5b867ca51-config-data" (OuterVolumeSpecName: "config-data") pod "bf3acf12-aba2-443e-9e32-0cc5b867ca51" (UID: "bf3acf12-aba2-443e-9e32-0cc5b867ca51"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:39:45.642411 master-0 kubenswrapper[38936]: I0216 21:39:45.642335 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf3acf12-aba2-443e-9e32-0cc5b867ca51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf3acf12-aba2-443e-9e32-0cc5b867ca51" (UID: "bf3acf12-aba2-443e-9e32-0cc5b867ca51"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:39:45.651463 master-0 kubenswrapper[38936]: I0216 21:39:45.651424 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf3acf12-aba2-443e-9e32-0cc5b867ca51-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "bf3acf12-aba2-443e-9e32-0cc5b867ca51" (UID: "bf3acf12-aba2-443e-9e32-0cc5b867ca51"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:39:45.682046 master-0 kubenswrapper[38936]: I0216 21:39:45.681987 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf3acf12-aba2-443e-9e32-0cc5b867ca51-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:45.682046 master-0 kubenswrapper[38936]: I0216 21:39:45.682040 38936 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf3acf12-aba2-443e-9e32-0cc5b867ca51-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:45.682046 master-0 kubenswrapper[38936]: I0216 21:39:45.682057 38936 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf3acf12-aba2-443e-9e32-0cc5b867ca51-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:45.682313 master-0 kubenswrapper[38936]: I0216 21:39:45.682071 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf3acf12-aba2-443e-9e32-0cc5b867ca51-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:45.682313 master-0 kubenswrapper[38936]: I0216 21:39:45.682084 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx6mp\" (UniqueName: \"kubernetes.io/projected/bf3acf12-aba2-443e-9e32-0cc5b867ca51-kube-api-access-kx6mp\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:45.724164 
master-0 kubenswrapper[38936]: I0216 21:39:45.723986 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5vr4r" event={"ID":"f308b178-bf97-40ee-8754-fd2a13d6242f","Type":"ContainerDied","Data":"b9325dc3de236981a821d3aebfca4f200f5c4b5e8eaae8b6951c3fa23af608dc"} Feb 16 21:39:45.724164 master-0 kubenswrapper[38936]: I0216 21:39:45.724051 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9325dc3de236981a821d3aebfca4f200f5c4b5e8eaae8b6951c3fa23af608dc" Feb 16 21:39:45.724164 master-0 kubenswrapper[38936]: I0216 21:39:45.724063 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5vr4r" Feb 16 21:39:45.725678 master-0 kubenswrapper[38936]: I0216 21:39:45.725638 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"85424a8b-db5b-47c1-8b31-86ebb9f6484f","Type":"ContainerDied","Data":"06031cbf05fd3f13eda85643aee243eba4ef1407f7db5ce8ac7ced964d9eaeaa"} Feb 16 21:39:45.725752 master-0 kubenswrapper[38936]: I0216 21:39:45.725679 38936 scope.go:117] "RemoveContainer" containerID="8531eca939a4f510c4addd3ce37801629199fca68669cc70e148d2d9ddc39708" Feb 16 21:39:45.725752 master-0 kubenswrapper[38936]: I0216 21:39:45.725690 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:39:45.734116 master-0 kubenswrapper[38936]: I0216 21:39:45.733886 38936 generic.go:334] "Generic (PLEG): container finished" podID="bf3acf12-aba2-443e-9e32-0cc5b867ca51" containerID="10442ee258e552f0d708adb487807dd4079db062afc8c00b0f21efceb5462537" exitCode=0 Feb 16 21:39:45.734116 master-0 kubenswrapper[38936]: I0216 21:39:45.733959 38936 generic.go:334] "Generic (PLEG): container finished" podID="bf3acf12-aba2-443e-9e32-0cc5b867ca51" containerID="b7a1aaf9b676a05dfcdf881396c1ff98d982226db449698acadf112b9f5473b0" exitCode=143 Feb 16 21:39:45.734116 master-0 kubenswrapper[38936]: I0216 21:39:45.733960 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:39:45.734116 master-0 kubenswrapper[38936]: I0216 21:39:45.734000 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf3acf12-aba2-443e-9e32-0cc5b867ca51","Type":"ContainerDied","Data":"10442ee258e552f0d708adb487807dd4079db062afc8c00b0f21efceb5462537"} Feb 16 21:39:45.734116 master-0 kubenswrapper[38936]: I0216 21:39:45.734066 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf3acf12-aba2-443e-9e32-0cc5b867ca51","Type":"ContainerDied","Data":"b7a1aaf9b676a05dfcdf881396c1ff98d982226db449698acadf112b9f5473b0"} Feb 16 21:39:45.734116 master-0 kubenswrapper[38936]: I0216 21:39:45.734086 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bf3acf12-aba2-443e-9e32-0cc5b867ca51","Type":"ContainerDied","Data":"7b2f0bebc07f6df938a98aaba4dc256f9795a779df18fb66f99ed91c42820dab"} Feb 16 21:39:45.734742 master-0 kubenswrapper[38936]: I0216 21:39:45.734688 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Feb 16 21:39:45.759119 master-0 kubenswrapper[38936]: I0216 21:39:45.758945 38936 
scope.go:117] "RemoveContainer" containerID="10442ee258e552f0d708adb487807dd4079db062afc8c00b0f21efceb5462537" Feb 16 21:39:45.830166 master-0 kubenswrapper[38936]: I0216 21:39:45.830134 38936 scope.go:117] "RemoveContainer" containerID="b7a1aaf9b676a05dfcdf881396c1ff98d982226db449698acadf112b9f5473b0" Feb 16 21:39:45.855431 master-0 kubenswrapper[38936]: I0216 21:39:45.855347 38936 scope.go:117] "RemoveContainer" containerID="10442ee258e552f0d708adb487807dd4079db062afc8c00b0f21efceb5462537" Feb 16 21:39:45.856136 master-0 kubenswrapper[38936]: E0216 21:39:45.856091 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10442ee258e552f0d708adb487807dd4079db062afc8c00b0f21efceb5462537\": container with ID starting with 10442ee258e552f0d708adb487807dd4079db062afc8c00b0f21efceb5462537 not found: ID does not exist" containerID="10442ee258e552f0d708adb487807dd4079db062afc8c00b0f21efceb5462537" Feb 16 21:39:45.856205 master-0 kubenswrapper[38936]: I0216 21:39:45.856153 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10442ee258e552f0d708adb487807dd4079db062afc8c00b0f21efceb5462537"} err="failed to get container status \"10442ee258e552f0d708adb487807dd4079db062afc8c00b0f21efceb5462537\": rpc error: code = NotFound desc = could not find container \"10442ee258e552f0d708adb487807dd4079db062afc8c00b0f21efceb5462537\": container with ID starting with 10442ee258e552f0d708adb487807dd4079db062afc8c00b0f21efceb5462537 not found: ID does not exist" Feb 16 21:39:45.856205 master-0 kubenswrapper[38936]: I0216 21:39:45.856182 38936 scope.go:117] "RemoveContainer" containerID="b7a1aaf9b676a05dfcdf881396c1ff98d982226db449698acadf112b9f5473b0" Feb 16 21:39:45.856560 master-0 kubenswrapper[38936]: E0216 21:39:45.856530 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b7a1aaf9b676a05dfcdf881396c1ff98d982226db449698acadf112b9f5473b0\": container with ID starting with b7a1aaf9b676a05dfcdf881396c1ff98d982226db449698acadf112b9f5473b0 not found: ID does not exist" containerID="b7a1aaf9b676a05dfcdf881396c1ff98d982226db449698acadf112b9f5473b0" Feb 16 21:39:45.856683 master-0 kubenswrapper[38936]: I0216 21:39:45.856637 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7a1aaf9b676a05dfcdf881396c1ff98d982226db449698acadf112b9f5473b0"} err="failed to get container status \"b7a1aaf9b676a05dfcdf881396c1ff98d982226db449698acadf112b9f5473b0\": rpc error: code = NotFound desc = could not find container \"b7a1aaf9b676a05dfcdf881396c1ff98d982226db449698acadf112b9f5473b0\": container with ID starting with b7a1aaf9b676a05dfcdf881396c1ff98d982226db449698acadf112b9f5473b0 not found: ID does not exist" Feb 16 21:39:45.856767 master-0 kubenswrapper[38936]: I0216 21:39:45.856754 38936 scope.go:117] "RemoveContainer" containerID="10442ee258e552f0d708adb487807dd4079db062afc8c00b0f21efceb5462537" Feb 16 21:39:45.857123 master-0 kubenswrapper[38936]: I0216 21:39:45.857104 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10442ee258e552f0d708adb487807dd4079db062afc8c00b0f21efceb5462537"} err="failed to get container status \"10442ee258e552f0d708adb487807dd4079db062afc8c00b0f21efceb5462537\": rpc error: code = NotFound desc = could not find container \"10442ee258e552f0d708adb487807dd4079db062afc8c00b0f21efceb5462537\": container with ID starting with 10442ee258e552f0d708adb487807dd4079db062afc8c00b0f21efceb5462537 not found: ID does not exist" Feb 16 21:39:45.857209 master-0 kubenswrapper[38936]: I0216 21:39:45.857197 38936 scope.go:117] "RemoveContainer" containerID="b7a1aaf9b676a05dfcdf881396c1ff98d982226db449698acadf112b9f5473b0" Feb 16 21:39:45.857498 master-0 kubenswrapper[38936]: I0216 21:39:45.857476 38936 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7a1aaf9b676a05dfcdf881396c1ff98d982226db449698acadf112b9f5473b0"} err="failed to get container status \"b7a1aaf9b676a05dfcdf881396c1ff98d982226db449698acadf112b9f5473b0\": rpc error: code = NotFound desc = could not find container \"b7a1aaf9b676a05dfcdf881396c1ff98d982226db449698acadf112b9f5473b0\": container with ID starting with b7a1aaf9b676a05dfcdf881396c1ff98d982226db449698acadf112b9f5473b0 not found: ID does not exist" Feb 16 21:39:45.928638 master-0 kubenswrapper[38936]: I0216 21:39:45.928497 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:39:45.951330 master-0 kubenswrapper[38936]: I0216 21:39:45.949717 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:39:45.967560 master-0 kubenswrapper[38936]: I0216 21:39:45.963714 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:39:45.967560 master-0 kubenswrapper[38936]: E0216 21:39:45.964370 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c9ee25-a478-4932-ae3f-39162f313e62" containerName="dnsmasq-dns" Feb 16 21:39:45.967560 master-0 kubenswrapper[38936]: I0216 21:39:45.964392 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c9ee25-a478-4932-ae3f-39162f313e62" containerName="dnsmasq-dns" Feb 16 21:39:45.967560 master-0 kubenswrapper[38936]: E0216 21:39:45.964428 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf3acf12-aba2-443e-9e32-0cc5b867ca51" containerName="nova-metadata-log" Feb 16 21:39:45.967560 master-0 kubenswrapper[38936]: I0216 21:39:45.964435 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf3acf12-aba2-443e-9e32-0cc5b867ca51" containerName="nova-metadata-log" Feb 16 21:39:45.967560 master-0 kubenswrapper[38936]: E0216 21:39:45.964453 38936 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e9c9ee25-a478-4932-ae3f-39162f313e62" containerName="init" Feb 16 21:39:45.967560 master-0 kubenswrapper[38936]: I0216 21:39:45.964460 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c9ee25-a478-4932-ae3f-39162f313e62" containerName="init" Feb 16 21:39:45.967560 master-0 kubenswrapper[38936]: E0216 21:39:45.964486 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f308b178-bf97-40ee-8754-fd2a13d6242f" containerName="nova-cell1-conductor-db-sync" Feb 16 21:39:45.967560 master-0 kubenswrapper[38936]: I0216 21:39:45.964493 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="f308b178-bf97-40ee-8754-fd2a13d6242f" containerName="nova-cell1-conductor-db-sync" Feb 16 21:39:45.967560 master-0 kubenswrapper[38936]: E0216 21:39:45.964510 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85424a8b-db5b-47c1-8b31-86ebb9f6484f" containerName="nova-scheduler-scheduler" Feb 16 21:39:45.967560 master-0 kubenswrapper[38936]: I0216 21:39:45.964516 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="85424a8b-db5b-47c1-8b31-86ebb9f6484f" containerName="nova-scheduler-scheduler" Feb 16 21:39:45.967560 master-0 kubenswrapper[38936]: E0216 21:39:45.964527 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26172c1-371c-4d1d-b026-80e4ebe31568" containerName="nova-manage" Feb 16 21:39:45.967560 master-0 kubenswrapper[38936]: I0216 21:39:45.964533 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26172c1-371c-4d1d-b026-80e4ebe31568" containerName="nova-manage" Feb 16 21:39:45.967560 master-0 kubenswrapper[38936]: E0216 21:39:45.964558 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf3acf12-aba2-443e-9e32-0cc5b867ca51" containerName="nova-metadata-metadata" Feb 16 21:39:45.967560 master-0 kubenswrapper[38936]: I0216 21:39:45.964564 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf3acf12-aba2-443e-9e32-0cc5b867ca51" 
containerName="nova-metadata-metadata" Feb 16 21:39:45.967560 master-0 kubenswrapper[38936]: I0216 21:39:45.964880 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9c9ee25-a478-4932-ae3f-39162f313e62" containerName="dnsmasq-dns" Feb 16 21:39:45.967560 master-0 kubenswrapper[38936]: I0216 21:39:45.964906 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26172c1-371c-4d1d-b026-80e4ebe31568" containerName="nova-manage" Feb 16 21:39:45.967560 master-0 kubenswrapper[38936]: I0216 21:39:45.964926 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="f308b178-bf97-40ee-8754-fd2a13d6242f" containerName="nova-cell1-conductor-db-sync" Feb 16 21:39:45.967560 master-0 kubenswrapper[38936]: I0216 21:39:45.964940 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="85424a8b-db5b-47c1-8b31-86ebb9f6484f" containerName="nova-scheduler-scheduler" Feb 16 21:39:45.967560 master-0 kubenswrapper[38936]: I0216 21:39:45.964960 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf3acf12-aba2-443e-9e32-0cc5b867ca51" containerName="nova-metadata-metadata" Feb 16 21:39:45.967560 master-0 kubenswrapper[38936]: I0216 21:39:45.964982 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf3acf12-aba2-443e-9e32-0cc5b867ca51" containerName="nova-metadata-log" Feb 16 21:39:45.967560 master-0 kubenswrapper[38936]: I0216 21:39:45.965929 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:39:45.970808 master-0 kubenswrapper[38936]: I0216 21:39:45.968609 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 21:39:45.984312 master-0 kubenswrapper[38936]: I0216 21:39:45.981771 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:39:45.996069 master-0 kubenswrapper[38936]: I0216 21:39:45.995619 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90445da2-719c-482a-ab08-8ee50a317377-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"90445da2-719c-482a-ab08-8ee50a317377\") " pod="openstack/nova-scheduler-0" Feb 16 21:39:45.996069 master-0 kubenswrapper[38936]: I0216 21:39:45.995681 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nb69\" (UniqueName: \"kubernetes.io/projected/90445da2-719c-482a-ab08-8ee50a317377-kube-api-access-8nb69\") pod \"nova-scheduler-0\" (UID: \"90445da2-719c-482a-ab08-8ee50a317377\") " pod="openstack/nova-scheduler-0" Feb 16 21:39:45.996069 master-0 kubenswrapper[38936]: I0216 21:39:45.995784 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90445da2-719c-482a-ab08-8ee50a317377-config-data\") pod \"nova-scheduler-0\" (UID: \"90445da2-719c-482a-ab08-8ee50a317377\") " pod="openstack/nova-scheduler-0" Feb 16 21:39:46.009843 master-0 kubenswrapper[38936]: I0216 21:39:45.999775 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:39:46.016031 master-0 kubenswrapper[38936]: I0216 21:39:46.015957 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:39:46.059832 master-0 kubenswrapper[38936]: 
I0216 21:39:46.059737 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:39:46.062138 master-0 kubenswrapper[38936]: I0216 21:39:46.062100 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:39:46.064921 master-0 kubenswrapper[38936]: I0216 21:39:46.064884 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 21:39:46.065173 master-0 kubenswrapper[38936]: I0216 21:39:46.065148 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 21:39:46.081448 master-0 kubenswrapper[38936]: I0216 21:39:46.081402 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 21:39:46.083770 master-0 kubenswrapper[38936]: I0216 21:39:46.083733 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 21:39:46.095317 master-0 kubenswrapper[38936]: I0216 21:39:46.089941 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 21:39:46.100332 master-0 kubenswrapper[38936]: I0216 21:39:46.097239 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff2d05d-b1a8-4695-b186-f2422a5c8186-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6ff2d05d-b1a8-4695-b186-f2422a5c8186\") " pod="openstack/nova-metadata-0" Feb 16 21:39:46.100332 master-0 kubenswrapper[38936]: I0216 21:39:46.097318 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd21264c-8c3d-451f-820c-035f0e8afb27-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"cd21264c-8c3d-451f-820c-035f0e8afb27\") " 
pod="openstack/nova-cell1-conductor-0" Feb 16 21:39:46.100332 master-0 kubenswrapper[38936]: I0216 21:39:46.097347 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ff2d05d-b1a8-4695-b186-f2422a5c8186-config-data\") pod \"nova-metadata-0\" (UID: \"6ff2d05d-b1a8-4695-b186-f2422a5c8186\") " pod="openstack/nova-metadata-0" Feb 16 21:39:46.100332 master-0 kubenswrapper[38936]: I0216 21:39:46.097545 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ff2d05d-b1a8-4695-b186-f2422a5c8186-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6ff2d05d-b1a8-4695-b186-f2422a5c8186\") " pod="openstack/nova-metadata-0" Feb 16 21:39:46.100332 master-0 kubenswrapper[38936]: I0216 21:39:46.097631 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90445da2-719c-482a-ab08-8ee50a317377-config-data\") pod \"nova-scheduler-0\" (UID: \"90445da2-719c-482a-ab08-8ee50a317377\") " pod="openstack/nova-scheduler-0" Feb 16 21:39:46.100332 master-0 kubenswrapper[38936]: I0216 21:39:46.097937 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd21264c-8c3d-451f-820c-035f0e8afb27-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"cd21264c-8c3d-451f-820c-035f0e8afb27\") " pod="openstack/nova-cell1-conductor-0" Feb 16 21:39:46.100332 master-0 kubenswrapper[38936]: I0216 21:39:46.097989 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx2ht\" (UniqueName: \"kubernetes.io/projected/cd21264c-8c3d-451f-820c-035f0e8afb27-kube-api-access-gx2ht\") pod \"nova-cell1-conductor-0\" (UID: 
\"cd21264c-8c3d-451f-820c-035f0e8afb27\") " pod="openstack/nova-cell1-conductor-0" Feb 16 21:39:46.100332 master-0 kubenswrapper[38936]: I0216 21:39:46.098031 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdr8z\" (UniqueName: \"kubernetes.io/projected/6ff2d05d-b1a8-4695-b186-f2422a5c8186-kube-api-access-kdr8z\") pod \"nova-metadata-0\" (UID: \"6ff2d05d-b1a8-4695-b186-f2422a5c8186\") " pod="openstack/nova-metadata-0" Feb 16 21:39:46.100332 master-0 kubenswrapper[38936]: I0216 21:39:46.098345 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ff2d05d-b1a8-4695-b186-f2422a5c8186-logs\") pod \"nova-metadata-0\" (UID: \"6ff2d05d-b1a8-4695-b186-f2422a5c8186\") " pod="openstack/nova-metadata-0" Feb 16 21:39:46.100332 master-0 kubenswrapper[38936]: I0216 21:39:46.098401 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90445da2-719c-482a-ab08-8ee50a317377-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"90445da2-719c-482a-ab08-8ee50a317377\") " pod="openstack/nova-scheduler-0" Feb 16 21:39:46.100332 master-0 kubenswrapper[38936]: I0216 21:39:46.098419 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nb69\" (UniqueName: \"kubernetes.io/projected/90445da2-719c-482a-ab08-8ee50a317377-kube-api-access-8nb69\") pod \"nova-scheduler-0\" (UID: \"90445da2-719c-482a-ab08-8ee50a317377\") " pod="openstack/nova-scheduler-0" Feb 16 21:39:46.104992 master-0 kubenswrapper[38936]: I0216 21:39:46.103219 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90445da2-719c-482a-ab08-8ee50a317377-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"90445da2-719c-482a-ab08-8ee50a317377\") " 
pod="openstack/nova-scheduler-0" Feb 16 21:39:46.104992 master-0 kubenswrapper[38936]: I0216 21:39:46.104409 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:39:46.107855 master-0 kubenswrapper[38936]: I0216 21:39:46.107258 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90445da2-719c-482a-ab08-8ee50a317377-config-data\") pod \"nova-scheduler-0\" (UID: \"90445da2-719c-482a-ab08-8ee50a317377\") " pod="openstack/nova-scheduler-0" Feb 16 21:39:46.118781 master-0 kubenswrapper[38936]: I0216 21:39:46.118506 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nb69\" (UniqueName: \"kubernetes.io/projected/90445da2-719c-482a-ab08-8ee50a317377-kube-api-access-8nb69\") pod \"nova-scheduler-0\" (UID: \"90445da2-719c-482a-ab08-8ee50a317377\") " pod="openstack/nova-scheduler-0" Feb 16 21:39:46.123694 master-0 kubenswrapper[38936]: I0216 21:39:46.122980 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 21:39:46.199824 master-0 kubenswrapper[38936]: I0216 21:39:46.199740 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd21264c-8c3d-451f-820c-035f0e8afb27-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"cd21264c-8c3d-451f-820c-035f0e8afb27\") " pod="openstack/nova-cell1-conductor-0" Feb 16 21:39:46.200054 master-0 kubenswrapper[38936]: I0216 21:39:46.199839 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx2ht\" (UniqueName: \"kubernetes.io/projected/cd21264c-8c3d-451f-820c-035f0e8afb27-kube-api-access-gx2ht\") pod \"nova-cell1-conductor-0\" (UID: \"cd21264c-8c3d-451f-820c-035f0e8afb27\") " pod="openstack/nova-cell1-conductor-0" Feb 16 21:39:46.200054 master-0 kubenswrapper[38936]: I0216 21:39:46.199880 38936 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdr8z\" (UniqueName: \"kubernetes.io/projected/6ff2d05d-b1a8-4695-b186-f2422a5c8186-kube-api-access-kdr8z\") pod \"nova-metadata-0\" (UID: \"6ff2d05d-b1a8-4695-b186-f2422a5c8186\") " pod="openstack/nova-metadata-0" Feb 16 21:39:46.200054 master-0 kubenswrapper[38936]: I0216 21:39:46.200015 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ff2d05d-b1a8-4695-b186-f2422a5c8186-logs\") pod \"nova-metadata-0\" (UID: \"6ff2d05d-b1a8-4695-b186-f2422a5c8186\") " pod="openstack/nova-metadata-0" Feb 16 21:39:46.200200 master-0 kubenswrapper[38936]: I0216 21:39:46.200087 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff2d05d-b1a8-4695-b186-f2422a5c8186-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6ff2d05d-b1a8-4695-b186-f2422a5c8186\") " pod="openstack/nova-metadata-0" Feb 16 21:39:46.200200 master-0 kubenswrapper[38936]: I0216 21:39:46.200127 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd21264c-8c3d-451f-820c-035f0e8afb27-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"cd21264c-8c3d-451f-820c-035f0e8afb27\") " pod="openstack/nova-cell1-conductor-0" Feb 16 21:39:46.200200 master-0 kubenswrapper[38936]: I0216 21:39:46.200194 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ff2d05d-b1a8-4695-b186-f2422a5c8186-config-data\") pod \"nova-metadata-0\" (UID: \"6ff2d05d-b1a8-4695-b186-f2422a5c8186\") " pod="openstack/nova-metadata-0" Feb 16 21:39:46.200310 master-0 kubenswrapper[38936]: I0216 21:39:46.200248 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6ff2d05d-b1a8-4695-b186-f2422a5c8186-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6ff2d05d-b1a8-4695-b186-f2422a5c8186\") " pod="openstack/nova-metadata-0" Feb 16 21:39:46.205372 master-0 kubenswrapper[38936]: I0216 21:39:46.203169 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ff2d05d-b1a8-4695-b186-f2422a5c8186-logs\") pod \"nova-metadata-0\" (UID: \"6ff2d05d-b1a8-4695-b186-f2422a5c8186\") " pod="openstack/nova-metadata-0" Feb 16 21:39:46.207369 master-0 kubenswrapper[38936]: I0216 21:39:46.205885 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ff2d05d-b1a8-4695-b186-f2422a5c8186-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6ff2d05d-b1a8-4695-b186-f2422a5c8186\") " pod="openstack/nova-metadata-0" Feb 16 21:39:46.207369 master-0 kubenswrapper[38936]: I0216 21:39:46.206888 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd21264c-8c3d-451f-820c-035f0e8afb27-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"cd21264c-8c3d-451f-820c-035f0e8afb27\") " pod="openstack/nova-cell1-conductor-0" Feb 16 21:39:46.207369 master-0 kubenswrapper[38936]: I0216 21:39:46.207110 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd21264c-8c3d-451f-820c-035f0e8afb27-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"cd21264c-8c3d-451f-820c-035f0e8afb27\") " pod="openstack/nova-cell1-conductor-0" Feb 16 21:39:46.219425 master-0 kubenswrapper[38936]: I0216 21:39:46.219278 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdr8z\" (UniqueName: \"kubernetes.io/projected/6ff2d05d-b1a8-4695-b186-f2422a5c8186-kube-api-access-kdr8z\") pod \"nova-metadata-0\" (UID: 
\"6ff2d05d-b1a8-4695-b186-f2422a5c8186\") " pod="openstack/nova-metadata-0" Feb 16 21:39:46.219425 master-0 kubenswrapper[38936]: I0216 21:39:46.219362 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx2ht\" (UniqueName: \"kubernetes.io/projected/cd21264c-8c3d-451f-820c-035f0e8afb27-kube-api-access-gx2ht\") pod \"nova-cell1-conductor-0\" (UID: \"cd21264c-8c3d-451f-820c-035f0e8afb27\") " pod="openstack/nova-cell1-conductor-0" Feb 16 21:39:46.222286 master-0 kubenswrapper[38936]: I0216 21:39:46.222257 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ff2d05d-b1a8-4695-b186-f2422a5c8186-config-data\") pod \"nova-metadata-0\" (UID: \"6ff2d05d-b1a8-4695-b186-f2422a5c8186\") " pod="openstack/nova-metadata-0" Feb 16 21:39:46.247745 master-0 kubenswrapper[38936]: I0216 21:39:46.246089 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff2d05d-b1a8-4695-b186-f2422a5c8186-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6ff2d05d-b1a8-4695-b186-f2422a5c8186\") " pod="openstack/nova-metadata-0" Feb 16 21:39:46.295108 master-0 kubenswrapper[38936]: I0216 21:39:46.294580 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:39:46.377029 master-0 kubenswrapper[38936]: I0216 21:39:46.376913 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-conductor-0" Feb 16 21:39:46.383616 master-0 kubenswrapper[38936]: I0216 21:39:46.383516 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:39:46.433667 master-0 kubenswrapper[38936]: I0216 21:39:46.428824 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 21:39:46.558537 master-0 kubenswrapper[38936]: I0216 21:39:46.558485 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:39:46.713836 master-0 kubenswrapper[38936]: I0216 21:39:46.712504 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7fxq\" (UniqueName: \"kubernetes.io/projected/5d890f5a-b817-4e9e-bc23-18e1bf326d3b-kube-api-access-x7fxq\") pod \"5d890f5a-b817-4e9e-bc23-18e1bf326d3b\" (UID: \"5d890f5a-b817-4e9e-bc23-18e1bf326d3b\") " Feb 16 21:39:46.713836 master-0 kubenswrapper[38936]: I0216 21:39:46.712577 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d890f5a-b817-4e9e-bc23-18e1bf326d3b-config-data\") pod \"5d890f5a-b817-4e9e-bc23-18e1bf326d3b\" (UID: \"5d890f5a-b817-4e9e-bc23-18e1bf326d3b\") " Feb 16 21:39:46.713836 master-0 kubenswrapper[38936]: I0216 21:39:46.712898 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d890f5a-b817-4e9e-bc23-18e1bf326d3b-logs\") pod \"5d890f5a-b817-4e9e-bc23-18e1bf326d3b\" (UID: \"5d890f5a-b817-4e9e-bc23-18e1bf326d3b\") " Feb 16 21:39:46.713836 master-0 kubenswrapper[38936]: I0216 21:39:46.713011 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d890f5a-b817-4e9e-bc23-18e1bf326d3b-combined-ca-bundle\") pod \"5d890f5a-b817-4e9e-bc23-18e1bf326d3b\" (UID: \"5d890f5a-b817-4e9e-bc23-18e1bf326d3b\") " Feb 16 21:39:46.714173 master-0 kubenswrapper[38936]: I0216 21:39:46.713975 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d890f5a-b817-4e9e-bc23-18e1bf326d3b-logs" (OuterVolumeSpecName: "logs") pod 
"5d890f5a-b817-4e9e-bc23-18e1bf326d3b" (UID: "5d890f5a-b817-4e9e-bc23-18e1bf326d3b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:39:46.717511 master-0 kubenswrapper[38936]: I0216 21:39:46.717027 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d890f5a-b817-4e9e-bc23-18e1bf326d3b-kube-api-access-x7fxq" (OuterVolumeSpecName: "kube-api-access-x7fxq") pod "5d890f5a-b817-4e9e-bc23-18e1bf326d3b" (UID: "5d890f5a-b817-4e9e-bc23-18e1bf326d3b"). InnerVolumeSpecName "kube-api-access-x7fxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:39:46.749225 master-0 kubenswrapper[38936]: I0216 21:39:46.749159 38936 generic.go:334] "Generic (PLEG): container finished" podID="5d890f5a-b817-4e9e-bc23-18e1bf326d3b" containerID="81dd9230174d4ac3996f72a6e4cd78491e42c48e2c684efe07fa98cb386534bc" exitCode=0 Feb 16 21:39:46.749814 master-0 kubenswrapper[38936]: I0216 21:39:46.749234 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5d890f5a-b817-4e9e-bc23-18e1bf326d3b","Type":"ContainerDied","Data":"81dd9230174d4ac3996f72a6e4cd78491e42c48e2c684efe07fa98cb386534bc"} Feb 16 21:39:46.749814 master-0 kubenswrapper[38936]: I0216 21:39:46.749278 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:39:46.749814 master-0 kubenswrapper[38936]: I0216 21:39:46.749308 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5d890f5a-b817-4e9e-bc23-18e1bf326d3b","Type":"ContainerDied","Data":"f763ca60a7e82000050702b3da71059329d7528fe9eeb603010b9410d854b0bc"} Feb 16 21:39:46.749814 master-0 kubenswrapper[38936]: I0216 21:39:46.749333 38936 scope.go:117] "RemoveContainer" containerID="81dd9230174d4ac3996f72a6e4cd78491e42c48e2c684efe07fa98cb386534bc" Feb 16 21:39:46.751578 master-0 kubenswrapper[38936]: I0216 21:39:46.751534 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d890f5a-b817-4e9e-bc23-18e1bf326d3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d890f5a-b817-4e9e-bc23-18e1bf326d3b" (UID: "5d890f5a-b817-4e9e-bc23-18e1bf326d3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:39:46.760535 master-0 kubenswrapper[38936]: I0216 21:39:46.760423 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d890f5a-b817-4e9e-bc23-18e1bf326d3b-config-data" (OuterVolumeSpecName: "config-data") pod "5d890f5a-b817-4e9e-bc23-18e1bf326d3b" (UID: "5d890f5a-b817-4e9e-bc23-18e1bf326d3b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:39:46.776717 master-0 kubenswrapper[38936]: I0216 21:39:46.776674 38936 scope.go:117] "RemoveContainer" containerID="bc1c98fa78af72c0f9710dd5ee4c34b6bac95acb84fc71724b1247e5343f0c6e" Feb 16 21:39:46.815025 master-0 kubenswrapper[38936]: I0216 21:39:46.814974 38936 scope.go:117] "RemoveContainer" containerID="81dd9230174d4ac3996f72a6e4cd78491e42c48e2c684efe07fa98cb386534bc" Feb 16 21:39:46.815456 master-0 kubenswrapper[38936]: I0216 21:39:46.815414 38936 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d890f5a-b817-4e9e-bc23-18e1bf326d3b-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:46.815532 master-0 kubenswrapper[38936]: I0216 21:39:46.815462 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d890f5a-b817-4e9e-bc23-18e1bf326d3b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:46.815532 master-0 kubenswrapper[38936]: I0216 21:39:46.815475 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7fxq\" (UniqueName: \"kubernetes.io/projected/5d890f5a-b817-4e9e-bc23-18e1bf326d3b-kube-api-access-x7fxq\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:46.815532 master-0 kubenswrapper[38936]: I0216 21:39:46.815483 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d890f5a-b817-4e9e-bc23-18e1bf326d3b-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:39:46.815532 master-0 kubenswrapper[38936]: E0216 21:39:46.815424 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81dd9230174d4ac3996f72a6e4cd78491e42c48e2c684efe07fa98cb386534bc\": container with ID starting with 81dd9230174d4ac3996f72a6e4cd78491e42c48e2c684efe07fa98cb386534bc not found: ID does not exist" 
containerID="81dd9230174d4ac3996f72a6e4cd78491e42c48e2c684efe07fa98cb386534bc" Feb 16 21:39:46.815858 master-0 kubenswrapper[38936]: I0216 21:39:46.815520 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81dd9230174d4ac3996f72a6e4cd78491e42c48e2c684efe07fa98cb386534bc"} err="failed to get container status \"81dd9230174d4ac3996f72a6e4cd78491e42c48e2c684efe07fa98cb386534bc\": rpc error: code = NotFound desc = could not find container \"81dd9230174d4ac3996f72a6e4cd78491e42c48e2c684efe07fa98cb386534bc\": container with ID starting with 81dd9230174d4ac3996f72a6e4cd78491e42c48e2c684efe07fa98cb386534bc not found: ID does not exist" Feb 16 21:39:46.815858 master-0 kubenswrapper[38936]: I0216 21:39:46.815553 38936 scope.go:117] "RemoveContainer" containerID="bc1c98fa78af72c0f9710dd5ee4c34b6bac95acb84fc71724b1247e5343f0c6e" Feb 16 21:39:46.816849 master-0 kubenswrapper[38936]: E0216 21:39:46.816803 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc1c98fa78af72c0f9710dd5ee4c34b6bac95acb84fc71724b1247e5343f0c6e\": container with ID starting with bc1c98fa78af72c0f9710dd5ee4c34b6bac95acb84fc71724b1247e5343f0c6e not found: ID does not exist" containerID="bc1c98fa78af72c0f9710dd5ee4c34b6bac95acb84fc71724b1247e5343f0c6e" Feb 16 21:39:46.816912 master-0 kubenswrapper[38936]: I0216 21:39:46.816850 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc1c98fa78af72c0f9710dd5ee4c34b6bac95acb84fc71724b1247e5343f0c6e"} err="failed to get container status \"bc1c98fa78af72c0f9710dd5ee4c34b6bac95acb84fc71724b1247e5343f0c6e\": rpc error: code = NotFound desc = could not find container \"bc1c98fa78af72c0f9710dd5ee4c34b6bac95acb84fc71724b1247e5343f0c6e\": container with ID starting with bc1c98fa78af72c0f9710dd5ee4c34b6bac95acb84fc71724b1247e5343f0c6e not found: ID does not exist" Feb 16 21:39:46.857078 master-0 
kubenswrapper[38936]: I0216 21:39:46.857026 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:39:46.861667 master-0 kubenswrapper[38936]: W0216 21:39:46.861588 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90445da2_719c_482a_ab08_8ee50a317377.slice/crio-b9c0965915f49445863e9b53a1f1b5be665f401c18f64480d09968063ef10d71 WatchSource:0}: Error finding container b9c0965915f49445863e9b53a1f1b5be665f401c18f64480d09968063ef10d71: Status 404 returned error can't find the container with id b9c0965915f49445863e9b53a1f1b5be665f401c18f64480d09968063ef10d71 Feb 16 21:39:47.021676 master-0 kubenswrapper[38936]: I0216 21:39:47.017939 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 21:39:47.021676 master-0 kubenswrapper[38936]: W0216 21:39:47.021544 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ff2d05d_b1a8_4695_b186_f2422a5c8186.slice/crio-166d52e77aaa169bfcec6074ac9ef632401c90667898ae736116e59464924ff2 WatchSource:0}: Error finding container 166d52e77aaa169bfcec6074ac9ef632401c90667898ae736116e59464924ff2: Status 404 returned error can't find the container with id 166d52e77aaa169bfcec6074ac9ef632401c90667898ae736116e59464924ff2 Feb 16 21:39:47.023499 master-0 kubenswrapper[38936]: W0216 21:39:47.023436 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd21264c_8c3d_451f_820c_035f0e8afb27.slice/crio-e9a1352e43748ca9fd6b43349cfa62076ca046b5d71eb52e3c82b74ebeefa04a WatchSource:0}: Error finding container e9a1352e43748ca9fd6b43349cfa62076ca046b5d71eb52e3c82b74ebeefa04a: Status 404 returned error can't find the container with id e9a1352e43748ca9fd6b43349cfa62076ca046b5d71eb52e3c82b74ebeefa04a Feb 16 21:39:47.051016 
master-0 kubenswrapper[38936]: I0216 21:39:47.050942 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:39:47.103530 master-0 kubenswrapper[38936]: I0216 21:39:47.103113 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:39:47.138905 master-0 kubenswrapper[38936]: I0216 21:39:47.136395 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:39:47.160709 master-0 kubenswrapper[38936]: I0216 21:39:47.159774 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 21:39:47.160709 master-0 kubenswrapper[38936]: E0216 21:39:47.160477 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d890f5a-b817-4e9e-bc23-18e1bf326d3b" containerName="nova-api-log" Feb 16 21:39:47.160709 master-0 kubenswrapper[38936]: I0216 21:39:47.160500 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d890f5a-b817-4e9e-bc23-18e1bf326d3b" containerName="nova-api-log" Feb 16 21:39:47.160709 master-0 kubenswrapper[38936]: E0216 21:39:47.160578 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d890f5a-b817-4e9e-bc23-18e1bf326d3b" containerName="nova-api-api" Feb 16 21:39:47.160709 master-0 kubenswrapper[38936]: I0216 21:39:47.160589 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d890f5a-b817-4e9e-bc23-18e1bf326d3b" containerName="nova-api-api" Feb 16 21:39:47.162180 master-0 kubenswrapper[38936]: I0216 21:39:47.161668 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d890f5a-b817-4e9e-bc23-18e1bf326d3b" containerName="nova-api-api" Feb 16 21:39:47.162180 master-0 kubenswrapper[38936]: I0216 21:39:47.161702 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d890f5a-b817-4e9e-bc23-18e1bf326d3b" containerName="nova-api-log" Feb 16 21:39:47.163876 master-0 kubenswrapper[38936]: I0216 21:39:47.163774 38936 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:39:47.171252 master-0 kubenswrapper[38936]: I0216 21:39:47.171206 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 21:39:47.176464 master-0 kubenswrapper[38936]: I0216 21:39:47.176408 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:39:47.331242 master-0 kubenswrapper[38936]: I0216 21:39:47.330922 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2535ec49-180c-46b4-85b5-cfe29e147b92-config-data\") pod \"nova-api-0\" (UID: \"2535ec49-180c-46b4-85b5-cfe29e147b92\") " pod="openstack/nova-api-0" Feb 16 21:39:47.331242 master-0 kubenswrapper[38936]: I0216 21:39:47.331008 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2535ec49-180c-46b4-85b5-cfe29e147b92-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2535ec49-180c-46b4-85b5-cfe29e147b92\") " pod="openstack/nova-api-0" Feb 16 21:39:47.331242 master-0 kubenswrapper[38936]: I0216 21:39:47.331059 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfnx9\" (UniqueName: \"kubernetes.io/projected/2535ec49-180c-46b4-85b5-cfe29e147b92-kube-api-access-bfnx9\") pod \"nova-api-0\" (UID: \"2535ec49-180c-46b4-85b5-cfe29e147b92\") " pod="openstack/nova-api-0" Feb 16 21:39:47.331242 master-0 kubenswrapper[38936]: I0216 21:39:47.331079 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2535ec49-180c-46b4-85b5-cfe29e147b92-logs\") pod \"nova-api-0\" (UID: \"2535ec49-180c-46b4-85b5-cfe29e147b92\") " pod="openstack/nova-api-0" Feb 16 21:39:47.433792 master-0 kubenswrapper[38936]: I0216 
21:39:47.433732 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2535ec49-180c-46b4-85b5-cfe29e147b92-config-data\") pod \"nova-api-0\" (UID: \"2535ec49-180c-46b4-85b5-cfe29e147b92\") " pod="openstack/nova-api-0" Feb 16 21:39:47.434143 master-0 kubenswrapper[38936]: I0216 21:39:47.434121 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2535ec49-180c-46b4-85b5-cfe29e147b92-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2535ec49-180c-46b4-85b5-cfe29e147b92\") " pod="openstack/nova-api-0" Feb 16 21:39:47.434303 master-0 kubenswrapper[38936]: I0216 21:39:47.434285 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfnx9\" (UniqueName: \"kubernetes.io/projected/2535ec49-180c-46b4-85b5-cfe29e147b92-kube-api-access-bfnx9\") pod \"nova-api-0\" (UID: \"2535ec49-180c-46b4-85b5-cfe29e147b92\") " pod="openstack/nova-api-0" Feb 16 21:39:47.434409 master-0 kubenswrapper[38936]: I0216 21:39:47.434389 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2535ec49-180c-46b4-85b5-cfe29e147b92-logs\") pod \"nova-api-0\" (UID: \"2535ec49-180c-46b4-85b5-cfe29e147b92\") " pod="openstack/nova-api-0" Feb 16 21:39:47.435490 master-0 kubenswrapper[38936]: I0216 21:39:47.435469 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2535ec49-180c-46b4-85b5-cfe29e147b92-logs\") pod \"nova-api-0\" (UID: \"2535ec49-180c-46b4-85b5-cfe29e147b92\") " pod="openstack/nova-api-0" Feb 16 21:39:47.446732 master-0 kubenswrapper[38936]: I0216 21:39:47.438147 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2535ec49-180c-46b4-85b5-cfe29e147b92-config-data\") pod 
\"nova-api-0\" (UID: \"2535ec49-180c-46b4-85b5-cfe29e147b92\") " pod="openstack/nova-api-0" Feb 16 21:39:47.446732 master-0 kubenswrapper[38936]: I0216 21:39:47.438796 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2535ec49-180c-46b4-85b5-cfe29e147b92-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2535ec49-180c-46b4-85b5-cfe29e147b92\") " pod="openstack/nova-api-0" Feb 16 21:39:47.469676 master-0 kubenswrapper[38936]: I0216 21:39:47.466625 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfnx9\" (UniqueName: \"kubernetes.io/projected/2535ec49-180c-46b4-85b5-cfe29e147b92-kube-api-access-bfnx9\") pod \"nova-api-0\" (UID: \"2535ec49-180c-46b4-85b5-cfe29e147b92\") " pod="openstack/nova-api-0" Feb 16 21:39:47.706423 master-0 kubenswrapper[38936]: I0216 21:39:47.706357 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:39:47.774844 master-0 kubenswrapper[38936]: I0216 21:39:47.774095 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"cd21264c-8c3d-451f-820c-035f0e8afb27","Type":"ContainerStarted","Data":"47cdecfb6eae5dc0fb7ca39eb0c054cb105e4fcfb2dc3a912ad736fc3337dd2c"} Feb 16 21:39:47.774844 master-0 kubenswrapper[38936]: I0216 21:39:47.774851 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 16 21:39:47.775451 master-0 kubenswrapper[38936]: I0216 21:39:47.774866 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"cd21264c-8c3d-451f-820c-035f0e8afb27","Type":"ContainerStarted","Data":"e9a1352e43748ca9fd6b43349cfa62076ca046b5d71eb52e3c82b74ebeefa04a"} Feb 16 21:39:47.778750 master-0 kubenswrapper[38936]: I0216 21:39:47.778712 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"6ff2d05d-b1a8-4695-b186-f2422a5c8186","Type":"ContainerStarted","Data":"b371a0e491dd5a1767ac9f6b77851dacbc56aa42712bd0f55256b4239e65097c"} Feb 16 21:39:47.778750 master-0 kubenswrapper[38936]: I0216 21:39:47.778746 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ff2d05d-b1a8-4695-b186-f2422a5c8186","Type":"ContainerStarted","Data":"166d52e77aaa169bfcec6074ac9ef632401c90667898ae736116e59464924ff2"} Feb 16 21:39:47.785080 master-0 kubenswrapper[38936]: I0216 21:39:47.785027 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"90445da2-719c-482a-ab08-8ee50a317377","Type":"ContainerStarted","Data":"beeb3366c40b7904cbed4ed0893c846080f4c4a6c57567b11352e42c3d6a5646"} Feb 16 21:39:47.785080 master-0 kubenswrapper[38936]: I0216 21:39:47.785084 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"90445da2-719c-482a-ab08-8ee50a317377","Type":"ContainerStarted","Data":"b9c0965915f49445863e9b53a1f1b5be665f401c18f64480d09968063ef10d71"} Feb 16 21:39:47.813632 master-0 kubenswrapper[38936]: I0216 21:39:47.811577 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.811552538 podStartE2EDuration="2.811552538s" podCreationTimestamp="2026-02-16 21:39:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:39:47.800476692 +0000 UTC m=+1018.152480054" watchObservedRunningTime="2026-02-16 21:39:47.811552538 +0000 UTC m=+1018.163555900" Feb 16 21:39:47.857674 master-0 kubenswrapper[38936]: I0216 21:39:47.857527 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Feb 16 21:39:47.865404 master-0 kubenswrapper[38936]: I0216 21:39:47.865086 38936 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.8640641909999998 podStartE2EDuration="2.864064191s" podCreationTimestamp="2026-02-16 21:39:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:39:47.831909862 +0000 UTC m=+1018.183913224" watchObservedRunningTime="2026-02-16 21:39:47.864064191 +0000 UTC m=+1018.216067553" Feb 16 21:39:47.914387 master-0 kubenswrapper[38936]: I0216 21:39:47.914336 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d890f5a-b817-4e9e-bc23-18e1bf326d3b" path="/var/lib/kubelet/pods/5d890f5a-b817-4e9e-bc23-18e1bf326d3b/volumes" Feb 16 21:39:47.915539 master-0 kubenswrapper[38936]: I0216 21:39:47.915482 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85424a8b-db5b-47c1-8b31-86ebb9f6484f" path="/var/lib/kubelet/pods/85424a8b-db5b-47c1-8b31-86ebb9f6484f/volumes" Feb 16 21:39:47.916561 master-0 kubenswrapper[38936]: I0216 21:39:47.916503 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf3acf12-aba2-443e-9e32-0cc5b867ca51" path="/var/lib/kubelet/pods/bf3acf12-aba2-443e-9e32-0cc5b867ca51/volumes" Feb 16 21:39:48.143463 master-0 kubenswrapper[38936]: I0216 21:39:48.143318 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-conductor-0" Feb 16 21:39:48.267731 master-0 kubenswrapper[38936]: W0216 21:39:48.265743 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2535ec49_180c_46b4_85b5_cfe29e147b92.slice/crio-1a394e157d98e573c2ff4f8bbaded8727a81ee45af1cb2ea6bb8ea58f4cba9ee WatchSource:0}: Error finding container 1a394e157d98e573c2ff4f8bbaded8727a81ee45af1cb2ea6bb8ea58f4cba9ee: Status 404 returned error can't find the container with id 
1a394e157d98e573c2ff4f8bbaded8727a81ee45af1cb2ea6bb8ea58f4cba9ee Feb 16 21:39:48.267731 master-0 kubenswrapper[38936]: I0216 21:39:48.267560 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:39:48.802842 master-0 kubenswrapper[38936]: I0216 21:39:48.802760 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2535ec49-180c-46b4-85b5-cfe29e147b92","Type":"ContainerStarted","Data":"ffc9eb81379713d640ef805bd21e7eaa6cd84f7c8e4a5ff628ff48cfe0e4c1e1"} Feb 16 21:39:48.803478 master-0 kubenswrapper[38936]: I0216 21:39:48.802854 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2535ec49-180c-46b4-85b5-cfe29e147b92","Type":"ContainerStarted","Data":"0a89608b03ae3c85cd9588fd90384e66af6fc4f5cb470cbfbc36b9fbca504076"} Feb 16 21:39:48.803478 master-0 kubenswrapper[38936]: I0216 21:39:48.802872 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2535ec49-180c-46b4-85b5-cfe29e147b92","Type":"ContainerStarted","Data":"1a394e157d98e573c2ff4f8bbaded8727a81ee45af1cb2ea6bb8ea58f4cba9ee"} Feb 16 21:39:48.805894 master-0 kubenswrapper[38936]: I0216 21:39:48.805822 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ff2d05d-b1a8-4695-b186-f2422a5c8186","Type":"ContainerStarted","Data":"712b0b77fce34db59a1c3896153674d60b1c644b7b4ef8ebcc83f76c6ed0a3cf"} Feb 16 21:39:48.808330 master-0 kubenswrapper[38936]: I0216 21:39:48.808285 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Feb 16 21:39:48.826479 master-0 kubenswrapper[38936]: I0216 21:39:48.826401 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.826377175 podStartE2EDuration="1.826377175s" podCreationTimestamp="2026-02-16 21:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:39:48.824520865 +0000 UTC m=+1019.176524227" watchObservedRunningTime="2026-02-16 21:39:48.826377175 +0000 UTC m=+1019.178380537" Feb 16 21:39:48.855821 master-0 kubenswrapper[38936]: I0216 21:39:48.855505 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.855485342 podStartE2EDuration="3.855485342s" podCreationTimestamp="2026-02-16 21:39:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:39:48.843315697 +0000 UTC m=+1019.195319059" watchObservedRunningTime="2026-02-16 21:39:48.855485342 +0000 UTC m=+1019.207488704" Feb 16 21:39:51.294864 master-0 kubenswrapper[38936]: I0216 21:39:51.294769 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 21:39:51.384499 master-0 kubenswrapper[38936]: I0216 21:39:51.384414 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 21:39:51.384499 master-0 kubenswrapper[38936]: I0216 21:39:51.384488 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 21:39:56.294932 master-0 kubenswrapper[38936]: I0216 21:39:56.294840 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 21:39:56.325485 master-0 kubenswrapper[38936]: I0216 21:39:56.325437 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 21:39:56.383694 master-0 kubenswrapper[38936]: I0216 21:39:56.383623 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 21:39:56.384005 master-0 kubenswrapper[38936]: I0216 21:39:56.383985 38936 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 21:39:56.469997 master-0 kubenswrapper[38936]: I0216 21:39:56.469930 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 16 21:39:56.957951 master-0 kubenswrapper[38936]: I0216 21:39:56.957888 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 21:39:57.403107 master-0 kubenswrapper[38936]: I0216 21:39:57.403039 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6ff2d05d-b1a8-4695-b186-f2422a5c8186" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.12:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:39:57.403632 master-0 kubenswrapper[38936]: I0216 21:39:57.403035 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6ff2d05d-b1a8-4695-b186-f2422a5c8186" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.12:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:39:57.708602 master-0 kubenswrapper[38936]: I0216 21:39:57.708530 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 21:39:57.708760 master-0 kubenswrapper[38936]: I0216 21:39:57.708610 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 21:39:58.795008 master-0 kubenswrapper[38936]: I0216 21:39:58.789864 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2535ec49-180c-46b4-85b5-cfe29e147b92" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.14:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 21:39:58.795008 master-0 
kubenswrapper[38936]: I0216 21:39:58.789901 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2535ec49-180c-46b4-85b5-cfe29e147b92" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.14:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 21:40:03.966556 master-0 kubenswrapper[38936]: I0216 21:40:03.966495 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:40:04.044593 master-0 kubenswrapper[38936]: I0216 21:40:04.044522 38936 generic.go:334] "Generic (PLEG): container finished" podID="6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99" containerID="5f8573df1aa692d15579b0a181edc014c6a827d40dd85df1b7444f9c65086515" exitCode=137 Feb 16 21:40:04.044593 master-0 kubenswrapper[38936]: I0216 21:40:04.044587 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99","Type":"ContainerDied","Data":"5f8573df1aa692d15579b0a181edc014c6a827d40dd85df1b7444f9c65086515"} Feb 16 21:40:04.044972 master-0 kubenswrapper[38936]: I0216 21:40:04.044617 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99","Type":"ContainerDied","Data":"18c400c39bf9086a19f7bad0235c52ca7b564f5f0ea426d23314692c0ba45404"} Feb 16 21:40:04.044972 master-0 kubenswrapper[38936]: I0216 21:40:04.044635 38936 scope.go:117] "RemoveContainer" containerID="5f8573df1aa692d15579b0a181edc014c6a827d40dd85df1b7444f9c65086515" Feb 16 21:40:04.044972 master-0 kubenswrapper[38936]: I0216 21:40:04.044926 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:40:04.078299 master-0 kubenswrapper[38936]: I0216 21:40:04.078232 38936 scope.go:117] "RemoveContainer" containerID="5f8573df1aa692d15579b0a181edc014c6a827d40dd85df1b7444f9c65086515" Feb 16 21:40:04.079148 master-0 kubenswrapper[38936]: E0216 21:40:04.079092 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f8573df1aa692d15579b0a181edc014c6a827d40dd85df1b7444f9c65086515\": container with ID starting with 5f8573df1aa692d15579b0a181edc014c6a827d40dd85df1b7444f9c65086515 not found: ID does not exist" containerID="5f8573df1aa692d15579b0a181edc014c6a827d40dd85df1b7444f9c65086515" Feb 16 21:40:04.079218 master-0 kubenswrapper[38936]: I0216 21:40:04.079155 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f8573df1aa692d15579b0a181edc014c6a827d40dd85df1b7444f9c65086515"} err="failed to get container status \"5f8573df1aa692d15579b0a181edc014c6a827d40dd85df1b7444f9c65086515\": rpc error: code = NotFound desc = could not find container \"5f8573df1aa692d15579b0a181edc014c6a827d40dd85df1b7444f9c65086515\": container with ID starting with 5f8573df1aa692d15579b0a181edc014c6a827d40dd85df1b7444f9c65086515 not found: ID does not exist" Feb 16 21:40:04.145284 master-0 kubenswrapper[38936]: I0216 21:40:04.145064 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99-config-data\") pod \"6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99\" (UID: \"6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99\") " Feb 16 21:40:04.145284 master-0 kubenswrapper[38936]: I0216 21:40:04.145225 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnsbz\" (UniqueName: \"kubernetes.io/projected/6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99-kube-api-access-tnsbz\") pod 
\"6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99\" (UID: \"6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99\") " Feb 16 21:40:04.145538 master-0 kubenswrapper[38936]: I0216 21:40:04.145295 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99-combined-ca-bundle\") pod \"6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99\" (UID: \"6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99\") " Feb 16 21:40:04.156034 master-0 kubenswrapper[38936]: I0216 21:40:04.155949 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99-kube-api-access-tnsbz" (OuterVolumeSpecName: "kube-api-access-tnsbz") pod "6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99" (UID: "6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99"). InnerVolumeSpecName "kube-api-access-tnsbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:40:04.182120 master-0 kubenswrapper[38936]: I0216 21:40:04.182017 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99" (UID: "6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:40:04.183902 master-0 kubenswrapper[38936]: I0216 21:40:04.183826 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99-config-data" (OuterVolumeSpecName: "config-data") pod "6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99" (UID: "6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:40:04.249207 master-0 kubenswrapper[38936]: I0216 21:40:04.249150 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:04.249207 master-0 kubenswrapper[38936]: I0216 21:40:04.249194 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnsbz\" (UniqueName: \"kubernetes.io/projected/6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99-kube-api-access-tnsbz\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:04.249207 master-0 kubenswrapper[38936]: I0216 21:40:04.249206 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:04.388293 master-0 kubenswrapper[38936]: I0216 21:40:04.388209 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 21:40:04.404640 master-0 kubenswrapper[38936]: I0216 21:40:04.403235 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 21:40:04.423903 master-0 kubenswrapper[38936]: I0216 21:40:04.423834 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 21:40:04.424461 master-0 kubenswrapper[38936]: E0216 21:40:04.424438 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 21:40:04.424461 master-0 kubenswrapper[38936]: I0216 21:40:04.424460 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 21:40:04.424768 master-0 kubenswrapper[38936]: I0216 21:40:04.424748 38936 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 21:40:04.425745 master-0 kubenswrapper[38936]: I0216 21:40:04.425717 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:40:04.435458 master-0 kubenswrapper[38936]: I0216 21:40:04.435400 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 21:40:04.435674 master-0 kubenswrapper[38936]: I0216 21:40:04.435620 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 16 21:40:04.436579 master-0 kubenswrapper[38936]: I0216 21:40:04.436525 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 16 21:40:04.485898 master-0 kubenswrapper[38936]: I0216 21:40:04.481727 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 21:40:04.558533 master-0 kubenswrapper[38936]: I0216 21:40:04.558460 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b78bd09-1fa0-45b3-9a45-08eaeb942824-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b78bd09-1fa0-45b3-9a45-08eaeb942824\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:40:04.558533 master-0 kubenswrapper[38936]: I0216 21:40:04.558528 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6cln\" (UniqueName: \"kubernetes.io/projected/7b78bd09-1fa0-45b3-9a45-08eaeb942824-kube-api-access-b6cln\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b78bd09-1fa0-45b3-9a45-08eaeb942824\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:40:04.558834 master-0 kubenswrapper[38936]: I0216 
21:40:04.558687 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b78bd09-1fa0-45b3-9a45-08eaeb942824-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b78bd09-1fa0-45b3-9a45-08eaeb942824\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:40:04.558921 master-0 kubenswrapper[38936]: I0216 21:40:04.558894 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b78bd09-1fa0-45b3-9a45-08eaeb942824-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b78bd09-1fa0-45b3-9a45-08eaeb942824\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:40:04.558979 master-0 kubenswrapper[38936]: I0216 21:40:04.558964 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b78bd09-1fa0-45b3-9a45-08eaeb942824-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b78bd09-1fa0-45b3-9a45-08eaeb942824\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:40:04.662107 master-0 kubenswrapper[38936]: I0216 21:40:04.661913 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b78bd09-1fa0-45b3-9a45-08eaeb942824-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b78bd09-1fa0-45b3-9a45-08eaeb942824\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:40:04.662354 master-0 kubenswrapper[38936]: I0216 21:40:04.662175 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6cln\" (UniqueName: \"kubernetes.io/projected/7b78bd09-1fa0-45b3-9a45-08eaeb942824-kube-api-access-b6cln\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b78bd09-1fa0-45b3-9a45-08eaeb942824\") " pod="openstack/nova-cell1-novncproxy-0" Feb 
16 21:40:04.662354 master-0 kubenswrapper[38936]: I0216 21:40:04.662345 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b78bd09-1fa0-45b3-9a45-08eaeb942824-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b78bd09-1fa0-45b3-9a45-08eaeb942824\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:40:04.662622 master-0 kubenswrapper[38936]: I0216 21:40:04.662556 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b78bd09-1fa0-45b3-9a45-08eaeb942824-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b78bd09-1fa0-45b3-9a45-08eaeb942824\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:40:04.662697 master-0 kubenswrapper[38936]: I0216 21:40:04.662631 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b78bd09-1fa0-45b3-9a45-08eaeb942824-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b78bd09-1fa0-45b3-9a45-08eaeb942824\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:40:04.669478 master-0 kubenswrapper[38936]: I0216 21:40:04.669419 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b78bd09-1fa0-45b3-9a45-08eaeb942824-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b78bd09-1fa0-45b3-9a45-08eaeb942824\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:40:04.669579 master-0 kubenswrapper[38936]: I0216 21:40:04.669498 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b78bd09-1fa0-45b3-9a45-08eaeb942824-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b78bd09-1fa0-45b3-9a45-08eaeb942824\") " pod="openstack/nova-cell1-novncproxy-0" 
Feb 16 21:40:04.669579 master-0 kubenswrapper[38936]: I0216 21:40:04.669550 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b78bd09-1fa0-45b3-9a45-08eaeb942824-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b78bd09-1fa0-45b3-9a45-08eaeb942824\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:40:04.680970 master-0 kubenswrapper[38936]: I0216 21:40:04.680894 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b78bd09-1fa0-45b3-9a45-08eaeb942824-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b78bd09-1fa0-45b3-9a45-08eaeb942824\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:40:04.685958 master-0 kubenswrapper[38936]: I0216 21:40:04.685895 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6cln\" (UniqueName: \"kubernetes.io/projected/7b78bd09-1fa0-45b3-9a45-08eaeb942824-kube-api-access-b6cln\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b78bd09-1fa0-45b3-9a45-08eaeb942824\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:40:04.748097 master-0 kubenswrapper[38936]: I0216 21:40:04.748033 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:40:05.274112 master-0 kubenswrapper[38936]: I0216 21:40:05.274021 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 21:40:05.889731 master-0 kubenswrapper[38936]: I0216 21:40:05.889560 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99" path="/var/lib/kubelet/pods/6b6d7c9f-a7b6-4ee8-9973-4bc06ab2cc99/volumes" Feb 16 21:40:06.086907 master-0 kubenswrapper[38936]: I0216 21:40:06.084085 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7b78bd09-1fa0-45b3-9a45-08eaeb942824","Type":"ContainerStarted","Data":"5a7fee09e830f71f52afd419e489abd0ea8a1399ccff4fe0345715014037d1a5"} Feb 16 21:40:06.086907 master-0 kubenswrapper[38936]: I0216 21:40:06.084155 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7b78bd09-1fa0-45b3-9a45-08eaeb942824","Type":"ContainerStarted","Data":"e87a52d05665f8c1d86523a68737c260d230816fb0596480838618c67a81e74e"} Feb 16 21:40:06.115845 master-0 kubenswrapper[38936]: I0216 21:40:06.114083 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.114062228 podStartE2EDuration="2.114062228s" podCreationTimestamp="2026-02-16 21:40:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:06.110879184 +0000 UTC m=+1036.462882556" watchObservedRunningTime="2026-02-16 21:40:06.114062228 +0000 UTC m=+1036.466065590" Feb 16 21:40:06.390776 master-0 kubenswrapper[38936]: I0216 21:40:06.390712 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 21:40:06.395124 master-0 kubenswrapper[38936]: I0216 21:40:06.395024 38936 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 21:40:06.402676 master-0 kubenswrapper[38936]: I0216 21:40:06.399498 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 21:40:07.102408 master-0 kubenswrapper[38936]: I0216 21:40:07.102337 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 21:40:07.710042 master-0 kubenswrapper[38936]: I0216 21:40:07.709988 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 21:40:07.710539 master-0 kubenswrapper[38936]: I0216 21:40:07.710068 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 21:40:07.710813 master-0 kubenswrapper[38936]: I0216 21:40:07.710781 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 21:40:07.710938 master-0 kubenswrapper[38936]: I0216 21:40:07.710922 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 21:40:07.713019 master-0 kubenswrapper[38936]: I0216 21:40:07.712980 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 21:40:07.713091 master-0 kubenswrapper[38936]: I0216 21:40:07.713044 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 21:40:07.989859 master-0 kubenswrapper[38936]: I0216 21:40:07.989646 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5588466b7-6rghh"] Feb 16 21:40:08.041968 master-0 kubenswrapper[38936]: I0216 21:40:08.041642 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5588466b7-6rghh"] Feb 16 21:40:08.042203 master-0 kubenswrapper[38936]: I0216 21:40:08.041937 38936 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-5588466b7-6rghh" Feb 16 21:40:08.196683 master-0 kubenswrapper[38936]: I0216 21:40:08.191620 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8a9c8c3-42ca-4670-9d40-55f135ca37c6-dns-svc\") pod \"dnsmasq-dns-5588466b7-6rghh\" (UID: \"f8a9c8c3-42ca-4670-9d40-55f135ca37c6\") " pod="openstack/dnsmasq-dns-5588466b7-6rghh" Feb 16 21:40:08.196683 master-0 kubenswrapper[38936]: I0216 21:40:08.191736 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m227n\" (UniqueName: \"kubernetes.io/projected/f8a9c8c3-42ca-4670-9d40-55f135ca37c6-kube-api-access-m227n\") pod \"dnsmasq-dns-5588466b7-6rghh\" (UID: \"f8a9c8c3-42ca-4670-9d40-55f135ca37c6\") " pod="openstack/dnsmasq-dns-5588466b7-6rghh" Feb 16 21:40:08.196683 master-0 kubenswrapper[38936]: I0216 21:40:08.191792 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f8a9c8c3-42ca-4670-9d40-55f135ca37c6-dns-swift-storage-0\") pod \"dnsmasq-dns-5588466b7-6rghh\" (UID: \"f8a9c8c3-42ca-4670-9d40-55f135ca37c6\") " pod="openstack/dnsmasq-dns-5588466b7-6rghh" Feb 16 21:40:08.196683 master-0 kubenswrapper[38936]: I0216 21:40:08.191865 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8a9c8c3-42ca-4670-9d40-55f135ca37c6-ovsdbserver-sb\") pod \"dnsmasq-dns-5588466b7-6rghh\" (UID: \"f8a9c8c3-42ca-4670-9d40-55f135ca37c6\") " pod="openstack/dnsmasq-dns-5588466b7-6rghh" Feb 16 21:40:08.196683 master-0 kubenswrapper[38936]: I0216 21:40:08.191911 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f8a9c8c3-42ca-4670-9d40-55f135ca37c6-config\") pod \"dnsmasq-dns-5588466b7-6rghh\" (UID: \"f8a9c8c3-42ca-4670-9d40-55f135ca37c6\") " pod="openstack/dnsmasq-dns-5588466b7-6rghh" Feb 16 21:40:08.196683 master-0 kubenswrapper[38936]: I0216 21:40:08.191985 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8a9c8c3-42ca-4670-9d40-55f135ca37c6-ovsdbserver-nb\") pod \"dnsmasq-dns-5588466b7-6rghh\" (UID: \"f8a9c8c3-42ca-4670-9d40-55f135ca37c6\") " pod="openstack/dnsmasq-dns-5588466b7-6rghh" Feb 16 21:40:08.294781 master-0 kubenswrapper[38936]: I0216 21:40:08.294690 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8a9c8c3-42ca-4670-9d40-55f135ca37c6-ovsdbserver-sb\") pod \"dnsmasq-dns-5588466b7-6rghh\" (UID: \"f8a9c8c3-42ca-4670-9d40-55f135ca37c6\") " pod="openstack/dnsmasq-dns-5588466b7-6rghh" Feb 16 21:40:08.295009 master-0 kubenswrapper[38936]: I0216 21:40:08.294792 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8a9c8c3-42ca-4670-9d40-55f135ca37c6-config\") pod \"dnsmasq-dns-5588466b7-6rghh\" (UID: \"f8a9c8c3-42ca-4670-9d40-55f135ca37c6\") " pod="openstack/dnsmasq-dns-5588466b7-6rghh" Feb 16 21:40:08.295009 master-0 kubenswrapper[38936]: I0216 21:40:08.294873 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8a9c8c3-42ca-4670-9d40-55f135ca37c6-ovsdbserver-nb\") pod \"dnsmasq-dns-5588466b7-6rghh\" (UID: \"f8a9c8c3-42ca-4670-9d40-55f135ca37c6\") " pod="openstack/dnsmasq-dns-5588466b7-6rghh" Feb 16 21:40:08.295113 master-0 kubenswrapper[38936]: I0216 21:40:08.295025 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/f8a9c8c3-42ca-4670-9d40-55f135ca37c6-dns-svc\") pod \"dnsmasq-dns-5588466b7-6rghh\" (UID: \"f8a9c8c3-42ca-4670-9d40-55f135ca37c6\") " pod="openstack/dnsmasq-dns-5588466b7-6rghh" Feb 16 21:40:08.295113 master-0 kubenswrapper[38936]: I0216 21:40:08.295055 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m227n\" (UniqueName: \"kubernetes.io/projected/f8a9c8c3-42ca-4670-9d40-55f135ca37c6-kube-api-access-m227n\") pod \"dnsmasq-dns-5588466b7-6rghh\" (UID: \"f8a9c8c3-42ca-4670-9d40-55f135ca37c6\") " pod="openstack/dnsmasq-dns-5588466b7-6rghh" Feb 16 21:40:08.295113 master-0 kubenswrapper[38936]: I0216 21:40:08.295089 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f8a9c8c3-42ca-4670-9d40-55f135ca37c6-dns-swift-storage-0\") pod \"dnsmasq-dns-5588466b7-6rghh\" (UID: \"f8a9c8c3-42ca-4670-9d40-55f135ca37c6\") " pod="openstack/dnsmasq-dns-5588466b7-6rghh" Feb 16 21:40:08.297183 master-0 kubenswrapper[38936]: I0216 21:40:08.297150 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8a9c8c3-42ca-4670-9d40-55f135ca37c6-ovsdbserver-sb\") pod \"dnsmasq-dns-5588466b7-6rghh\" (UID: \"f8a9c8c3-42ca-4670-9d40-55f135ca37c6\") " pod="openstack/dnsmasq-dns-5588466b7-6rghh" Feb 16 21:40:08.298039 master-0 kubenswrapper[38936]: I0216 21:40:08.298010 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8a9c8c3-42ca-4670-9d40-55f135ca37c6-config\") pod \"dnsmasq-dns-5588466b7-6rghh\" (UID: \"f8a9c8c3-42ca-4670-9d40-55f135ca37c6\") " pod="openstack/dnsmasq-dns-5588466b7-6rghh" Feb 16 21:40:08.299344 master-0 kubenswrapper[38936]: I0216 21:40:08.299311 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f8a9c8c3-42ca-4670-9d40-55f135ca37c6-ovsdbserver-nb\") pod \"dnsmasq-dns-5588466b7-6rghh\" (UID: \"f8a9c8c3-42ca-4670-9d40-55f135ca37c6\") " pod="openstack/dnsmasq-dns-5588466b7-6rghh" Feb 16 21:40:08.300424 master-0 kubenswrapper[38936]: I0216 21:40:08.300352 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8a9c8c3-42ca-4670-9d40-55f135ca37c6-dns-svc\") pod \"dnsmasq-dns-5588466b7-6rghh\" (UID: \"f8a9c8c3-42ca-4670-9d40-55f135ca37c6\") " pod="openstack/dnsmasq-dns-5588466b7-6rghh" Feb 16 21:40:08.301266 master-0 kubenswrapper[38936]: I0216 21:40:08.301234 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f8a9c8c3-42ca-4670-9d40-55f135ca37c6-dns-swift-storage-0\") pod \"dnsmasq-dns-5588466b7-6rghh\" (UID: \"f8a9c8c3-42ca-4670-9d40-55f135ca37c6\") " pod="openstack/dnsmasq-dns-5588466b7-6rghh" Feb 16 21:40:08.326807 master-0 kubenswrapper[38936]: I0216 21:40:08.326708 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m227n\" (UniqueName: \"kubernetes.io/projected/f8a9c8c3-42ca-4670-9d40-55f135ca37c6-kube-api-access-m227n\") pod \"dnsmasq-dns-5588466b7-6rghh\" (UID: \"f8a9c8c3-42ca-4670-9d40-55f135ca37c6\") " pod="openstack/dnsmasq-dns-5588466b7-6rghh" Feb 16 21:40:08.389560 master-0 kubenswrapper[38936]: I0216 21:40:08.389271 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5588466b7-6rghh" Feb 16 21:40:09.119678 master-0 kubenswrapper[38936]: I0216 21:40:09.119435 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5588466b7-6rghh"] Feb 16 21:40:09.137682 master-0 kubenswrapper[38936]: I0216 21:40:09.137491 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5588466b7-6rghh" event={"ID":"f8a9c8c3-42ca-4670-9d40-55f135ca37c6","Type":"ContainerStarted","Data":"8b16d76529985f1818e5d1319e6c6fcc1a4267a44d8f7c8c682f99fc37da134b"} Feb 16 21:40:09.748641 master-0 kubenswrapper[38936]: I0216 21:40:09.748573 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:40:10.151410 master-0 kubenswrapper[38936]: I0216 21:40:10.151259 38936 generic.go:334] "Generic (PLEG): container finished" podID="f8a9c8c3-42ca-4670-9d40-55f135ca37c6" containerID="5072f63d6a2f7b106da19ff9bf238166831f9f6e64e25ad05e13f4b11b3d6bd8" exitCode=0 Feb 16 21:40:10.151410 master-0 kubenswrapper[38936]: I0216 21:40:10.151313 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5588466b7-6rghh" event={"ID":"f8a9c8c3-42ca-4670-9d40-55f135ca37c6","Type":"ContainerDied","Data":"5072f63d6a2f7b106da19ff9bf238166831f9f6e64e25ad05e13f4b11b3d6bd8"} Feb 16 21:40:11.058815 master-0 kubenswrapper[38936]: I0216 21:40:11.058739 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:40:11.059114 master-0 kubenswrapper[38936]: I0216 21:40:11.058996 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2535ec49-180c-46b4-85b5-cfe29e147b92" containerName="nova-api-log" containerID="cri-o://0a89608b03ae3c85cd9588fd90384e66af6fc4f5cb470cbfbc36b9fbca504076" gracePeriod=30 Feb 16 21:40:11.059170 master-0 kubenswrapper[38936]: I0216 21:40:11.059103 38936 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/nova-api-0" podUID="2535ec49-180c-46b4-85b5-cfe29e147b92" containerName="nova-api-api" containerID="cri-o://ffc9eb81379713d640ef805bd21e7eaa6cd84f7c8e4a5ff628ff48cfe0e4c1e1" gracePeriod=30 Feb 16 21:40:11.179307 master-0 kubenswrapper[38936]: I0216 21:40:11.178248 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5588466b7-6rghh" event={"ID":"f8a9c8c3-42ca-4670-9d40-55f135ca37c6","Type":"ContainerStarted","Data":"ee9f0892429517cbfdb4b6c5936770fd96b3d92d5f04a943ab6c3281bb2f698f"} Feb 16 21:40:11.180300 master-0 kubenswrapper[38936]: I0216 21:40:11.180250 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5588466b7-6rghh" Feb 16 21:40:11.210288 master-0 kubenswrapper[38936]: I0216 21:40:11.210180 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5588466b7-6rghh" podStartSLOduration=4.210153975 podStartE2EDuration="4.210153975s" podCreationTimestamp="2026-02-16 21:40:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:11.208231403 +0000 UTC m=+1041.560234765" watchObservedRunningTime="2026-02-16 21:40:11.210153975 +0000 UTC m=+1041.562157337" Feb 16 21:40:12.192194 master-0 kubenswrapper[38936]: I0216 21:40:12.192122 38936 generic.go:334] "Generic (PLEG): container finished" podID="2535ec49-180c-46b4-85b5-cfe29e147b92" containerID="0a89608b03ae3c85cd9588fd90384e66af6fc4f5cb470cbfbc36b9fbca504076" exitCode=143 Feb 16 21:40:12.192794 master-0 kubenswrapper[38936]: I0216 21:40:12.192206 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2535ec49-180c-46b4-85b5-cfe29e147b92","Type":"ContainerDied","Data":"0a89608b03ae3c85cd9588fd90384e66af6fc4f5cb470cbfbc36b9fbca504076"} Feb 16 21:40:14.748269 master-0 kubenswrapper[38936]: I0216 21:40:14.748211 38936 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:40:14.752082 master-0 kubenswrapper[38936]: I0216 21:40:14.752048 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:40:14.768567 master-0 kubenswrapper[38936]: I0216 21:40:14.768511 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:40:14.919995 master-0 kubenswrapper[38936]: I0216 21:40:14.919915 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2535ec49-180c-46b4-85b5-cfe29e147b92-logs\") pod \"2535ec49-180c-46b4-85b5-cfe29e147b92\" (UID: \"2535ec49-180c-46b4-85b5-cfe29e147b92\") " Feb 16 21:40:14.919995 master-0 kubenswrapper[38936]: I0216 21:40:14.919995 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2535ec49-180c-46b4-85b5-cfe29e147b92-combined-ca-bundle\") pod \"2535ec49-180c-46b4-85b5-cfe29e147b92\" (UID: \"2535ec49-180c-46b4-85b5-cfe29e147b92\") " Feb 16 21:40:14.920286 master-0 kubenswrapper[38936]: I0216 21:40:14.920050 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfnx9\" (UniqueName: \"kubernetes.io/projected/2535ec49-180c-46b4-85b5-cfe29e147b92-kube-api-access-bfnx9\") pod \"2535ec49-180c-46b4-85b5-cfe29e147b92\" (UID: \"2535ec49-180c-46b4-85b5-cfe29e147b92\") " Feb 16 21:40:14.920286 master-0 kubenswrapper[38936]: I0216 21:40:14.920171 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2535ec49-180c-46b4-85b5-cfe29e147b92-config-data\") pod \"2535ec49-180c-46b4-85b5-cfe29e147b92\" (UID: \"2535ec49-180c-46b4-85b5-cfe29e147b92\") " Feb 16 21:40:14.924339 master-0 
kubenswrapper[38936]: I0216 21:40:14.924299 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2535ec49-180c-46b4-85b5-cfe29e147b92-logs" (OuterVolumeSpecName: "logs") pod "2535ec49-180c-46b4-85b5-cfe29e147b92" (UID: "2535ec49-180c-46b4-85b5-cfe29e147b92"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:40:14.927645 master-0 kubenswrapper[38936]: I0216 21:40:14.927585 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2535ec49-180c-46b4-85b5-cfe29e147b92-kube-api-access-bfnx9" (OuterVolumeSpecName: "kube-api-access-bfnx9") pod "2535ec49-180c-46b4-85b5-cfe29e147b92" (UID: "2535ec49-180c-46b4-85b5-cfe29e147b92"). InnerVolumeSpecName "kube-api-access-bfnx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:40:14.968790 master-0 kubenswrapper[38936]: I0216 21:40:14.968519 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2535ec49-180c-46b4-85b5-cfe29e147b92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2535ec49-180c-46b4-85b5-cfe29e147b92" (UID: "2535ec49-180c-46b4-85b5-cfe29e147b92"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:40:14.970883 master-0 kubenswrapper[38936]: I0216 21:40:14.970426 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2535ec49-180c-46b4-85b5-cfe29e147b92-config-data" (OuterVolumeSpecName: "config-data") pod "2535ec49-180c-46b4-85b5-cfe29e147b92" (UID: "2535ec49-180c-46b4-85b5-cfe29e147b92"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:40:15.023668 master-0 kubenswrapper[38936]: I0216 21:40:15.023513 38936 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2535ec49-180c-46b4-85b5-cfe29e147b92-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:15.023668 master-0 kubenswrapper[38936]: I0216 21:40:15.023574 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2535ec49-180c-46b4-85b5-cfe29e147b92-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:15.023668 master-0 kubenswrapper[38936]: I0216 21:40:15.023595 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfnx9\" (UniqueName: \"kubernetes.io/projected/2535ec49-180c-46b4-85b5-cfe29e147b92-kube-api-access-bfnx9\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:15.023668 master-0 kubenswrapper[38936]: I0216 21:40:15.023607 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2535ec49-180c-46b4-85b5-cfe29e147b92-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:15.226979 master-0 kubenswrapper[38936]: I0216 21:40:15.226849 38936 generic.go:334] "Generic (PLEG): container finished" podID="2535ec49-180c-46b4-85b5-cfe29e147b92" containerID="ffc9eb81379713d640ef805bd21e7eaa6cd84f7c8e4a5ff628ff48cfe0e4c1e1" exitCode=0 Feb 16 21:40:15.226979 master-0 kubenswrapper[38936]: I0216 21:40:15.226947 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2535ec49-180c-46b4-85b5-cfe29e147b92","Type":"ContainerDied","Data":"ffc9eb81379713d640ef805bd21e7eaa6cd84f7c8e4a5ff628ff48cfe0e4c1e1"} Feb 16 21:40:15.227209 master-0 kubenswrapper[38936]: I0216 21:40:15.227012 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"2535ec49-180c-46b4-85b5-cfe29e147b92","Type":"ContainerDied","Data":"1a394e157d98e573c2ff4f8bbaded8727a81ee45af1cb2ea6bb8ea58f4cba9ee"} Feb 16 21:40:15.227209 master-0 kubenswrapper[38936]: I0216 21:40:15.227033 38936 scope.go:117] "RemoveContainer" containerID="ffc9eb81379713d640ef805bd21e7eaa6cd84f7c8e4a5ff628ff48cfe0e4c1e1" Feb 16 21:40:15.227407 master-0 kubenswrapper[38936]: I0216 21:40:15.227372 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:40:15.242982 master-0 kubenswrapper[38936]: I0216 21:40:15.242913 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 16 21:40:15.261639 master-0 kubenswrapper[38936]: I0216 21:40:15.261565 38936 scope.go:117] "RemoveContainer" containerID="0a89608b03ae3c85cd9588fd90384e66af6fc4f5cb470cbfbc36b9fbca504076" Feb 16 21:40:15.276610 master-0 kubenswrapper[38936]: I0216 21:40:15.276533 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:40:15.307018 master-0 kubenswrapper[38936]: I0216 21:40:15.304640 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:40:15.320291 master-0 kubenswrapper[38936]: I0216 21:40:15.311385 38936 scope.go:117] "RemoveContainer" containerID="ffc9eb81379713d640ef805bd21e7eaa6cd84f7c8e4a5ff628ff48cfe0e4c1e1" Feb 16 21:40:15.320291 master-0 kubenswrapper[38936]: E0216 21:40:15.312111 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffc9eb81379713d640ef805bd21e7eaa6cd84f7c8e4a5ff628ff48cfe0e4c1e1\": container with ID starting with ffc9eb81379713d640ef805bd21e7eaa6cd84f7c8e4a5ff628ff48cfe0e4c1e1 not found: ID does not exist" containerID="ffc9eb81379713d640ef805bd21e7eaa6cd84f7c8e4a5ff628ff48cfe0e4c1e1" Feb 16 21:40:15.320291 master-0 kubenswrapper[38936]: I0216 21:40:15.312163 38936 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffc9eb81379713d640ef805bd21e7eaa6cd84f7c8e4a5ff628ff48cfe0e4c1e1"} err="failed to get container status \"ffc9eb81379713d640ef805bd21e7eaa6cd84f7c8e4a5ff628ff48cfe0e4c1e1\": rpc error: code = NotFound desc = could not find container \"ffc9eb81379713d640ef805bd21e7eaa6cd84f7c8e4a5ff628ff48cfe0e4c1e1\": container with ID starting with ffc9eb81379713d640ef805bd21e7eaa6cd84f7c8e4a5ff628ff48cfe0e4c1e1 not found: ID does not exist" Feb 16 21:40:15.320291 master-0 kubenswrapper[38936]: I0216 21:40:15.312193 38936 scope.go:117] "RemoveContainer" containerID="0a89608b03ae3c85cd9588fd90384e66af6fc4f5cb470cbfbc36b9fbca504076" Feb 16 21:40:15.320291 master-0 kubenswrapper[38936]: E0216 21:40:15.312623 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a89608b03ae3c85cd9588fd90384e66af6fc4f5cb470cbfbc36b9fbca504076\": container with ID starting with 0a89608b03ae3c85cd9588fd90384e66af6fc4f5cb470cbfbc36b9fbca504076 not found: ID does not exist" containerID="0a89608b03ae3c85cd9588fd90384e66af6fc4f5cb470cbfbc36b9fbca504076" Feb 16 21:40:15.320291 master-0 kubenswrapper[38936]: I0216 21:40:15.312665 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a89608b03ae3c85cd9588fd90384e66af6fc4f5cb470cbfbc36b9fbca504076"} err="failed to get container status \"0a89608b03ae3c85cd9588fd90384e66af6fc4f5cb470cbfbc36b9fbca504076\": rpc error: code = NotFound desc = could not find container \"0a89608b03ae3c85cd9588fd90384e66af6fc4f5cb470cbfbc36b9fbca504076\": container with ID starting with 0a89608b03ae3c85cd9588fd90384e66af6fc4f5cb470cbfbc36b9fbca504076 not found: ID does not exist" Feb 16 21:40:15.332738 master-0 kubenswrapper[38936]: I0216 21:40:15.332696 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 21:40:15.333461 master-0 
kubenswrapper[38936]: E0216 21:40:15.333433 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2535ec49-180c-46b4-85b5-cfe29e147b92" containerName="nova-api-log" Feb 16 21:40:15.333527 master-0 kubenswrapper[38936]: I0216 21:40:15.333464 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="2535ec49-180c-46b4-85b5-cfe29e147b92" containerName="nova-api-log" Feb 16 21:40:15.333562 master-0 kubenswrapper[38936]: E0216 21:40:15.333551 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2535ec49-180c-46b4-85b5-cfe29e147b92" containerName="nova-api-api" Feb 16 21:40:15.333602 master-0 kubenswrapper[38936]: I0216 21:40:15.333562 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="2535ec49-180c-46b4-85b5-cfe29e147b92" containerName="nova-api-api" Feb 16 21:40:15.334881 master-0 kubenswrapper[38936]: I0216 21:40:15.334434 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="2535ec49-180c-46b4-85b5-cfe29e147b92" containerName="nova-api-log" Feb 16 21:40:15.334881 master-0 kubenswrapper[38936]: I0216 21:40:15.334476 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="2535ec49-180c-46b4-85b5-cfe29e147b92" containerName="nova-api-api" Feb 16 21:40:15.338273 master-0 kubenswrapper[38936]: I0216 21:40:15.338148 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:40:15.341963 master-0 kubenswrapper[38936]: I0216 21:40:15.341521 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 21:40:15.341963 master-0 kubenswrapper[38936]: I0216 21:40:15.341816 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 21:40:15.342759 master-0 kubenswrapper[38936]: I0216 21:40:15.342727 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 21:40:15.366004 master-0 kubenswrapper[38936]: I0216 21:40:15.365934 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:40:15.432521 master-0 kubenswrapper[38936]: I0216 21:40:15.432454 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/308eef78-6d8a-40fe-8416-51efa88d39fc-public-tls-certs\") pod \"nova-api-0\" (UID: \"308eef78-6d8a-40fe-8416-51efa88d39fc\") " pod="openstack/nova-api-0" Feb 16 21:40:15.432521 master-0 kubenswrapper[38936]: I0216 21:40:15.432528 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/308eef78-6d8a-40fe-8416-51efa88d39fc-config-data\") pod \"nova-api-0\" (UID: \"308eef78-6d8a-40fe-8416-51efa88d39fc\") " pod="openstack/nova-api-0" Feb 16 21:40:15.432854 master-0 kubenswrapper[38936]: I0216 21:40:15.432681 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/308eef78-6d8a-40fe-8416-51efa88d39fc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"308eef78-6d8a-40fe-8416-51efa88d39fc\") " pod="openstack/nova-api-0" Feb 16 21:40:15.432854 master-0 kubenswrapper[38936]: I0216 21:40:15.432725 38936 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/308eef78-6d8a-40fe-8416-51efa88d39fc-internal-tls-certs\") pod \"nova-api-0\" (UID: \"308eef78-6d8a-40fe-8416-51efa88d39fc\") " pod="openstack/nova-api-0" Feb 16 21:40:15.433186 master-0 kubenswrapper[38936]: I0216 21:40:15.433049 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/308eef78-6d8a-40fe-8416-51efa88d39fc-logs\") pod \"nova-api-0\" (UID: \"308eef78-6d8a-40fe-8416-51efa88d39fc\") " pod="openstack/nova-api-0" Feb 16 21:40:15.433186 master-0 kubenswrapper[38936]: I0216 21:40:15.433174 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gkbb\" (UniqueName: \"kubernetes.io/projected/308eef78-6d8a-40fe-8416-51efa88d39fc-kube-api-access-5gkbb\") pod \"nova-api-0\" (UID: \"308eef78-6d8a-40fe-8416-51efa88d39fc\") " pod="openstack/nova-api-0" Feb 16 21:40:15.535683 master-0 kubenswrapper[38936]: I0216 21:40:15.535611 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/308eef78-6d8a-40fe-8416-51efa88d39fc-public-tls-certs\") pod \"nova-api-0\" (UID: \"308eef78-6d8a-40fe-8416-51efa88d39fc\") " pod="openstack/nova-api-0" Feb 16 21:40:15.535972 master-0 kubenswrapper[38936]: I0216 21:40:15.535692 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/308eef78-6d8a-40fe-8416-51efa88d39fc-config-data\") pod \"nova-api-0\" (UID: \"308eef78-6d8a-40fe-8416-51efa88d39fc\") " pod="openstack/nova-api-0" Feb 16 21:40:15.538081 master-0 kubenswrapper[38936]: I0216 21:40:15.538031 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/308eef78-6d8a-40fe-8416-51efa88d39fc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"308eef78-6d8a-40fe-8416-51efa88d39fc\") " pod="openstack/nova-api-0" Feb 16 21:40:15.538609 master-0 kubenswrapper[38936]: I0216 21:40:15.538252 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/308eef78-6d8a-40fe-8416-51efa88d39fc-internal-tls-certs\") pod \"nova-api-0\" (UID: \"308eef78-6d8a-40fe-8416-51efa88d39fc\") " pod="openstack/nova-api-0" Feb 16 21:40:15.538609 master-0 kubenswrapper[38936]: I0216 21:40:15.538533 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/308eef78-6d8a-40fe-8416-51efa88d39fc-logs\") pod \"nova-api-0\" (UID: \"308eef78-6d8a-40fe-8416-51efa88d39fc\") " pod="openstack/nova-api-0" Feb 16 21:40:15.538783 master-0 kubenswrapper[38936]: I0216 21:40:15.538636 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gkbb\" (UniqueName: \"kubernetes.io/projected/308eef78-6d8a-40fe-8416-51efa88d39fc-kube-api-access-5gkbb\") pod \"nova-api-0\" (UID: \"308eef78-6d8a-40fe-8416-51efa88d39fc\") " pod="openstack/nova-api-0" Feb 16 21:40:15.539158 master-0 kubenswrapper[38936]: I0216 21:40:15.539124 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/308eef78-6d8a-40fe-8416-51efa88d39fc-logs\") pod \"nova-api-0\" (UID: \"308eef78-6d8a-40fe-8416-51efa88d39fc\") " pod="openstack/nova-api-0" Feb 16 21:40:15.540839 master-0 kubenswrapper[38936]: I0216 21:40:15.540794 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/308eef78-6d8a-40fe-8416-51efa88d39fc-public-tls-certs\") pod \"nova-api-0\" (UID: \"308eef78-6d8a-40fe-8416-51efa88d39fc\") " pod="openstack/nova-api-0" Feb 16 21:40:15.541239 
master-0 kubenswrapper[38936]: I0216 21:40:15.541186 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/308eef78-6d8a-40fe-8416-51efa88d39fc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"308eef78-6d8a-40fe-8416-51efa88d39fc\") " pod="openstack/nova-api-0" Feb 16 21:40:15.542543 master-0 kubenswrapper[38936]: I0216 21:40:15.542489 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/308eef78-6d8a-40fe-8416-51efa88d39fc-internal-tls-certs\") pod \"nova-api-0\" (UID: \"308eef78-6d8a-40fe-8416-51efa88d39fc\") " pod="openstack/nova-api-0" Feb 16 21:40:15.543161 master-0 kubenswrapper[38936]: I0216 21:40:15.543111 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/308eef78-6d8a-40fe-8416-51efa88d39fc-config-data\") pod \"nova-api-0\" (UID: \"308eef78-6d8a-40fe-8416-51efa88d39fc\") " pod="openstack/nova-api-0" Feb 16 21:40:15.565166 master-0 kubenswrapper[38936]: I0216 21:40:15.565114 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gkbb\" (UniqueName: \"kubernetes.io/projected/308eef78-6d8a-40fe-8416-51efa88d39fc-kube-api-access-5gkbb\") pod \"nova-api-0\" (UID: \"308eef78-6d8a-40fe-8416-51efa88d39fc\") " pod="openstack/nova-api-0" Feb 16 21:40:15.600982 master-0 kubenswrapper[38936]: I0216 21:40:15.600914 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-p7jjg"] Feb 16 21:40:15.604104 master-0 kubenswrapper[38936]: I0216 21:40:15.603471 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-p7jjg" Feb 16 21:40:15.608034 master-0 kubenswrapper[38936]: I0216 21:40:15.607636 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 16 21:40:15.609874 master-0 kubenswrapper[38936]: I0216 21:40:15.608219 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 16 21:40:15.622619 master-0 kubenswrapper[38936]: I0216 21:40:15.622557 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-p7jjg"] Feb 16 21:40:15.677765 master-0 kubenswrapper[38936]: I0216 21:40:15.674704 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:40:15.706223 master-0 kubenswrapper[38936]: I0216 21:40:15.706137 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-host-discover-wrm7p"] Feb 16 21:40:15.711782 master-0 kubenswrapper[38936]: I0216 21:40:15.711708 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-wrm7p" Feb 16 21:40:15.746400 master-0 kubenswrapper[38936]: I0216 21:40:15.745303 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bca15ca0-b308-47ca-ad27-856b6b2d928e-config-data\") pod \"nova-cell1-cell-mapping-p7jjg\" (UID: \"bca15ca0-b308-47ca-ad27-856b6b2d928e\") " pod="openstack/nova-cell1-cell-mapping-p7jjg" Feb 16 21:40:15.746400 master-0 kubenswrapper[38936]: I0216 21:40:15.745388 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bca15ca0-b308-47ca-ad27-856b6b2d928e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-p7jjg\" (UID: \"bca15ca0-b308-47ca-ad27-856b6b2d928e\") " pod="openstack/nova-cell1-cell-mapping-p7jjg" Feb 16 21:40:15.746400 master-0 kubenswrapper[38936]: I0216 21:40:15.745419 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bca15ca0-b308-47ca-ad27-856b6b2d928e-scripts\") pod \"nova-cell1-cell-mapping-p7jjg\" (UID: \"bca15ca0-b308-47ca-ad27-856b6b2d928e\") " pod="openstack/nova-cell1-cell-mapping-p7jjg" Feb 16 21:40:15.746400 master-0 kubenswrapper[38936]: I0216 21:40:15.745458 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv68v\" (UniqueName: \"kubernetes.io/projected/bca15ca0-b308-47ca-ad27-856b6b2d928e-kube-api-access-wv68v\") pod \"nova-cell1-cell-mapping-p7jjg\" (UID: \"bca15ca0-b308-47ca-ad27-856b6b2d928e\") " pod="openstack/nova-cell1-cell-mapping-p7jjg" Feb 16 21:40:15.747476 master-0 kubenswrapper[38936]: I0216 21:40:15.747377 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-wrm7p"] Feb 16 21:40:15.848011 master-0 kubenswrapper[38936]: 
I0216 21:40:15.847928 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbf44af2-ce58-40e0-af03-ee3c9bcc1519-config-data\") pod \"nova-cell1-host-discover-wrm7p\" (UID: \"fbf44af2-ce58-40e0-af03-ee3c9bcc1519\") " pod="openstack/nova-cell1-host-discover-wrm7p" Feb 16 21:40:15.848011 master-0 kubenswrapper[38936]: I0216 21:40:15.848013 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbf44af2-ce58-40e0-af03-ee3c9bcc1519-combined-ca-bundle\") pod \"nova-cell1-host-discover-wrm7p\" (UID: \"fbf44af2-ce58-40e0-af03-ee3c9bcc1519\") " pod="openstack/nova-cell1-host-discover-wrm7p" Feb 16 21:40:15.848744 master-0 kubenswrapper[38936]: I0216 21:40:15.848045 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbf44af2-ce58-40e0-af03-ee3c9bcc1519-scripts\") pod \"nova-cell1-host-discover-wrm7p\" (UID: \"fbf44af2-ce58-40e0-af03-ee3c9bcc1519\") " pod="openstack/nova-cell1-host-discover-wrm7p" Feb 16 21:40:15.848744 master-0 kubenswrapper[38936]: I0216 21:40:15.848099 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bca15ca0-b308-47ca-ad27-856b6b2d928e-config-data\") pod \"nova-cell1-cell-mapping-p7jjg\" (UID: \"bca15ca0-b308-47ca-ad27-856b6b2d928e\") " pod="openstack/nova-cell1-cell-mapping-p7jjg" Feb 16 21:40:15.848744 master-0 kubenswrapper[38936]: I0216 21:40:15.848121 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbw75\" (UniqueName: \"kubernetes.io/projected/fbf44af2-ce58-40e0-af03-ee3c9bcc1519-kube-api-access-bbw75\") pod \"nova-cell1-host-discover-wrm7p\" (UID: \"fbf44af2-ce58-40e0-af03-ee3c9bcc1519\") " 
pod="openstack/nova-cell1-host-discover-wrm7p" Feb 16 21:40:15.848744 master-0 kubenswrapper[38936]: I0216 21:40:15.848151 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bca15ca0-b308-47ca-ad27-856b6b2d928e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-p7jjg\" (UID: \"bca15ca0-b308-47ca-ad27-856b6b2d928e\") " pod="openstack/nova-cell1-cell-mapping-p7jjg" Feb 16 21:40:15.848744 master-0 kubenswrapper[38936]: I0216 21:40:15.848176 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bca15ca0-b308-47ca-ad27-856b6b2d928e-scripts\") pod \"nova-cell1-cell-mapping-p7jjg\" (UID: \"bca15ca0-b308-47ca-ad27-856b6b2d928e\") " pod="openstack/nova-cell1-cell-mapping-p7jjg" Feb 16 21:40:15.848744 master-0 kubenswrapper[38936]: I0216 21:40:15.848208 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv68v\" (UniqueName: \"kubernetes.io/projected/bca15ca0-b308-47ca-ad27-856b6b2d928e-kube-api-access-wv68v\") pod \"nova-cell1-cell-mapping-p7jjg\" (UID: \"bca15ca0-b308-47ca-ad27-856b6b2d928e\") " pod="openstack/nova-cell1-cell-mapping-p7jjg" Feb 16 21:40:15.855999 master-0 kubenswrapper[38936]: I0216 21:40:15.854444 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bca15ca0-b308-47ca-ad27-856b6b2d928e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-p7jjg\" (UID: \"bca15ca0-b308-47ca-ad27-856b6b2d928e\") " pod="openstack/nova-cell1-cell-mapping-p7jjg" Feb 16 21:40:15.855999 master-0 kubenswrapper[38936]: I0216 21:40:15.855041 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bca15ca0-b308-47ca-ad27-856b6b2d928e-config-data\") pod \"nova-cell1-cell-mapping-p7jjg\" (UID: 
\"bca15ca0-b308-47ca-ad27-856b6b2d928e\") " pod="openstack/nova-cell1-cell-mapping-p7jjg" Feb 16 21:40:15.886254 master-0 kubenswrapper[38936]: I0216 21:40:15.866514 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bca15ca0-b308-47ca-ad27-856b6b2d928e-scripts\") pod \"nova-cell1-cell-mapping-p7jjg\" (UID: \"bca15ca0-b308-47ca-ad27-856b6b2d928e\") " pod="openstack/nova-cell1-cell-mapping-p7jjg" Feb 16 21:40:15.886254 master-0 kubenswrapper[38936]: I0216 21:40:15.880446 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv68v\" (UniqueName: \"kubernetes.io/projected/bca15ca0-b308-47ca-ad27-856b6b2d928e-kube-api-access-wv68v\") pod \"nova-cell1-cell-mapping-p7jjg\" (UID: \"bca15ca0-b308-47ca-ad27-856b6b2d928e\") " pod="openstack/nova-cell1-cell-mapping-p7jjg" Feb 16 21:40:15.909677 master-0 kubenswrapper[38936]: I0216 21:40:15.909593 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2535ec49-180c-46b4-85b5-cfe29e147b92" path="/var/lib/kubelet/pods/2535ec49-180c-46b4-85b5-cfe29e147b92/volumes" Feb 16 21:40:15.950523 master-0 kubenswrapper[38936]: I0216 21:40:15.950441 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbw75\" (UniqueName: \"kubernetes.io/projected/fbf44af2-ce58-40e0-af03-ee3c9bcc1519-kube-api-access-bbw75\") pod \"nova-cell1-host-discover-wrm7p\" (UID: \"fbf44af2-ce58-40e0-af03-ee3c9bcc1519\") " pod="openstack/nova-cell1-host-discover-wrm7p" Feb 16 21:40:15.953275 master-0 kubenswrapper[38936]: I0216 21:40:15.953200 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbf44af2-ce58-40e0-af03-ee3c9bcc1519-config-data\") pod \"nova-cell1-host-discover-wrm7p\" (UID: \"fbf44af2-ce58-40e0-af03-ee3c9bcc1519\") " pod="openstack/nova-cell1-host-discover-wrm7p" Feb 16 21:40:15.953522 master-0 
kubenswrapper[38936]: I0216 21:40:15.953304 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbf44af2-ce58-40e0-af03-ee3c9bcc1519-combined-ca-bundle\") pod \"nova-cell1-host-discover-wrm7p\" (UID: \"fbf44af2-ce58-40e0-af03-ee3c9bcc1519\") " pod="openstack/nova-cell1-host-discover-wrm7p" Feb 16 21:40:15.953522 master-0 kubenswrapper[38936]: I0216 21:40:15.953464 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbf44af2-ce58-40e0-af03-ee3c9bcc1519-scripts\") pod \"nova-cell1-host-discover-wrm7p\" (UID: \"fbf44af2-ce58-40e0-af03-ee3c9bcc1519\") " pod="openstack/nova-cell1-host-discover-wrm7p" Feb 16 21:40:15.957507 master-0 kubenswrapper[38936]: I0216 21:40:15.957475 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbf44af2-ce58-40e0-af03-ee3c9bcc1519-scripts\") pod \"nova-cell1-host-discover-wrm7p\" (UID: \"fbf44af2-ce58-40e0-af03-ee3c9bcc1519\") " pod="openstack/nova-cell1-host-discover-wrm7p" Feb 16 21:40:15.960936 master-0 kubenswrapper[38936]: I0216 21:40:15.960882 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbf44af2-ce58-40e0-af03-ee3c9bcc1519-combined-ca-bundle\") pod \"nova-cell1-host-discover-wrm7p\" (UID: \"fbf44af2-ce58-40e0-af03-ee3c9bcc1519\") " pod="openstack/nova-cell1-host-discover-wrm7p" Feb 16 21:40:15.966097 master-0 kubenswrapper[38936]: I0216 21:40:15.966032 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbf44af2-ce58-40e0-af03-ee3c9bcc1519-config-data\") pod \"nova-cell1-host-discover-wrm7p\" (UID: \"fbf44af2-ce58-40e0-af03-ee3c9bcc1519\") " pod="openstack/nova-cell1-host-discover-wrm7p" Feb 16 21:40:15.969337 master-0 kubenswrapper[38936]: 
I0216 21:40:15.969286 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbw75\" (UniqueName: \"kubernetes.io/projected/fbf44af2-ce58-40e0-af03-ee3c9bcc1519-kube-api-access-bbw75\") pod \"nova-cell1-host-discover-wrm7p\" (UID: \"fbf44af2-ce58-40e0-af03-ee3c9bcc1519\") " pod="openstack/nova-cell1-host-discover-wrm7p" Feb 16 21:40:16.108464 master-0 kubenswrapper[38936]: I0216 21:40:16.108285 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-p7jjg" Feb 16 21:40:16.133240 master-0 kubenswrapper[38936]: I0216 21:40:16.133168 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-wrm7p" Feb 16 21:40:16.241874 master-0 kubenswrapper[38936]: I0216 21:40:16.241081 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:40:16.254407 master-0 kubenswrapper[38936]: W0216 21:40:16.254314 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod308eef78_6d8a_40fe_8416_51efa88d39fc.slice/crio-ba3d0d98279d6d94c7e4577e621b00440426e2a3436e95fbaf33c8b832e2c744 WatchSource:0}: Error finding container ba3d0d98279d6d94c7e4577e621b00440426e2a3436e95fbaf33c8b832e2c744: Status 404 returned error can't find the container with id ba3d0d98279d6d94c7e4577e621b00440426e2a3436e95fbaf33c8b832e2c744 Feb 16 21:40:16.671435 master-0 kubenswrapper[38936]: I0216 21:40:16.671360 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-p7jjg"] Feb 16 21:40:16.798160 master-0 kubenswrapper[38936]: W0216 21:40:16.798105 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbf44af2_ce58_40e0_af03_ee3c9bcc1519.slice/crio-eccdf9ea2fef0f80e88217c5a1865649b2058d0ffd64a0009fb2602ea7a0a8a1 WatchSource:0}: Error 
finding container eccdf9ea2fef0f80e88217c5a1865649b2058d0ffd64a0009fb2602ea7a0a8a1: Status 404 returned error can't find the container with id eccdf9ea2fef0f80e88217c5a1865649b2058d0ffd64a0009fb2602ea7a0a8a1 Feb 16 21:40:16.800835 master-0 kubenswrapper[38936]: I0216 21:40:16.800761 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-wrm7p"] Feb 16 21:40:17.261887 master-0 kubenswrapper[38936]: I0216 21:40:17.261816 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"308eef78-6d8a-40fe-8416-51efa88d39fc","Type":"ContainerStarted","Data":"f6ab133b1a2730073efb76f7e0263a4ac9f8bd6f9dfeccda0119e22e0ec4fe86"} Feb 16 21:40:17.261887 master-0 kubenswrapper[38936]: I0216 21:40:17.261888 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"308eef78-6d8a-40fe-8416-51efa88d39fc","Type":"ContainerStarted","Data":"e49f840d110cc59900dd965558a9f83d1ad7ba3b269e9d6e2f6a2da76900ae29"} Feb 16 21:40:17.262473 master-0 kubenswrapper[38936]: I0216 21:40:17.261901 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"308eef78-6d8a-40fe-8416-51efa88d39fc","Type":"ContainerStarted","Data":"ba3d0d98279d6d94c7e4577e621b00440426e2a3436e95fbaf33c8b832e2c744"} Feb 16 21:40:17.263826 master-0 kubenswrapper[38936]: I0216 21:40:17.263781 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-wrm7p" event={"ID":"fbf44af2-ce58-40e0-af03-ee3c9bcc1519","Type":"ContainerStarted","Data":"77a439020c0ea518cdeba9c691d3498079fef606e6c93dc5eae9ccc4543722c3"} Feb 16 21:40:17.263958 master-0 kubenswrapper[38936]: I0216 21:40:17.263851 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-wrm7p" event={"ID":"fbf44af2-ce58-40e0-af03-ee3c9bcc1519","Type":"ContainerStarted","Data":"eccdf9ea2fef0f80e88217c5a1865649b2058d0ffd64a0009fb2602ea7a0a8a1"} Feb 16 
21:40:17.265613 master-0 kubenswrapper[38936]: I0216 21:40:17.265533 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-p7jjg" event={"ID":"bca15ca0-b308-47ca-ad27-856b6b2d928e","Type":"ContainerStarted","Data":"0c53d56f9145b098b1c210595b9ca11f3176a6b5489ef97355a44bd43ab3f516"} Feb 16 21:40:17.265613 master-0 kubenswrapper[38936]: I0216 21:40:17.265590 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-p7jjg" event={"ID":"bca15ca0-b308-47ca-ad27-856b6b2d928e","Type":"ContainerStarted","Data":"34f16279cd50248e3c2fdf65c1527a858bcdd23d8d1770047eb3e695cbb0dbfe"} Feb 16 21:40:17.299011 master-0 kubenswrapper[38936]: I0216 21:40:17.298908 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.298864194 podStartE2EDuration="2.298864194s" podCreationTimestamp="2026-02-16 21:40:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:17.289522615 +0000 UTC m=+1047.641525997" watchObservedRunningTime="2026-02-16 21:40:17.298864194 +0000 UTC m=+1047.650867556" Feb 16 21:40:17.340599 master-0 kubenswrapper[38936]: I0216 21:40:17.340492 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-host-discover-wrm7p" podStartSLOduration=2.340463495 podStartE2EDuration="2.340463495s" podCreationTimestamp="2026-02-16 21:40:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:17.324800757 +0000 UTC m=+1047.676804119" watchObservedRunningTime="2026-02-16 21:40:17.340463495 +0000 UTC m=+1047.692466857" Feb 16 21:40:17.361705 master-0 kubenswrapper[38936]: I0216 21:40:17.361586 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-cell-mapping-p7jjg" podStartSLOduration=2.361566479 podStartE2EDuration="2.361566479s" podCreationTimestamp="2026-02-16 21:40:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:17.353015531 +0000 UTC m=+1047.705018893" watchObservedRunningTime="2026-02-16 21:40:17.361566479 +0000 UTC m=+1047.713569841" Feb 16 21:40:18.391918 master-0 kubenswrapper[38936]: I0216 21:40:18.391843 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5588466b7-6rghh" Feb 16 21:40:18.520341 master-0 kubenswrapper[38936]: I0216 21:40:18.520266 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-846fc68895-n6hmv"] Feb 16 21:40:18.520637 master-0 kubenswrapper[38936]: I0216 21:40:18.520590 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-846fc68895-n6hmv" podUID="b96444d1-ae55-4560-a3a6-b75072a1271f" containerName="dnsmasq-dns" containerID="cri-o://3a4c82f46061e6f1a49f165d479ce2f703cdace112fa05a1d2da4c2b2d5611b9" gracePeriod=10 Feb 16 21:40:19.248329 master-0 kubenswrapper[38936]: I0216 21:40:19.248259 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-846fc68895-n6hmv" Feb 16 21:40:19.301564 master-0 kubenswrapper[38936]: I0216 21:40:19.301437 38936 generic.go:334] "Generic (PLEG): container finished" podID="b96444d1-ae55-4560-a3a6-b75072a1271f" containerID="3a4c82f46061e6f1a49f165d479ce2f703cdace112fa05a1d2da4c2b2d5611b9" exitCode=0 Feb 16 21:40:19.301564 master-0 kubenswrapper[38936]: I0216 21:40:19.301499 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-846fc68895-n6hmv" Feb 16 21:40:19.301564 master-0 kubenswrapper[38936]: I0216 21:40:19.301505 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-846fc68895-n6hmv" event={"ID":"b96444d1-ae55-4560-a3a6-b75072a1271f","Type":"ContainerDied","Data":"3a4c82f46061e6f1a49f165d479ce2f703cdace112fa05a1d2da4c2b2d5611b9"} Feb 16 21:40:19.301861 master-0 kubenswrapper[38936]: I0216 21:40:19.301642 38936 scope.go:117] "RemoveContainer" containerID="3a4c82f46061e6f1a49f165d479ce2f703cdace112fa05a1d2da4c2b2d5611b9" Feb 16 21:40:19.301861 master-0 kubenswrapper[38936]: I0216 21:40:19.301736 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-846fc68895-n6hmv" event={"ID":"b96444d1-ae55-4560-a3a6-b75072a1271f","Type":"ContainerDied","Data":"afc7db148ac623c2f22708c190b1060eec6a127b0446b9f5504c5861a7677ffa"} Feb 16 21:40:19.341643 master-0 kubenswrapper[38936]: I0216 21:40:19.341593 38936 scope.go:117] "RemoveContainer" containerID="e70ce9c3dfc993acf95be182d1a9ec783e77d96e6f7a06bb96f10f9f63ca467f" Feb 16 21:40:19.379485 master-0 kubenswrapper[38936]: I0216 21:40:19.374513 38936 scope.go:117] "RemoveContainer" containerID="3a4c82f46061e6f1a49f165d479ce2f703cdace112fa05a1d2da4c2b2d5611b9" Feb 16 21:40:19.380772 master-0 kubenswrapper[38936]: E0216 21:40:19.380615 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a4c82f46061e6f1a49f165d479ce2f703cdace112fa05a1d2da4c2b2d5611b9\": container with ID starting with 3a4c82f46061e6f1a49f165d479ce2f703cdace112fa05a1d2da4c2b2d5611b9 not found: ID does not exist" containerID="3a4c82f46061e6f1a49f165d479ce2f703cdace112fa05a1d2da4c2b2d5611b9" Feb 16 21:40:19.380772 master-0 kubenswrapper[38936]: I0216 21:40:19.380714 38936 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3a4c82f46061e6f1a49f165d479ce2f703cdace112fa05a1d2da4c2b2d5611b9"} err="failed to get container status \"3a4c82f46061e6f1a49f165d479ce2f703cdace112fa05a1d2da4c2b2d5611b9\": rpc error: code = NotFound desc = could not find container \"3a4c82f46061e6f1a49f165d479ce2f703cdace112fa05a1d2da4c2b2d5611b9\": container with ID starting with 3a4c82f46061e6f1a49f165d479ce2f703cdace112fa05a1d2da4c2b2d5611b9 not found: ID does not exist" Feb 16 21:40:19.380772 master-0 kubenswrapper[38936]: I0216 21:40:19.380752 38936 scope.go:117] "RemoveContainer" containerID="e70ce9c3dfc993acf95be182d1a9ec783e77d96e6f7a06bb96f10f9f63ca467f" Feb 16 21:40:19.381505 master-0 kubenswrapper[38936]: E0216 21:40:19.381457 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e70ce9c3dfc993acf95be182d1a9ec783e77d96e6f7a06bb96f10f9f63ca467f\": container with ID starting with e70ce9c3dfc993acf95be182d1a9ec783e77d96e6f7a06bb96f10f9f63ca467f not found: ID does not exist" containerID="e70ce9c3dfc993acf95be182d1a9ec783e77d96e6f7a06bb96f10f9f63ca467f" Feb 16 21:40:19.381592 master-0 kubenswrapper[38936]: I0216 21:40:19.381518 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e70ce9c3dfc993acf95be182d1a9ec783e77d96e6f7a06bb96f10f9f63ca467f"} err="failed to get container status \"e70ce9c3dfc993acf95be182d1a9ec783e77d96e6f7a06bb96f10f9f63ca467f\": rpc error: code = NotFound desc = could not find container \"e70ce9c3dfc993acf95be182d1a9ec783e77d96e6f7a06bb96f10f9f63ca467f\": container with ID starting with e70ce9c3dfc993acf95be182d1a9ec783e77d96e6f7a06bb96f10f9f63ca467f not found: ID does not exist" Feb 16 21:40:19.420939 master-0 kubenswrapper[38936]: I0216 21:40:19.420866 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-dns-svc\") pod \"b96444d1-ae55-4560-a3a6-b75072a1271f\" (UID: \"b96444d1-ae55-4560-a3a6-b75072a1271f\") " Feb 16 21:40:19.421687 master-0 kubenswrapper[38936]: I0216 21:40:19.420993 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-ovsdbserver-sb\") pod \"b96444d1-ae55-4560-a3a6-b75072a1271f\" (UID: \"b96444d1-ae55-4560-a3a6-b75072a1271f\") " Feb 16 21:40:19.421687 master-0 kubenswrapper[38936]: I0216 21:40:19.421288 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-dns-swift-storage-0\") pod \"b96444d1-ae55-4560-a3a6-b75072a1271f\" (UID: \"b96444d1-ae55-4560-a3a6-b75072a1271f\") " Feb 16 21:40:19.421687 master-0 kubenswrapper[38936]: I0216 21:40:19.421339 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9ppd\" (UniqueName: \"kubernetes.io/projected/b96444d1-ae55-4560-a3a6-b75072a1271f-kube-api-access-f9ppd\") pod \"b96444d1-ae55-4560-a3a6-b75072a1271f\" (UID: \"b96444d1-ae55-4560-a3a6-b75072a1271f\") " Feb 16 21:40:19.421687 master-0 kubenswrapper[38936]: I0216 21:40:19.421524 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-config\") pod \"b96444d1-ae55-4560-a3a6-b75072a1271f\" (UID: \"b96444d1-ae55-4560-a3a6-b75072a1271f\") " Feb 16 21:40:19.421687 master-0 kubenswrapper[38936]: I0216 21:40:19.421579 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-ovsdbserver-nb\") pod \"b96444d1-ae55-4560-a3a6-b75072a1271f\" (UID: 
\"b96444d1-ae55-4560-a3a6-b75072a1271f\") " Feb 16 21:40:19.425677 master-0 kubenswrapper[38936]: I0216 21:40:19.425576 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b96444d1-ae55-4560-a3a6-b75072a1271f-kube-api-access-f9ppd" (OuterVolumeSpecName: "kube-api-access-f9ppd") pod "b96444d1-ae55-4560-a3a6-b75072a1271f" (UID: "b96444d1-ae55-4560-a3a6-b75072a1271f"). InnerVolumeSpecName "kube-api-access-f9ppd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:40:19.483945 master-0 kubenswrapper[38936]: I0216 21:40:19.483823 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b96444d1-ae55-4560-a3a6-b75072a1271f" (UID: "b96444d1-ae55-4560-a3a6-b75072a1271f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:40:19.487906 master-0 kubenswrapper[38936]: I0216 21:40:19.487732 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b96444d1-ae55-4560-a3a6-b75072a1271f" (UID: "b96444d1-ae55-4560-a3a6-b75072a1271f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:40:19.499299 master-0 kubenswrapper[38936]: I0216 21:40:19.499233 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-config" (OuterVolumeSpecName: "config") pod "b96444d1-ae55-4560-a3a6-b75072a1271f" (UID: "b96444d1-ae55-4560-a3a6-b75072a1271f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:40:19.508573 master-0 kubenswrapper[38936]: I0216 21:40:19.508141 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b96444d1-ae55-4560-a3a6-b75072a1271f" (UID: "b96444d1-ae55-4560-a3a6-b75072a1271f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:40:19.520758 master-0 kubenswrapper[38936]: I0216 21:40:19.512633 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b96444d1-ae55-4560-a3a6-b75072a1271f" (UID: "b96444d1-ae55-4560-a3a6-b75072a1271f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:40:19.525365 master-0 kubenswrapper[38936]: I0216 21:40:19.525303 38936 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:19.525365 master-0 kubenswrapper[38936]: I0216 21:40:19.525363 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9ppd\" (UniqueName: \"kubernetes.io/projected/b96444d1-ae55-4560-a3a6-b75072a1271f-kube-api-access-f9ppd\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:19.525365 master-0 kubenswrapper[38936]: I0216 21:40:19.525379 38936 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:19.525706 master-0 kubenswrapper[38936]: I0216 21:40:19.525396 38936 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:19.525706 master-0 kubenswrapper[38936]: I0216 21:40:19.525409 38936 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:19.525706 master-0 kubenswrapper[38936]: I0216 21:40:19.525419 38936 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b96444d1-ae55-4560-a3a6-b75072a1271f-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:19.741902 master-0 kubenswrapper[38936]: I0216 21:40:19.741842 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-846fc68895-n6hmv"] Feb 16 21:40:19.905493 master-0 kubenswrapper[38936]: I0216 21:40:19.905347 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-846fc68895-n6hmv"] Feb 16 21:40:20.316668 master-0 kubenswrapper[38936]: I0216 21:40:20.316586 38936 generic.go:334] "Generic (PLEG): container finished" podID="fbf44af2-ce58-40e0-af03-ee3c9bcc1519" containerID="77a439020c0ea518cdeba9c691d3498079fef606e6c93dc5eae9ccc4543722c3" exitCode=0 Feb 16 21:40:20.316923 master-0 kubenswrapper[38936]: I0216 21:40:20.316675 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-wrm7p" event={"ID":"fbf44af2-ce58-40e0-af03-ee3c9bcc1519","Type":"ContainerDied","Data":"77a439020c0ea518cdeba9c691d3498079fef606e6c93dc5eae9ccc4543722c3"} Feb 16 21:40:21.846265 master-0 kubenswrapper[38936]: I0216 21:40:21.844007 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-wrm7p" Feb 16 21:40:21.893143 master-0 kubenswrapper[38936]: I0216 21:40:21.893087 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b96444d1-ae55-4560-a3a6-b75072a1271f" path="/var/lib/kubelet/pods/b96444d1-ae55-4560-a3a6-b75072a1271f/volumes" Feb 16 21:40:22.036939 master-0 kubenswrapper[38936]: I0216 21:40:22.036158 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbw75\" (UniqueName: \"kubernetes.io/projected/fbf44af2-ce58-40e0-af03-ee3c9bcc1519-kube-api-access-bbw75\") pod \"fbf44af2-ce58-40e0-af03-ee3c9bcc1519\" (UID: \"fbf44af2-ce58-40e0-af03-ee3c9bcc1519\") " Feb 16 21:40:22.036939 master-0 kubenswrapper[38936]: I0216 21:40:22.036274 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbf44af2-ce58-40e0-af03-ee3c9bcc1519-combined-ca-bundle\") pod \"fbf44af2-ce58-40e0-af03-ee3c9bcc1519\" (UID: \"fbf44af2-ce58-40e0-af03-ee3c9bcc1519\") " Feb 16 21:40:22.036939 master-0 kubenswrapper[38936]: I0216 21:40:22.036392 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbf44af2-ce58-40e0-af03-ee3c9bcc1519-config-data\") pod \"fbf44af2-ce58-40e0-af03-ee3c9bcc1519\" (UID: \"fbf44af2-ce58-40e0-af03-ee3c9bcc1519\") " Feb 16 21:40:22.036939 master-0 kubenswrapper[38936]: I0216 21:40:22.036509 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbf44af2-ce58-40e0-af03-ee3c9bcc1519-scripts\") pod \"fbf44af2-ce58-40e0-af03-ee3c9bcc1519\" (UID: \"fbf44af2-ce58-40e0-af03-ee3c9bcc1519\") " Feb 16 21:40:22.039599 master-0 kubenswrapper[38936]: I0216 21:40:22.039560 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/fbf44af2-ce58-40e0-af03-ee3c9bcc1519-kube-api-access-bbw75" (OuterVolumeSpecName: "kube-api-access-bbw75") pod "fbf44af2-ce58-40e0-af03-ee3c9bcc1519" (UID: "fbf44af2-ce58-40e0-af03-ee3c9bcc1519"). InnerVolumeSpecName "kube-api-access-bbw75". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:40:22.041319 master-0 kubenswrapper[38936]: I0216 21:40:22.041260 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbf44af2-ce58-40e0-af03-ee3c9bcc1519-scripts" (OuterVolumeSpecName: "scripts") pod "fbf44af2-ce58-40e0-af03-ee3c9bcc1519" (UID: "fbf44af2-ce58-40e0-af03-ee3c9bcc1519"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:40:22.074909 master-0 kubenswrapper[38936]: I0216 21:40:22.074838 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbf44af2-ce58-40e0-af03-ee3c9bcc1519-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbf44af2-ce58-40e0-af03-ee3c9bcc1519" (UID: "fbf44af2-ce58-40e0-af03-ee3c9bcc1519"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:40:22.079839 master-0 kubenswrapper[38936]: I0216 21:40:22.079748 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbf44af2-ce58-40e0-af03-ee3c9bcc1519-config-data" (OuterVolumeSpecName: "config-data") pod "fbf44af2-ce58-40e0-af03-ee3c9bcc1519" (UID: "fbf44af2-ce58-40e0-af03-ee3c9bcc1519"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:40:22.140252 master-0 kubenswrapper[38936]: I0216 21:40:22.140170 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbw75\" (UniqueName: \"kubernetes.io/projected/fbf44af2-ce58-40e0-af03-ee3c9bcc1519-kube-api-access-bbw75\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:22.140252 master-0 kubenswrapper[38936]: I0216 21:40:22.140220 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbf44af2-ce58-40e0-af03-ee3c9bcc1519-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:22.140252 master-0 kubenswrapper[38936]: I0216 21:40:22.140230 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbf44af2-ce58-40e0-af03-ee3c9bcc1519-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:22.140252 master-0 kubenswrapper[38936]: I0216 21:40:22.140239 38936 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbf44af2-ce58-40e0-af03-ee3c9bcc1519-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:22.343619 master-0 kubenswrapper[38936]: I0216 21:40:22.343079 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-wrm7p" event={"ID":"fbf44af2-ce58-40e0-af03-ee3c9bcc1519","Type":"ContainerDied","Data":"eccdf9ea2fef0f80e88217c5a1865649b2058d0ffd64a0009fb2602ea7a0a8a1"} Feb 16 21:40:22.343619 master-0 kubenswrapper[38936]: I0216 21:40:22.343156 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eccdf9ea2fef0f80e88217c5a1865649b2058d0ffd64a0009fb2602ea7a0a8a1" Feb 16 21:40:22.343619 master-0 kubenswrapper[38936]: I0216 21:40:22.343102 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-wrm7p" Feb 16 21:40:22.348365 master-0 kubenswrapper[38936]: I0216 21:40:22.348309 38936 generic.go:334] "Generic (PLEG): container finished" podID="bca15ca0-b308-47ca-ad27-856b6b2d928e" containerID="0c53d56f9145b098b1c210595b9ca11f3176a6b5489ef97355a44bd43ab3f516" exitCode=0 Feb 16 21:40:22.348365 master-0 kubenswrapper[38936]: I0216 21:40:22.348365 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-p7jjg" event={"ID":"bca15ca0-b308-47ca-ad27-856b6b2d928e","Type":"ContainerDied","Data":"0c53d56f9145b098b1c210595b9ca11f3176a6b5489ef97355a44bd43ab3f516"} Feb 16 21:40:23.887871 master-0 kubenswrapper[38936]: I0216 21:40:23.887820 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-p7jjg" Feb 16 21:40:24.001160 master-0 kubenswrapper[38936]: I0216 21:40:24.001080 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bca15ca0-b308-47ca-ad27-856b6b2d928e-config-data\") pod \"bca15ca0-b308-47ca-ad27-856b6b2d928e\" (UID: \"bca15ca0-b308-47ca-ad27-856b6b2d928e\") " Feb 16 21:40:24.001446 master-0 kubenswrapper[38936]: I0216 21:40:24.001218 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wv68v\" (UniqueName: \"kubernetes.io/projected/bca15ca0-b308-47ca-ad27-856b6b2d928e-kube-api-access-wv68v\") pod \"bca15ca0-b308-47ca-ad27-856b6b2d928e\" (UID: \"bca15ca0-b308-47ca-ad27-856b6b2d928e\") " Feb 16 21:40:24.001446 master-0 kubenswrapper[38936]: I0216 21:40:24.001383 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bca15ca0-b308-47ca-ad27-856b6b2d928e-scripts\") pod \"bca15ca0-b308-47ca-ad27-856b6b2d928e\" (UID: \"bca15ca0-b308-47ca-ad27-856b6b2d928e\") " Feb 16 21:40:24.001446 master-0 
kubenswrapper[38936]: I0216 21:40:24.001428 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bca15ca0-b308-47ca-ad27-856b6b2d928e-combined-ca-bundle\") pod \"bca15ca0-b308-47ca-ad27-856b6b2d928e\" (UID: \"bca15ca0-b308-47ca-ad27-856b6b2d928e\") " Feb 16 21:40:24.005109 master-0 kubenswrapper[38936]: I0216 21:40:24.005037 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bca15ca0-b308-47ca-ad27-856b6b2d928e-scripts" (OuterVolumeSpecName: "scripts") pod "bca15ca0-b308-47ca-ad27-856b6b2d928e" (UID: "bca15ca0-b308-47ca-ad27-856b6b2d928e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:40:24.008183 master-0 kubenswrapper[38936]: I0216 21:40:24.008117 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bca15ca0-b308-47ca-ad27-856b6b2d928e-kube-api-access-wv68v" (OuterVolumeSpecName: "kube-api-access-wv68v") pod "bca15ca0-b308-47ca-ad27-856b6b2d928e" (UID: "bca15ca0-b308-47ca-ad27-856b6b2d928e"). InnerVolumeSpecName "kube-api-access-wv68v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:40:24.039217 master-0 kubenswrapper[38936]: I0216 21:40:24.039070 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bca15ca0-b308-47ca-ad27-856b6b2d928e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bca15ca0-b308-47ca-ad27-856b6b2d928e" (UID: "bca15ca0-b308-47ca-ad27-856b6b2d928e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:40:24.045729 master-0 kubenswrapper[38936]: I0216 21:40:24.045661 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bca15ca0-b308-47ca-ad27-856b6b2d928e-config-data" (OuterVolumeSpecName: "config-data") pod "bca15ca0-b308-47ca-ad27-856b6b2d928e" (UID: "bca15ca0-b308-47ca-ad27-856b6b2d928e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:40:24.106085 master-0 kubenswrapper[38936]: I0216 21:40:24.106026 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bca15ca0-b308-47ca-ad27-856b6b2d928e-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:24.106085 master-0 kubenswrapper[38936]: I0216 21:40:24.106071 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wv68v\" (UniqueName: \"kubernetes.io/projected/bca15ca0-b308-47ca-ad27-856b6b2d928e-kube-api-access-wv68v\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:24.106085 master-0 kubenswrapper[38936]: I0216 21:40:24.106086 38936 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bca15ca0-b308-47ca-ad27-856b6b2d928e-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:24.106328 master-0 kubenswrapper[38936]: I0216 21:40:24.106095 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bca15ca0-b308-47ca-ad27-856b6b2d928e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:24.373202 master-0 kubenswrapper[38936]: I0216 21:40:24.373059 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-p7jjg" event={"ID":"bca15ca0-b308-47ca-ad27-856b6b2d928e","Type":"ContainerDied","Data":"34f16279cd50248e3c2fdf65c1527a858bcdd23d8d1770047eb3e695cbb0dbfe"} Feb 16 21:40:24.373202 master-0 
kubenswrapper[38936]: I0216 21:40:24.373112 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34f16279cd50248e3c2fdf65c1527a858bcdd23d8d1770047eb3e695cbb0dbfe" Feb 16 21:40:24.373202 master-0 kubenswrapper[38936]: I0216 21:40:24.373162 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-p7jjg" Feb 16 21:40:24.639946 master-0 kubenswrapper[38936]: I0216 21:40:24.639779 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:40:24.640181 master-0 kubenswrapper[38936]: I0216 21:40:24.640030 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="90445da2-719c-482a-ab08-8ee50a317377" containerName="nova-scheduler-scheduler" containerID="cri-o://beeb3366c40b7904cbed4ed0893c846080f4c4a6c57567b11352e42c3d6a5646" gracePeriod=30 Feb 16 21:40:24.667422 master-0 kubenswrapper[38936]: I0216 21:40:24.667323 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:40:24.667676 master-0 kubenswrapper[38936]: I0216 21:40:24.667608 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="308eef78-6d8a-40fe-8416-51efa88d39fc" containerName="nova-api-log" containerID="cri-o://e49f840d110cc59900dd965558a9f83d1ad7ba3b269e9d6e2f6a2da76900ae29" gracePeriod=30 Feb 16 21:40:24.668371 master-0 kubenswrapper[38936]: I0216 21:40:24.668307 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="308eef78-6d8a-40fe-8416-51efa88d39fc" containerName="nova-api-api" containerID="cri-o://f6ab133b1a2730073efb76f7e0263a4ac9f8bd6f9dfeccda0119e22e0ec4fe86" gracePeriod=30 Feb 16 21:40:24.694936 master-0 kubenswrapper[38936]: I0216 21:40:24.694857 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 
21:40:24.695293 master-0 kubenswrapper[38936]: I0216 21:40:24.695251 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6ff2d05d-b1a8-4695-b186-f2422a5c8186" containerName="nova-metadata-metadata" containerID="cri-o://712b0b77fce34db59a1c3896153674d60b1c644b7b4ef8ebcc83f76c6ed0a3cf" gracePeriod=30 Feb 16 21:40:24.696325 master-0 kubenswrapper[38936]: I0216 21:40:24.695175 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6ff2d05d-b1a8-4695-b186-f2422a5c8186" containerName="nova-metadata-log" containerID="cri-o://b371a0e491dd5a1767ac9f6b77851dacbc56aa42712bd0f55256b4239e65097c" gracePeriod=30 Feb 16 21:40:25.395068 master-0 kubenswrapper[38936]: I0216 21:40:25.394982 38936 generic.go:334] "Generic (PLEG): container finished" podID="6ff2d05d-b1a8-4695-b186-f2422a5c8186" containerID="b371a0e491dd5a1767ac9f6b77851dacbc56aa42712bd0f55256b4239e65097c" exitCode=143 Feb 16 21:40:25.395595 master-0 kubenswrapper[38936]: I0216 21:40:25.395097 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ff2d05d-b1a8-4695-b186-f2422a5c8186","Type":"ContainerDied","Data":"b371a0e491dd5a1767ac9f6b77851dacbc56aa42712bd0f55256b4239e65097c"} Feb 16 21:40:25.398225 master-0 kubenswrapper[38936]: I0216 21:40:25.398061 38936 generic.go:334] "Generic (PLEG): container finished" podID="308eef78-6d8a-40fe-8416-51efa88d39fc" containerID="f6ab133b1a2730073efb76f7e0263a4ac9f8bd6f9dfeccda0119e22e0ec4fe86" exitCode=0 Feb 16 21:40:25.398225 master-0 kubenswrapper[38936]: I0216 21:40:25.398099 38936 generic.go:334] "Generic (PLEG): container finished" podID="308eef78-6d8a-40fe-8416-51efa88d39fc" containerID="e49f840d110cc59900dd965558a9f83d1ad7ba3b269e9d6e2f6a2da76900ae29" exitCode=143 Feb 16 21:40:25.398225 master-0 kubenswrapper[38936]: I0216 21:40:25.398148 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"308eef78-6d8a-40fe-8416-51efa88d39fc","Type":"ContainerDied","Data":"f6ab133b1a2730073efb76f7e0263a4ac9f8bd6f9dfeccda0119e22e0ec4fe86"} Feb 16 21:40:25.398225 master-0 kubenswrapper[38936]: I0216 21:40:25.398175 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"308eef78-6d8a-40fe-8416-51efa88d39fc","Type":"ContainerDied","Data":"e49f840d110cc59900dd965558a9f83d1ad7ba3b269e9d6e2f6a2da76900ae29"} Feb 16 21:40:25.398225 master-0 kubenswrapper[38936]: I0216 21:40:25.398185 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"308eef78-6d8a-40fe-8416-51efa88d39fc","Type":"ContainerDied","Data":"ba3d0d98279d6d94c7e4577e621b00440426e2a3436e95fbaf33c8b832e2c744"} Feb 16 21:40:25.398225 master-0 kubenswrapper[38936]: I0216 21:40:25.398195 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba3d0d98279d6d94c7e4577e621b00440426e2a3436e95fbaf33c8b832e2c744" Feb 16 21:40:25.405493 master-0 kubenswrapper[38936]: I0216 21:40:25.405415 38936 generic.go:334] "Generic (PLEG): container finished" podID="90445da2-719c-482a-ab08-8ee50a317377" containerID="beeb3366c40b7904cbed4ed0893c846080f4c4a6c57567b11352e42c3d6a5646" exitCode=0 Feb 16 21:40:25.405699 master-0 kubenswrapper[38936]: I0216 21:40:25.405497 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"90445da2-719c-482a-ab08-8ee50a317377","Type":"ContainerDied","Data":"beeb3366c40b7904cbed4ed0893c846080f4c4a6c57567b11352e42c3d6a5646"} Feb 16 21:40:25.457315 master-0 kubenswrapper[38936]: I0216 21:40:25.457258 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:40:25.637329 master-0 kubenswrapper[38936]: I0216 21:40:25.637286 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:40:25.649220 master-0 kubenswrapper[38936]: I0216 21:40:25.649160 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/308eef78-6d8a-40fe-8416-51efa88d39fc-logs\") pod \"308eef78-6d8a-40fe-8416-51efa88d39fc\" (UID: \"308eef78-6d8a-40fe-8416-51efa88d39fc\") " Feb 16 21:40:25.649551 master-0 kubenswrapper[38936]: I0216 21:40:25.649496 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308eef78-6d8a-40fe-8416-51efa88d39fc-logs" (OuterVolumeSpecName: "logs") pod "308eef78-6d8a-40fe-8416-51efa88d39fc" (UID: "308eef78-6d8a-40fe-8416-51efa88d39fc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:40:25.649551 master-0 kubenswrapper[38936]: I0216 21:40:25.649514 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/308eef78-6d8a-40fe-8416-51efa88d39fc-combined-ca-bundle\") pod \"308eef78-6d8a-40fe-8416-51efa88d39fc\" (UID: \"308eef78-6d8a-40fe-8416-51efa88d39fc\") " Feb 16 21:40:25.649664 master-0 kubenswrapper[38936]: I0216 21:40:25.649630 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/308eef78-6d8a-40fe-8416-51efa88d39fc-config-data\") pod \"308eef78-6d8a-40fe-8416-51efa88d39fc\" (UID: \"308eef78-6d8a-40fe-8416-51efa88d39fc\") " Feb 16 21:40:25.649723 master-0 kubenswrapper[38936]: I0216 21:40:25.649713 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/308eef78-6d8a-40fe-8416-51efa88d39fc-public-tls-certs\") pod \"308eef78-6d8a-40fe-8416-51efa88d39fc\" (UID: \"308eef78-6d8a-40fe-8416-51efa88d39fc\") " Feb 16 21:40:25.649876 master-0 kubenswrapper[38936]: I0216 
21:40:25.649848 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gkbb\" (UniqueName: \"kubernetes.io/projected/308eef78-6d8a-40fe-8416-51efa88d39fc-kube-api-access-5gkbb\") pod \"308eef78-6d8a-40fe-8416-51efa88d39fc\" (UID: \"308eef78-6d8a-40fe-8416-51efa88d39fc\") " Feb 16 21:40:25.649940 master-0 kubenswrapper[38936]: I0216 21:40:25.649876 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/308eef78-6d8a-40fe-8416-51efa88d39fc-internal-tls-certs\") pod \"308eef78-6d8a-40fe-8416-51efa88d39fc\" (UID: \"308eef78-6d8a-40fe-8416-51efa88d39fc\") " Feb 16 21:40:25.650275 master-0 kubenswrapper[38936]: I0216 21:40:25.650243 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90445da2-719c-482a-ab08-8ee50a317377-combined-ca-bundle\") pod \"90445da2-719c-482a-ab08-8ee50a317377\" (UID: \"90445da2-719c-482a-ab08-8ee50a317377\") " Feb 16 21:40:25.651321 master-0 kubenswrapper[38936]: I0216 21:40:25.651287 38936 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/308eef78-6d8a-40fe-8416-51efa88d39fc-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:25.653054 master-0 kubenswrapper[38936]: I0216 21:40:25.653009 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308eef78-6d8a-40fe-8416-51efa88d39fc-kube-api-access-5gkbb" (OuterVolumeSpecName: "kube-api-access-5gkbb") pod "308eef78-6d8a-40fe-8416-51efa88d39fc" (UID: "308eef78-6d8a-40fe-8416-51efa88d39fc"). InnerVolumeSpecName "kube-api-access-5gkbb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:40:25.680870 master-0 kubenswrapper[38936]: I0216 21:40:25.680814 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308eef78-6d8a-40fe-8416-51efa88d39fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "308eef78-6d8a-40fe-8416-51efa88d39fc" (UID: "308eef78-6d8a-40fe-8416-51efa88d39fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:40:25.682030 master-0 kubenswrapper[38936]: I0216 21:40:25.681981 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90445da2-719c-482a-ab08-8ee50a317377-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90445da2-719c-482a-ab08-8ee50a317377" (UID: "90445da2-719c-482a-ab08-8ee50a317377"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:40:25.686033 master-0 kubenswrapper[38936]: I0216 21:40:25.685978 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308eef78-6d8a-40fe-8416-51efa88d39fc-config-data" (OuterVolumeSpecName: "config-data") pod "308eef78-6d8a-40fe-8416-51efa88d39fc" (UID: "308eef78-6d8a-40fe-8416-51efa88d39fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:40:25.708732 master-0 kubenswrapper[38936]: I0216 21:40:25.708559 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308eef78-6d8a-40fe-8416-51efa88d39fc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "308eef78-6d8a-40fe-8416-51efa88d39fc" (UID: "308eef78-6d8a-40fe-8416-51efa88d39fc"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:40:25.710877 master-0 kubenswrapper[38936]: I0216 21:40:25.710827 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308eef78-6d8a-40fe-8416-51efa88d39fc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "308eef78-6d8a-40fe-8416-51efa88d39fc" (UID: "308eef78-6d8a-40fe-8416-51efa88d39fc"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:40:25.753300 master-0 kubenswrapper[38936]: I0216 21:40:25.753042 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90445da2-719c-482a-ab08-8ee50a317377-config-data\") pod \"90445da2-719c-482a-ab08-8ee50a317377\" (UID: \"90445da2-719c-482a-ab08-8ee50a317377\") " Feb 16 21:40:25.753741 master-0 kubenswrapper[38936]: I0216 21:40:25.753606 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb69\" (UniqueName: \"kubernetes.io/projected/90445da2-719c-482a-ab08-8ee50a317377-kube-api-access-8nb69\") pod \"90445da2-719c-482a-ab08-8ee50a317377\" (UID: \"90445da2-719c-482a-ab08-8ee50a317377\") " Feb 16 21:40:25.754596 master-0 kubenswrapper[38936]: I0216 21:40:25.754517 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gkbb\" (UniqueName: \"kubernetes.io/projected/308eef78-6d8a-40fe-8416-51efa88d39fc-kube-api-access-5gkbb\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:25.754596 master-0 kubenswrapper[38936]: I0216 21:40:25.754548 38936 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/308eef78-6d8a-40fe-8416-51efa88d39fc-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:25.754596 master-0 kubenswrapper[38936]: I0216 21:40:25.754563 38936 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90445da2-719c-482a-ab08-8ee50a317377-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:25.754596 master-0 kubenswrapper[38936]: I0216 21:40:25.754575 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/308eef78-6d8a-40fe-8416-51efa88d39fc-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:25.754596 master-0 kubenswrapper[38936]: I0216 21:40:25.754587 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/308eef78-6d8a-40fe-8416-51efa88d39fc-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:25.754596 master-0 kubenswrapper[38936]: I0216 21:40:25.754598 38936 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/308eef78-6d8a-40fe-8416-51efa88d39fc-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:25.759021 master-0 kubenswrapper[38936]: I0216 21:40:25.758979 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90445da2-719c-482a-ab08-8ee50a317377-kube-api-access-8nb69" (OuterVolumeSpecName: "kube-api-access-8nb69") pod "90445da2-719c-482a-ab08-8ee50a317377" (UID: "90445da2-719c-482a-ab08-8ee50a317377"). InnerVolumeSpecName "kube-api-access-8nb69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:40:25.818690 master-0 kubenswrapper[38936]: I0216 21:40:25.818608 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90445da2-719c-482a-ab08-8ee50a317377-config-data" (OuterVolumeSpecName: "config-data") pod "90445da2-719c-482a-ab08-8ee50a317377" (UID: "90445da2-719c-482a-ab08-8ee50a317377"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:40:25.857951 master-0 kubenswrapper[38936]: I0216 21:40:25.857873 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nb69\" (UniqueName: \"kubernetes.io/projected/90445da2-719c-482a-ab08-8ee50a317377-kube-api-access-8nb69\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:25.857951 master-0 kubenswrapper[38936]: I0216 21:40:25.857940 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90445da2-719c-482a-ab08-8ee50a317377-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:26.421128 master-0 kubenswrapper[38936]: I0216 21:40:26.421066 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"90445da2-719c-482a-ab08-8ee50a317377","Type":"ContainerDied","Data":"b9c0965915f49445863e9b53a1f1b5be665f401c18f64480d09968063ef10d71"} Feb 16 21:40:26.421128 master-0 kubenswrapper[38936]: I0216 21:40:26.421105 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:40:26.421836 master-0 kubenswrapper[38936]: I0216 21:40:26.421148 38936 scope.go:117] "RemoveContainer" containerID="beeb3366c40b7904cbed4ed0893c846080f4c4a6c57567b11352e42c3d6a5646" Feb 16 21:40:26.422682 master-0 kubenswrapper[38936]: I0216 21:40:26.422374 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:40:26.472352 master-0 kubenswrapper[38936]: I0216 21:40:26.472297 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:40:26.484908 master-0 kubenswrapper[38936]: I0216 21:40:26.484849 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:40:26.616903 master-0 kubenswrapper[38936]: I0216 21:40:26.616811 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:40:26.689076 master-0 kubenswrapper[38936]: I0216 21:40:26.688906 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:40:26.689716 master-0 kubenswrapper[38936]: E0216 21:40:26.689684 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b96444d1-ae55-4560-a3a6-b75072a1271f" containerName="dnsmasq-dns" Feb 16 21:40:26.689716 master-0 kubenswrapper[38936]: I0216 21:40:26.689711 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="b96444d1-ae55-4560-a3a6-b75072a1271f" containerName="dnsmasq-dns" Feb 16 21:40:26.689852 master-0 kubenswrapper[38936]: E0216 21:40:26.689750 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca15ca0-b308-47ca-ad27-856b6b2d928e" containerName="nova-manage" Feb 16 21:40:26.689852 master-0 kubenswrapper[38936]: I0216 21:40:26.689759 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca15ca0-b308-47ca-ad27-856b6b2d928e" containerName="nova-manage" Feb 16 21:40:26.689852 master-0 kubenswrapper[38936]: E0216 21:40:26.689782 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="308eef78-6d8a-40fe-8416-51efa88d39fc" containerName="nova-api-log" Feb 16 21:40:26.689852 master-0 kubenswrapper[38936]: I0216 21:40:26.689790 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="308eef78-6d8a-40fe-8416-51efa88d39fc" containerName="nova-api-log" Feb 16 21:40:26.689852 master-0 
kubenswrapper[38936]: E0216 21:40:26.689827 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbf44af2-ce58-40e0-af03-ee3c9bcc1519" containerName="nova-manage" Feb 16 21:40:26.689852 master-0 kubenswrapper[38936]: I0216 21:40:26.689836 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbf44af2-ce58-40e0-af03-ee3c9bcc1519" containerName="nova-manage" Feb 16 21:40:26.689852 master-0 kubenswrapper[38936]: E0216 21:40:26.689852 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="308eef78-6d8a-40fe-8416-51efa88d39fc" containerName="nova-api-api" Feb 16 21:40:26.690157 master-0 kubenswrapper[38936]: I0216 21:40:26.689862 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="308eef78-6d8a-40fe-8416-51efa88d39fc" containerName="nova-api-api" Feb 16 21:40:26.690157 master-0 kubenswrapper[38936]: E0216 21:40:26.689895 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b96444d1-ae55-4560-a3a6-b75072a1271f" containerName="init" Feb 16 21:40:26.690157 master-0 kubenswrapper[38936]: I0216 21:40:26.689905 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="b96444d1-ae55-4560-a3a6-b75072a1271f" containerName="init" Feb 16 21:40:26.690157 master-0 kubenswrapper[38936]: E0216 21:40:26.689931 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90445da2-719c-482a-ab08-8ee50a317377" containerName="nova-scheduler-scheduler" Feb 16 21:40:26.690157 master-0 kubenswrapper[38936]: I0216 21:40:26.689939 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="90445da2-719c-482a-ab08-8ee50a317377" containerName="nova-scheduler-scheduler" Feb 16 21:40:26.690359 master-0 kubenswrapper[38936]: I0216 21:40:26.690265 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="308eef78-6d8a-40fe-8416-51efa88d39fc" containerName="nova-api-api" Feb 16 21:40:26.690359 master-0 kubenswrapper[38936]: I0216 21:40:26.690320 38936 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="90445da2-719c-482a-ab08-8ee50a317377" containerName="nova-scheduler-scheduler" Feb 16 21:40:26.690359 master-0 kubenswrapper[38936]: I0216 21:40:26.690332 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="b96444d1-ae55-4560-a3a6-b75072a1271f" containerName="dnsmasq-dns" Feb 16 21:40:26.690359 master-0 kubenswrapper[38936]: I0216 21:40:26.690357 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="308eef78-6d8a-40fe-8416-51efa88d39fc" containerName="nova-api-log" Feb 16 21:40:26.690540 master-0 kubenswrapper[38936]: I0216 21:40:26.690379 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbf44af2-ce58-40e0-af03-ee3c9bcc1519" containerName="nova-manage" Feb 16 21:40:26.690540 master-0 kubenswrapper[38936]: I0216 21:40:26.690404 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="bca15ca0-b308-47ca-ad27-856b6b2d928e" containerName="nova-manage" Feb 16 21:40:26.691542 master-0 kubenswrapper[38936]: I0216 21:40:26.691512 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:40:26.694067 master-0 kubenswrapper[38936]: I0216 21:40:26.694019 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 21:40:26.706384 master-0 kubenswrapper[38936]: I0216 21:40:26.706307 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:40:26.760681 master-0 kubenswrapper[38936]: I0216 21:40:26.760571 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:40:26.816494 master-0 kubenswrapper[38936]: I0216 21:40:26.811867 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 21:40:26.817109 master-0 kubenswrapper[38936]: I0216 21:40:26.817056 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:40:26.819830 master-0 kubenswrapper[38936]: I0216 21:40:26.819783 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 21:40:26.820036 master-0 kubenswrapper[38936]: I0216 21:40:26.820015 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 21:40:26.820681 master-0 kubenswrapper[38936]: I0216 21:40:26.820627 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 21:40:26.850940 master-0 kubenswrapper[38936]: I0216 21:40:26.850864 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:40:26.902051 master-0 kubenswrapper[38936]: I0216 21:40:26.901979 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d120e9-90af-4136-a6a4-b09492100841-config-data\") pod \"nova-scheduler-0\" (UID: \"79d120e9-90af-4136-a6a4-b09492100841\") " pod="openstack/nova-scheduler-0" Feb 16 21:40:26.902051 master-0 kubenswrapper[38936]: I0216 21:40:26.902036 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d120e9-90af-4136-a6a4-b09492100841-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"79d120e9-90af-4136-a6a4-b09492100841\") " pod="openstack/nova-scheduler-0" Feb 16 21:40:26.902325 master-0 kubenswrapper[38936]: I0216 21:40:26.902071 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zggwk\" (UniqueName: \"kubernetes.io/projected/79d120e9-90af-4136-a6a4-b09492100841-kube-api-access-zggwk\") pod \"nova-scheduler-0\" (UID: \"79d120e9-90af-4136-a6a4-b09492100841\") " pod="openstack/nova-scheduler-0" Feb 16 21:40:27.004512 master-0 
kubenswrapper[38936]: I0216 21:40:27.004445 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3e12d96-d7b4-4211-b286-ac9e27097f74-config-data\") pod \"nova-api-0\" (UID: \"e3e12d96-d7b4-4211-b286-ac9e27097f74\") " pod="openstack/nova-api-0" Feb 16 21:40:27.004748 master-0 kubenswrapper[38936]: I0216 21:40:27.004562 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d120e9-90af-4136-a6a4-b09492100841-config-data\") pod \"nova-scheduler-0\" (UID: \"79d120e9-90af-4136-a6a4-b09492100841\") " pod="openstack/nova-scheduler-0" Feb 16 21:40:27.004748 master-0 kubenswrapper[38936]: I0216 21:40:27.004594 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d120e9-90af-4136-a6a4-b09492100841-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"79d120e9-90af-4136-a6a4-b09492100841\") " pod="openstack/nova-scheduler-0" Feb 16 21:40:27.004748 master-0 kubenswrapper[38936]: I0216 21:40:27.004623 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zggwk\" (UniqueName: \"kubernetes.io/projected/79d120e9-90af-4136-a6a4-b09492100841-kube-api-access-zggwk\") pod \"nova-scheduler-0\" (UID: \"79d120e9-90af-4136-a6a4-b09492100841\") " pod="openstack/nova-scheduler-0" Feb 16 21:40:27.004748 master-0 kubenswrapper[38936]: I0216 21:40:27.004678 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3e12d96-d7b4-4211-b286-ac9e27097f74-public-tls-certs\") pod \"nova-api-0\" (UID: \"e3e12d96-d7b4-4211-b286-ac9e27097f74\") " pod="openstack/nova-api-0" Feb 16 21:40:27.004923 master-0 kubenswrapper[38936]: I0216 21:40:27.004873 38936 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3e12d96-d7b4-4211-b286-ac9e27097f74-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e3e12d96-d7b4-4211-b286-ac9e27097f74\") " pod="openstack/nova-api-0" Feb 16 21:40:27.005215 master-0 kubenswrapper[38936]: I0216 21:40:27.005165 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e12d96-d7b4-4211-b286-ac9e27097f74-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e3e12d96-d7b4-4211-b286-ac9e27097f74\") " pod="openstack/nova-api-0" Feb 16 21:40:27.005436 master-0 kubenswrapper[38936]: I0216 21:40:27.005390 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj8kh\" (UniqueName: \"kubernetes.io/projected/e3e12d96-d7b4-4211-b286-ac9e27097f74-kube-api-access-wj8kh\") pod \"nova-api-0\" (UID: \"e3e12d96-d7b4-4211-b286-ac9e27097f74\") " pod="openstack/nova-api-0" Feb 16 21:40:27.005809 master-0 kubenswrapper[38936]: I0216 21:40:27.005758 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3e12d96-d7b4-4211-b286-ac9e27097f74-logs\") pod \"nova-api-0\" (UID: \"e3e12d96-d7b4-4211-b286-ac9e27097f74\") " pod="openstack/nova-api-0" Feb 16 21:40:27.008462 master-0 kubenswrapper[38936]: I0216 21:40:27.008433 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79d120e9-90af-4136-a6a4-b09492100841-config-data\") pod \"nova-scheduler-0\" (UID: \"79d120e9-90af-4136-a6a4-b09492100841\") " pod="openstack/nova-scheduler-0" Feb 16 21:40:27.008765 master-0 kubenswrapper[38936]: I0216 21:40:27.008719 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/79d120e9-90af-4136-a6a4-b09492100841-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"79d120e9-90af-4136-a6a4-b09492100841\") " pod="openstack/nova-scheduler-0" Feb 16 21:40:27.062393 master-0 kubenswrapper[38936]: I0216 21:40:27.062195 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zggwk\" (UniqueName: \"kubernetes.io/projected/79d120e9-90af-4136-a6a4-b09492100841-kube-api-access-zggwk\") pod \"nova-scheduler-0\" (UID: \"79d120e9-90af-4136-a6a4-b09492100841\") " pod="openstack/nova-scheduler-0" Feb 16 21:40:27.108043 master-0 kubenswrapper[38936]: I0216 21:40:27.107979 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3e12d96-d7b4-4211-b286-ac9e27097f74-config-data\") pod \"nova-api-0\" (UID: \"e3e12d96-d7b4-4211-b286-ac9e27097f74\") " pod="openstack/nova-api-0" Feb 16 21:40:27.108313 master-0 kubenswrapper[38936]: I0216 21:40:27.108114 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3e12d96-d7b4-4211-b286-ac9e27097f74-public-tls-certs\") pod \"nova-api-0\" (UID: \"e3e12d96-d7b4-4211-b286-ac9e27097f74\") " pod="openstack/nova-api-0" Feb 16 21:40:27.108313 master-0 kubenswrapper[38936]: I0216 21:40:27.108240 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3e12d96-d7b4-4211-b286-ac9e27097f74-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e3e12d96-d7b4-4211-b286-ac9e27097f74\") " pod="openstack/nova-api-0" Feb 16 21:40:27.108477 master-0 kubenswrapper[38936]: I0216 21:40:27.108321 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e12d96-d7b4-4211-b286-ac9e27097f74-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"e3e12d96-d7b4-4211-b286-ac9e27097f74\") " pod="openstack/nova-api-0" Feb 16 21:40:27.108477 master-0 kubenswrapper[38936]: I0216 21:40:27.108402 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj8kh\" (UniqueName: \"kubernetes.io/projected/e3e12d96-d7b4-4211-b286-ac9e27097f74-kube-api-access-wj8kh\") pod \"nova-api-0\" (UID: \"e3e12d96-d7b4-4211-b286-ac9e27097f74\") " pod="openstack/nova-api-0" Feb 16 21:40:27.108620 master-0 kubenswrapper[38936]: I0216 21:40:27.108521 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3e12d96-d7b4-4211-b286-ac9e27097f74-logs\") pod \"nova-api-0\" (UID: \"e3e12d96-d7b4-4211-b286-ac9e27097f74\") " pod="openstack/nova-api-0" Feb 16 21:40:27.109248 master-0 kubenswrapper[38936]: I0216 21:40:27.109194 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3e12d96-d7b4-4211-b286-ac9e27097f74-logs\") pod \"nova-api-0\" (UID: \"e3e12d96-d7b4-4211-b286-ac9e27097f74\") " pod="openstack/nova-api-0" Feb 16 21:40:27.111563 master-0 kubenswrapper[38936]: I0216 21:40:27.111519 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3e12d96-d7b4-4211-b286-ac9e27097f74-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e3e12d96-d7b4-4211-b286-ac9e27097f74\") " pod="openstack/nova-api-0" Feb 16 21:40:27.111682 master-0 kubenswrapper[38936]: I0216 21:40:27.111533 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3e12d96-d7b4-4211-b286-ac9e27097f74-public-tls-certs\") pod \"nova-api-0\" (UID: \"e3e12d96-d7b4-4211-b286-ac9e27097f74\") " pod="openstack/nova-api-0" Feb 16 21:40:27.112157 master-0 kubenswrapper[38936]: I0216 21:40:27.112057 38936 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e12d96-d7b4-4211-b286-ac9e27097f74-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e3e12d96-d7b4-4211-b286-ac9e27097f74\") " pod="openstack/nova-api-0" Feb 16 21:40:27.112294 master-0 kubenswrapper[38936]: I0216 21:40:27.112201 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3e12d96-d7b4-4211-b286-ac9e27097f74-config-data\") pod \"nova-api-0\" (UID: \"e3e12d96-d7b4-4211-b286-ac9e27097f74\") " pod="openstack/nova-api-0" Feb 16 21:40:27.124625 master-0 kubenswrapper[38936]: I0216 21:40:27.124553 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj8kh\" (UniqueName: \"kubernetes.io/projected/e3e12d96-d7b4-4211-b286-ac9e27097f74-kube-api-access-wj8kh\") pod \"nova-api-0\" (UID: \"e3e12d96-d7b4-4211-b286-ac9e27097f74\") " pod="openstack/nova-api-0" Feb 16 21:40:27.144604 master-0 kubenswrapper[38936]: I0216 21:40:27.144537 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 21:40:27.324564 master-0 kubenswrapper[38936]: I0216 21:40:27.324499 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 21:40:27.641418 master-0 kubenswrapper[38936]: I0216 21:40:27.641328 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 21:40:27.798548 master-0 kubenswrapper[38936]: W0216 21:40:27.798496 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79d120e9_90af_4136_a6a4_b09492100841.slice/crio-3fe0df0bf319882c4f96a989112a8e9ebcd0496b8661adf8225bf242f981b860 WatchSource:0}: Error finding container 3fe0df0bf319882c4f96a989112a8e9ebcd0496b8661adf8225bf242f981b860: Status 404 returned error can't find the container with id 3fe0df0bf319882c4f96a989112a8e9ebcd0496b8661adf8225bf242f981b860 Feb 16 21:40:27.803625 master-0 kubenswrapper[38936]: I0216 21:40:27.803587 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 21:40:27.828973 master-0 kubenswrapper[38936]: I0216 21:40:27.828634 38936 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6ff2d05d-b1a8-4695-b186-f2422a5c8186" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.12:8775/\": read tcp 10.128.0.2:52070->10.128.1.12:8775: read: connection reset by peer" Feb 16 21:40:27.828973 master-0 kubenswrapper[38936]: I0216 21:40:27.828737 38936 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6ff2d05d-b1a8-4695-b186-f2422a5c8186" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.12:8775/\": read tcp 10.128.0.2:52086->10.128.1.12:8775: read: connection reset by peer" Feb 16 21:40:27.889468 master-0 kubenswrapper[38936]: I0216 21:40:27.889325 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308eef78-6d8a-40fe-8416-51efa88d39fc" path="/var/lib/kubelet/pods/308eef78-6d8a-40fe-8416-51efa88d39fc/volumes" Feb 16 
21:40:27.890763 master-0 kubenswrapper[38936]: I0216 21:40:27.890732 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90445da2-719c-482a-ab08-8ee50a317377" path="/var/lib/kubelet/pods/90445da2-719c-482a-ab08-8ee50a317377/volumes" Feb 16 21:40:28.355503 master-0 kubenswrapper[38936]: I0216 21:40:28.355431 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:40:28.467349 master-0 kubenswrapper[38936]: I0216 21:40:28.467284 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ff2d05d-b1a8-4695-b186-f2422a5c8186-nova-metadata-tls-certs\") pod \"6ff2d05d-b1a8-4695-b186-f2422a5c8186\" (UID: \"6ff2d05d-b1a8-4695-b186-f2422a5c8186\") " Feb 16 21:40:28.467606 master-0 kubenswrapper[38936]: I0216 21:40:28.467475 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff2d05d-b1a8-4695-b186-f2422a5c8186-combined-ca-bundle\") pod \"6ff2d05d-b1a8-4695-b186-f2422a5c8186\" (UID: \"6ff2d05d-b1a8-4695-b186-f2422a5c8186\") " Feb 16 21:40:28.467606 master-0 kubenswrapper[38936]: I0216 21:40:28.467527 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ff2d05d-b1a8-4695-b186-f2422a5c8186-logs\") pod \"6ff2d05d-b1a8-4695-b186-f2422a5c8186\" (UID: \"6ff2d05d-b1a8-4695-b186-f2422a5c8186\") " Feb 16 21:40:28.467606 master-0 kubenswrapper[38936]: I0216 21:40:28.467555 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdr8z\" (UniqueName: \"kubernetes.io/projected/6ff2d05d-b1a8-4695-b186-f2422a5c8186-kube-api-access-kdr8z\") pod \"6ff2d05d-b1a8-4695-b186-f2422a5c8186\" (UID: \"6ff2d05d-b1a8-4695-b186-f2422a5c8186\") " Feb 16 21:40:28.467796 master-0 kubenswrapper[38936]: I0216 
21:40:28.467635 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ff2d05d-b1a8-4695-b186-f2422a5c8186-config-data\") pod \"6ff2d05d-b1a8-4695-b186-f2422a5c8186\" (UID: \"6ff2d05d-b1a8-4695-b186-f2422a5c8186\") " Feb 16 21:40:28.470471 master-0 kubenswrapper[38936]: I0216 21:40:28.470419 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ff2d05d-b1a8-4695-b186-f2422a5c8186-logs" (OuterVolumeSpecName: "logs") pod "6ff2d05d-b1a8-4695-b186-f2422a5c8186" (UID: "6ff2d05d-b1a8-4695-b186-f2422a5c8186"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 21:40:28.473331 master-0 kubenswrapper[38936]: I0216 21:40:28.473287 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ff2d05d-b1a8-4695-b186-f2422a5c8186-kube-api-access-kdr8z" (OuterVolumeSpecName: "kube-api-access-kdr8z") pod "6ff2d05d-b1a8-4695-b186-f2422a5c8186" (UID: "6ff2d05d-b1a8-4695-b186-f2422a5c8186"). InnerVolumeSpecName "kube-api-access-kdr8z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:40:28.474883 master-0 kubenswrapper[38936]: I0216 21:40:28.474845 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e3e12d96-d7b4-4211-b286-ac9e27097f74","Type":"ContainerStarted","Data":"a69af9cc335e44a7b62f6918d3c6b7731809bdeab2a6876b809f55cfc9578065"} Feb 16 21:40:28.474955 master-0 kubenswrapper[38936]: I0216 21:40:28.474891 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e3e12d96-d7b4-4211-b286-ac9e27097f74","Type":"ContainerStarted","Data":"8ada0c0770db8092944ab87f9b32b76e82649de7104e07bed2175e949dbac9c9"} Feb 16 21:40:28.474955 master-0 kubenswrapper[38936]: I0216 21:40:28.474902 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e3e12d96-d7b4-4211-b286-ac9e27097f74","Type":"ContainerStarted","Data":"c8f1f446e9b8e6fdbb891808a993617e02d821a9b182e5cbef955b7c5820a6f6"} Feb 16 21:40:28.478335 master-0 kubenswrapper[38936]: I0216 21:40:28.478290 38936 generic.go:334] "Generic (PLEG): container finished" podID="6ff2d05d-b1a8-4695-b186-f2422a5c8186" containerID="712b0b77fce34db59a1c3896153674d60b1c644b7b4ef8ebcc83f76c6ed0a3cf" exitCode=0 Feb 16 21:40:28.478584 master-0 kubenswrapper[38936]: I0216 21:40:28.478544 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:40:28.478820 master-0 kubenswrapper[38936]: I0216 21:40:28.478786 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ff2d05d-b1a8-4695-b186-f2422a5c8186","Type":"ContainerDied","Data":"712b0b77fce34db59a1c3896153674d60b1c644b7b4ef8ebcc83f76c6ed0a3cf"} Feb 16 21:40:28.478933 master-0 kubenswrapper[38936]: I0216 21:40:28.478846 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ff2d05d-b1a8-4695-b186-f2422a5c8186","Type":"ContainerDied","Data":"166d52e77aaa169bfcec6074ac9ef632401c90667898ae736116e59464924ff2"} Feb 16 21:40:28.478933 master-0 kubenswrapper[38936]: I0216 21:40:28.478868 38936 scope.go:117] "RemoveContainer" containerID="712b0b77fce34db59a1c3896153674d60b1c644b7b4ef8ebcc83f76c6ed0a3cf" Feb 16 21:40:28.488515 master-0 kubenswrapper[38936]: I0216 21:40:28.488299 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"79d120e9-90af-4136-a6a4-b09492100841","Type":"ContainerStarted","Data":"5b9cf5d427f1fba775def2d5999b34e559786c5ae83f084c260398ff168557ee"} Feb 16 21:40:28.488515 master-0 kubenswrapper[38936]: I0216 21:40:28.488350 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"79d120e9-90af-4136-a6a4-b09492100841","Type":"ContainerStarted","Data":"3fe0df0bf319882c4f96a989112a8e9ebcd0496b8661adf8225bf242f981b860"} Feb 16 21:40:28.511023 master-0 kubenswrapper[38936]: I0216 21:40:28.510934 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ff2d05d-b1a8-4695-b186-f2422a5c8186-config-data" (OuterVolumeSpecName: "config-data") pod "6ff2d05d-b1a8-4695-b186-f2422a5c8186" (UID: "6ff2d05d-b1a8-4695-b186-f2422a5c8186"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:40:28.516113 master-0 kubenswrapper[38936]: I0216 21:40:28.516044 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ff2d05d-b1a8-4695-b186-f2422a5c8186-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ff2d05d-b1a8-4695-b186-f2422a5c8186" (UID: "6ff2d05d-b1a8-4695-b186-f2422a5c8186"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:40:28.525626 master-0 kubenswrapper[38936]: I0216 21:40:28.525542 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.525521988 podStartE2EDuration="2.525521988s" podCreationTimestamp="2026-02-16 21:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:28.511007191 +0000 UTC m=+1058.863010553" watchObservedRunningTime="2026-02-16 21:40:28.525521988 +0000 UTC m=+1058.877525350" Feb 16 21:40:28.541067 master-0 kubenswrapper[38936]: I0216 21:40:28.540961 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.54094138 podStartE2EDuration="2.54094138s" podCreationTimestamp="2026-02-16 21:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:28.536400849 +0000 UTC m=+1058.888404211" watchObservedRunningTime="2026-02-16 21:40:28.54094138 +0000 UTC m=+1058.892944732" Feb 16 21:40:28.565964 master-0 kubenswrapper[38936]: I0216 21:40:28.564908 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ff2d05d-b1a8-4695-b186-f2422a5c8186-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "6ff2d05d-b1a8-4695-b186-f2422a5c8186" (UID: 
"6ff2d05d-b1a8-4695-b186-f2422a5c8186"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:40:28.571754 master-0 kubenswrapper[38936]: I0216 21:40:28.571678 38936 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ff2d05d-b1a8-4695-b186-f2422a5c8186-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:28.571754 master-0 kubenswrapper[38936]: I0216 21:40:28.571751 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff2d05d-b1a8-4695-b186-f2422a5c8186-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:28.571754 master-0 kubenswrapper[38936]: I0216 21:40:28.571761 38936 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ff2d05d-b1a8-4695-b186-f2422a5c8186-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:28.571965 master-0 kubenswrapper[38936]: I0216 21:40:28.571774 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdr8z\" (UniqueName: \"kubernetes.io/projected/6ff2d05d-b1a8-4695-b186-f2422a5c8186-kube-api-access-kdr8z\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:28.571965 master-0 kubenswrapper[38936]: I0216 21:40:28.571785 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ff2d05d-b1a8-4695-b186-f2422a5c8186-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 21:40:28.674246 master-0 kubenswrapper[38936]: I0216 21:40:28.667233 38936 scope.go:117] "RemoveContainer" containerID="b371a0e491dd5a1767ac9f6b77851dacbc56aa42712bd0f55256b4239e65097c" Feb 16 21:40:28.686975 master-0 kubenswrapper[38936]: I0216 21:40:28.686841 38936 scope.go:117] "RemoveContainer" containerID="712b0b77fce34db59a1c3896153674d60b1c644b7b4ef8ebcc83f76c6ed0a3cf" Feb 16 21:40:28.687603 master-0 
kubenswrapper[38936]: E0216 21:40:28.687533 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"712b0b77fce34db59a1c3896153674d60b1c644b7b4ef8ebcc83f76c6ed0a3cf\": container with ID starting with 712b0b77fce34db59a1c3896153674d60b1c644b7b4ef8ebcc83f76c6ed0a3cf not found: ID does not exist" containerID="712b0b77fce34db59a1c3896153674d60b1c644b7b4ef8ebcc83f76c6ed0a3cf" Feb 16 21:40:28.687686 master-0 kubenswrapper[38936]: I0216 21:40:28.687621 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"712b0b77fce34db59a1c3896153674d60b1c644b7b4ef8ebcc83f76c6ed0a3cf"} err="failed to get container status \"712b0b77fce34db59a1c3896153674d60b1c644b7b4ef8ebcc83f76c6ed0a3cf\": rpc error: code = NotFound desc = could not find container \"712b0b77fce34db59a1c3896153674d60b1c644b7b4ef8ebcc83f76c6ed0a3cf\": container with ID starting with 712b0b77fce34db59a1c3896153674d60b1c644b7b4ef8ebcc83f76c6ed0a3cf not found: ID does not exist" Feb 16 21:40:28.687686 master-0 kubenswrapper[38936]: I0216 21:40:28.687673 38936 scope.go:117] "RemoveContainer" containerID="b371a0e491dd5a1767ac9f6b77851dacbc56aa42712bd0f55256b4239e65097c" Feb 16 21:40:28.687970 master-0 kubenswrapper[38936]: E0216 21:40:28.687943 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b371a0e491dd5a1767ac9f6b77851dacbc56aa42712bd0f55256b4239e65097c\": container with ID starting with b371a0e491dd5a1767ac9f6b77851dacbc56aa42712bd0f55256b4239e65097c not found: ID does not exist" containerID="b371a0e491dd5a1767ac9f6b77851dacbc56aa42712bd0f55256b4239e65097c" Feb 16 21:40:28.688056 master-0 kubenswrapper[38936]: I0216 21:40:28.687974 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b371a0e491dd5a1767ac9f6b77851dacbc56aa42712bd0f55256b4239e65097c"} err="failed to get container 
status \"b371a0e491dd5a1767ac9f6b77851dacbc56aa42712bd0f55256b4239e65097c\": rpc error: code = NotFound desc = could not find container \"b371a0e491dd5a1767ac9f6b77851dacbc56aa42712bd0f55256b4239e65097c\": container with ID starting with b371a0e491dd5a1767ac9f6b77851dacbc56aa42712bd0f55256b4239e65097c not found: ID does not exist" Feb 16 21:40:28.844670 master-0 kubenswrapper[38936]: I0216 21:40:28.843785 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:40:28.873676 master-0 kubenswrapper[38936]: I0216 21:40:28.872964 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:40:28.894862 master-0 kubenswrapper[38936]: I0216 21:40:28.894793 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:40:28.895528 master-0 kubenswrapper[38936]: E0216 21:40:28.895499 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff2d05d-b1a8-4695-b186-f2422a5c8186" containerName="nova-metadata-log" Feb 16 21:40:28.895528 master-0 kubenswrapper[38936]: I0216 21:40:28.895521 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ff2d05d-b1a8-4695-b186-f2422a5c8186" containerName="nova-metadata-log" Feb 16 21:40:28.895632 master-0 kubenswrapper[38936]: E0216 21:40:28.895569 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff2d05d-b1a8-4695-b186-f2422a5c8186" containerName="nova-metadata-metadata" Feb 16 21:40:28.895632 master-0 kubenswrapper[38936]: I0216 21:40:28.895575 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ff2d05d-b1a8-4695-b186-f2422a5c8186" containerName="nova-metadata-metadata" Feb 16 21:40:28.895938 master-0 kubenswrapper[38936]: I0216 21:40:28.895911 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ff2d05d-b1a8-4695-b186-f2422a5c8186" containerName="nova-metadata-metadata" Feb 16 21:40:28.895988 master-0 kubenswrapper[38936]: I0216 21:40:28.895958 38936 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6ff2d05d-b1a8-4695-b186-f2422a5c8186" containerName="nova-metadata-log" Feb 16 21:40:28.897454 master-0 kubenswrapper[38936]: I0216 21:40:28.897412 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:40:28.898699 master-0 kubenswrapper[38936]: I0216 21:40:28.898602 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:40:28.900171 master-0 kubenswrapper[38936]: I0216 21:40:28.900094 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 21:40:28.902137 master-0 kubenswrapper[38936]: I0216 21:40:28.902082 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 21:40:28.984216 master-0 kubenswrapper[38936]: I0216 21:40:28.984161 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f-config-data\") pod \"nova-metadata-0\" (UID: \"6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f\") " pod="openstack/nova-metadata-0" Feb 16 21:40:28.984474 master-0 kubenswrapper[38936]: I0216 21:40:28.984259 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f\") " pod="openstack/nova-metadata-0" Feb 16 21:40:28.984474 master-0 kubenswrapper[38936]: I0216 21:40:28.984359 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn52s\" (UniqueName: \"kubernetes.io/projected/6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f-kube-api-access-nn52s\") pod \"nova-metadata-0\" (UID: 
\"6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f\") " pod="openstack/nova-metadata-0" Feb 16 21:40:28.984474 master-0 kubenswrapper[38936]: I0216 21:40:28.984398 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f-logs\") pod \"nova-metadata-0\" (UID: \"6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f\") " pod="openstack/nova-metadata-0" Feb 16 21:40:28.984474 master-0 kubenswrapper[38936]: I0216 21:40:28.984461 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f\") " pod="openstack/nova-metadata-0" Feb 16 21:40:29.088191 master-0 kubenswrapper[38936]: I0216 21:40:29.088045 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f\") " pod="openstack/nova-metadata-0" Feb 16 21:40:29.088448 master-0 kubenswrapper[38936]: I0216 21:40:29.088280 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f-config-data\") pod \"nova-metadata-0\" (UID: \"6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f\") " pod="openstack/nova-metadata-0" Feb 16 21:40:29.088448 master-0 kubenswrapper[38936]: I0216 21:40:29.088393 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f\") " pod="openstack/nova-metadata-0" Feb 
16 21:40:29.088555 master-0 kubenswrapper[38936]: I0216 21:40:29.088502 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn52s\" (UniqueName: \"kubernetes.io/projected/6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f-kube-api-access-nn52s\") pod \"nova-metadata-0\" (UID: \"6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f\") " pod="openstack/nova-metadata-0" Feb 16 21:40:29.088555 master-0 kubenswrapper[38936]: I0216 21:40:29.088543 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f-logs\") pod \"nova-metadata-0\" (UID: \"6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f\") " pod="openstack/nova-metadata-0" Feb 16 21:40:29.089137 master-0 kubenswrapper[38936]: I0216 21:40:29.089098 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f-logs\") pod \"nova-metadata-0\" (UID: \"6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f\") " pod="openstack/nova-metadata-0" Feb 16 21:40:29.093201 master-0 kubenswrapper[38936]: I0216 21:40:29.093157 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f-config-data\") pod \"nova-metadata-0\" (UID: \"6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f\") " pod="openstack/nova-metadata-0" Feb 16 21:40:29.093861 master-0 kubenswrapper[38936]: I0216 21:40:29.093818 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f\") " pod="openstack/nova-metadata-0" Feb 16 21:40:29.093997 master-0 kubenswrapper[38936]: I0216 21:40:29.093968 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f\") " pod="openstack/nova-metadata-0" Feb 16 21:40:29.106533 master-0 kubenswrapper[38936]: I0216 21:40:29.106462 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn52s\" (UniqueName: \"kubernetes.io/projected/6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f-kube-api-access-nn52s\") pod \"nova-metadata-0\" (UID: \"6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f\") " pod="openstack/nova-metadata-0" Feb 16 21:40:29.219844 master-0 kubenswrapper[38936]: I0216 21:40:29.219758 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 21:40:29.729372 master-0 kubenswrapper[38936]: I0216 21:40:29.729297 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 21:40:29.730293 master-0 kubenswrapper[38936]: W0216 21:40:29.730252 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c58f4af_d0e6_44f9_a5a9_b591ffd8df3f.slice/crio-4c8cd9158573aed5713e3f1bb9441e400a8f595376a503c52c27dc7336438e6d WatchSource:0}: Error finding container 4c8cd9158573aed5713e3f1bb9441e400a8f595376a503c52c27dc7336438e6d: Status 404 returned error can't find the container with id 4c8cd9158573aed5713e3f1bb9441e400a8f595376a503c52c27dc7336438e6d Feb 16 21:40:29.891611 master-0 kubenswrapper[38936]: I0216 21:40:29.891547 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ff2d05d-b1a8-4695-b186-f2422a5c8186" path="/var/lib/kubelet/pods/6ff2d05d-b1a8-4695-b186-f2422a5c8186/volumes" Feb 16 21:40:30.527498 master-0 kubenswrapper[38936]: I0216 21:40:30.527431 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f","Type":"ContainerStarted","Data":"c3f576d7d2b18651ade9687aecf2045a522454b26f6386fdec7e730b03ecfe02"} Feb 16 21:40:30.527498 master-0 kubenswrapper[38936]: I0216 21:40:30.527505 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f","Type":"ContainerStarted","Data":"3ca980a413b7d8d27b660672d40a60940da07fd3a57adc7825d1a3bd0d9ca743"} Feb 16 21:40:30.527866 master-0 kubenswrapper[38936]: I0216 21:40:30.527523 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f","Type":"ContainerStarted","Data":"4c8cd9158573aed5713e3f1bb9441e400a8f595376a503c52c27dc7336438e6d"} Feb 16 21:40:30.563862 master-0 kubenswrapper[38936]: I0216 21:40:30.563787 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.56376601 podStartE2EDuration="2.56376601s" podCreationTimestamp="2026-02-16 21:40:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:40:30.558513159 +0000 UTC m=+1060.910516521" watchObservedRunningTime="2026-02-16 21:40:30.56376601 +0000 UTC m=+1060.915769372" Feb 16 21:40:32.325414 master-0 kubenswrapper[38936]: I0216 21:40:32.325309 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 21:40:34.221025 master-0 kubenswrapper[38936]: I0216 21:40:34.220954 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 21:40:34.221025 master-0 kubenswrapper[38936]: I0216 21:40:34.221031 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 21:40:37.153088 master-0 kubenswrapper[38936]: I0216 21:40:37.152997 38936 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 21:40:37.153088 master-0 kubenswrapper[38936]: I0216 21:40:37.153097 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 21:40:37.325328 master-0 kubenswrapper[38936]: I0216 21:40:37.325230 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 21:40:37.361830 master-0 kubenswrapper[38936]: I0216 21:40:37.361761 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 21:40:37.653304 master-0 kubenswrapper[38936]: I0216 21:40:37.653242 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 21:40:38.166913 master-0 kubenswrapper[38936]: I0216 21:40:38.166837 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e3e12d96-d7b4-4211-b286-ac9e27097f74" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.128.1.21:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:40:38.167526 master-0 kubenswrapper[38936]: I0216 21:40:38.166912 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e3e12d96-d7b4-4211-b286-ac9e27097f74" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.128.1.21:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:40:39.223197 master-0 kubenswrapper[38936]: I0216 21:40:39.222590 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 21:40:39.223197 master-0 kubenswrapper[38936]: I0216 21:40:39.222788 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 21:40:40.233878 master-0 kubenswrapper[38936]: I0216 
21:40:40.233797 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.22:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:40:40.234554 master-0 kubenswrapper[38936]: I0216 21:40:40.233816 38936 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6c58f4af-d0e6-44f9-a5a9-b591ffd8df3f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.22:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 21:40:47.151756 master-0 kubenswrapper[38936]: I0216 21:40:47.151687 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 21:40:47.152480 master-0 kubenswrapper[38936]: I0216 21:40:47.152363 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 21:40:47.152870 master-0 kubenswrapper[38936]: I0216 21:40:47.152821 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 21:40:47.162184 master-0 kubenswrapper[38936]: I0216 21:40:47.162075 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 21:40:47.746721 master-0 kubenswrapper[38936]: I0216 21:40:47.742696 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 21:40:47.750664 master-0 kubenswrapper[38936]: I0216 21:40:47.749533 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 21:40:49.227414 master-0 kubenswrapper[38936]: I0216 21:40:49.226941 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 21:40:49.231364 master-0 
kubenswrapper[38936]: I0216 21:40:49.231303 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 21:40:49.236484 master-0 kubenswrapper[38936]: I0216 21:40:49.236413 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 21:40:49.782273 master-0 kubenswrapper[38936]: I0216 21:40:49.782172 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 21:41:17.427764 master-0 kubenswrapper[38936]: I0216 21:41:17.427666 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-8c88f"] Feb 16 21:41:17.429014 master-0 kubenswrapper[38936]: I0216 21:41:17.428973 38936 kuberuntime_container.go:808] "Killing container with a grace period" pod="sushy-emulator/sushy-emulator-58f4c9b998-8c88f" podUID="ee0e3566-8d48-46f0-8f11-d044fecd942a" containerName="sushy-emulator" containerID="cri-o://0afcce254abd8c8be1869a01d306a01733a29e4e5bccc7a689f477788e4f7741" gracePeriod=30 Feb 16 21:41:18.020944 master-0 kubenswrapper[38936]: I0216 21:41:18.020873 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-58f4c9b998-8c88f" Feb 16 21:41:18.131891 master-0 kubenswrapper[38936]: I0216 21:41:18.131823 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/ee0e3566-8d48-46f0-8f11-d044fecd942a-sushy-emulator-config\") pod \"ee0e3566-8d48-46f0-8f11-d044fecd942a\" (UID: \"ee0e3566-8d48-46f0-8f11-d044fecd942a\") " Feb 16 21:41:18.132278 master-0 kubenswrapper[38936]: I0216 21:41:18.132037 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/ee0e3566-8d48-46f0-8f11-d044fecd942a-os-client-config\") pod \"ee0e3566-8d48-46f0-8f11-d044fecd942a\" (UID: \"ee0e3566-8d48-46f0-8f11-d044fecd942a\") " Feb 16 21:41:18.132278 master-0 kubenswrapper[38936]: I0216 21:41:18.132097 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmr94\" (UniqueName: \"kubernetes.io/projected/ee0e3566-8d48-46f0-8f11-d044fecd942a-kube-api-access-xmr94\") pod \"ee0e3566-8d48-46f0-8f11-d044fecd942a\" (UID: \"ee0e3566-8d48-46f0-8f11-d044fecd942a\") " Feb 16 21:41:18.132620 master-0 kubenswrapper[38936]: I0216 21:41:18.132567 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee0e3566-8d48-46f0-8f11-d044fecd942a-sushy-emulator-config" (OuterVolumeSpecName: "sushy-emulator-config") pod "ee0e3566-8d48-46f0-8f11-d044fecd942a" (UID: "ee0e3566-8d48-46f0-8f11-d044fecd942a"). InnerVolumeSpecName "sushy-emulator-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:41:18.137269 master-0 kubenswrapper[38936]: I0216 21:41:18.137214 38936 reconciler_common.go:293] "Volume detached for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/ee0e3566-8d48-46f0-8f11-d044fecd942a-sushy-emulator-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:41:18.143010 master-0 kubenswrapper[38936]: I0216 21:41:18.142953 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee0e3566-8d48-46f0-8f11-d044fecd942a-kube-api-access-xmr94" (OuterVolumeSpecName: "kube-api-access-xmr94") pod "ee0e3566-8d48-46f0-8f11-d044fecd942a" (UID: "ee0e3566-8d48-46f0-8f11-d044fecd942a"). InnerVolumeSpecName "kube-api-access-xmr94". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:41:18.152965 master-0 kubenswrapper[38936]: I0216 21:41:18.152854 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee0e3566-8d48-46f0-8f11-d044fecd942a-os-client-config" (OuterVolumeSpecName: "os-client-config") pod "ee0e3566-8d48-46f0-8f11-d044fecd942a" (UID: "ee0e3566-8d48-46f0-8f11-d044fecd942a"). InnerVolumeSpecName "os-client-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:41:18.176415 master-0 kubenswrapper[38936]: I0216 21:41:18.176354 38936 generic.go:334] "Generic (PLEG): container finished" podID="ee0e3566-8d48-46f0-8f11-d044fecd942a" containerID="0afcce254abd8c8be1869a01d306a01733a29e4e5bccc7a689f477788e4f7741" exitCode=0 Feb 16 21:41:18.176415 master-0 kubenswrapper[38936]: I0216 21:41:18.176411 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-58f4c9b998-8c88f" event={"ID":"ee0e3566-8d48-46f0-8f11-d044fecd942a","Type":"ContainerDied","Data":"0afcce254abd8c8be1869a01d306a01733a29e4e5bccc7a689f477788e4f7741"} Feb 16 21:41:18.176735 master-0 kubenswrapper[38936]: I0216 21:41:18.176443 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-58f4c9b998-8c88f" event={"ID":"ee0e3566-8d48-46f0-8f11-d044fecd942a","Type":"ContainerDied","Data":"fb1c5b2b4d80fd53196369351d9844acdbc5f400c1ef11b5a8e9ac112ce7d435"} Feb 16 21:41:18.176735 master-0 kubenswrapper[38936]: I0216 21:41:18.176460 38936 scope.go:117] "RemoveContainer" containerID="0afcce254abd8c8be1869a01d306a01733a29e4e5bccc7a689f477788e4f7741" Feb 16 21:41:18.176735 master-0 kubenswrapper[38936]: I0216 21:41:18.176598 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-58f4c9b998-8c88f" Feb 16 21:41:18.184850 master-0 kubenswrapper[38936]: I0216 21:41:18.184803 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-64488c485f-htzbf"] Feb 16 21:41:18.185568 master-0 kubenswrapper[38936]: E0216 21:41:18.185546 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0e3566-8d48-46f0-8f11-d044fecd942a" containerName="sushy-emulator" Feb 16 21:41:18.185669 master-0 kubenswrapper[38936]: I0216 21:41:18.185640 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0e3566-8d48-46f0-8f11-d044fecd942a" containerName="sushy-emulator" Feb 16 21:41:18.186080 master-0 kubenswrapper[38936]: I0216 21:41:18.186064 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee0e3566-8d48-46f0-8f11-d044fecd942a" containerName="sushy-emulator" Feb 16 21:41:18.187232 master-0 kubenswrapper[38936]: I0216 21:41:18.187211 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-64488c485f-htzbf" Feb 16 21:41:18.190476 master-0 kubenswrapper[38936]: I0216 21:41:18.190420 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config" Feb 16 21:41:18.197802 master-0 kubenswrapper[38936]: I0216 21:41:18.197717 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-64488c485f-htzbf"] Feb 16 21:41:18.239424 master-0 kubenswrapper[38936]: I0216 21:41:18.239347 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/8fdd8d32-5ed9-438f-8b3d-a6e3ee56ff2d-os-client-config\") pod \"sushy-emulator-64488c485f-htzbf\" (UID: \"8fdd8d32-5ed9-438f-8b3d-a6e3ee56ff2d\") " pod="sushy-emulator/sushy-emulator-64488c485f-htzbf" Feb 16 21:41:18.239816 master-0 kubenswrapper[38936]: I0216 21:41:18.239583 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2747\" (UniqueName: \"kubernetes.io/projected/8fdd8d32-5ed9-438f-8b3d-a6e3ee56ff2d-kube-api-access-q2747\") pod \"sushy-emulator-64488c485f-htzbf\" (UID: \"8fdd8d32-5ed9-438f-8b3d-a6e3ee56ff2d\") " pod="sushy-emulator/sushy-emulator-64488c485f-htzbf" Feb 16 21:41:18.239816 master-0 kubenswrapper[38936]: I0216 21:41:18.239624 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/8fdd8d32-5ed9-438f-8b3d-a6e3ee56ff2d-sushy-emulator-config\") pod \"sushy-emulator-64488c485f-htzbf\" (UID: \"8fdd8d32-5ed9-438f-8b3d-a6e3ee56ff2d\") " pod="sushy-emulator/sushy-emulator-64488c485f-htzbf" Feb 16 21:41:18.239932 master-0 kubenswrapper[38936]: I0216 21:41:18.239907 38936 reconciler_common.go:293] "Volume detached for volume \"os-client-config\" (UniqueName: 
\"kubernetes.io/secret/ee0e3566-8d48-46f0-8f11-d044fecd942a-os-client-config\") on node \"master-0\" DevicePath \"\"" Feb 16 21:41:18.239972 master-0 kubenswrapper[38936]: I0216 21:41:18.239936 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmr94\" (UniqueName: \"kubernetes.io/projected/ee0e3566-8d48-46f0-8f11-d044fecd942a-kube-api-access-xmr94\") on node \"master-0\" DevicePath \"\"" Feb 16 21:41:18.260068 master-0 kubenswrapper[38936]: I0216 21:41:18.260009 38936 scope.go:117] "RemoveContainer" containerID="0afcce254abd8c8be1869a01d306a01733a29e4e5bccc7a689f477788e4f7741" Feb 16 21:41:18.260763 master-0 kubenswrapper[38936]: E0216 21:41:18.260702 38936 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0afcce254abd8c8be1869a01d306a01733a29e4e5bccc7a689f477788e4f7741\": container with ID starting with 0afcce254abd8c8be1869a01d306a01733a29e4e5bccc7a689f477788e4f7741 not found: ID does not exist" containerID="0afcce254abd8c8be1869a01d306a01733a29e4e5bccc7a689f477788e4f7741" Feb 16 21:41:18.260854 master-0 kubenswrapper[38936]: I0216 21:41:18.260752 38936 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0afcce254abd8c8be1869a01d306a01733a29e4e5bccc7a689f477788e4f7741"} err="failed to get container status \"0afcce254abd8c8be1869a01d306a01733a29e4e5bccc7a689f477788e4f7741\": rpc error: code = NotFound desc = could not find container \"0afcce254abd8c8be1869a01d306a01733a29e4e5bccc7a689f477788e4f7741\": container with ID starting with 0afcce254abd8c8be1869a01d306a01733a29e4e5bccc7a689f477788e4f7741 not found: ID does not exist" Feb 16 21:41:18.283081 master-0 kubenswrapper[38936]: I0216 21:41:18.282934 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-8c88f"] Feb 16 21:41:18.294822 master-0 kubenswrapper[38936]: I0216 21:41:18.294750 38936 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-8c88f"] Feb 16 21:41:18.344613 master-0 kubenswrapper[38936]: I0216 21:41:18.344538 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/8fdd8d32-5ed9-438f-8b3d-a6e3ee56ff2d-os-client-config\") pod \"sushy-emulator-64488c485f-htzbf\" (UID: \"8fdd8d32-5ed9-438f-8b3d-a6e3ee56ff2d\") " pod="sushy-emulator/sushy-emulator-64488c485f-htzbf" Feb 16 21:41:18.346022 master-0 kubenswrapper[38936]: I0216 21:41:18.344790 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2747\" (UniqueName: \"kubernetes.io/projected/8fdd8d32-5ed9-438f-8b3d-a6e3ee56ff2d-kube-api-access-q2747\") pod \"sushy-emulator-64488c485f-htzbf\" (UID: \"8fdd8d32-5ed9-438f-8b3d-a6e3ee56ff2d\") " pod="sushy-emulator/sushy-emulator-64488c485f-htzbf" Feb 16 21:41:18.346022 master-0 kubenswrapper[38936]: I0216 21:41:18.344819 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/8fdd8d32-5ed9-438f-8b3d-a6e3ee56ff2d-sushy-emulator-config\") pod \"sushy-emulator-64488c485f-htzbf\" (UID: \"8fdd8d32-5ed9-438f-8b3d-a6e3ee56ff2d\") " pod="sushy-emulator/sushy-emulator-64488c485f-htzbf" Feb 16 21:41:18.348364 master-0 kubenswrapper[38936]: I0216 21:41:18.347307 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/8fdd8d32-5ed9-438f-8b3d-a6e3ee56ff2d-sushy-emulator-config\") pod \"sushy-emulator-64488c485f-htzbf\" (UID: \"8fdd8d32-5ed9-438f-8b3d-a6e3ee56ff2d\") " pod="sushy-emulator/sushy-emulator-64488c485f-htzbf" Feb 16 21:41:18.350998 master-0 kubenswrapper[38936]: I0216 21:41:18.350947 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: 
\"kubernetes.io/secret/8fdd8d32-5ed9-438f-8b3d-a6e3ee56ff2d-os-client-config\") pod \"sushy-emulator-64488c485f-htzbf\" (UID: \"8fdd8d32-5ed9-438f-8b3d-a6e3ee56ff2d\") " pod="sushy-emulator/sushy-emulator-64488c485f-htzbf" Feb 16 21:41:18.365276 master-0 kubenswrapper[38936]: I0216 21:41:18.365220 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2747\" (UniqueName: \"kubernetes.io/projected/8fdd8d32-5ed9-438f-8b3d-a6e3ee56ff2d-kube-api-access-q2747\") pod \"sushy-emulator-64488c485f-htzbf\" (UID: \"8fdd8d32-5ed9-438f-8b3d-a6e3ee56ff2d\") " pod="sushy-emulator/sushy-emulator-64488c485f-htzbf" Feb 16 21:41:18.563378 master-0 kubenswrapper[38936]: I0216 21:41:18.563246 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-64488c485f-htzbf" Feb 16 21:41:19.101493 master-0 kubenswrapper[38936]: W0216 21:41:19.101425 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fdd8d32_5ed9_438f_8b3d_a6e3ee56ff2d.slice/crio-fe8c768a302751c7ae8a0c7fd4c50a53c2f6609e3a60f24bd3d29e649d92eac2 WatchSource:0}: Error finding container fe8c768a302751c7ae8a0c7fd4c50a53c2f6609e3a60f24bd3d29e649d92eac2: Status 404 returned error can't find the container with id fe8c768a302751c7ae8a0c7fd4c50a53c2f6609e3a60f24bd3d29e649d92eac2 Feb 16 21:41:19.104734 master-0 kubenswrapper[38936]: I0216 21:41:19.104633 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-64488c485f-htzbf"] Feb 16 21:41:19.190518 master-0 kubenswrapper[38936]: I0216 21:41:19.190450 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-64488c485f-htzbf" event={"ID":"8fdd8d32-5ed9-438f-8b3d-a6e3ee56ff2d","Type":"ContainerStarted","Data":"fe8c768a302751c7ae8a0c7fd4c50a53c2f6609e3a60f24bd3d29e649d92eac2"} Feb 16 21:41:19.889232 master-0 kubenswrapper[38936]: I0216 
21:41:19.889167 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee0e3566-8d48-46f0-8f11-d044fecd942a" path="/var/lib/kubelet/pods/ee0e3566-8d48-46f0-8f11-d044fecd942a/volumes" Feb 16 21:41:20.205691 master-0 kubenswrapper[38936]: I0216 21:41:20.205541 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-64488c485f-htzbf" event={"ID":"8fdd8d32-5ed9-438f-8b3d-a6e3ee56ff2d","Type":"ContainerStarted","Data":"731bafdbc2e7b11b5c52694df0eeac0f8be111eb89ed29957d9fbf18b1e13f11"} Feb 16 21:41:20.231667 master-0 kubenswrapper[38936]: I0216 21:41:20.231540 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-64488c485f-htzbf" podStartSLOduration=2.2315157660000002 podStartE2EDuration="2.231515766s" podCreationTimestamp="2026-02-16 21:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 21:41:20.229780959 +0000 UTC m=+1110.581784341" watchObservedRunningTime="2026-02-16 21:41:20.231515766 +0000 UTC m=+1110.583519128" Feb 16 21:41:28.564285 master-0 kubenswrapper[38936]: I0216 21:41:28.564203 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-64488c485f-htzbf" Feb 16 21:41:28.564285 master-0 kubenswrapper[38936]: I0216 21:41:28.564286 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-64488c485f-htzbf" Feb 16 21:41:28.575170 master-0 kubenswrapper[38936]: I0216 21:41:28.575068 38936 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-64488c485f-htzbf" Feb 16 21:41:29.338643 master-0 kubenswrapper[38936]: I0216 21:41:29.338568 38936 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-64488c485f-htzbf" Feb 16 21:41:39.434748 master-0 
kubenswrapper[38936]: E0216 21:41:39.434618 38936 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.32.10:38962->192.168.32.10:38045: write tcp 192.168.32.10:38962->192.168.32.10:38045: write: broken pipe Feb 16 21:42:58.345471 master-0 kubenswrapper[38936]: I0216 21:42:58.345401 38936 scope.go:117] "RemoveContainer" containerID="3873e418fbe1888dda88c8ae062427acb57798ef601e34b15ae1d295adf9215f" Feb 16 21:42:58.382432 master-0 kubenswrapper[38936]: I0216 21:42:58.382376 38936 scope.go:117] "RemoveContainer" containerID="91f79cc7984d93a5e36521271122c17969f7ef5e80e3bcfe605d7a283ba1cd0d" Feb 16 21:42:58.451796 master-0 kubenswrapper[38936]: I0216 21:42:58.451743 38936 scope.go:117] "RemoveContainer" containerID="4933899af8057b08b10fcdcf90edb599ecef52e406afa529dd623f56950f9e05" Feb 16 21:43:46.177451 master-0 kubenswrapper[38936]: I0216 21:43:46.177378 38936 trace.go:236] Trace[646834476]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-0" (16-Feb-2026 21:43:44.356) (total time: 1820ms): Feb 16 21:43:46.177451 master-0 kubenswrapper[38936]: Trace[646834476]: [1.82059002s] [1.82059002s] END Feb 16 21:43:58.601731 master-0 kubenswrapper[38936]: I0216 21:43:58.601666 38936 scope.go:117] "RemoveContainer" containerID="47ea8ef4cdc91a083bbba85843b6f6710d5786053128103ed8cf484c75a6e412" Feb 16 21:43:58.628909 master-0 kubenswrapper[38936]: I0216 21:43:58.628841 38936 scope.go:117] "RemoveContainer" containerID="defb9a28af561a177f019316552118ccc95154f90eb18819e2620510b24eccd8" Feb 16 21:43:58.658344 master-0 kubenswrapper[38936]: I0216 21:43:58.658272 38936 scope.go:117] "RemoveContainer" containerID="00a4fc3bbf18bc3cb9537dd8d4ec9038d4f790fa7ba1dea9e33affee71ef2a28" Feb 16 21:43:58.724481 master-0 kubenswrapper[38936]: I0216 21:43:58.724422 38936 scope.go:117] "RemoveContainer" containerID="9b40ae4cd170384825d22de37c41e591443b6c843f71978e0eb1569da629a3aa" Feb 16 21:45:00.174157 master-0 kubenswrapper[38936]: I0216 
21:45:00.174045 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521305-zqlbn"] Feb 16 21:45:00.176335 master-0 kubenswrapper[38936]: I0216 21:45:00.176298 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-zqlbn" Feb 16 21:45:00.179785 master-0 kubenswrapper[38936]: I0216 21:45:00.179720 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 21:45:00.179956 master-0 kubenswrapper[38936]: I0216 21:45:00.179738 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-r6wp5" Feb 16 21:45:00.191067 master-0 kubenswrapper[38936]: I0216 21:45:00.190874 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521305-zqlbn"] Feb 16 21:45:00.365945 master-0 kubenswrapper[38936]: I0216 21:45:00.365850 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c995dc0-5eb8-49ad-963a-dc5773f5b46d-secret-volume\") pod \"collect-profiles-29521305-zqlbn\" (UID: \"0c995dc0-5eb8-49ad-963a-dc5773f5b46d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-zqlbn" Feb 16 21:45:00.366202 master-0 kubenswrapper[38936]: I0216 21:45:00.366070 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c995dc0-5eb8-49ad-963a-dc5773f5b46d-config-volume\") pod \"collect-profiles-29521305-zqlbn\" (UID: \"0c995dc0-5eb8-49ad-963a-dc5773f5b46d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-zqlbn" Feb 16 21:45:00.366587 master-0 kubenswrapper[38936]: I0216 21:45:00.366567 38936 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q446m\" (UniqueName: \"kubernetes.io/projected/0c995dc0-5eb8-49ad-963a-dc5773f5b46d-kube-api-access-q446m\") pod \"collect-profiles-29521305-zqlbn\" (UID: \"0c995dc0-5eb8-49ad-963a-dc5773f5b46d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-zqlbn" Feb 16 21:45:00.468998 master-0 kubenswrapper[38936]: I0216 21:45:00.468821 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q446m\" (UniqueName: \"kubernetes.io/projected/0c995dc0-5eb8-49ad-963a-dc5773f5b46d-kube-api-access-q446m\") pod \"collect-profiles-29521305-zqlbn\" (UID: \"0c995dc0-5eb8-49ad-963a-dc5773f5b46d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-zqlbn" Feb 16 21:45:00.469222 master-0 kubenswrapper[38936]: I0216 21:45:00.469033 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c995dc0-5eb8-49ad-963a-dc5773f5b46d-secret-volume\") pod \"collect-profiles-29521305-zqlbn\" (UID: \"0c995dc0-5eb8-49ad-963a-dc5773f5b46d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-zqlbn" Feb 16 21:45:00.469281 master-0 kubenswrapper[38936]: I0216 21:45:00.469224 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c995dc0-5eb8-49ad-963a-dc5773f5b46d-config-volume\") pod \"collect-profiles-29521305-zqlbn\" (UID: \"0c995dc0-5eb8-49ad-963a-dc5773f5b46d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-zqlbn" Feb 16 21:45:00.470339 master-0 kubenswrapper[38936]: I0216 21:45:00.470313 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c995dc0-5eb8-49ad-963a-dc5773f5b46d-config-volume\") pod 
\"collect-profiles-29521305-zqlbn\" (UID: \"0c995dc0-5eb8-49ad-963a-dc5773f5b46d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-zqlbn" Feb 16 21:45:00.474894 master-0 kubenswrapper[38936]: I0216 21:45:00.474814 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c995dc0-5eb8-49ad-963a-dc5773f5b46d-secret-volume\") pod \"collect-profiles-29521305-zqlbn\" (UID: \"0c995dc0-5eb8-49ad-963a-dc5773f5b46d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-zqlbn" Feb 16 21:45:00.486814 master-0 kubenswrapper[38936]: I0216 21:45:00.486717 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q446m\" (UniqueName: \"kubernetes.io/projected/0c995dc0-5eb8-49ad-963a-dc5773f5b46d-kube-api-access-q446m\") pod \"collect-profiles-29521305-zqlbn\" (UID: \"0c995dc0-5eb8-49ad-963a-dc5773f5b46d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-zqlbn" Feb 16 21:45:00.519203 master-0 kubenswrapper[38936]: I0216 21:45:00.519131 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-zqlbn" Feb 16 21:45:00.981890 master-0 kubenswrapper[38936]: I0216 21:45:00.981786 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521305-zqlbn"] Feb 16 21:45:01.573050 master-0 kubenswrapper[38936]: I0216 21:45:01.572993 38936 generic.go:334] "Generic (PLEG): container finished" podID="0c995dc0-5eb8-49ad-963a-dc5773f5b46d" containerID="6a86114bfe8d8a661b8347920855366b39ca9d09a4e0263a6eb669bb8d436323" exitCode=0 Feb 16 21:45:01.573870 master-0 kubenswrapper[38936]: I0216 21:45:01.573045 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-zqlbn" event={"ID":"0c995dc0-5eb8-49ad-963a-dc5773f5b46d","Type":"ContainerDied","Data":"6a86114bfe8d8a661b8347920855366b39ca9d09a4e0263a6eb669bb8d436323"} Feb 16 21:45:01.573870 master-0 kubenswrapper[38936]: I0216 21:45:01.573104 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-zqlbn" event={"ID":"0c995dc0-5eb8-49ad-963a-dc5773f5b46d","Type":"ContainerStarted","Data":"01ddb2d6f2daaefcca2c0625a86178501fc2827759905198987ba64f1e1f4179"} Feb 16 21:45:03.040610 master-0 kubenswrapper[38936]: I0216 21:45:03.040540 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-zqlbn" Feb 16 21:45:03.139293 master-0 kubenswrapper[38936]: I0216 21:45:03.139164 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c995dc0-5eb8-49ad-963a-dc5773f5b46d-config-volume\") pod \"0c995dc0-5eb8-49ad-963a-dc5773f5b46d\" (UID: \"0c995dc0-5eb8-49ad-963a-dc5773f5b46d\") " Feb 16 21:45:03.139576 master-0 kubenswrapper[38936]: I0216 21:45:03.139448 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q446m\" (UniqueName: \"kubernetes.io/projected/0c995dc0-5eb8-49ad-963a-dc5773f5b46d-kube-api-access-q446m\") pod \"0c995dc0-5eb8-49ad-963a-dc5773f5b46d\" (UID: \"0c995dc0-5eb8-49ad-963a-dc5773f5b46d\") " Feb 16 21:45:03.139576 master-0 kubenswrapper[38936]: I0216 21:45:03.139557 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c995dc0-5eb8-49ad-963a-dc5773f5b46d-secret-volume\") pod \"0c995dc0-5eb8-49ad-963a-dc5773f5b46d\" (UID: \"0c995dc0-5eb8-49ad-963a-dc5773f5b46d\") " Feb 16 21:45:03.139841 master-0 kubenswrapper[38936]: I0216 21:45:03.139773 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c995dc0-5eb8-49ad-963a-dc5773f5b46d-config-volume" (OuterVolumeSpecName: "config-volume") pod "0c995dc0-5eb8-49ad-963a-dc5773f5b46d" (UID: "0c995dc0-5eb8-49ad-963a-dc5773f5b46d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 21:45:03.140676 master-0 kubenswrapper[38936]: I0216 21:45:03.140623 38936 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c995dc0-5eb8-49ad-963a-dc5773f5b46d-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 16 21:45:03.143325 master-0 kubenswrapper[38936]: I0216 21:45:03.143281 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c995dc0-5eb8-49ad-963a-dc5773f5b46d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0c995dc0-5eb8-49ad-963a-dc5773f5b46d" (UID: "0c995dc0-5eb8-49ad-963a-dc5773f5b46d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 21:45:03.146331 master-0 kubenswrapper[38936]: I0216 21:45:03.146275 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c995dc0-5eb8-49ad-963a-dc5773f5b46d-kube-api-access-q446m" (OuterVolumeSpecName: "kube-api-access-q446m") pod "0c995dc0-5eb8-49ad-963a-dc5773f5b46d" (UID: "0c995dc0-5eb8-49ad-963a-dc5773f5b46d"). InnerVolumeSpecName "kube-api-access-q446m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 21:45:03.243674 master-0 kubenswrapper[38936]: I0216 21:45:03.243591 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q446m\" (UniqueName: \"kubernetes.io/projected/0c995dc0-5eb8-49ad-963a-dc5773f5b46d-kube-api-access-q446m\") on node \"master-0\" DevicePath \"\"" Feb 16 21:45:03.243674 master-0 kubenswrapper[38936]: I0216 21:45:03.243636 38936 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c995dc0-5eb8-49ad-963a-dc5773f5b46d-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 16 21:45:03.597575 master-0 kubenswrapper[38936]: I0216 21:45:03.597503 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-zqlbn" event={"ID":"0c995dc0-5eb8-49ad-963a-dc5773f5b46d","Type":"ContainerDied","Data":"01ddb2d6f2daaefcca2c0625a86178501fc2827759905198987ba64f1e1f4179"} Feb 16 21:45:03.597575 master-0 kubenswrapper[38936]: I0216 21:45:03.597574 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01ddb2d6f2daaefcca2c0625a86178501fc2827759905198987ba64f1e1f4179" Feb 16 21:45:03.597882 master-0 kubenswrapper[38936]: I0216 21:45:03.597549 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521305-zqlbn" Feb 16 21:45:04.154348 master-0 kubenswrapper[38936]: I0216 21:45:04.154263 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d"] Feb 16 21:45:04.169281 master-0 kubenswrapper[38936]: I0216 21:45:04.169198 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d"] Feb 16 21:45:05.886779 master-0 kubenswrapper[38936]: I0216 21:45:05.886444 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cc1da27-6eaf-4177-b2d8-1546a9d94f90" path="/var/lib/kubelet/pods/4cc1da27-6eaf-4177-b2d8-1546a9d94f90/volumes" Feb 16 21:45:58.879109 master-0 kubenswrapper[38936]: I0216 21:45:58.879000 38936 scope.go:117] "RemoveContainer" containerID="b5c9ef27352d95c27da1fd4de0d350f8371e4f69cc5b84960004238d748e1ab6" Feb 16 21:45:58.907953 master-0 kubenswrapper[38936]: I0216 21:45:58.907872 38936 scope.go:117] "RemoveContainer" containerID="470d23741df96a01287cde08c9a2859ac687ae00865a4c757a06c718e667e150" Feb 16 21:45:58.962092 master-0 kubenswrapper[38936]: I0216 21:45:58.961920 38936 scope.go:117] "RemoveContainer" containerID="eeec254b0379d43597b407007ab37c7a023f5baf0de9ae47b558dadd37241c75" Feb 16 21:46:36.064327 master-0 kubenswrapper[38936]: I0216 21:46:36.064229 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-kjwf8"] Feb 16 21:46:36.078132 master-0 kubenswrapper[38936]: I0216 21:46:36.078027 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-d442-account-create-update-p2dfg"] Feb 16 21:46:36.091350 master-0 kubenswrapper[38936]: I0216 21:46:36.091270 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-r2xtw"] Feb 16 21:46:36.105720 master-0 kubenswrapper[38936]: I0216 21:46:36.105605 38936 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/glance-d442-account-create-update-p2dfg"] Feb 16 21:46:36.120330 master-0 kubenswrapper[38936]: I0216 21:46:36.120239 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-r2xtw"] Feb 16 21:46:36.137204 master-0 kubenswrapper[38936]: I0216 21:46:36.137121 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-kjwf8"] Feb 16 21:46:37.044061 master-0 kubenswrapper[38936]: I0216 21:46:37.043971 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-85e2-account-create-update-xh6dm"] Feb 16 21:46:37.057199 master-0 kubenswrapper[38936]: I0216 21:46:37.057083 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-cvnf4"] Feb 16 21:46:37.072052 master-0 kubenswrapper[38936]: I0216 21:46:37.071965 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-48b3-account-create-update-jsqjk"] Feb 16 21:46:37.085928 master-0 kubenswrapper[38936]: I0216 21:46:37.085846 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-85e2-account-create-update-xh6dm"] Feb 16 21:46:37.099130 master-0 kubenswrapper[38936]: I0216 21:46:37.099057 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-48b3-account-create-update-jsqjk"] Feb 16 21:46:37.113181 master-0 kubenswrapper[38936]: I0216 21:46:37.113103 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-cvnf4"] Feb 16 21:46:37.898798 master-0 kubenswrapper[38936]: I0216 21:46:37.897585 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6946c62d-ccec-4c64-bd64-d660f22d7d7a" path="/var/lib/kubelet/pods/6946c62d-ccec-4c64-bd64-d660f22d7d7a/volumes" Feb 16 21:46:37.899399 master-0 kubenswrapper[38936]: I0216 21:46:37.899352 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="77af961d-92d8-476f-9df5-91a49b295543" path="/var/lib/kubelet/pods/77af961d-92d8-476f-9df5-91a49b295543/volumes" Feb 16 21:46:37.900619 master-0 kubenswrapper[38936]: I0216 21:46:37.900570 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95733bba-8f7b-460f-8793-e85c3f7066c3" path="/var/lib/kubelet/pods/95733bba-8f7b-460f-8793-e85c3f7066c3/volumes" Feb 16 21:46:37.901434 master-0 kubenswrapper[38936]: I0216 21:46:37.901390 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a25b7436-4c82-45df-9e60-d2ceec4544f8" path="/var/lib/kubelet/pods/a25b7436-4c82-45df-9e60-d2ceec4544f8/volumes" Feb 16 21:46:37.903401 master-0 kubenswrapper[38936]: I0216 21:46:37.903135 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c39bb875-9e10-497c-b1d1-c7cd8a76a92d" path="/var/lib/kubelet/pods/c39bb875-9e10-497c-b1d1-c7cd8a76a92d/volumes" Feb 16 21:46:37.904069 master-0 kubenswrapper[38936]: I0216 21:46:37.903952 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff2be0c4-7a42-48ef-801c-8b4422008927" path="/var/lib/kubelet/pods/ff2be0c4-7a42-48ef-801c-8b4422008927/volumes" Feb 16 21:46:59.064291 master-0 kubenswrapper[38936]: I0216 21:46:59.064196 38936 scope.go:117] "RemoveContainer" containerID="2da361135b59fea8df18e0415d446a40e11961b7858e0037b42f135271ff399f" Feb 16 21:46:59.122738 master-0 kubenswrapper[38936]: I0216 21:46:59.122611 38936 scope.go:117] "RemoveContainer" containerID="88f43a1d16f1afe0b7c207e1cab890ac3b2c092ce5e187444b4f7b7297a23d89" Feb 16 21:46:59.166823 master-0 kubenswrapper[38936]: I0216 21:46:59.166748 38936 scope.go:117] "RemoveContainer" containerID="f6ab133b1a2730073efb76f7e0263a4ac9f8bd6f9dfeccda0119e22e0ec4fe86" Feb 16 21:46:59.223569 master-0 kubenswrapper[38936]: I0216 21:46:59.223481 38936 scope.go:117] "RemoveContainer" containerID="9afcfe6dcde02181ae5aea5521b38a55eb9c68240f836c1eefa95e31ab8baaaa" Feb 16 21:46:59.250295 master-0 
kubenswrapper[38936]: I0216 21:46:59.250224 38936 scope.go:117] "RemoveContainer" containerID="e64159a4c0d0f5fd8f4e525fc4ed277fac207a765258f1cb65f005e31dba5c3b" Feb 16 21:46:59.303675 master-0 kubenswrapper[38936]: I0216 21:46:59.303595 38936 scope.go:117] "RemoveContainer" containerID="cb4fc74793d636ba4f4fe5b44befb93c41dc97582abb6be22555369a5070bf23" Feb 16 21:46:59.358956 master-0 kubenswrapper[38936]: I0216 21:46:59.358920 38936 scope.go:117] "RemoveContainer" containerID="ff03077b5bcbe65962804a18b0183ea96a18af8f6c2af8c6cfd9bd03221680c1" Feb 16 21:46:59.419112 master-0 kubenswrapper[38936]: I0216 21:46:59.418990 38936 scope.go:117] "RemoveContainer" containerID="e49f840d110cc59900dd965558a9f83d1ad7ba3b269e9d6e2f6a2da76900ae29" Feb 16 21:47:00.050050 master-0 kubenswrapper[38936]: I0216 21:47:00.049978 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-w6pqc"] Feb 16 21:47:00.068128 master-0 kubenswrapper[38936]: I0216 21:47:00.068041 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-w6pqc"] Feb 16 21:47:01.887639 master-0 kubenswrapper[38936]: I0216 21:47:01.887599 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="775d3c76-9e00-4f01-bf3c-bdb01e653380" path="/var/lib/kubelet/pods/775d3c76-9e00-4f01-bf3c-bdb01e653380/volumes" Feb 16 21:47:05.097342 master-0 kubenswrapper[38936]: I0216 21:47:05.097270 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-gkccd"] Feb 16 21:47:05.135864 master-0 kubenswrapper[38936]: I0216 21:47:05.129619 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-gkccd"] Feb 16 21:47:05.188679 master-0 kubenswrapper[38936]: I0216 21:47:05.180729 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c2ba-account-create-update-x7f7j"] Feb 16 21:47:05.214856 master-0 kubenswrapper[38936]: I0216 21:47:05.211842 38936 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/cinder-c2ba-account-create-update-x7f7j"] Feb 16 21:47:05.891005 master-0 kubenswrapper[38936]: I0216 21:47:05.890781 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e22cd12-ef73-470e-9543-b328a46c9c0d" path="/var/lib/kubelet/pods/5e22cd12-ef73-470e-9543-b328a46c9c0d/volumes" Feb 16 21:47:05.892152 master-0 kubenswrapper[38936]: I0216 21:47:05.891926 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5598012-498e-44d4-9bb0-8d5aadef2f5b" path="/var/lib/kubelet/pods/c5598012-498e-44d4-9bb0-8d5aadef2f5b/volumes" Feb 16 21:47:09.040828 master-0 kubenswrapper[38936]: I0216 21:47:09.040711 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-m4b9n"] Feb 16 21:47:09.051750 master-0 kubenswrapper[38936]: I0216 21:47:09.051669 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-m4b9n"] Feb 16 21:47:09.072108 master-0 kubenswrapper[38936]: I0216 21:47:09.071169 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5d15-account-create-update-lldsm"] Feb 16 21:47:09.088243 master-0 kubenswrapper[38936]: I0216 21:47:09.088143 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5d15-account-create-update-lldsm"] Feb 16 21:47:09.890054 master-0 kubenswrapper[38936]: I0216 21:47:09.889804 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34fa09a5-23a7-4aea-946f-1005774cd8b8" path="/var/lib/kubelet/pods/34fa09a5-23a7-4aea-946f-1005774cd8b8/volumes" Feb 16 21:47:09.890592 master-0 kubenswrapper[38936]: I0216 21:47:09.890531 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6a2a6fa-653f-4e21-a4b5-09bed56ec48f" path="/var/lib/kubelet/pods/a6a2a6fa-653f-4e21-a4b5-09bed56ec48f/volumes" Feb 16 21:47:10.056783 master-0 kubenswrapper[38936]: I0216 21:47:10.056709 38936 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/glance-db-sync-hfz86"] Feb 16 21:47:10.071670 master-0 kubenswrapper[38936]: I0216 21:47:10.069347 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-hfz86"] Feb 16 21:47:11.906951 master-0 kubenswrapper[38936]: I0216 21:47:11.906862 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb312b5-f96b-4689-9d30-c4c878aae0ec" path="/var/lib/kubelet/pods/4bb312b5-f96b-4689-9d30-c4c878aae0ec/volumes" Feb 16 21:47:16.047746 master-0 kubenswrapper[38936]: I0216 21:47:16.047641 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-vprb4"] Feb 16 21:47:16.059827 master-0 kubenswrapper[38936]: I0216 21:47:16.059757 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-vprb4"] Feb 16 21:47:17.892484 master-0 kubenswrapper[38936]: I0216 21:47:17.892405 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03eeee4e-9496-45a9-a3f8-d3a300085c91" path="/var/lib/kubelet/pods/03eeee4e-9496-45a9-a3f8-d3a300085c91/volumes" Feb 16 21:47:22.060555 master-0 kubenswrapper[38936]: I0216 21:47:22.060464 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-create-whl9t"] Feb 16 21:47:22.073035 master-0 kubenswrapper[38936]: I0216 21:47:22.072618 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-09d0-account-create-update-js9dq"] Feb 16 21:47:22.087357 master-0 kubenswrapper[38936]: I0216 21:47:22.087301 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-create-whl9t"] Feb 16 21:47:22.101979 master-0 kubenswrapper[38936]: I0216 21:47:22.101894 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-09d0-account-create-update-js9dq"] Feb 16 21:47:23.891449 master-0 kubenswrapper[38936]: I0216 21:47:23.891381 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="00b83769-f6e4-4403-aada-d460148bb289" path="/var/lib/kubelet/pods/00b83769-f6e4-4403-aada-d460148bb289/volumes" Feb 16 21:47:23.892190 master-0 kubenswrapper[38936]: I0216 21:47:23.892163 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b54cd98b-c06e-486b-951e-93d5c8477416" path="/var/lib/kubelet/pods/b54cd98b-c06e-486b-951e-93d5c8477416/volumes" Feb 16 21:47:37.047099 master-0 kubenswrapper[38936]: I0216 21:47:37.047042 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-t4jt7"] Feb 16 21:47:37.058595 master-0 kubenswrapper[38936]: I0216 21:47:37.058514 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-7xpzq"] Feb 16 21:47:37.072410 master-0 kubenswrapper[38936]: I0216 21:47:37.072340 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-t4jt7"] Feb 16 21:47:37.083398 master-0 kubenswrapper[38936]: I0216 21:47:37.083311 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-7xpzq"] Feb 16 21:47:37.889154 master-0 kubenswrapper[38936]: I0216 21:47:37.889072 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b36f972a-247f-43a5-bf98-e27ab216ed04" path="/var/lib/kubelet/pods/b36f972a-247f-43a5-bf98-e27ab216ed04/volumes" Feb 16 21:47:37.889872 master-0 kubenswrapper[38936]: I0216 21:47:37.889849 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eca578f4-3095-4770-9cdf-5702cdf8540b" path="/var/lib/kubelet/pods/eca578f4-3095-4770-9cdf-5702cdf8540b/volumes" Feb 16 21:47:51.072148 master-0 kubenswrapper[38936]: I0216 21:47:51.072022 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-9c692-db-sync-r9pqq"] Feb 16 21:47:51.103959 master-0 kubenswrapper[38936]: I0216 21:47:51.103892 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-9c692-db-sync-r9pqq"] Feb 16 21:47:51.887978 master-0 
kubenswrapper[38936]: I0216 21:47:51.887916 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7c56a35-a711-40a4-9428-031faf014af4" path="/var/lib/kubelet/pods/a7c56a35-a711-40a4-9428-031faf014af4/volumes" Feb 16 21:47:53.049341 master-0 kubenswrapper[38936]: I0216 21:47:53.049292 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-znszx"] Feb 16 21:47:53.066543 master-0 kubenswrapper[38936]: I0216 21:47:53.066456 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-znszx"] Feb 16 21:47:53.894973 master-0 kubenswrapper[38936]: I0216 21:47:53.894903 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae3f7123-0f56-47f9-afdb-cc6bff73ecd3" path="/var/lib/kubelet/pods/ae3f7123-0f56-47f9-afdb-cc6bff73ecd3/volumes" Feb 16 21:47:59.620308 master-0 kubenswrapper[38936]: I0216 21:47:59.620248 38936 scope.go:117] "RemoveContainer" containerID="c4c041c957179ef05bffa6d57bb9b4a1a610a593b499ac4f975781f3549ff304" Feb 16 21:47:59.652290 master-0 kubenswrapper[38936]: I0216 21:47:59.652252 38936 scope.go:117] "RemoveContainer" containerID="8c8612a802179d4089cd6f0deb7f08c4aba50fad55f84d560869c318f2875b5c" Feb 16 21:47:59.706548 master-0 kubenswrapper[38936]: I0216 21:47:59.706492 38936 scope.go:117] "RemoveContainer" containerID="1ceb6c046b9157abed5facfbbce86e63f2227e4daf0dcec9f1876c743b9e0311" Feb 16 21:47:59.798864 master-0 kubenswrapper[38936]: I0216 21:47:59.798800 38936 scope.go:117] "RemoveContainer" containerID="a04846ba86f6caa45dffd789b062879a0625815568934b5a17569f8eb85e7140" Feb 16 21:47:59.829423 master-0 kubenswrapper[38936]: I0216 21:47:59.829358 38936 scope.go:117] "RemoveContainer" containerID="9444f9aa2cbbf7b0f81da6a5fef4c6aa0d5757d030fe3d677ffbd067dd13ce8e" Feb 16 21:47:59.901789 master-0 kubenswrapper[38936]: I0216 21:47:59.901377 38936 scope.go:117] "RemoveContainer" containerID="9c08e4b6d561a8da44a081325a75ca08b10c3a2bbc3469ff33f0fa9bc6532c38" Feb 
16 21:47:59.930561 master-0 kubenswrapper[38936]: I0216 21:47:59.930473 38936 scope.go:117] "RemoveContainer" containerID="8f1e32c71f9fe7c0f457db72104d2cdf117833a851a8986e227468c0679f9099" Feb 16 21:47:59.982715 master-0 kubenswrapper[38936]: I0216 21:47:59.981775 38936 scope.go:117] "RemoveContainer" containerID="eceeec25839a3d4e7215ffc0901f59f65e65dddb4a33cf4900f6cbbe8f4d7b38" Feb 16 21:48:00.021394 master-0 kubenswrapper[38936]: I0216 21:48:00.021325 38936 scope.go:117] "RemoveContainer" containerID="1a03fb329b79a652f93bc0d8cf6903fee3b46a1c62d4b68c751e859b6f865732" Feb 16 21:48:00.079559 master-0 kubenswrapper[38936]: I0216 21:48:00.079479 38936 scope.go:117] "RemoveContainer" containerID="08a59bdb5d4aefa117bebbfc965bdee02d689fff50e2eea2a412b937e493f40f" Feb 16 21:48:00.135808 master-0 kubenswrapper[38936]: I0216 21:48:00.135638 38936 scope.go:117] "RemoveContainer" containerID="d43ab0666a1f12b56a6a3267f24f5b65cb897417e46b4f0d42502fcef3ddd04a" Feb 16 21:48:00.177242 master-0 kubenswrapper[38936]: I0216 21:48:00.177175 38936 scope.go:117] "RemoveContainer" containerID="50071de4534addfdafc6f2ac36fa56a059648fc1a46c2b0d5b91601165f57fcc" Feb 16 21:48:00.222348 master-0 kubenswrapper[38936]: I0216 21:48:00.222264 38936 scope.go:117] "RemoveContainer" containerID="64e1087a03e645001355b579d504384b592bad4233f263992828ae7fadb08054" Feb 16 21:48:02.049759 master-0 kubenswrapper[38936]: I0216 21:48:02.048533 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-sync-nzcsn"] Feb 16 21:48:02.061019 master-0 kubenswrapper[38936]: I0216 21:48:02.060959 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-sync-nzcsn"] Feb 16 21:48:03.890036 master-0 kubenswrapper[38936]: I0216 21:48:03.889981 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b1ea749-0e13-47db-bd37-4f269f872a0b" path="/var/lib/kubelet/pods/5b1ea749-0e13-47db-bd37-4f269f872a0b/volumes" Feb 16 21:48:09.039949 master-0 
kubenswrapper[38936]: I0216 21:48:09.039875 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-db-create-q98pv"] Feb 16 21:48:09.058180 master-0 kubenswrapper[38936]: I0216 21:48:09.058105 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-db-create-q98pv"] Feb 16 21:48:09.887434 master-0 kubenswrapper[38936]: I0216 21:48:09.887375 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53ca02e3-b979-4ed3-82e5-ce0850aa85f3" path="/var/lib/kubelet/pods/53ca02e3-b979-4ed3-82e5-ce0850aa85f3/volumes" Feb 16 21:48:11.099672 master-0 kubenswrapper[38936]: I0216 21:48:11.098717 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-1991-account-create-update-vb2d9"] Feb 16 21:48:11.128695 master-0 kubenswrapper[38936]: I0216 21:48:11.127622 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-1991-account-create-update-vb2d9"] Feb 16 21:48:11.905500 master-0 kubenswrapper[38936]: I0216 21:48:11.905416 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3fc7857-f230-4a40-8fb6-9b01dd29c502" path="/var/lib/kubelet/pods/f3fc7857-f230-4a40-8fb6-9b01dd29c502/volumes" Feb 16 21:48:35.070335 master-0 kubenswrapper[38936]: I0216 21:48:35.070269 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-ded7-account-create-update-dv4vx"] Feb 16 21:48:35.083840 master-0 kubenswrapper[38936]: I0216 21:48:35.083765 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-e2a2-account-create-update-t5ggp"] Feb 16 21:48:35.096573 master-0 kubenswrapper[38936]: I0216 21:48:35.096505 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-jb9gg"] Feb 16 21:48:35.108341 master-0 kubenswrapper[38936]: I0216 21:48:35.108281 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-fntqx"] Feb 16 
21:48:35.120842 master-0 kubenswrapper[38936]: I0216 21:48:35.120797 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-e2a2-account-create-update-t5ggp"] Feb 16 21:48:35.133409 master-0 kubenswrapper[38936]: I0216 21:48:35.133348 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-ded7-account-create-update-dv4vx"] Feb 16 21:48:35.145148 master-0 kubenswrapper[38936]: I0216 21:48:35.145093 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-jb9gg"] Feb 16 21:48:35.158113 master-0 kubenswrapper[38936]: I0216 21:48:35.158017 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-fntqx"] Feb 16 21:48:35.895168 master-0 kubenswrapper[38936]: I0216 21:48:35.893587 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0db6a508-ec90-49da-867e-ada0192b7b35" path="/var/lib/kubelet/pods/0db6a508-ec90-49da-867e-ada0192b7b35/volumes" Feb 16 21:48:35.895168 master-0 kubenswrapper[38936]: I0216 21:48:35.894308 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a" path="/var/lib/kubelet/pods/6f4eb13c-847a-4b0f-90dc-2c59cb9c3d3a/volumes" Feb 16 21:48:35.895168 master-0 kubenswrapper[38936]: I0216 21:48:35.894910 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7f3ca2c-2ba6-4148-a4e8-843943926a5c" path="/var/lib/kubelet/pods/a7f3ca2c-2ba6-4148-a4e8-843943926a5c/volumes" Feb 16 21:48:35.895878 master-0 kubenswrapper[38936]: I0216 21:48:35.895675 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aab575a9-488c-44b1-a7e0-3025fa81207e" path="/var/lib/kubelet/pods/aab575a9-488c-44b1-a7e0-3025fa81207e/volumes" Feb 16 21:48:39.060937 master-0 kubenswrapper[38936]: I0216 21:48:39.060827 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-z4z2j"] Feb 16 21:48:39.076043 master-0 
kubenswrapper[38936]: I0216 21:48:39.075941 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-b871-account-create-update-96b65"] Feb 16 21:48:39.090491 master-0 kubenswrapper[38936]: I0216 21:48:39.090409 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-db-sync-87hwd"] Feb 16 21:48:39.103826 master-0 kubenswrapper[38936]: I0216 21:48:39.103766 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-b871-account-create-update-96b65"] Feb 16 21:48:39.114533 master-0 kubenswrapper[38936]: I0216 21:48:39.114481 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-z4z2j"] Feb 16 21:48:39.125866 master-0 kubenswrapper[38936]: I0216 21:48:39.125797 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-db-sync-87hwd"] Feb 16 21:48:39.887677 master-0 kubenswrapper[38936]: I0216 21:48:39.887597 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f" path="/var/lib/kubelet/pods/09702ed3-2ec1-4a3f-9ee3-30137a8b6b7f/volumes" Feb 16 21:48:39.888542 master-0 kubenswrapper[38936]: I0216 21:48:39.888513 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="650c4ac6-fc3c-4a97-871d-65c399538b17" path="/var/lib/kubelet/pods/650c4ac6-fc3c-4a97-871d-65c399538b17/volumes" Feb 16 21:48:39.889254 master-0 kubenswrapper[38936]: I0216 21:48:39.889219 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c58d2c-4204-4d3b-9d2a-fdbf35ad8029" path="/var/lib/kubelet/pods/70c58d2c-4204-4d3b-9d2a-fdbf35ad8029/volumes" Feb 16 21:49:00.550496 master-0 kubenswrapper[38936]: I0216 21:49:00.550436 38936 scope.go:117] "RemoveContainer" containerID="d2822b18fa4a8af4c98959626419283766110698c9eaa4a873c43153b1bdfe43" Feb 16 21:49:00.593973 master-0 kubenswrapper[38936]: I0216 21:49:00.593932 38936 scope.go:117] "RemoveContainer" 
containerID="878599ba466651c7a04306169732c48cd9785ae3fed4557b72a98947e9a87676" Feb 16 21:49:00.640259 master-0 kubenswrapper[38936]: I0216 21:49:00.640043 38936 scope.go:117] "RemoveContainer" containerID="8eacb69ce9cfc0c612bb68d0b19df03049b3b02dc559ababa03f91b997ecfcdb" Feb 16 21:49:00.695202 master-0 kubenswrapper[38936]: I0216 21:49:00.695158 38936 scope.go:117] "RemoveContainer" containerID="668d2c640bf4c89454b88017e17a004ac5e14dc8cd9345121af2bde3b20fbdda" Feb 16 21:49:00.754685 master-0 kubenswrapper[38936]: I0216 21:49:00.754585 38936 scope.go:117] "RemoveContainer" containerID="8dada93288a6f6e4e101eea3a25ec2ea4fbe001e2f7d8af4b80dd2c92da4815e" Feb 16 21:49:00.819954 master-0 kubenswrapper[38936]: I0216 21:49:00.819864 38936 scope.go:117] "RemoveContainer" containerID="71309986699e8d944d3ba16db4ba84da61f3cdee13e24b2d158b46d770092237" Feb 16 21:49:00.853667 master-0 kubenswrapper[38936]: I0216 21:49:00.853551 38936 scope.go:117] "RemoveContainer" containerID="1d34449ffd2482532e52f3621f11fbd435dae6703fb7224f529fcf752ed7e7bb" Feb 16 21:49:00.882464 master-0 kubenswrapper[38936]: I0216 21:49:00.882334 38936 scope.go:117] "RemoveContainer" containerID="10a0a0181f28b207b613197e92cb8c759326c2803ec45064ba44ed084d153b2e" Feb 16 21:49:00.908738 master-0 kubenswrapper[38936]: I0216 21:49:00.908692 38936 scope.go:117] "RemoveContainer" containerID="c1877eb7455255efbc803c552bf739892007e1d5651f37af1c6bdddd3a9edd33" Feb 16 21:49:00.942545 master-0 kubenswrapper[38936]: I0216 21:49:00.942506 38936 scope.go:117] "RemoveContainer" containerID="a22ada46eda3717e4eb1f7a11a86e0b28b36147c28ba992710d1837b15423542" Feb 16 21:49:00.973812 master-0 kubenswrapper[38936]: I0216 21:49:00.973755 38936 scope.go:117] "RemoveContainer" containerID="19121314053b1bf064fccd4d8d9f4cbe141490f9bc0946e122e0dafd6025290a" Feb 16 21:49:15.069817 master-0 kubenswrapper[38936]: I0216 21:49:15.069727 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jjlmc"] 
Feb 16 21:49:15.079683 master-0 kubenswrapper[38936]: I0216 21:49:15.079597 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jjlmc"] Feb 16 21:49:15.889101 master-0 kubenswrapper[38936]: I0216 21:49:15.889043 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e0cbb0a-133a-421f-9c54-a473c5446028" path="/var/lib/kubelet/pods/8e0cbb0a-133a-421f-9c54-a473c5446028/volumes" Feb 16 21:49:42.058566 master-0 kubenswrapper[38936]: I0216 21:49:42.058490 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-d25bz"] Feb 16 21:49:42.071845 master-0 kubenswrapper[38936]: I0216 21:49:42.071768 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-d25bz"] Feb 16 21:49:43.896143 master-0 kubenswrapper[38936]: I0216 21:49:43.896072 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f26172c1-371c-4d1d-b026-80e4ebe31568" path="/var/lib/kubelet/pods/f26172c1-371c-4d1d-b026-80e4ebe31568/volumes" Feb 16 21:49:45.053369 master-0 kubenswrapper[38936]: I0216 21:49:45.053302 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5vr4r"] Feb 16 21:49:45.080840 master-0 kubenswrapper[38936]: I0216 21:49:45.080728 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5vr4r"] Feb 16 21:49:45.898419 master-0 kubenswrapper[38936]: I0216 21:49:45.898361 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f308b178-bf97-40ee-8754-fd2a13d6242f" path="/var/lib/kubelet/pods/f308b178-bf97-40ee-8754-fd2a13d6242f/volumes" Feb 16 21:50:01.213474 master-0 kubenswrapper[38936]: I0216 21:50:01.213376 38936 scope.go:117] "RemoveContainer" containerID="134d4e20b6a1df2ce0594b11035592abf9bc606635db583ad369f02d00637e65" Feb 16 21:50:01.264797 master-0 kubenswrapper[38936]: I0216 21:50:01.264745 38936 scope.go:117] 
"RemoveContainer" containerID="bdde91efde1aac6af16fd60c7779ae5510c955ab1d3e2b9db89dbd4851607e51" Feb 16 21:50:01.342753 master-0 kubenswrapper[38936]: I0216 21:50:01.338768 38936 scope.go:117] "RemoveContainer" containerID="704805fe4527ea953cd8c5a9a4770e2135573ed97401e05ab821317c983f4869" Feb 16 21:50:16.465394 master-0 kubenswrapper[38936]: I0216 21:50:16.465303 38936 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-7fd65686d6-7ht5b" podUID="ef66440d-1b5d-4de9-a1c0-05f4def18451" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 16 21:50:22.066335 master-0 kubenswrapper[38936]: I0216 21:50:22.066272 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-host-discover-wrm7p"] Feb 16 21:50:22.081520 master-0 kubenswrapper[38936]: I0216 21:50:22.081459 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-host-discover-wrm7p"] Feb 16 21:50:23.887779 master-0 kubenswrapper[38936]: I0216 21:50:23.887695 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbf44af2-ce58-40e0-af03-ee3c9bcc1519" path="/var/lib/kubelet/pods/fbf44af2-ce58-40e0-af03-ee3c9bcc1519/volumes" Feb 16 21:50:24.050804 master-0 kubenswrapper[38936]: I0216 21:50:24.050748 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-p7jjg"] Feb 16 21:50:24.066161 master-0 kubenswrapper[38936]: I0216 21:50:24.066098 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-p7jjg"] Feb 16 21:50:25.887296 master-0 kubenswrapper[38936]: I0216 21:50:25.887192 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bca15ca0-b308-47ca-ad27-856b6b2d928e" path="/var/lib/kubelet/pods/bca15ca0-b308-47ca-ad27-856b6b2d928e/volumes" Feb 16 21:51:01.463222 master-0 kubenswrapper[38936]: I0216 21:51:01.463146 38936 scope.go:117] "RemoveContainer" 
containerID="0c53d56f9145b098b1c210595b9ca11f3176a6b5489ef97355a44bd43ab3f516" Feb 16 21:51:01.520082 master-0 kubenswrapper[38936]: I0216 21:51:01.520016 38936 scope.go:117] "RemoveContainer" containerID="77a439020c0ea518cdeba9c691d3498079fef606e6c93dc5eae9ccc4543722c3" Feb 16 22:00:00.174708 master-0 kubenswrapper[38936]: I0216 22:00:00.174608 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521320-tvm5r"] Feb 16 22:00:00.175795 master-0 kubenswrapper[38936]: E0216 22:00:00.175240 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c995dc0-5eb8-49ad-963a-dc5773f5b46d" containerName="collect-profiles" Feb 16 22:00:00.175795 master-0 kubenswrapper[38936]: I0216 22:00:00.175257 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c995dc0-5eb8-49ad-963a-dc5773f5b46d" containerName="collect-profiles" Feb 16 22:00:00.175795 master-0 kubenswrapper[38936]: I0216 22:00:00.175500 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c995dc0-5eb8-49ad-963a-dc5773f5b46d" containerName="collect-profiles" Feb 16 22:00:00.176490 master-0 kubenswrapper[38936]: I0216 22:00:00.176449 38936 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-tvm5r" Feb 16 22:00:00.181723 master-0 kubenswrapper[38936]: I0216 22:00:00.181642 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-r6wp5" Feb 16 22:00:00.181852 master-0 kubenswrapper[38936]: I0216 22:00:00.181681 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 22:00:00.202853 master-0 kubenswrapper[38936]: I0216 22:00:00.202712 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521320-tvm5r"] Feb 16 22:00:00.246046 master-0 kubenswrapper[38936]: I0216 22:00:00.245967 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53621e7b-0776-400d-8e51-4c16110fd990-config-volume\") pod \"collect-profiles-29521320-tvm5r\" (UID: \"53621e7b-0776-400d-8e51-4c16110fd990\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-tvm5r" Feb 16 22:00:00.246046 master-0 kubenswrapper[38936]: I0216 22:00:00.246033 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n9n8\" (UniqueName: \"kubernetes.io/projected/53621e7b-0776-400d-8e51-4c16110fd990-kube-api-access-4n9n8\") pod \"collect-profiles-29521320-tvm5r\" (UID: \"53621e7b-0776-400d-8e51-4c16110fd990\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-tvm5r" Feb 16 22:00:00.246547 master-0 kubenswrapper[38936]: I0216 22:00:00.246468 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53621e7b-0776-400d-8e51-4c16110fd990-secret-volume\") pod \"collect-profiles-29521320-tvm5r\" (UID: 
\"53621e7b-0776-400d-8e51-4c16110fd990\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-tvm5r" Feb 16 22:00:00.347918 master-0 kubenswrapper[38936]: I0216 22:00:00.347839 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53621e7b-0776-400d-8e51-4c16110fd990-secret-volume\") pod \"collect-profiles-29521320-tvm5r\" (UID: \"53621e7b-0776-400d-8e51-4c16110fd990\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-tvm5r" Feb 16 22:00:00.348200 master-0 kubenswrapper[38936]: I0216 22:00:00.348039 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53621e7b-0776-400d-8e51-4c16110fd990-config-volume\") pod \"collect-profiles-29521320-tvm5r\" (UID: \"53621e7b-0776-400d-8e51-4c16110fd990\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-tvm5r" Feb 16 22:00:00.348339 master-0 kubenswrapper[38936]: I0216 22:00:00.348264 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n9n8\" (UniqueName: \"kubernetes.io/projected/53621e7b-0776-400d-8e51-4c16110fd990-kube-api-access-4n9n8\") pod \"collect-profiles-29521320-tvm5r\" (UID: \"53621e7b-0776-400d-8e51-4c16110fd990\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-tvm5r" Feb 16 22:00:00.349113 master-0 kubenswrapper[38936]: I0216 22:00:00.349075 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53621e7b-0776-400d-8e51-4c16110fd990-config-volume\") pod \"collect-profiles-29521320-tvm5r\" (UID: \"53621e7b-0776-400d-8e51-4c16110fd990\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-tvm5r" Feb 16 22:00:00.353042 master-0 kubenswrapper[38936]: I0216 22:00:00.352984 38936 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53621e7b-0776-400d-8e51-4c16110fd990-secret-volume\") pod \"collect-profiles-29521320-tvm5r\" (UID: \"53621e7b-0776-400d-8e51-4c16110fd990\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-tvm5r" Feb 16 22:00:00.367386 master-0 kubenswrapper[38936]: I0216 22:00:00.367327 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n9n8\" (UniqueName: \"kubernetes.io/projected/53621e7b-0776-400d-8e51-4c16110fd990-kube-api-access-4n9n8\") pod \"collect-profiles-29521320-tvm5r\" (UID: \"53621e7b-0776-400d-8e51-4c16110fd990\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-tvm5r" Feb 16 22:00:00.524794 master-0 kubenswrapper[38936]: I0216 22:00:00.524720 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-tvm5r" Feb 16 22:00:01.008536 master-0 kubenswrapper[38936]: I0216 22:00:01.008454 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521320-tvm5r"] Feb 16 22:00:01.360783 master-0 kubenswrapper[38936]: I0216 22:00:01.360584 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-tvm5r" event={"ID":"53621e7b-0776-400d-8e51-4c16110fd990","Type":"ContainerStarted","Data":"d4143a612e524d9c137d981be16b1fd45e87d5a3b71703d24b4828bf4d9a9c5f"} Feb 16 22:00:01.360783 master-0 kubenswrapper[38936]: I0216 22:00:01.360666 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-tvm5r" event={"ID":"53621e7b-0776-400d-8e51-4c16110fd990","Type":"ContainerStarted","Data":"ea89d37807fa8d69e0821e8ec20644bf538d89ea1b04a4c0d326d80189ed1706"} Feb 16 22:00:01.400247 master-0 kubenswrapper[38936]: I0216 22:00:01.400136 38936 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-tvm5r" podStartSLOduration=1.400112862 podStartE2EDuration="1.400112862s" podCreationTimestamp="2026-02-16 22:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:00:01.383090441 +0000 UTC m=+2231.735093803" watchObservedRunningTime="2026-02-16 22:00:01.400112862 +0000 UTC m=+2231.752116224" Feb 16 22:00:02.371737 master-0 kubenswrapper[38936]: I0216 22:00:02.371680 38936 generic.go:334] "Generic (PLEG): container finished" podID="53621e7b-0776-400d-8e51-4c16110fd990" containerID="d4143a612e524d9c137d981be16b1fd45e87d5a3b71703d24b4828bf4d9a9c5f" exitCode=0 Feb 16 22:00:02.371737 master-0 kubenswrapper[38936]: I0216 22:00:02.371728 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-tvm5r" event={"ID":"53621e7b-0776-400d-8e51-4c16110fd990","Type":"ContainerDied","Data":"d4143a612e524d9c137d981be16b1fd45e87d5a3b71703d24b4828bf4d9a9c5f"} Feb 16 22:00:03.805890 master-0 kubenswrapper[38936]: I0216 22:00:03.805843 38936 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-tvm5r" Feb 16 22:00:03.869027 master-0 kubenswrapper[38936]: I0216 22:00:03.868950 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53621e7b-0776-400d-8e51-4c16110fd990-secret-volume\") pod \"53621e7b-0776-400d-8e51-4c16110fd990\" (UID: \"53621e7b-0776-400d-8e51-4c16110fd990\") " Feb 16 22:00:03.869268 master-0 kubenswrapper[38936]: I0216 22:00:03.869166 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n9n8\" (UniqueName: \"kubernetes.io/projected/53621e7b-0776-400d-8e51-4c16110fd990-kube-api-access-4n9n8\") pod \"53621e7b-0776-400d-8e51-4c16110fd990\" (UID: \"53621e7b-0776-400d-8e51-4c16110fd990\") " Feb 16 22:00:03.869485 master-0 kubenswrapper[38936]: I0216 22:00:03.869440 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53621e7b-0776-400d-8e51-4c16110fd990-config-volume\") pod \"53621e7b-0776-400d-8e51-4c16110fd990\" (UID: \"53621e7b-0776-400d-8e51-4c16110fd990\") " Feb 16 22:00:03.870034 master-0 kubenswrapper[38936]: I0216 22:00:03.869982 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53621e7b-0776-400d-8e51-4c16110fd990-config-volume" (OuterVolumeSpecName: "config-volume") pod "53621e7b-0776-400d-8e51-4c16110fd990" (UID: "53621e7b-0776-400d-8e51-4c16110fd990"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 22:00:03.870846 master-0 kubenswrapper[38936]: I0216 22:00:03.870822 38936 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53621e7b-0776-400d-8e51-4c16110fd990-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 16 22:00:03.872219 master-0 kubenswrapper[38936]: I0216 22:00:03.872165 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53621e7b-0776-400d-8e51-4c16110fd990-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "53621e7b-0776-400d-8e51-4c16110fd990" (UID: "53621e7b-0776-400d-8e51-4c16110fd990"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 22:00:03.873274 master-0 kubenswrapper[38936]: I0216 22:00:03.873219 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53621e7b-0776-400d-8e51-4c16110fd990-kube-api-access-4n9n8" (OuterVolumeSpecName: "kube-api-access-4n9n8") pod "53621e7b-0776-400d-8e51-4c16110fd990" (UID: "53621e7b-0776-400d-8e51-4c16110fd990"). InnerVolumeSpecName "kube-api-access-4n9n8". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:00:03.973410 master-0 kubenswrapper[38936]: I0216 22:00:03.973273 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n9n8\" (UniqueName: \"kubernetes.io/projected/53621e7b-0776-400d-8e51-4c16110fd990-kube-api-access-4n9n8\") on node \"master-0\" DevicePath \"\""
Feb 16 22:00:03.973410 master-0 kubenswrapper[38936]: I0216 22:00:03.973324 38936 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53621e7b-0776-400d-8e51-4c16110fd990-secret-volume\") on node \"master-0\" DevicePath \"\""
Feb 16 22:00:04.398271 master-0 kubenswrapper[38936]: I0216 22:00:04.398189 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-tvm5r" event={"ID":"53621e7b-0776-400d-8e51-4c16110fd990","Type":"ContainerDied","Data":"ea89d37807fa8d69e0821e8ec20644bf538d89ea1b04a4c0d326d80189ed1706"}
Feb 16 22:00:04.398271 master-0 kubenswrapper[38936]: I0216 22:00:04.398268 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea89d37807fa8d69e0821e8ec20644bf538d89ea1b04a4c0d326d80189ed1706"
Feb 16 22:00:04.398612 master-0 kubenswrapper[38936]: I0216 22:00:04.398419 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521320-tvm5r"
Feb 16 22:00:04.490680 master-0 kubenswrapper[38936]: I0216 22:00:04.489190 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b"]
Feb 16 22:00:04.501150 master-0 kubenswrapper[38936]: I0216 22:00:04.501086 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b"]
Feb 16 22:00:05.893263 master-0 kubenswrapper[38936]: I0216 22:00:05.893203 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebeb6876-0438-4961-a62a-68b41a676f17" path="/var/lib/kubelet/pods/ebeb6876-0438-4961-a62a-68b41a676f17/volumes"
Feb 16 22:01:00.173678 master-0 kubenswrapper[38936]: I0216 22:01:00.172727 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29521321-rp4hh"]
Feb 16 22:01:00.174301 master-0 kubenswrapper[38936]: E0216 22:01:00.173694 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53621e7b-0776-400d-8e51-4c16110fd990" containerName="collect-profiles"
Feb 16 22:01:00.174301 master-0 kubenswrapper[38936]: I0216 22:01:00.173713 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="53621e7b-0776-400d-8e51-4c16110fd990" containerName="collect-profiles"
Feb 16 22:01:00.174301 master-0 kubenswrapper[38936]: I0216 22:01:00.174094 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="53621e7b-0776-400d-8e51-4c16110fd990" containerName="collect-profiles"
Feb 16 22:01:00.177339 master-0 kubenswrapper[38936]: I0216 22:01:00.175245 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29521321-rp4hh"
Feb 16 22:01:00.187672 master-0 kubenswrapper[38936]: I0216 22:01:00.186023 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29521321-rp4hh"]
Feb 16 22:01:00.275884 master-0 kubenswrapper[38936]: I0216 22:01:00.275807 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06cb746-245a-46d5-9411-484a88ac9ab3-combined-ca-bundle\") pod \"keystone-cron-29521321-rp4hh\" (UID: \"d06cb746-245a-46d5-9411-484a88ac9ab3\") " pod="openstack/keystone-cron-29521321-rp4hh"
Feb 16 22:01:00.275884 master-0 kubenswrapper[38936]: I0216 22:01:00.275881 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d06cb746-245a-46d5-9411-484a88ac9ab3-config-data\") pod \"keystone-cron-29521321-rp4hh\" (UID: \"d06cb746-245a-46d5-9411-484a88ac9ab3\") " pod="openstack/keystone-cron-29521321-rp4hh"
Feb 16 22:01:00.276194 master-0 kubenswrapper[38936]: I0216 22:01:00.276051 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d06cb746-245a-46d5-9411-484a88ac9ab3-fernet-keys\") pod \"keystone-cron-29521321-rp4hh\" (UID: \"d06cb746-245a-46d5-9411-484a88ac9ab3\") " pod="openstack/keystone-cron-29521321-rp4hh"
Feb 16 22:01:00.276236 master-0 kubenswrapper[38936]: I0216 22:01:00.276167 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fmg8\" (UniqueName: \"kubernetes.io/projected/d06cb746-245a-46d5-9411-484a88ac9ab3-kube-api-access-4fmg8\") pod \"keystone-cron-29521321-rp4hh\" (UID: \"d06cb746-245a-46d5-9411-484a88ac9ab3\") " pod="openstack/keystone-cron-29521321-rp4hh"
Feb 16 22:01:00.379321 master-0 kubenswrapper[38936]: I0216 22:01:00.379228 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fmg8\" (UniqueName: \"kubernetes.io/projected/d06cb746-245a-46d5-9411-484a88ac9ab3-kube-api-access-4fmg8\") pod \"keystone-cron-29521321-rp4hh\" (UID: \"d06cb746-245a-46d5-9411-484a88ac9ab3\") " pod="openstack/keystone-cron-29521321-rp4hh"
Feb 16 22:01:00.379585 master-0 kubenswrapper[38936]: I0216 22:01:00.379459 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06cb746-245a-46d5-9411-484a88ac9ab3-combined-ca-bundle\") pod \"keystone-cron-29521321-rp4hh\" (UID: \"d06cb746-245a-46d5-9411-484a88ac9ab3\") " pod="openstack/keystone-cron-29521321-rp4hh"
Feb 16 22:01:00.379585 master-0 kubenswrapper[38936]: I0216 22:01:00.379485 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d06cb746-245a-46d5-9411-484a88ac9ab3-config-data\") pod \"keystone-cron-29521321-rp4hh\" (UID: \"d06cb746-245a-46d5-9411-484a88ac9ab3\") " pod="openstack/keystone-cron-29521321-rp4hh"
Feb 16 22:01:00.379724 master-0 kubenswrapper[38936]: I0216 22:01:00.379601 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d06cb746-245a-46d5-9411-484a88ac9ab3-fernet-keys\") pod \"keystone-cron-29521321-rp4hh\" (UID: \"d06cb746-245a-46d5-9411-484a88ac9ab3\") " pod="openstack/keystone-cron-29521321-rp4hh"
Feb 16 22:01:00.383786 master-0 kubenswrapper[38936]: I0216 22:01:00.383734 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06cb746-245a-46d5-9411-484a88ac9ab3-combined-ca-bundle\") pod \"keystone-cron-29521321-rp4hh\" (UID: \"d06cb746-245a-46d5-9411-484a88ac9ab3\") " pod="openstack/keystone-cron-29521321-rp4hh"
Feb 16 22:01:00.384508 master-0 kubenswrapper[38936]: I0216 22:01:00.384441 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d06cb746-245a-46d5-9411-484a88ac9ab3-config-data\") pod \"keystone-cron-29521321-rp4hh\" (UID: \"d06cb746-245a-46d5-9411-484a88ac9ab3\") " pod="openstack/keystone-cron-29521321-rp4hh"
Feb 16 22:01:00.384834 master-0 kubenswrapper[38936]: I0216 22:01:00.384796 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d06cb746-245a-46d5-9411-484a88ac9ab3-fernet-keys\") pod \"keystone-cron-29521321-rp4hh\" (UID: \"d06cb746-245a-46d5-9411-484a88ac9ab3\") " pod="openstack/keystone-cron-29521321-rp4hh"
Feb 16 22:01:00.396149 master-0 kubenswrapper[38936]: I0216 22:01:00.396097 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fmg8\" (UniqueName: \"kubernetes.io/projected/d06cb746-245a-46d5-9411-484a88ac9ab3-kube-api-access-4fmg8\") pod \"keystone-cron-29521321-rp4hh\" (UID: \"d06cb746-245a-46d5-9411-484a88ac9ab3\") " pod="openstack/keystone-cron-29521321-rp4hh"
Feb 16 22:01:00.506507 master-0 kubenswrapper[38936]: I0216 22:01:00.506415 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29521321-rp4hh"
Feb 16 22:01:01.018353 master-0 kubenswrapper[38936]: I0216 22:01:01.018289 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29521321-rp4hh"]
Feb 16 22:01:01.021255 master-0 kubenswrapper[38936]: W0216 22:01:01.021198 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd06cb746_245a_46d5_9411_484a88ac9ab3.slice/crio-d69985e0763e1f712a1c1880d049741140f7ed69d4c4eabad113eb52b1aa2f42 WatchSource:0}: Error finding container d69985e0763e1f712a1c1880d049741140f7ed69d4c4eabad113eb52b1aa2f42: Status 404 returned error can't find the container with id d69985e0763e1f712a1c1880d049741140f7ed69d4c4eabad113eb52b1aa2f42
Feb 16 22:01:01.112037 master-0 kubenswrapper[38936]: I0216 22:01:01.111952 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521321-rp4hh" event={"ID":"d06cb746-245a-46d5-9411-484a88ac9ab3","Type":"ContainerStarted","Data":"d69985e0763e1f712a1c1880d049741140f7ed69d4c4eabad113eb52b1aa2f42"}
Feb 16 22:01:01.839694 master-0 kubenswrapper[38936]: I0216 22:01:01.839608 38936 scope.go:117] "RemoveContainer" containerID="ba4091698915c4aa641aec2c8b4b82e0a58aec68f9f33e7955121f8e822a443d"
Feb 16 22:01:02.128288 master-0 kubenswrapper[38936]: I0216 22:01:02.128111 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521321-rp4hh" event={"ID":"d06cb746-245a-46d5-9411-484a88ac9ab3","Type":"ContainerStarted","Data":"0ad28c19822a5be91642e9992459ce6a141ce122ebc21c80b6973aec8ff2bd6b"}
Feb 16 22:01:02.150522 master-0 kubenswrapper[38936]: I0216 22:01:02.150416 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29521321-rp4hh" podStartSLOduration=2.150391084 podStartE2EDuration="2.150391084s" podCreationTimestamp="2026-02-16 22:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 22:01:02.143866628 +0000 UTC m=+2292.495869990" watchObservedRunningTime="2026-02-16 22:01:02.150391084 +0000 UTC m=+2292.502394486"
Feb 16 22:01:04.155356 master-0 kubenswrapper[38936]: I0216 22:01:04.155263 38936 generic.go:334] "Generic (PLEG): container finished" podID="d06cb746-245a-46d5-9411-484a88ac9ab3" containerID="0ad28c19822a5be91642e9992459ce6a141ce122ebc21c80b6973aec8ff2bd6b" exitCode=0
Feb 16 22:01:04.155356 master-0 kubenswrapper[38936]: I0216 22:01:04.155340 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521321-rp4hh" event={"ID":"d06cb746-245a-46d5-9411-484a88ac9ab3","Type":"ContainerDied","Data":"0ad28c19822a5be91642e9992459ce6a141ce122ebc21c80b6973aec8ff2bd6b"}
Feb 16 22:01:05.692218 master-0 kubenswrapper[38936]: I0216 22:01:05.692149 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29521321-rp4hh"
Feb 16 22:01:05.858221 master-0 kubenswrapper[38936]: I0216 22:01:05.858023 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06cb746-245a-46d5-9411-484a88ac9ab3-combined-ca-bundle\") pod \"d06cb746-245a-46d5-9411-484a88ac9ab3\" (UID: \"d06cb746-245a-46d5-9411-484a88ac9ab3\") "
Feb 16 22:01:05.858458 master-0 kubenswrapper[38936]: I0216 22:01:05.858232 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d06cb746-245a-46d5-9411-484a88ac9ab3-fernet-keys\") pod \"d06cb746-245a-46d5-9411-484a88ac9ab3\" (UID: \"d06cb746-245a-46d5-9411-484a88ac9ab3\") "
Feb 16 22:01:05.858458 master-0 kubenswrapper[38936]: I0216 22:01:05.858447 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fmg8\" (UniqueName: \"kubernetes.io/projected/d06cb746-245a-46d5-9411-484a88ac9ab3-kube-api-access-4fmg8\") pod \"d06cb746-245a-46d5-9411-484a88ac9ab3\" (UID: \"d06cb746-245a-46d5-9411-484a88ac9ab3\") "
Feb 16 22:01:05.858556 master-0 kubenswrapper[38936]: I0216 22:01:05.858498 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d06cb746-245a-46d5-9411-484a88ac9ab3-config-data\") pod \"d06cb746-245a-46d5-9411-484a88ac9ab3\" (UID: \"d06cb746-245a-46d5-9411-484a88ac9ab3\") "
Feb 16 22:01:05.861760 master-0 kubenswrapper[38936]: I0216 22:01:05.861716 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d06cb746-245a-46d5-9411-484a88ac9ab3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d06cb746-245a-46d5-9411-484a88ac9ab3" (UID: "d06cb746-245a-46d5-9411-484a88ac9ab3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:01:05.862098 master-0 kubenswrapper[38936]: I0216 22:01:05.862029 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d06cb746-245a-46d5-9411-484a88ac9ab3-kube-api-access-4fmg8" (OuterVolumeSpecName: "kube-api-access-4fmg8") pod "d06cb746-245a-46d5-9411-484a88ac9ab3" (UID: "d06cb746-245a-46d5-9411-484a88ac9ab3"). InnerVolumeSpecName "kube-api-access-4fmg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:01:05.888578 master-0 kubenswrapper[38936]: I0216 22:01:05.888512 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d06cb746-245a-46d5-9411-484a88ac9ab3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d06cb746-245a-46d5-9411-484a88ac9ab3" (UID: "d06cb746-245a-46d5-9411-484a88ac9ab3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:01:05.920372 master-0 kubenswrapper[38936]: I0216 22:01:05.920310 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d06cb746-245a-46d5-9411-484a88ac9ab3-config-data" (OuterVolumeSpecName: "config-data") pod "d06cb746-245a-46d5-9411-484a88ac9ab3" (UID: "d06cb746-245a-46d5-9411-484a88ac9ab3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:01:05.961773 master-0 kubenswrapper[38936]: I0216 22:01:05.961698 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fmg8\" (UniqueName: \"kubernetes.io/projected/d06cb746-245a-46d5-9411-484a88ac9ab3-kube-api-access-4fmg8\") on node \"master-0\" DevicePath \"\""
Feb 16 22:01:05.961773 master-0 kubenswrapper[38936]: I0216 22:01:05.961769 38936 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d06cb746-245a-46d5-9411-484a88ac9ab3-config-data\") on node \"master-0\" DevicePath \"\""
Feb 16 22:01:05.962053 master-0 kubenswrapper[38936]: I0216 22:01:05.961935 38936 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d06cb746-245a-46d5-9411-484a88ac9ab3-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 22:01:05.962053 master-0 kubenswrapper[38936]: I0216 22:01:05.961957 38936 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d06cb746-245a-46d5-9411-484a88ac9ab3-fernet-keys\") on node \"master-0\" DevicePath \"\""
Feb 16 22:01:06.178871 master-0 kubenswrapper[38936]: I0216 22:01:06.178705 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521321-rp4hh" event={"ID":"d06cb746-245a-46d5-9411-484a88ac9ab3","Type":"ContainerDied","Data":"d69985e0763e1f712a1c1880d049741140f7ed69d4c4eabad113eb52b1aa2f42"}
Feb 16 22:01:06.178871 master-0 kubenswrapper[38936]: I0216 22:01:06.178759 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d69985e0763e1f712a1c1880d049741140f7ed69d4c4eabad113eb52b1aa2f42"
Feb 16 22:01:06.178871 master-0 kubenswrapper[38936]: I0216 22:01:06.178819 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29521321-rp4hh"
Feb 16 22:15:00.176914 master-0 kubenswrapper[38936]: I0216 22:15:00.176837 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521335-9hgk4"]
Feb 16 22:15:00.177727 master-0 kubenswrapper[38936]: E0216 22:15:00.177595 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d06cb746-245a-46d5-9411-484a88ac9ab3" containerName="keystone-cron"
Feb 16 22:15:00.177727 master-0 kubenswrapper[38936]: I0216 22:15:00.177617 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="d06cb746-245a-46d5-9411-484a88ac9ab3" containerName="keystone-cron"
Feb 16 22:15:00.178014 master-0 kubenswrapper[38936]: I0216 22:15:00.177985 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="d06cb746-245a-46d5-9411-484a88ac9ab3" containerName="keystone-cron"
Feb 16 22:15:00.178931 master-0 kubenswrapper[38936]: I0216 22:15:00.178902 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-9hgk4"
Feb 16 22:15:00.181446 master-0 kubenswrapper[38936]: I0216 22:15:00.181394 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 22:15:00.181726 master-0 kubenswrapper[38936]: I0216 22:15:00.181673 38936 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-r6wp5"
Feb 16 22:15:00.198749 master-0 kubenswrapper[38936]: I0216 22:15:00.198248 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521335-9hgk4"]
Feb 16 22:15:00.282406 master-0 kubenswrapper[38936]: I0216 22:15:00.281837 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e600758-ada7-4e1e-a5c9-d0e3758dc2a8-secret-volume\") pod \"collect-profiles-29521335-9hgk4\" (UID: \"5e600758-ada7-4e1e-a5c9-d0e3758dc2a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-9hgk4"
Feb 16 22:15:00.282406 master-0 kubenswrapper[38936]: I0216 22:15:00.282285 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e600758-ada7-4e1e-a5c9-d0e3758dc2a8-config-volume\") pod \"collect-profiles-29521335-9hgk4\" (UID: \"5e600758-ada7-4e1e-a5c9-d0e3758dc2a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-9hgk4"
Feb 16 22:15:00.283368 master-0 kubenswrapper[38936]: I0216 22:15:00.283146 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmbvw\" (UniqueName: \"kubernetes.io/projected/5e600758-ada7-4e1e-a5c9-d0e3758dc2a8-kube-api-access-xmbvw\") pod \"collect-profiles-29521335-9hgk4\" (UID: \"5e600758-ada7-4e1e-a5c9-d0e3758dc2a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-9hgk4"
Feb 16 22:15:00.386463 master-0 kubenswrapper[38936]: I0216 22:15:00.386382 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmbvw\" (UniqueName: \"kubernetes.io/projected/5e600758-ada7-4e1e-a5c9-d0e3758dc2a8-kube-api-access-xmbvw\") pod \"collect-profiles-29521335-9hgk4\" (UID: \"5e600758-ada7-4e1e-a5c9-d0e3758dc2a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-9hgk4"
Feb 16 22:15:00.386791 master-0 kubenswrapper[38936]: I0216 22:15:00.386475 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e600758-ada7-4e1e-a5c9-d0e3758dc2a8-secret-volume\") pod \"collect-profiles-29521335-9hgk4\" (UID: \"5e600758-ada7-4e1e-a5c9-d0e3758dc2a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-9hgk4"
Feb 16 22:15:00.386791 master-0 kubenswrapper[38936]: I0216 22:15:00.386554 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e600758-ada7-4e1e-a5c9-d0e3758dc2a8-config-volume\") pod \"collect-profiles-29521335-9hgk4\" (UID: \"5e600758-ada7-4e1e-a5c9-d0e3758dc2a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-9hgk4"
Feb 16 22:15:00.387797 master-0 kubenswrapper[38936]: I0216 22:15:00.387743 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e600758-ada7-4e1e-a5c9-d0e3758dc2a8-config-volume\") pod \"collect-profiles-29521335-9hgk4\" (UID: \"5e600758-ada7-4e1e-a5c9-d0e3758dc2a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-9hgk4"
Feb 16 22:15:00.390906 master-0 kubenswrapper[38936]: I0216 22:15:00.390844 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e600758-ada7-4e1e-a5c9-d0e3758dc2a8-secret-volume\") pod \"collect-profiles-29521335-9hgk4\" (UID: \"5e600758-ada7-4e1e-a5c9-d0e3758dc2a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-9hgk4"
Feb 16 22:15:00.403639 master-0 kubenswrapper[38936]: I0216 22:15:00.403581 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmbvw\" (UniqueName: \"kubernetes.io/projected/5e600758-ada7-4e1e-a5c9-d0e3758dc2a8-kube-api-access-xmbvw\") pod \"collect-profiles-29521335-9hgk4\" (UID: \"5e600758-ada7-4e1e-a5c9-d0e3758dc2a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-9hgk4"
Feb 16 22:15:00.504698 master-0 kubenswrapper[38936]: I0216 22:15:00.504624 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-9hgk4"
Feb 16 22:15:00.966273 master-0 kubenswrapper[38936]: W0216 22:15:00.966211 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e600758_ada7_4e1e_a5c9_d0e3758dc2a8.slice/crio-063e7de3a29082571c142725b76014f31c9497bd130c30ece37c7ce28e9aa786 WatchSource:0}: Error finding container 063e7de3a29082571c142725b76014f31c9497bd130c30ece37c7ce28e9aa786: Status 404 returned error can't find the container with id 063e7de3a29082571c142725b76014f31c9497bd130c30ece37c7ce28e9aa786
Feb 16 22:15:00.970630 master-0 kubenswrapper[38936]: I0216 22:15:00.970568 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521335-9hgk4"]
Feb 16 22:15:01.737642 master-0 kubenswrapper[38936]: I0216 22:15:01.737424 38936 generic.go:334] "Generic (PLEG): container finished" podID="5e600758-ada7-4e1e-a5c9-d0e3758dc2a8" containerID="9d016d08985e4f08e77544fd908e9f543309c1732ad84698570ecf5c9ab6750a" exitCode=0
Feb 16 22:15:01.737642 master-0 kubenswrapper[38936]: I0216 22:15:01.737488 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-9hgk4" event={"ID":"5e600758-ada7-4e1e-a5c9-d0e3758dc2a8","Type":"ContainerDied","Data":"9d016d08985e4f08e77544fd908e9f543309c1732ad84698570ecf5c9ab6750a"}
Feb 16 22:15:01.737642 master-0 kubenswrapper[38936]: I0216 22:15:01.737516 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-9hgk4" event={"ID":"5e600758-ada7-4e1e-a5c9-d0e3758dc2a8","Type":"ContainerStarted","Data":"063e7de3a29082571c142725b76014f31c9497bd130c30ece37c7ce28e9aa786"}
Feb 16 22:15:03.200851 master-0 kubenswrapper[38936]: I0216 22:15:03.200785 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-9hgk4"
Feb 16 22:15:03.262198 master-0 kubenswrapper[38936]: I0216 22:15:03.259943 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e600758-ada7-4e1e-a5c9-d0e3758dc2a8-secret-volume\") pod \"5e600758-ada7-4e1e-a5c9-d0e3758dc2a8\" (UID: \"5e600758-ada7-4e1e-a5c9-d0e3758dc2a8\") "
Feb 16 22:15:03.262198 master-0 kubenswrapper[38936]: I0216 22:15:03.260035 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmbvw\" (UniqueName: \"kubernetes.io/projected/5e600758-ada7-4e1e-a5c9-d0e3758dc2a8-kube-api-access-xmbvw\") pod \"5e600758-ada7-4e1e-a5c9-d0e3758dc2a8\" (UID: \"5e600758-ada7-4e1e-a5c9-d0e3758dc2a8\") "
Feb 16 22:15:03.262198 master-0 kubenswrapper[38936]: I0216 22:15:03.260131 38936 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e600758-ada7-4e1e-a5c9-d0e3758dc2a8-config-volume\") pod \"5e600758-ada7-4e1e-a5c9-d0e3758dc2a8\" (UID: \"5e600758-ada7-4e1e-a5c9-d0e3758dc2a8\") "
Feb 16 22:15:03.262198 master-0 kubenswrapper[38936]: I0216 22:15:03.261103 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e600758-ada7-4e1e-a5c9-d0e3758dc2a8-config-volume" (OuterVolumeSpecName: "config-volume") pod "5e600758-ada7-4e1e-a5c9-d0e3758dc2a8" (UID: "5e600758-ada7-4e1e-a5c9-d0e3758dc2a8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 22:15:03.263961 master-0 kubenswrapper[38936]: I0216 22:15:03.263899 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e600758-ada7-4e1e-a5c9-d0e3758dc2a8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5e600758-ada7-4e1e-a5c9-d0e3758dc2a8" (UID: "5e600758-ada7-4e1e-a5c9-d0e3758dc2a8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 22:15:03.266479 master-0 kubenswrapper[38936]: I0216 22:15:03.266425 38936 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e600758-ada7-4e1e-a5c9-d0e3758dc2a8-kube-api-access-xmbvw" (OuterVolumeSpecName: "kube-api-access-xmbvw") pod "5e600758-ada7-4e1e-a5c9-d0e3758dc2a8" (UID: "5e600758-ada7-4e1e-a5c9-d0e3758dc2a8"). InnerVolumeSpecName "kube-api-access-xmbvw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 22:15:03.362047 master-0 kubenswrapper[38936]: I0216 22:15:03.361860 38936 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5e600758-ada7-4e1e-a5c9-d0e3758dc2a8-secret-volume\") on node \"master-0\" DevicePath \"\""
Feb 16 22:15:03.362047 master-0 kubenswrapper[38936]: I0216 22:15:03.361905 38936 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmbvw\" (UniqueName: \"kubernetes.io/projected/5e600758-ada7-4e1e-a5c9-d0e3758dc2a8-kube-api-access-xmbvw\") on node \"master-0\" DevicePath \"\""
Feb 16 22:15:03.362047 master-0 kubenswrapper[38936]: I0216 22:15:03.361918 38936 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e600758-ada7-4e1e-a5c9-d0e3758dc2a8-config-volume\") on node \"master-0\" DevicePath \"\""
Feb 16 22:15:03.770619 master-0 kubenswrapper[38936]: I0216 22:15:03.770524 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-9hgk4" event={"ID":"5e600758-ada7-4e1e-a5c9-d0e3758dc2a8","Type":"ContainerDied","Data":"063e7de3a29082571c142725b76014f31c9497bd130c30ece37c7ce28e9aa786"}
Feb 16 22:15:03.770619 master-0 kubenswrapper[38936]: I0216 22:15:03.770600 38936 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="063e7de3a29082571c142725b76014f31c9497bd130c30ece37c7ce28e9aa786"
Feb 16 22:15:03.770923 master-0 kubenswrapper[38936]: I0216 22:15:03.770610 38936 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521335-9hgk4"
Feb 16 22:15:04.313934 master-0 kubenswrapper[38936]: I0216 22:15:04.313828 38936 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521290-b68r4"]
Feb 16 22:15:04.335423 master-0 kubenswrapper[38936]: I0216 22:15:04.335362 38936 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521290-b68r4"]
Feb 16 22:15:05.885806 master-0 kubenswrapper[38936]: I0216 22:15:05.885734 38936 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24a1c7d4-4d65-4047-b972-d85cce98fe48" path="/var/lib/kubelet/pods/24a1c7d4-4d65-4047-b972-d85cce98fe48/volumes"
Feb 16 22:16:02.226225 master-0 kubenswrapper[38936]: I0216 22:16:02.226132 38936 scope.go:117] "RemoveContainer" containerID="51c2317c24ff00faccacb193244105e3ec64f883868aa13130510e611024da6e"
Feb 16 22:16:10.378066 master-0 kubenswrapper[38936]: I0216 22:16:10.377993 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-d6xvl/must-gather-jfmhs"]
Feb 16 22:16:10.378834 master-0 kubenswrapper[38936]: E0216 22:16:10.378724 38936 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e600758-ada7-4e1e-a5c9-d0e3758dc2a8" containerName="collect-profiles"
Feb 16 22:16:10.378834 master-0 kubenswrapper[38936]: I0216 22:16:10.378740 38936 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e600758-ada7-4e1e-a5c9-d0e3758dc2a8" containerName="collect-profiles"
Feb 16 22:16:10.379114 master-0 kubenswrapper[38936]: I0216 22:16:10.379090 38936 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e600758-ada7-4e1e-a5c9-d0e3758dc2a8" containerName="collect-profiles"
Feb 16 22:16:10.380451 master-0 kubenswrapper[38936]: I0216 22:16:10.380422 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d6xvl/must-gather-jfmhs"
Feb 16 22:16:10.382552 master-0 kubenswrapper[38936]: I0216 22:16:10.382516 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-d6xvl"/"openshift-service-ca.crt"
Feb 16 22:16:10.382841 master-0 kubenswrapper[38936]: I0216 22:16:10.382790 38936 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-d6xvl"/"kube-root-ca.crt"
Feb 16 22:16:10.383897 master-0 kubenswrapper[38936]: I0216 22:16:10.383854 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5dt8\" (UniqueName: \"kubernetes.io/projected/3dccd013-a61a-4838-bbea-745698e4cdf1-kube-api-access-n5dt8\") pod \"must-gather-jfmhs\" (UID: \"3dccd013-a61a-4838-bbea-745698e4cdf1\") " pod="openshift-must-gather-d6xvl/must-gather-jfmhs"
Feb 16 22:16:10.383981 master-0 kubenswrapper[38936]: I0216 22:16:10.383913 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3dccd013-a61a-4838-bbea-745698e4cdf1-must-gather-output\") pod \"must-gather-jfmhs\" (UID: \"3dccd013-a61a-4838-bbea-745698e4cdf1\") " pod="openshift-must-gather-d6xvl/must-gather-jfmhs"
Feb 16 22:16:10.388254 master-0 kubenswrapper[38936]: I0216 22:16:10.388203 38936 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-d6xvl/must-gather-jsj77"]
Feb 16 22:16:10.390505 master-0 kubenswrapper[38936]: I0216 22:16:10.390477 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d6xvl/must-gather-jsj77"
Feb 16 22:16:10.420486 master-0 kubenswrapper[38936]: I0216 22:16:10.420437 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-d6xvl/must-gather-jfmhs"]
Feb 16 22:16:10.437354 master-0 kubenswrapper[38936]: I0216 22:16:10.437280 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-d6xvl/must-gather-jsj77"]
Feb 16 22:16:10.507995 master-0 kubenswrapper[38936]: I0216 22:16:10.507452 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5dt8\" (UniqueName: \"kubernetes.io/projected/3dccd013-a61a-4838-bbea-745698e4cdf1-kube-api-access-n5dt8\") pod \"must-gather-jfmhs\" (UID: \"3dccd013-a61a-4838-bbea-745698e4cdf1\") " pod="openshift-must-gather-d6xvl/must-gather-jfmhs"
Feb 16 22:16:10.507995 master-0 kubenswrapper[38936]: I0216 22:16:10.507573 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3dccd013-a61a-4838-bbea-745698e4cdf1-must-gather-output\") pod \"must-gather-jfmhs\" (UID: \"3dccd013-a61a-4838-bbea-745698e4cdf1\") " pod="openshift-must-gather-d6xvl/must-gather-jfmhs"
Feb 16 22:16:10.507995 master-0 kubenswrapper[38936]: I0216 22:16:10.507865 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/313d76c7-3bf2-4501-a43c-bbb80edaccdd-must-gather-output\") pod \"must-gather-jsj77\" (UID: \"313d76c7-3bf2-4501-a43c-bbb80edaccdd\") " pod="openshift-must-gather-d6xvl/must-gather-jsj77"
Feb 16 22:16:10.508407 master-0 kubenswrapper[38936]: I0216 22:16:10.508089 38936 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rdq2\" (UniqueName: \"kubernetes.io/projected/313d76c7-3bf2-4501-a43c-bbb80edaccdd-kube-api-access-8rdq2\") pod \"must-gather-jsj77\" (UID: \"313d76c7-3bf2-4501-a43c-bbb80edaccdd\") " pod="openshift-must-gather-d6xvl/must-gather-jsj77"
Feb 16 22:16:10.510733 master-0 kubenswrapper[38936]: I0216 22:16:10.509387 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3dccd013-a61a-4838-bbea-745698e4cdf1-must-gather-output\") pod \"must-gather-jfmhs\" (UID: \"3dccd013-a61a-4838-bbea-745698e4cdf1\") " pod="openshift-must-gather-d6xvl/must-gather-jfmhs"
Feb 16 22:16:10.528992 master-0 kubenswrapper[38936]: I0216 22:16:10.528932 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5dt8\" (UniqueName: \"kubernetes.io/projected/3dccd013-a61a-4838-bbea-745698e4cdf1-kube-api-access-n5dt8\") pod \"must-gather-jfmhs\" (UID: \"3dccd013-a61a-4838-bbea-745698e4cdf1\") " pod="openshift-must-gather-d6xvl/must-gather-jfmhs"
Feb 16 22:16:10.610731 master-0 kubenswrapper[38936]: I0216 22:16:10.610086 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/313d76c7-3bf2-4501-a43c-bbb80edaccdd-must-gather-output\") pod \"must-gather-jsj77\" (UID: \"313d76c7-3bf2-4501-a43c-bbb80edaccdd\") " pod="openshift-must-gather-d6xvl/must-gather-jsj77"
Feb 16 22:16:10.610731 master-0 kubenswrapper[38936]: I0216 22:16:10.610190 38936 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rdq2\" (UniqueName: \"kubernetes.io/projected/313d76c7-3bf2-4501-a43c-bbb80edaccdd-kube-api-access-8rdq2\") pod \"must-gather-jsj77\" (UID: \"313d76c7-3bf2-4501-a43c-bbb80edaccdd\") " pod="openshift-must-gather-d6xvl/must-gather-jsj77"
Feb 16 22:16:10.610731 master-0 kubenswrapper[38936]: I0216 22:16:10.610581 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/313d76c7-3bf2-4501-a43c-bbb80edaccdd-must-gather-output\") pod \"must-gather-jsj77\" (UID: \"313d76c7-3bf2-4501-a43c-bbb80edaccdd\") " pod="openshift-must-gather-d6xvl/must-gather-jsj77"
Feb 16 22:16:10.626670 master-0 kubenswrapper[38936]: I0216 22:16:10.626596 38936 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rdq2\" (UniqueName: \"kubernetes.io/projected/313d76c7-3bf2-4501-a43c-bbb80edaccdd-kube-api-access-8rdq2\") pod \"must-gather-jsj77\" (UID: \"313d76c7-3bf2-4501-a43c-bbb80edaccdd\") " pod="openshift-must-gather-d6xvl/must-gather-jsj77"
Feb 16 22:16:10.714446 master-0 kubenswrapper[38936]: I0216 22:16:10.714280 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d6xvl/must-gather-jfmhs"
Feb 16 22:16:10.733547 master-0 kubenswrapper[38936]: I0216 22:16:10.733456 38936 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d6xvl/must-gather-jsj77"
Feb 16 22:16:11.223052 master-0 kubenswrapper[38936]: I0216 22:16:11.222992 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-d6xvl/must-gather-jfmhs"]
Feb 16 22:16:11.228350 master-0 kubenswrapper[38936]: W0216 22:16:11.227399 38936 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3dccd013_a61a_4838_bbea_745698e4cdf1.slice/crio-d865d048a0f897f9e8c458517c2c6b8fb63c2f655e9b2f0c35e8aa8fab58ad24 WatchSource:0}: Error finding container d865d048a0f897f9e8c458517c2c6b8fb63c2f655e9b2f0c35e8aa8fab58ad24: Status 404 returned error can't find the container with id d865d048a0f897f9e8c458517c2c6b8fb63c2f655e9b2f0c35e8aa8fab58ad24
Feb 16 22:16:11.232151 master-0 kubenswrapper[38936]: I0216 22:16:11.232107 38936 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 22:16:11.350632 master-0 kubenswrapper[38936]:
I0216 22:16:11.350570 38936 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-d6xvl/must-gather-jsj77"] Feb 16 22:16:11.607079 master-0 kubenswrapper[38936]: I0216 22:16:11.606942 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d6xvl/must-gather-jsj77" event={"ID":"313d76c7-3bf2-4501-a43c-bbb80edaccdd","Type":"ContainerStarted","Data":"3d97c4ecaa5854f87dae742e08b21421798344fbcaf16d25aa2506cc907e1546"} Feb 16 22:16:11.618550 master-0 kubenswrapper[38936]: I0216 22:16:11.618480 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d6xvl/must-gather-jfmhs" event={"ID":"3dccd013-a61a-4838-bbea-745698e4cdf1","Type":"ContainerStarted","Data":"d865d048a0f897f9e8c458517c2c6b8fb63c2f655e9b2f0c35e8aa8fab58ad24"} Feb 16 22:16:13.570346 master-0 kubenswrapper[38936]: I0216 22:16:13.570285 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-649c4f5445-n994s_a5d4ac48-aed3-46b9-9b2a-d741121e05b4/cluster-version-operator/0.log" Feb 16 22:16:13.646575 master-0 kubenswrapper[38936]: I0216 22:16:13.646502 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d6xvl/must-gather-jsj77" event={"ID":"313d76c7-3bf2-4501-a43c-bbb80edaccdd","Type":"ContainerStarted","Data":"e623eadf0eef7493dfe4296b3877b28e026bc338c52638b3b66777d598ca2890"} Feb 16 22:16:13.646575 master-0 kubenswrapper[38936]: I0216 22:16:13.646565 38936 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d6xvl/must-gather-jsj77" event={"ID":"313d76c7-3bf2-4501-a43c-bbb80edaccdd","Type":"ContainerStarted","Data":"bf66e0470253df47a19268d65a9f71eee72693a8c243c38ce3c336b2cd727e9a"} Feb 16 22:16:13.690870 master-0 kubenswrapper[38936]: I0216 22:16:13.690766 38936 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-d6xvl/must-gather-jsj77" podStartSLOduration=2.602897152 
podStartE2EDuration="3.690741241s" podCreationTimestamp="2026-02-16 22:16:10 +0000 UTC" firstStartedPulling="2026-02-16 22:16:11.325845427 +0000 UTC m=+3201.677848789" lastFinishedPulling="2026-02-16 22:16:12.413689516 +0000 UTC m=+3202.765692878" observedRunningTime="2026-02-16 22:16:13.663602601 +0000 UTC m=+3204.015605963" watchObservedRunningTime="2026-02-16 22:16:13.690741241 +0000 UTC m=+3204.042744603" Feb 16 22:16:15.884293 master-0 kubenswrapper[38936]: I0216 22:16:15.884209 38936 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-649c4f5445-n994s_a5d4ac48-aed3-46b9-9b2a-d741121e05b4/cluster-version-operator/1.log"